29,501
What makes Hoeffding's inequality an important statistical concept?
Hoeffding's inequality provides an upper bound on the probability that the sum of independent bounded random variables deviates from its expected value. This inequality provides a simple way to create a confidence interval for the binomial parameter $p$ corresponding to the probability of success. It can be used in the context of empirical risk minimization to estimate the error rate in classification.
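As a concrete illustration of that confidence interval: Hoeffding's bound $P(|\hat{p} - p| \ge \epsilon) \le 2e^{-2n\epsilon^2}$ can be inverted to get the half-width $\epsilon$ for a given confidence level. A minimal sketch (the function name and the example counts are my own):

```python
import math

def hoeffding_ci(successes, n, delta=0.05):
    """Two-sided (1 - delta) confidence interval for the binomial
    parameter p, by solving 2*exp(-2*n*eps**2) = delta for eps."""
    p_hat = successes / n
    eps = math.sqrt(math.log(2 / delta) / (2 * n))
    return max(0.0, p_hat - eps), min(1.0, p_hat + eps)

# e.g. 620 successes in 1000 trials at 95% confidence
lo, hi = hoeffding_ci(620, 1000)
```

Unlike the normal approximation, this interval is valid for any sample size and any $p$, at the cost of being somewhat wider.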
29,502
Forecasting of density function
One important application lies in demographics, e.g., forecasting the development of age pyramids, which are really nothing but time-varying histograms, which in turn are density estimators. Try your approach on that. Here are a few ideas about how to get longitudinal demographic density data. I finally went with the German dataset, which had the finest granularity, giving the annual pyramid in 1-year age steps; most other datasets only binned each year's pyramid into 5-year age bins. If you find a better source of demographic density time series, please tell us in that thread. Hyndman and Shang (2009) is a paper on forecasting functional time series; they apply their method to fertility rates. I'd also recommend the rainbow package for R, also by Shang and Hyndman, for visualization of functional data. Or you can visualize your forecasts using animations. Here is a little animated GIF I created for the future German population pyramid (men on the left, women on the right):
29,503
Forecasting of density function
There's a growing interdisciplinary literature on forecasting probability densities (as opposed to just forecasting the mean of a series). The following reference is a recent survey which discusses both methodology and applications in economics, meteorology, etc. Gneiting, T. and M. Katzfuss (2014): "Probabilistic Forecasting", Annual Review of Statistics and Its Application 1, 125-151. Available at http://www.annualreviews.org/doi/abs/10.1146/annurev-statistics-062713-085831
29,504
Forecasting of density function
In fixed-income finance, you can observe the term-structure time series of an asset. Concretely, for credit default swaps: how much you have to pay to be insured against a company's default for $t$ years. This price is directly linked to the company's probability of default. At instant $t=0$ the probability of default is $P(t=0) = 0$, at $t=\infty$ it is $P(t=\infty) = 1$, and in between it is nondecreasing. You thus have a cumulative distribution function, and by differentiation a probability density function. Since you can observe this curve on a daily basis, you have a time series of PDFs which may have interesting dynamics. Tell me if you are interested in a more detailed story about that.
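A sketch of that construction, with made-up numbers standing in for the CDS-implied cumulative default probabilities (in practice you would bootstrap $P(t)$ from quoted spreads at standard tenors):

```python
import numpy as np

# Hypothetical CDS-implied cumulative default probabilities P(t)
# at standard tenors (years) -- illustrative values only.
tenors = np.array([1.0, 2.0, 3.0, 5.0, 7.0, 10.0])
cdf = np.array([0.01, 0.03, 0.06, 0.12, 0.18, 0.25])  # nondecreasing, P(0) = 0

# Density estimate by finite differences of the CDF over
# (possibly unevenly spaced) tenors.
pdf = np.gradient(cdf, tenors)
```

Redoing this each trading day gives the daily time series of density estimates described above.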
29,505
Understanding and interpreting consistency of OLS
An estimator is consistent if $\hat{\beta} \rightarrow_{p} \beta$, i.e. $\lim_{n \rightarrow \infty} \mbox{Pr}(|\hat{\beta} - \beta| < \epsilon) = 1$ for all positive real $\epsilon$. Consistency in the literal sense means that sampling more of the world will get us what we want. There are inconsistent minimum-variance estimators (I'm failing to find the famous example via Google at this point). Unbiased minimum variance is a good starting place for thinking about estimators. Sometimes it's easier to see that we may have other criteria for "best" estimators. There is the general class of minimax estimators, and there are estimators that minimize MSE instead of variance (a little bit of bias in exchange for a whole lot less variance can be good). These estimators can be consistent because they asymptotically converge to the population values. The interpretation of the slope parameter comes from the context of the data you've collected. For instance, if $Y$ is fasting blood glucose and $X$ is the previous week's caloric intake, then the interpretation of $\beta$ in the linear model $E[Y|X] = \alpha + \beta X$ is the associated difference in fasting blood glucose comparing individuals differing by 1 kCal in weekly diet (it may make sense to standardize $X$ by a denominator of $2,000$). That is what OLS consistently estimates as $n$ increases. WRT #2: linear regression is a projection. Projecting the observed responses into the fitted space necessarily generates an additive orthogonal error component. These errors always have mean 0 and are uncorrelated with the fitted values in the sample data (their dot product sums to zero, always). This holds regardless of homoscedasticity, normality, linearity, or any of the classical assumptions of regression models.
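A quick simulation of what consistency looks like in practice for the OLS slope; the model and sample sizes here are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
beta_true = 2.0

def ols_slope(n):
    """OLS slope from a fresh sample of size n drawn from
    y = 1 + beta_true * x + noise."""
    x = rng.normal(size=n)
    y = 1.0 + beta_true * x + rng.normal(size=n)
    # OLS slope: sample Cov(x, y) / Var(x)
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Estimates concentrate around beta_true as n grows.
errors = [abs(ols_slope(n) - beta_true) for n in (100, 10_000, 1_000_000)]
```

The estimation error at $n = 10^6$ is tiny, which is exactly the $\hat{\beta} \rightarrow_p \beta$ statement made operational.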
29,506
Understanding and interpreting consistency of OLS
When we talk about consistent estimation, we mean consistency of estimating the parameters $\beta$ from a regression like $$y = \alpha + \beta x + u$$ $\newcommand{\plim}{{\rm plim}}\newcommand{\Cov}{{\rm Cov}}\newcommand{\Var}{{\rm Var}}$ We don't know the true value of the slope of $x$ in this linear model, i.e. we don't know the true value of $\beta$. This is why we estimate it in the first place. When you estimate this model and OLS is BLUE, then $$\plim\: \widehat{\beta}_{OLS} = \beta $$ This means that, as your sample size becomes larger and larger, your estimate $\widehat{\beta}$ converges to the true value $\beta$. In this case you are "consistently" estimating this parameter. If you had the entire population as a sample, you would get $\widehat{\beta} = \beta$. As concerns your (1) and (2), $\Cov(X,u) = 0$ is one of the requirements for an estimator to be best, linear and unbiased (BLUE). This doesn't mean that every estimator fulfills these requirements. If $\Cov(X,u) \neq 0$, OLS is biased (but it may still be "best", i.e. have the smallest variance, and it will be "linear"). Consider again a linear regression model: $$y = \alpha + \beta x + \gamma d + u$$ Suppose that $\gamma \neq 0$, $\Cov(x,d) \neq 0$, and that $d$ is missing from the regression, so we only regress: $$y = \alpha + \beta x + e$$ This means that $d$ is included in the error: $e = u + \gamma d$, and because $x$ is correlated with $d$, our OLS estimator is not BLUE anymore because $\Cov(x,e) \neq 0$ (since $d$ is inside $e$). Therefore, our estimate $\widehat{\beta}$ will be biased and inconsistent, with $$\plim\: \widehat{\beta} = \beta + \gamma \frac{\Cov(x,d)}{\Var(x)}$$ As the sample size gets bigger and bigger, your estimate $\widehat{\beta}$ will not converge to the true value, i.e. it is inconsistently estimated. Instead it converges to the true value plus a bias term (which depends on the size of $\gamma$, the covariance of $x$ and $d$, and the variance of $x$). 
So you see that OLS is not BLUE by definition as you describe it in point (1). It is only BLUE if it fulfills the conditions set by the Gauss-Markov theorem. Concerning point (2), if OLS satisfies these conditions, then it is a best linear predictor of the conditional expectation.
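The omitted-variable formula above is easy to verify by simulation. A sketch with made-up values $\beta = 1$, $\gamma = 0.5$, Cov(x,d) = 0.8, Var(x) = 1, so the short regression should converge to $1 + 0.5 \cdot 0.8 = 1.4$ rather than $1$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
beta, gamma = 1.0, 0.5

x = rng.normal(size=n)
d = 0.8 * x + rng.normal(size=n)   # so Cov(x, d) = 0.8, Var(x) = 1
y = beta * x + gamma * d + rng.normal(size=n)

# Short regression of y on x alone (d omitted).
beta_hat = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# beta + gamma * Cov(x, d) / Var(x)
plim_theory = beta + gamma * 0.8 / 1.0   # = 1.4
```

With this large a sample, `beta_hat` lands close to the biased limit 1.4, not the true slope 1, and growing $n$ further only tightens it around 1.4.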
29,507
Penalized methods for categorical data: combining levels in a factor
It is possible. We can use a variant of the fused lasso to accomplish this, with the estimator $$\hat{\beta} = \arg\min_{\beta} \frac{-1}{n} \sum_{i=1}^n \left(y_i \beta^T x_i - e^{\beta^T x_i} \right) + \sum_{\textrm{factors } g} \lambda_g \left(\sum_{j \in g} |\beta_j| + \frac{1}{2} \sum_{j,k \in g} |\beta_j - \beta_k| \right).$$ Note that $\frac{-1}{n} \sum_{i=1}^n \left(y_i \beta^T x_i - e^{\beta^T x_i} \right)$ is the loss function for log-linear models. The pairwise penalty $|\beta_j - \beta_k|$ encourages the coefficients within a group to be equal. Equality of coefficients is equivalent to collapsing the $j^{th}$ and $k^{th}$ levels of the factor together; when $\hat{\beta}_j=0$, it's equivalent to collapsing the $j^{th}$ level with the reference level. The tuning parameters $\lambda_g$ can be treated as a single constant, but if there are only a few factors, it could be better to tune them separately. The estimator minimizes a convex function, so it can be computed efficiently by generic solvers. If a factor has many, many levels, the pairwise differences will get out of hand---in that case, knowing more structure about possible patterns of collapse will be necessary. Note that this is all accomplished in one step! This is part of what makes lasso-type estimators so cool! Another interesting approach is to use the OSCAR estimator, which is like the above except the penalty $\|[-1 \, 1] \cdot [\beta_i \, \beta_j]'\|_1$ is replaced by $\|[\beta_i \, \beta_j]\|_\infty$.
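To make the penalty concrete, here is how the group term above evaluates for a single factor's coefficient vector (fitting the full estimator would additionally need a convex solver; this sketch only shows that collapsed levels pay no pairwise penalty):

```python
import numpy as np

def factor_penalty(beta_g, lam):
    """lam * (sum_j |beta_j| + 0.5 * sum_{j,k} |beta_j - beta_k|)
    for one factor's coefficients beta_g, as in the display above."""
    beta_g = np.asarray(beta_g, dtype=float)
    l1 = np.abs(beta_g).sum()
    # 0.5 * sum over all ordered pairs (j, k) of |beta_j - beta_k|
    pairwise = np.abs(beta_g[:, None] - beta_g[None, :]).sum() / 2
    return lam * (l1 + pairwise)

# Equal coefficients (fully collapsed levels): only the l1 term remains.
equal_levels = factor_penalty([1.0, 1.0, 1.0], lam=1.0)   # = 3.0
# A level that differs pays extra for each pairwise difference.
split_levels = factor_penalty([1.0, 0.0, 1.0], lam=1.0)   # = 4.0
```

The penalty's nondifferentiable kinks at $\beta_j = \beta_k$ and $\beta_j = 0$ are what let the minimizer land exactly on collapsed or zeroed configurations.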
29,508
Putting a prior on the concentration parameter in a Dirichlet process
I don't see how what you've written is fundamentally different from Escobar and West. \begin{eqnarray*} \pi(\alpha|t) &\propto& \pi(\alpha)\pi(t|\alpha) = \pi(\alpha)L(\alpha|t) \\ &\propto& \pi(\alpha)\alpha^t\frac{\Gamma(\alpha)}{\Gamma(\alpha+n)} \\ &\propto& \pi(\alpha)\alpha^t\frac{\Gamma(\alpha)\Gamma(n)}{\Gamma(\alpha+n)} \\ &=& \pi(\alpha)\alpha^tB(\alpha,n) \\ &=& \pi(\alpha)\alpha^{t-1}(\alpha+n)B(\alpha+1,n) \end{eqnarray*} where the second-to-last line is how you have it and the last line is how E&W have it, and they are equal since \begin{eqnarray*} \alpha B(\alpha,n) &=& \alpha \frac{\Gamma(\alpha)\Gamma(n)}{\Gamma(\alpha + n)} = \frac{\alpha\Gamma(\alpha)\Gamma(n)(\alpha+n)}{\Gamma(\alpha + n)(\alpha+n)} = (\alpha+n) \frac{\Gamma(\alpha + 1)\Gamma(n)}{\Gamma(\alpha + n + 1)} \\ &=& (\alpha+n)B(\alpha+1,n) \end{eqnarray*} recalling that $\Gamma(z+1) = z\Gamma(z)$. I'm guessing that they preferred their formulation over yours because it only has the Beta function term, not the product of a Beta and a Gamma, but I could be wrong. I didn't quite follow the last bit you've written; could you be more explicit about your sampling scheme?
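The identity $\alpha B(\alpha,n) = (\alpha+n)B(\alpha+1,n)$ is easy to sanity-check numerically (the values of $\alpha$ and $n$ below are arbitrary):

```python
from math import lgamma, exp, isclose

def beta_fn(a, b):
    # Beta function via log-gamma, for numerical stability.
    return exp(lgamma(a) + lgamma(b) - lgamma(a + b))

alpha, n = 2.5, 50
lhs = alpha * beta_fn(alpha, n)
rhs = (alpha + n) * beta_fn(alpha + 1, n)
# lhs and rhs agree to machine precision
```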
29,509
Density of robots doing random walk in an infinite random geometric graph
Here's a start. Let $r = d/2$ be the radius of the ball you're considering. First, read up on random walks: http://en.wikipedia.org/wiki/Random_walk. Assume you have only one robot, and assume your random walk is on a two-dimensional lattice. For small $t$, this is easy to compute with matrix multiplication. There are only $n = 1 + 4t + 2t(t-1)$ lattice points you can land on after $t$ steps. Let $A_t$ be the $n \times n$ adjacency matrix of these $n$ vertices. Let $e_{i,t} \in \{0,1\}^n$ be the vector of all $0$s except for a $1$ in the $i$th spot. Assume that the first row (and column) of $A_t$ corresponds to the origin. Then the number of $t$-step paths from the origin to vertex $i$ is $e_{1,t}' A_t^t e_{i,t}$ (where the prime means transpose, and $A^t = A \times A \cdots \times A$ is $A$ raised to the $t$th power); dividing by $4^t$, the total number of equally likely $t$-step paths, gives the probability of being at vertex $i$ after $t$ steps. I'm pretty sure you should be able to solve this explicitly. You can use the fact that everything the same distance from the origin in the $\cal L_1$ norm should have the same density. After that warm-up, let's move on to your original question. After $t$ steps, you only need to consider the finite graph within the radius-$r(t+1)$ ball around the origin (everywhere else has probability $0$ of being reached after only $t$ steps). Try to build the adjacency matrix of that graph and work with it in the same way as in the lattice case -- I don't know how to do this, but I would guess there's some Markov chain theory out there to help you out. One thing you can take advantage of is the fact that this distribution must be symmetric around the origin; in particular, the density is only a function of the distance from the origin. This should make things easier, so all you need to consider is the probability that you are at distance $q$ from the origin after $t$ steps. Once you solve this problem, call your density at the location $(x,y)$ after $t$ steps $f_t(x,y)$. Note that $f_t$ will be a function of $r$. 
Let $X$ be a random variable sampled from this distribution. Now you also need to consider starting with multiple robots. Supposing that multiple robots are allowed to be at the same vertex, this doesn't make it much harder than the one robot case. The robots can start uniformly on the circle, call the random variable that is sampled uniformly on this circle $U$. There will be a Poisson number of robots that you start with, let $M$ be a random variable sampled from this Poisson distribution. So the density you get from multiple robots is just $MU + X$. I think this is a reasonable start to the solution except that I didn't fully define the distribution of $X$. Good luck, and neat question.
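For the lattice warm-up, an equivalent way to get the probabilities (without building the adjacency matrix explicitly) is to iterate the one-step transition kernel directly on a grid, which amounts to the normalized matrix powers described above. A small sketch:

```python
import numpy as np

def walk_distribution(t):
    """Distribution of a simple random walk on Z^2 after t steps,
    starting at the origin; returned grid covers [-t..t] x [-t..t],
    with the origin at index (t, t)."""
    size = 2 * t + 1
    p = np.zeros((size, size))
    p[t, t] = 1.0  # all mass starts at the origin
    for _ in range(t):
        q = np.zeros_like(p)
        # each step moves up/down/left/right with probability 1/4
        q[1:, :] += 0.25 * p[:-1, :]
        q[:-1, :] += 0.25 * p[1:, :]
        q[:, 1:] += 0.25 * p[:, :-1]
        q[:, :-1] += 0.25 * p[:, 1:]
        p = q
    return p

p3 = walk_distribution(3)
```

The result exhibits the symmetry the answer mentions: the probability depends only on the position relative to the origin, and after an odd number of steps the origin itself has probability zero (parity).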
29,510
Learning from relational data
I started studying this subject by reading this paper: Macskassy, S., & Provost, F. (2003). A simple relational classifier. My advisor told me it is the simplest classification approach in relational learning he knows.
29,511
Learning from relational data
This is a good introduction book: De Raedt, Luc, ed. Logical and relational learning. Springer, 2008. Try using ACE for TILDE and WARMR.
29,512
Detecting Clusters of "similar" source codes
The obvious pre-processing step is to merge files that are truly identical. After that, the key is normalization. At some point, students will start refactoring the code, renaming variables and such, or rewording the comments. A letter histogram is too strongly affected by this (plus it will capture a lot of the language's own properties). A common technique is to use a language-specific parser to transform the source code into an abstract syntax tree, and then extract features from this; the comments can be analyzed separately in parallel. Then there's the line-based "longest common subsequence" approach: if you have a reasonably good similarity measure on single lines, you can search for the longest common subsequence of any two files, which also yields a count of matching lines to use as a similarity score.
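For the line-based approach, Python's standard library already gets you most of the way: `difflib.SequenceMatcher` over lists of lines finds longest matching blocks (closely related to, though not exactly, LCS). A sketch, with the function name and toy files my own:

```python
import difflib

def line_similarity(src_a: str, src_b: str) -> float:
    """Similarity in [0, 1] between two source files, based on the
    fraction of lines covered by longest matching blocks."""
    a, b = src_a.splitlines(), src_b.splitlines()
    return difflib.SequenceMatcher(None, a, b).ratio()

f1 = "def add(a, b):\n    return a + b\n"
f2 = "def add(a, b):\n    # sum two numbers\n    return a + b\n"
score = line_similarity(f1, f2)  # high: only a comment line was added
```

In practice you would run the normalization step first (strip comments, canonicalize identifiers) so that renamed-variable copies still score highly.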
29,513
Detecting Clusters of "similar" source codes
From the anti-plagiarism world, I previously came across the notion of "graph isomorphism". Maybe you can take a look at that too. LCS (longest common subsequence) is another possibility. But try comparing all these solutions and see which works best :)
29,514
Asynchronous (irregular) Time Series Analysis
I know of one possible solution, but it is sufficiently complicated that I'm going to take the easy option and link you to the relevant academic paper (a critically under-rated paper in my opinion): Frank de Jong, Theo Nijman (1997) "High Frequency Analysis of Lead-Lag Relationships Between Financial Markets" I'm sure more work must have been done on this problem since then. A good way to find it is to use the "citations" page on ideas.repec. A link to the relevant page for the above-mentioned paper is here. A few titles look quite relevant.
29,515
Dynamic factor analysis vs factor analysis on differences
Here goes: in my field (developmental science) we apply DFA to intensive multivariate time-series data for an individual; intensive measurement of small samples is key. DFA allows us to examine both the structure and the time-lagged relationships of latent factors. Model parameters are constant across time, so these models really assume a stationary time series (i.e., the probability distribution of the stochastic process does not change over time). However, researchers have relaxed this a bit by including time-varying covariates.

There are many ways to estimate a DFA, most of which involve block Toeplitz matrices: maximum likelihood (ML) estimation with block Toeplitz matrices (Molenaar, 1985), generalized least squares estimation with block Toeplitz matrices (Molenaar & Nesselroade, 1998), ordinary least squares estimation with lagged correlation matrices (Browne & Zhang, 2007), raw-data ML estimation with the Kalman filter (Engle & Watson, 1981; Hamaker, Dolan, & Molenaar, 2005), and the Bayesian approach (Z. Zhang, Hamaker, & Nesselroade, 2008).

In my field DFA has become an essential tool for modeling nomothetic relations at the latent level while also capturing idiosyncratic features of the manifest indicators: the idiographic filter. The P-technique was a precursor to DFA, so you might want to check that out, as well as what came after... state-space models. Any of the references above gives a nice overview of the estimation procedures.
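To make the "block Toeplitz" idea concrete, here is a small NumPy sketch that simulates a one-factor dynamic factor model (AR(1) latent factor, fixed loadings, measurement noise — all toy values chosen for illustration) and assembles the lag-0 and lag-1 covariance matrices into the block-Toeplitz matrix that the lag-based estimators fit a factor structure to. The series are mean-zero by construction, so no centering is shown.

```python
import numpy as np

rng = np.random.default_rng(0)
T, p = 500, 4                           # time points, manifest indicators
lam = np.array([0.9, 0.8, 0.7, 0.6])    # hypothetical factor loadings

# Latent AR(1) factor f_t and manifest series y_t = lam * f_t + noise
f = np.zeros(T)
for t in range(1, T):
    f[t] = 0.7 * f[t - 1] + rng.normal()
y = np.outer(f, lam) + 0.5 * rng.normal(size=(T, p))

# Lagged covariance matrices C(k) = Cov(y_t, y_{t-k}) for lags 0 and 1
C0 = y.T @ y / T
C1 = y[1:].T @ y[:-1] / (T - 1)

# 2x2 block-Toeplitz matrix: covariance of the stacked vector (y_t, y_{t-1}),
# the quantity lag-based DFA estimators (e.g. Molenaar, 1985) work with
toeplitz = np.block([[C0, C1],
                     [C1.T, C0]])
```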
29,516
Pattern of mouse (or keyboard) clicks and predicting computer user's activity
Great question that I wish I had the time to investigate myself. I am confident that it is tractable. Do you have any data?

Your signal is a multidimensional ($n$D for $n$ buttons) binary time series; each bit indicates whether or not the button is depressed. You could also incorporate the position of the cursor into the feature vector as a 2D trajectory. Presumably you have training data for each activity, so this is a classification problem. You can reduce the dimensionality by approximating and efficiently encoding the trajectory (references on request), and by taking the first difference of the mouse-click frequency (i.e., if the frequency of clicks is not changing, store zero). I would also estimate the distribution of the inter-arrival times of the clicks to see if you can classify from that.

For a jumping-off point into the literature see "Activity recognition using eye-gaze movements and traditional interactions". You should find more leads in the "ubiquitous/pervasive computing" and "human–computer interaction" communities.

To obtain data I suggest generating it yourself using a keylogger; ask for help on a forum related to computer security or hacking. Most keyloggers log the keyboard, but there might be something for the mouse too. Failing that, you could write your own software.
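The two features suggested above — inter-arrival times of clicks and the first difference of the click frequency — can be sketched in a few lines. The click timestamps here are hypothetical; in practice they would come from a logger.

```python
from collections import Counter

# Hypothetical click timestamps in seconds (in practice, read from a logger)
clicks = [0.2, 0.9, 1.0, 1.4, 3.1, 3.2, 3.3, 7.8, 8.0]

# Inter-arrival times: gaps between consecutive clicks; their empirical
# distribution is one candidate feature for the classifier
gaps = [b - a for a, b in zip(clicks, clicks[1:])]

# Clicks-per-second counts, then the first difference of that frequency
# (zero whenever the click rate is not changing)
per_sec = Counter(int(t) for t in clicks)
horizon = int(max(clicks)) + 1
freq = [per_sec.get(s, 0) for s in range(horizon)]
delta = [b - a for a, b in zip(freq, freq[1:])]
```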
29,517
Regularization $L_1$ norm and $L_2$ norm empirical study
Consider a penalized linear model. The $L_0$ penalty is rarely used in practice and is often replaced by the $L_1$ norm, which is mathematically more tractable. $L_1$ regularization produces a sparse model: only a few variables end up with a nonzero regression coefficient. It is particularly useful if you assume that only a few variables have a real impact on the output variable. If several variables are highly correlated, only one of them will typically be selected with a nonzero coefficient. The $L_2$ penalty amounts to adding a value $\lambda$ to the diagonal of $X^TX$, which makes that matrix invertible even when, for example, the number of variables is larger than the number of samples. With the $L_2$ penalty, all variables keep a nonzero regression coefficient.
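The sparsity difference can be seen concretely: for an orthonormal design, the lasso solution is a soft-thresholded version of the OLS coefficients, while ridge merely rescales them. A minimal sketch (the coefficient values and penalty level are made up for illustration):

```python
def soft_threshold(z: float, t: float) -> float:
    """Prox of the L1 penalty: argmin_b (b - z)^2 / 2 + t * |b|."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def ridge_shrink(z: float, lam: float) -> float:
    """Closed-form ridge solution for an orthonormal design: pure rescaling."""
    return z / (1.0 + lam)

ols = [2.5, 0.3, -0.1]   # hypothetical OLS coefficients (orthonormal design)
lasso = [soft_threshold(b, 0.5) for b in ols]   # -> [2.0, 0.0, 0.0]
ridge = [ridge_shrink(b, 0.5) for b in ols]     # all shrunk, none exactly zero
```

The small coefficients are set exactly to zero by the $L_1$ penalty, while ridge shrinks everything but zeroes nothing.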
29,518
Regularization $L_1$ norm and $L_2$ norm empirical study
A few additions to the answer of @Donbeo:

1) The $L_0$ "norm" is not a norm in the true sense: it simply counts the nonzero entries of a vector and is not convex (hence the scare quotes). Minimizing it is a combinatorial problem and therefore NP-hard.

2) The $L_1$ norm gives a sparse solution (look up the LASSO). There are seminal results by Candès, Donoho, etc. showing that if the true solution is really sparse, $L_1$-penalized methods will recover it. If the underlying solution is not sparse, you will not recover it in cases where p >> n. There are also nice results showing that the lasso is consistent.

3) There are methods like the elastic net by Zou and Hastie which combine the $L_2$ and $L_1$ penalties.
29,519
What is the distribution of the error around logistic growth data?
As Michael Chernick pointed out, the scaled beta distribution makes the best sense for this. However, for all practical purposes, and expecting that you will NEVER get the model perfectly right, you would be better off just modeling the mean via nonlinear regression according to your logistic growth equation and wrapping this up with standard errors that are robust to heteroskedasticity. Putting this into a maximum likelihood context would create a false feeling of great accuracy.

If ecological theory produced a distribution, you should fit that distribution. If your theory only produces a prediction for the mean, you should stick to that interpretation and not try to come up with anything more than that, like a full-blown distribution. (Pearson's system of curves was surely fancy 100 years ago, but random processes do not follow differential equations to produce density curves, which was his motivation for those curves -- rather, one would think in terms of the central limit theorem as the way distributions approximating what we see in practice come about.)

I would expect the variability to go up with $N_t$ itself -- I am thinking of the Poisson distribution as an example -- and I am not entirely sure that this effect will be captured by the scaled beta distribution; on the contrary, the variability would get compressed as you pull the mean towards its theoretical upper bound, which you may have to do.

If your measurement device has an upper bound on its measurements, that does not mean your actual process must have an upper bound; I would rather say that the measurement error introduced by your device becomes critical as the process approaches the upper bound of what can be measured reasonably accurately. If you confound the measurement with the underlying process, you should recognize that explicitly, but I would imagine you have a greater interest in the process than in describing how your device works. (The process will still be there 10 years from now; new measurement devices may become available, so that part of your work would become obsolete.)
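The point about variability growing with $N_t$ (the Poisson intuition) can be illustrated by simulation; the parameter values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
K, N0, r = 1000.0, 20.0, 0.1      # arbitrary carrying capacity, start, rate
t = np.arange(100)

# Logistic growth curve for the mean population size
mean = K * N0 * np.exp(r * t) / (K + N0 * (np.exp(r * t) - 1))

# Replicate "populations" as Poisson counts around the logistic mean
draws = rng.poisson(mean, size=(2000, t.size))

# For a Poisson, Var = mean, so the spread widens as N_t approaches K
early_var = draws[:, 5].var()
late_var = draws[:, 80].var()
```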
29,520
What is the distribution of the error around logistic growth data?
@whuber is correct that there is no necessary relationship between the structural part of this model and the distribution of the error terms. So there is no answer to your question for the theoretical error distribution. This doesn't mean it isn't a good question, though - just that the answer will have to be largely empirical.

You seem to be assuming that the randomness is additive. I see no reason (other than computational convenience) for this to be the case. An alternative is that the random element enters somewhere else in the model. For example, see the following, where randomness is introduced as Normally distributed with a mean of 1, the variance being the only thing to estimate. I have no reason for thinking this is the right thing to do other than that it produces plausible results that seem to match what you want to see. Whether it would be practical to use something like this as the basis for estimating a model I don't know.

loggrowth <- function(K, N, r, time, rand = 1) {
  # standard logistic growth solution, with a multiplicative disturbance on r
  K * N * exp(rand * r * time) / (K + N * (exp(rand * r * time) - 1))
}
plot(1:100, loggrowth(100, 20, .08, 1:100, rnorm(100, 1, 0.1)),
     type = "p", ylab = "", xlab = "time")
lines(1:100, loggrowth(100, 20, .08, 1:100))
29,521
Creating a maximum entropy Markov model from an existing multi-input maximum entropy classifier
(This really is a real question I'm facing, and the ML StackExchange site going live was pretty much perfect timing: I'd done a few days of book reading and online research and was about to start implementing. Here are my results. Although they aren't rigorous, I think they do answer my own question. I shall leave the question open for now in case anyone has any useful input, has tried something similar, or has some useful references.)

Okay, over the past couple of days I've coded this up. The code is not very efficient - lots of collection creation and copying - but the object of the exercise was to see if it would work, and how well it works.

I am splitting my data randomly into two lists: training data and test data. I run the test data through the conventional maximum entropy POS tagger and through my new MEMM tagger. Hence they see the same test data, allowing direct comparisons - due to the randomness in the data being chosen, I see some variation between tests (typically about 0.2-0.4%).

The first test uses an MEMM tagger with a single stage (i.e., a true Markov chain). This consistently performed better than the simple ME tagger by about 0.1-0.25%. Next I tried the two-stage approach, which seems like it should be more correct. However, the results were even more marginal. Often the results would be identical; occasionally it would be slightly inferior; but probably a majority of times it was slightly better (so +/-0.05%).

The MEMM tagger is slow. Okay, I haven't applied any optimizations, but the one-stage (true Markov chain) version is N times slower (where N = number of labels) because this is the number of paths transferred between each step. The two-stage implementation is N*N slower (because of the greater number of paths transferred). Although optimizations might improve things, I think this is probably too slow for most practical applications.

One thing I am trying is to apply a lower probability limit to the paths, i.e., the Viterbi paths are pruned during each iteration, with all paths below a certain probability (currently Log(total path P) < -20.0) dropped. This does run quite a bit faster, but the question remains as to whether it is worth it. I think it probably isn't.

Why do we not see any improvement? I think this is primarily due to the way POS tags behave and to the maximum entropy model. Although the model takes features based on the previous two tags, the immediately previous tag is much more important than the one before it. Intuitively this makes sense for the English language (e.g., an adjective is usually followed by a noun or another adjective, but this doesn't really depend on what came before the adjective).
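For reference, the pruning idea described above - dropping Viterbi paths whose log-probability falls too far below the current best - can be sketched as a generic beam-pruned Viterbi decoder (this is my own sketch, not the author's actual implementation):

```python
import math

def viterbi_pruned(obs, states, log_start, log_trans, log_emit, floor=-20.0):
    """Viterbi decoding with pruning: at each step, partial paths whose
    log-probability is more than `floor` below the current best are dropped."""
    # state -> (log-probability of best path ending in that state, path)
    paths = {s: (log_start[s] + log_emit[s][obs[0]], [s]) for s in states}
    for o in obs[1:]:
        best = max(lp for lp, _ in paths.values())
        survivors = {s: v for s, v in paths.items() if v[0] >= best + floor}
        nxt = {}
        for s in states:
            cands = [(lp + log_trans[prev][s] + log_emit[s][o], path + [s])
                     for prev, (lp, path) in survivors.items()]
            nxt[s] = max(cands)
        paths = nxt
    return max(paths.values())[1]
```

With `floor = -20.0` on short sequences nothing gets pruned; tightening the floor trades accuracy for speed, exactly the trade-off discussed above.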
29,522
Stratified sampling with multiple variables?
See my comment above re whether variables 2 and 3 really can be used as a basis for stratification (they can't, unless the survey you refer to there is a different survey from the one you are discussing the sampling method for now).

If you try to select your sample based on three categorical variables, you quickly end up with a lot of strata and complex sampling and weighting problems. You would need to calculate the population in each cell of a three-dimensional array, where each cell is a particular combination of the three variables, and then specify a proportion of that population to include in your survey (it doesn't need to be the same proportion for each cell). You also need to know each potential sample member's values on those three variables as part of your sample-selection process.

An alternative to using all three for sampling would be to select your sample on the basis of just one of your variables as strata, and bring the other two in through post-stratification weighting. Further, if you use the raking technique you can get around the problem of having so many "cells" in your population array, while still making sure that the weights for each total category of each variable (i.e., the marginal totals in your three-dimensional array) add up to the correct amount; this can help keep the standard errors down to a reasonable size.

If you're doing post-stratification (raking or otherwise), you still need to know the population values for your categorical variables - essential for calculating the right weights. If I'm right in my suspicion that you don't really know the population values of your variables 2 and 3 (which need to be measured by survey), your best bet will be just to stratify on the basis of previous examination results, and then calculate weights to the population based just on that variable.

I've found Thomas Lumley's survey package for R relatively straightforward to use, and it has the advantage of being free. I would say this or something equivalent is essential for decent survey analysis. It has a good website and an even better book - you probably need to get hold of the book or an equivalent for all this to make sense.
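Raking (iterative proportional fitting) itself is simple to sketch: alternately rescale the rows and columns of the sample cross-tabulation until both sets of known population margins are matched, with no need for the full joint population table. The numbers below are made up for illustration:

```python
# Sample cross-tabulation of two categorical variables, plus known population
# margins for each variable (all numbers hypothetical)
table = [[20.0, 30.0, 50.0],
         [40.0, 30.0, 30.0]]
row_targets = [120.0, 80.0]        # population totals, variable 1 categories
col_targets = [60.0, 70.0, 70.0]   # population totals, variable 2 categories

for _ in range(100):
    # Scale each row to its population margin...
    for i, row in enumerate(table):
        s = sum(row)
        table[i] = [x * row_targets[i] / s for x in row]
    # ...then each column; repeat until both sets of margins are matched
    col_sums = [sum(r[j] for r in table) for j in range(len(col_targets))]
    table = [[x * col_targets[j] / col_sums[j] for j, x in enumerate(row)]
             for row in table]
```

Dividing the fitted cell values by the original sample counts gives the raking weights; in practice you would use the `rake` function in Lumley's survey package rather than rolling your own.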
29,523
Estimate center and radius of a sphere from points on the surface
Here is some R code that shows one approach using least squares:

    # set parameters
    mu.x <- 8
    mu.y <- 13
    mu.z <- 20
    mu.r <- 5
    sigma <- 0.5

    # create data
    tmp <- matrix(rnorm(300), ncol=3)
    tmp <- tmp/apply(tmp, 1, function(x) sqrt(sum(x^2)))
    r <- rnorm(100, mu.r, sigma)
    tmp2 <- tmp*r
    x <- tmp2[,1] + mu.x
    y <- tmp2[,2] + mu.y
    z <- tmp2[,3] + mu.z

    # function to minimize
    tmpfun <- function(pars) {
      x.center <- pars[1]
      y.center <- pars[2]
      z.center <- pars[3]
      rhat <- pars[4]
      r <- sqrt( (x-x.center)^2 + (y-y.center)^2 + (z-z.center)^2 )
      sum( (r-rhat)^2 )
    }

    # run optim
    out <- optim( c(mean(x), mean(y), mean(z), diff(range(x))/2), tmpfun )
    out

    # now try a hemisphere (harder problem)
    tmp <- matrix(rnorm(300), ncol=3)
    tmp[,1] <- abs(tmp[,1])
    tmp <- tmp/apply(tmp, 1, function(x) sqrt(sum(x^2)))
    r <- rnorm(100, mu.r, sigma)
    tmp2 <- tmp*r
    x <- tmp2[,1] + mu.x
    y <- tmp2[,2] + mu.y
    z <- tmp2[,3] + mu.z
    out <- optim( c(mean(x), mean(y), mean(z), diff(range(y))/2), tmpfun )
    out

If you don't use R then you should still be able to follow the logic and translate it into another language. Technically the radius parameter should be bounded below by 0, but if the variability is small relative to the true radius then the unbounded method should work fine; optim also has options for doing bounded optimization (or you could just take the absolute value of the radius in the function to minimize).
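For readers outside R, the problem can also be linearised and solved in closed form rather than by a general optimiser: expanding |p - c|^2 = r^2 gives 2 p.c + (r^2 - |c|^2) = |p|^2, which is linear in the unknowns. A sketch in Python/NumPy (simulated data mirroring the R example, with smaller noise for the illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# simulate noisy points on a sphere of radius 5 centred at (8, 13, 20)
center_true, r_true = np.array([8.0, 13.0, 20.0]), 5.0
d = rng.normal(size=(100, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)          # uniform directions
pts = center_true + d * rng.normal(r_true, 0.1, size=(100, 1))

# linear system: 2*p.c + t = |p|^2, where t = r^2 - |c|^2
A = np.hstack([2 * pts, np.ones((len(pts), 1))])
b = (pts ** 2).sum(axis=1)
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
center = sol[:3]
radius = np.sqrt(sol[3] + center @ center)
```

This minimises a slightly different (algebraic) criterion than the geometric least squares above, but for small noise the two estimates agree closely.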
29,524
Estimate center and radius of a sphere from points on the surface
You may be interested in the best-fit d-dimensional sphere, i.e. the one minimizing the variance of the population of squared distances to the center; it has a simple analytical solution (matrix calculus): see the appendix of the open-access paper of Cerisier et al. in J. Comput. Biol. 24(11), 1134-1137 (2017), https://doi.org/10.1089/cmb.2017.0061 It works when the data points are weighted (it works even for continuous distributions; as a by-product, when d=1, a well-known inequality is retrieved: the kurtosis is always greater than the squared skewness plus 1).
29,525
How should one control for group and individual differences in pre-treatment scores in a randomised controlled trial?
{I'm cheating, adding a comment too long for the comment box.} Thanks for your explication. Sounds as if you've found some great sources, and done a lot to extract good lessons from them. There are other sources worth reading, e.g., a chapter in Cook and Campbell's Quasi-Experimentation; a section in Geoffrey Keppel's Design and Analysis; and I think at least one article by Donald Rubin.

I'll also offer a lesson I've gleaned (paraphrased) from Damian Betebenner's work on student test scores:

Is it reasonable to expect that no improvement would occur absent a certain intervention? If so, it makes sense to analyze gain scores, as with analysis of variance.

Is it instead reasonable to think that all students would improve to some degree even without the intervention, and that their posttest score could be predicted as a linear function of their pretest score? If so, analysis of covariance would make sense.

from ANOVA/ANCOVA Flow Chart

Also, perhaps you know this, but Lord's Paradox, referred to by Betebenner, involves the possibility of obtaining, with the same data, a result of zero mean difference using one of these two methods but a significant difference using the other. My take, based on readings perhaps more limited than yours, is that both methods have a place, and that Everitt and perhaps also Gelman, great as they are, are in this case taking too hard a line.
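The two choices can be made concrete with a small simulation (a sketch, not taken from any of the sources above): analyse gain scores directly, or regress the posttest on group and pretest (ANCOVA). With randomised groups both are unbiased for the treatment effect; ANCOVA is typically the more precise:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
group = np.repeat([0, 1], n)                  # 0 = control, 1 = treatment
pre = rng.normal(50, 10, size=2 * n)
# everyone improves somewhat; treatment adds 5 points on top
post = 5 + 0.8 * pre + 5 * group + rng.normal(0, 2, size=2 * n)

# gain-score analysis: mean difference in (post - pre) between groups
gain = post - pre
gain_effect = gain[group == 1].mean() - gain[group == 0].mean()

# ANCOVA: post ~ 1 + group + pre; treatment effect is the group coefficient
X = np.column_stack([np.ones(2 * n), group, pre])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
ancova_effect = beta[1]
```

Both estimates should land near the true effect of 5 here; the interesting cases (Lord's Paradox) arise when group membership is related to the pretest.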
29,526
Case weighted logistic regression
glm has a weights parameter exactly for this purpose. You provide it with a vector of numbers, on any scale, with one weight for each observation. I only now realize that you may not be talking about R. If not, you might want to.
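For what it's worth, the same thing can be implemented from scratch in any language. A hedged Python sketch of case-weighted logistic regression via iteratively reweighted least squares (Newton's method), checked against the equivalent "duplicate the upweighted rows" fit:

```python
import numpy as np

def weighted_logit(X, y, w, iters=50):
    """Case-weighted logistic regression by IRLS; w plays the role
    of glm's `weights` argument (one weight per observation)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = w * p * (1 - p)                    # IRLS working weights
        grad = X.T @ (w * (y - p))             # weighted score
        H = (X * W[:, None]).T @ X             # weighted information
        beta = beta + np.linalg.solve(H, grad)
    return beta

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(300), rng.normal(size=300)])
y = (rng.random(300) < 1 / (1 + np.exp(-(0.3 + 1.2 * X[:, 1])))).astype(float)

# a weight of 2 on the first 100 cases == duplicating those rows
w = np.where(np.arange(300) < 100, 2.0, 1.0)
b_weighted = weighted_logit(X, y, w)
idx = np.concatenate([np.arange(100), np.arange(300)])
b_duplicated = weighted_logit(X[idx], y[idx], np.ones(len(idx)))
```

The two coefficient vectors coincide because integer case weights and row duplication give identical likelihoods (note that standard errors would differ from a genuinely larger sample).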
29,527
Case weighted logistic regression
If you have access to SAS, this is very easily accomplished using PROC GENMOD. As long as each observation has a weight variable, the weight statement will allow you to perform the kind of analysis you're looking for. I've mostly used it with Inverse-Probability-of-Treatment weights, but I see no reason why you couldn't assign weights to your data to emphasize certain types of cases, so long as you make sure your N remains constant. You'll also want to make sure to include some sort of ID variable, because technically the upweighted cases are repeated observations. Example code, with an observation ID of 'id' and a weight variable of 'wt':

    proc genmod data=work.dataset descending;
      class id;
      model exposure = outcome covariate / dist=bin link=logit;
      weight wt;
      repeated subject=id/type=ind;
    run;
29,528
Does there exist a model fit statistic (like AIC or BIC) that can be used for absolute instead of just relative comparisons?
In line with what Macro suggested, I think the term you are looking for is a performance measure. Though it is not a safe way to assess predictive power, it is a very useful way to compare the fitting quality of various models. An example measure would be the Mean Absolute Percentage Error (MAPE), but more of them can easily be found. Suppose you use SetA with modelA to describe the number of holes in a road, and you use SetB with modelB to describe the number of people in a country; then of course you cannot say that one model is better than the other, but you can at least see which model provides a more accurate description of its own data.
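For instance, a minimal MAPE implementation (it assumes no actual value is zero, since each error is scaled by the actual):

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

# errors of 10%, 5% and 0% average to 5%
print(mape([100, 200, 400], [110, 190, 400]))  # prints 5.0
```

Because the result is a unitless percentage, it can be compared across models fitted to entirely different data sets, which is what makes it an "absolute" rather than relative criterion.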
29,529
Does there exist a model fit statistic (like AIC or BIC) that can be used for absolute instead of just relative comparisons?
There are some newish papers exploring exactly what you are looking for, I think; Nakagawa and Schielzeth (2013) present an R² statistic for mixed-effects models, called R²GLMM, to quantify the amount of explained variance in a model. The conditional R²GLMM is interpreted as the variance explained by both fixed and random factors; the marginal R²GLMM represents the variance explained by the fixed factors alone. In 2014, Johnson updated the equation to account for random-slopes models. Happily, you can easily calculate both marginal and conditional R²GLMM using the package MuMIn in R (Barton, 2015).
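Given the variance components of a fitted model, the two statistics are simple ratios. A sketch of the Gaussian (identity-link) case from Nakagawa and Schielzeth's formulation; note that the distribution-specific variants for other GLMM families add a further distribution variance term to the denominator:

```python
def r2_glmm(var_fixed, var_random, var_resid):
    """Nakagawa-Schielzeth R² for a Gaussian mixed model, given:
    var_fixed  - variance of the fixed-effect predictions,
    var_random - list of random-effect variance components,
    var_resid  - residual variance."""
    total = var_fixed + sum(var_random) + var_resid
    marginal = var_fixed / total                       # fixed effects only
    conditional = (var_fixed + sum(var_random)) / total  # fixed + random
    return marginal, conditional

# hypothetical variance components from a fitted model
m, c = r2_glmm(var_fixed=1.2, var_random=[0.5], var_resid=0.8)
```

In practice MuMIn's r.squaredGLMM() extracts these components from the fitted model object for you.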
29,530
Density-based spatial clustering of applications with noise (DBSCAN) clustering in R
I'm still stuck with this problem. I have received some suggestions from the R mailing list (thanks to Christian Hennig) that I attach here:

Have you considered the dbscan function in library fpc, or was it another one? The fpc::dbscan() function doesn't have a "distance" parameter but several options, one of which may resolve your memory problem (look up the documentation of the "memory" parameter). Using a distance matrix for hundreds of thousands of points is a recipe for disaster (memory-wise). I'm not sure whether the function that you used did that, but fpc::dbscan() can avoid it.

It is true that fpc::dbscan() requires tuning constants that the user has to provide. There is unfortunately no general rule for how to do this; it would be necessary to understand the method and the meaning of the constants, and how this translates into the requirements of your application. You may try several different choices and do some cluster validation to see what works, but I can't explain this in general terms easily via email.

I have made some attempts with my data but without any success:

"Yes, I have tried dbscan from fpc but I'm still stuck on the memory problem. Regarding your answer, I'm not sure which memory parameter I should look at. Following is the code I tried with dbscan parameters; maybe you can see if there is any mistake.

    > sstdat=read.csv("sst.dat", sep=";", header=F, col.names=c("lon","lat","sst"))
    > library(fpc)
    > sst1=subset(sstdat, sst<50)
    > sst2=subset(sst1, lon>-6)
    > sst2=subset(sst2, lon<40)
    > sst2=subset(sst2, lat<46)
    > dbscan(sst2$sst, 0.1, MinPts = 5, scale = FALSE, method = c("hybrid"),
        seeds = FALSE, showplot = FALSE, countmode = NULL)
    Error: no se puede ubicar un vector de tamaño 858.2 Mb
    [i.e., cannot allocate a vector of size 858.2 Mb]
    > head(sst2)
           lon   lat   sst
    1257 35.18 24.98 26.78
    1258 35.22 24.98 26.78
    1259 35.27 24.98 26.78
    1260 35.31 24.98 26.78
    1261 35.35 24.98 26.78
    1262 35.40 24.98 26.85

In this example I only apply dbscan() to temperature values, not lon/lat, so the eps parameter is 0.1. As it is a gridded data set, any point is surrounded by eight data points, so I thought that at least 5 of the surrounding points should be within the reachability distance. But I'm not sure I'm getting the right approach by only considering the temperature value; maybe I'm missing spatial information. How should I deal with longitude and latitude data? The dimensions of sst2 are 152243 rows x 3 columns."

I share these mail messages here in case any of you can shed some light on R and DBSCAN. Thanks again
29,531
Density-based spatial clustering of applications with noise (DBSCAN) clustering in R
The problem here is with R. For DBSCAN to be effective, you need an appropriate index structure (one that matches your distance). R, however, doesn't really do indexing. Additionally, the fpc package is a minimalistic implementation of DBSCAN, offering only a small part of its functionality. As for the distance function, this is where your "domain knowledge" is needed. If you have a flexible enough DBSCAN implementation (it is really easy to implement; the index to make it faster than $O(n^2)$ is much harder!) you should be able to plug in an arbitrary distance. You can even use two distance functions and epsilon values: points must be at most $10 km$ away, and the difference in temperature must be less than $1 K.$ Look at "Generalized DBSCAN" for the general principles that DBSCAN needs: a notion of "neighborhood" and a notion of "core points" (or "density").
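To illustrate the generalized-DBSCAN idea, here is a naive $O(n^2)$ Python sketch with exactly that dual neighbourhood: a point's neighbours must be within a spatial radius AND within a temperature tolerance (toy data, illustrative thresholds, no spatial index):

```python
def dbscan_dual(points, eps_space, eps_temp, min_pts):
    """Naive generalized DBSCAN over (x, y, temperature) tuples.
    Returns a cluster label per point; -1 marks noise."""
    def neighbours(i):
        xi, yi, ti = points[i]
        return [j for j, (x, y, t) in enumerate(points)
                if (x - xi) ** 2 + (y - yi) ** 2 <= eps_space ** 2
                and abs(t - ti) <= eps_temp]

    labels = [None] * len(points)      # None = unvisited
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1             # noise (may later become a border point)
            continue
        cluster += 1
        labels[i] = cluster
        while seeds:                   # expand the cluster from core points
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster    # border point reached from a core
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nj = neighbours(j)
            if len(nj) >= min_pts:     # j is itself a core point
                seeds.extend(nj)
    return labels

# two tight clusters with distinct temperatures, plus one outlier
pts = [(0, 0, 20.0), (0.1, 0, 20.1), (0, 0.1, 20.0), (0.1, 0.1, 20.1),
       (5, 5, 25.0), (5.1, 5, 25.1), (5, 5.1, 25.0), (5.1, 5.1, 25.1),
       (10, 10, 30.0)]
labels = dbscan_dual(pts, eps_space=0.5, eps_temp=0.5, min_pts=3)
```

For a real data set of 150k points you would replace the linear neighbourhood scan with a spatial index (e.g. a grid or k-d tree), which is the hard part the answer alludes to.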
29,532
How to summarize and compare non-linear relationships?
Check out Generalized Additive Models, which permit fitting non-linear functions without a priori specification of the non-linear form. I'm not sure how one would go about comparing the subsequent fits, however. Another similar approach (in that I believe they both employ cubic splines) is Functional Data Analysis, where I understand there are methods for characterizing differences between fitted functions.
29,533
How to summarize and compare non-linear relationships?
For comparison's sake, it will be helpful to parametrize the relationship between OM (organic matter) and SED (sediment depth) similarly across lakes -- so that you are estimating the same model for each lake. That way, you can directly compare coefficient estimates. If you limit potential nonlinear relationships to an order-two polynomial (quadratic), then it is as simple as adding a second term to a linear model:

    OM = beta_0 + beta_1 * SED + beta_2 * (SED^2)

You could then do a t-test to see if the coefficients of two lakes are equal -- to each other, or to zero, depending on the questions you are trying to answer. You stated your question as: "I am interested in comparing how lakes differ in the relationship between percent organic matter and sediment depth (i.e., slope)." If you word your question more specifically, this will aid in selecting the right approach. Why would the relationship between OM and SED differ across lakes? Is there some other observable that would explain the differing relationship? If so, you might want to include this explanatory variable in your model, via an interaction term or otherwise. Without more information on the specific question you are trying to answer -- other than "is the relationship between OM and SED the same across lakes?" -- it is difficult to suggest a more specific approach.
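A sketch of that workflow (hypothetical data; plain least squares in Python, with an approximate z statistic for equality of the two lakes' quadratic coefficients):

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_quadratic(sed, om):
    """OLS fit of OM = b0 + b1*SED + b2*SED^2; returns (coeffs, std errors)."""
    X = np.column_stack([np.ones_like(sed), sed, sed ** 2])
    beta, *_ = np.linalg.lstsq(X, om, rcond=None)
    resid = om - X @ beta
    sigma2 = resid @ resid / (len(sed) - 3)            # residual variance
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se

# made-up data: two lakes sharing depths but with different curvature
sed = rng.uniform(0, 10, 80)
om_a = 2 + 1.5 * sed - 0.10 * sed ** 2 + rng.normal(0, 0.3, 80)
om_b = 2 + 1.5 * sed - 0.02 * sed ** 2 + rng.normal(0, 0.3, 80)
beta_a, se_a = fit_quadratic(sed, om_a)
beta_b, se_b = fit_quadratic(sed, om_b)

# approximate z statistic for H0: the quadratic terms are equal
z = (beta_a[2] - beta_b[2]) / np.sqrt(se_a[2] ** 2 + se_b[2] ** 2)
```

The same comparison applies to the linear terms, or jointly via an interaction model pooling both lakes.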
29,534
Generating random vectors with constraints
If I understand you correctly, only points in some small volume of n-dimensional space meet your constraints. Your first constraint confines them to the interior of a hypersphere, which reminds me of the comp.graphics.algorithms FAQ "Uniform random points on sphere" and How to generate uniformly distributed points in the 3-d unit ball? The second constraint slices a bit out of the hypersphere, and the other constraints further whittle away at the volume that meets your constraints. I think the simplest thing to do is one of the approaches suggested by the FAQ: choose some arbitrary axis-aligned bounding box that we are sure contains the entire volume. In this case, -c < a_1 < c, -c < a_2 < c, ... -c < a_n < c contains the entire constrained volume, since it contains the hypersphere described by the first constraint, and the other constraints only whittle away at that volume. The algorithm uniformly picks points throughout that bounding box. In this case, the algorithm independently sets each coordinate of a candidate vector to an independent uniformly distributed random number from -c to +c. (I am assuming you want points distributed with equal density throughout this volume. I suppose you could make the algorithm select some or all coordinates with a Poisson distribution or some other non-uniform distribution, if you had some reason to do that.) Once you have a candidate vector, check each constraint. If it fails any of them, go back and pick another point. Once you have a candidate vector that passes every constraint, store it somewhere for later use. If you don't have enough stored vectors, go back and try to generate another one. With a sufficiently high-quality random number generator, this gives you a set of stored coordinates that meet your criteria with (expected) uniform density. 
Alas, if you have a relatively high dimensionality n (i.e., if you construct each vector out of a relatively long list of coordinates), the inscribed sphere (much less your whittled-down volume) occupies a surprisingly small fraction of the bounding box's volume, so the algorithm might need to execute many iterations, most of them generating rejected points outside your constrained area, before finding a point inside it. Since computers these days are pretty fast, will that be fast enough?
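The accept/reject loop described above can be sketched in a few lines. This is a minimal illustration, not code from the answer: the constraint functions (`inside_sphere`, `first_positive`) and the helper name `sample_constrained` are made up for the example; in practice you would plug in your own constraint predicates.

```python
import random

def sample_constrained(n, c, constraints, max_tries=1_000_000):
    """Rejection sampling: draw candidates uniformly from the bounding
    box [-c, c]^n and return the first one satisfying every constraint."""
    for _ in range(max_tries):
        candidate = [random.uniform(-c, c) for _ in range(n)]
        if all(ok(candidate) for ok in constraints):
            return candidate
    raise RuntimeError("acceptance region too small for rejection sampling")

# Example constraints: inside the unit hypersphere, first coordinate positive.
inside_sphere = lambda a: sum(x * x for x in a) <= 1.0
first_positive = lambda a: a[0] > 0.0

random.seed(0)
pts = [sample_constrained(3, 1.0, [inside_sphere, first_positive])
       for _ in range(100)]
```

Because accepted points are uniform on the bounding box restricted to the acceptance region, they are uniform on that region; the cost is the expected number of tries, which grows as the acceptance region's share of the box shrinks (badly so in high dimension, as the answer warns).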
29,535
Posterior variance vs variance of the posterior mean
There is no particular reason that $\pi(\theta|\mathbf{y})$ should look anything like $f(\hat{\theta}|\theta)$ as functions of $\theta$. The latter is a sampling distribution for the parameter estimator $\hat{\theta}$, which may have a substantially different form to the likelihood function for the data $\mathbf{y}$. Moreover, the parameter estimator in Bayesian analysis is hardly ever unbiased, since it incorporates a prior distribution. For those reasons I think it is extremely unlikely that the integrands in these variance equations are going to "look like" each other, except perhaps in unusual special cases. 
1. Does it even make sense to compare these quantities? What kind of information can be gained from comparing them? 
Since these are variances for entirely different quantities, conditional on different things, any useful comparison is probably going to come through some corresponding interval estimator for $\theta$. The posterior standard deviation $\mathbb{S}(\theta|\mathbf{y})$ ought to give you a rough idea of the width of a credible interval for $\theta$, whereas the standard error $\mathbb{S}(\hat{\theta}|\theta)$ ought to give you a rough idea of the width of a confidence interval for $\theta$. Consequently, if you were willing to discard "shape information" for the distributions then you could reasonably say that comparing the two variances gives you an idea of the relative accuracy of the credible interval versus the confidence interval. This would be a bit tentative in my view, but it might be possible under some simplifying assumptions. 
2. Can anybody point me to some references dealing with this specific issue? (I've tried searching this, but can't find anything on this site or otherwise dealing with this specific issue.) 
I'm not familiar with any literature on this topic, but perhaps you could search for comparisons of accuracy/width of credible intervals and confidence intervals. 
If there is any literature on that subject then I imagine it will involve these two variance quantities.
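To make the two quantities concrete, here is an illustrative calculation (not from the answer) in the conjugate normal model, where both have closed forms: $X_i \sim N(\theta, \sigma^2)$ with $\sigma$ known and prior $\theta \sim N(\mu_0, \tau^2)$. The numbers chosen below are arbitrary.

```python
import math

# Conjugate normal model: X_i ~ N(theta, sigma^2), sigma known,
# prior theta ~ N(mu0, tau^2).  Arbitrary illustrative values:
sigma, tau, n = 2.0, 1.0, 25

# Posterior variance of theta given the data (precisions add):
post_var = 1.0 / (n / sigma**2 + 1.0 / tau**2)

# Sampling variance of the usual estimator (the sample mean) given theta:
samp_var = sigma**2 / n

print(math.sqrt(post_var), math.sqrt(samp_var))
```

With any proper normal prior the posterior variance is strictly smaller than $\sigma^2/n$, so in this special case the rough credible-interval width is narrower than the rough confidence-interval width; in less tidy models the comparison can go either way.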
29,536
Can we estimate the mean of an asymmetric distribution in an unbiased and robust manner?
As already said by whuber, one way to answer your question is to de-bias your estimator. If the robust estimator is biased, maybe you can subtract the theoretical bias (according to a parametric model); there is some work that tries to do that, or to subtract an approximation of the bias (I don't remember a reference but I could search for one if you are interested). For instance, think about the empirical median in an exponential model. We can compute its expectation and then subtract this expectation; if you want, I can show the computations, which are rather simple. This becomes more difficult if the estimator is more complicated than the median, and it works only in parametric models. 
A maybe less ambitious question is whether we can construct a consistent robust estimator. This we can do, but we have to be careful about what we call robust. If your definition of robust is having a non-zero asymptotic breakdown point, then we can already prove that this is impossible. Suppose that your estimator is called $T_n$ and it converges to $\mathbb{E}[X]$. $T_n$ has a non-zero breakdown point, which means that a portion $\varepsilon>0$ of the data can be arbitrarily bad and nonetheless $T_n$ will not be arbitrarily large. But this can't be, because at the limit, if a portion of the data consists of outliers, this translates to: with probability $1-\varepsilon$, $X$ is sampled from the target distribution $P$, and with probability $\varepsilon$, $X$ is arbitrary; but this makes $\mathbb{E}[X]$ arbitrary as well (if you want me to put it formally, I can), which contradicts the non-zero breakdown point of $T_n$. 
Finally, to conclude on this, we can take the non-asymptotic point of view, saying that we don't care about the asymptotic breakdown point; what is important is either the non-asymptotic breakdown point (something like a breakdown point of $1/\sqrt{n}$), or being efficient on heavy-tailed data. 
In this case, there are estimators that are robust and consistent estimators of $\mathbb{E}[X]$. For instance, we can use Huber's estimator with a parameter that goes to infinity, or we can use the median-of-means estimator with a number of blocks that tends to infinity. References for this line of thought are "Challenging the empirical mean and empirical variance: A deviation study" by Olivier Catoni and "Sub-Gaussian mean estimators" by Devroye et al. (these references are from the theoretical community; they may be hard going if you are not familiar with empirical processes and concentration inequalities).
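The median-of-means estimator mentioned above is simple to sketch: split the sample into blocks, average each block, and take the median of the block means. The toy data and block count below are illustrative, not from the answer.

```python
import statistics

def median_of_means(data, n_blocks):
    """Median-of-means: split the sample into n_blocks equal blocks,
    average each block, return the median of the block means."""
    m = len(data) // n_blocks
    block_means = [statistics.fmean(data[i * m:(i + 1) * m])
                   for i in range(n_blocks)]
    return statistics.median(block_means)

clean = list(range(100))         # overall mean 49.5
dirty = [1e9] + clean[1:]        # one gross outlier

print(median_of_means(clean, 5))     # 49.5
print(statistics.fmean(dirty))       # ~1e7, destroyed by the outlier
print(median_of_means(dirty, 5))     # 69.5, shifted by at most one block mean
```

A single outlier can corrupt only one block, so it moves the estimate by at most one block mean, whereas it drags the plain sample mean arbitrarily far; letting the number of blocks grow with $n$ gives the consistency discussed in the text.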
29,537
Can we estimate the mean of an asymmetric distribution in an unbiased and robust manner?
This is not an unbiased estimate, but it is consistent (you can let the bias approach zero as the sample size grows). You can take a trimmed sample (remove the highest and lowest values) and use the mean of the trimmed sample as the estimate. In the case of a known distribution you might use an appropriate scaling to make the estimate less biased (or not biased at all); otherwise the bias will just decrease when you take larger samples.
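A minimal sketch of the trimmed mean described above (the function name and toy data are illustrative):

```python
def trimmed_mean(data, k):
    """Drop the k smallest and k largest observations, average the rest."""
    if 2 * k >= len(data):
        raise ValueError("trimming removes the whole sample")
    kept = sorted(data)[k:len(data) - k]
    return sum(kept) / len(kept)

# One extreme value barely moves the trimmed mean:
print(trimmed_mean([1, 2, 3, 4, 100], k=1))   # mean of (2, 3, 4) = 3.0
```

Keeping the trimming count `k` fixed while the sample grows is one way to make the bias shrink, at the cost of less protection against a growing number of outliers.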
29,538
Calculating R-squared using standard errors
(Note: This is (meant as) a self-answer.) 
Update 1: Using https://people.duke.edu/~rnau/mathreg.htm, that is, the formulas $SE_R = \sqrt{1 - \bar{R}^2} \cdot \hat{\sigma}_y$ and $\bar{R}^2 = 1 - \frac{N-1}{N-k-1}(1 - R^2)$, I do arrive at $R^2 \approx 0.4491$, but it seems to me that this result can be reached more easily (that is, without computing $\bar{R}^2$), but how? 
Update 2: Yes, indeed, there is. We already know that $SSR = 39.3601$, so in order to compute $R^2$ using the simple formula $R^2 = 1 - \frac{SSR}{SST}$ we only have to determine $SST$. We have that $\hat{\sigma}_y = 0.8861$ (sample standard deviation of the dependent variable), so $MST = \hat{\sigma}_y^2 \approx 0.7852$ (sample variance) and it then follows that $SST = 91 \cdot MST \approx 71.4508$ (total sum of squares) and ultimately that $R^2 \approx 0.4491$ (which was the correct answer). 
Remaining question: Can the formula $SE_R = (1-R^2)\hat{\sigma}_y$ also be used to calculate $R^2$? If so, what goes wrong at $(*)$? 
Solution: The correct formula is $\hat{\sigma}_{\hat{u},\color{red}{unbiased}} = \sqrt{1-R^2}\,\hat{\sigma}_y$. It is important to realize that $SE_R = \hat{\sigma}_{\hat{u},\color{red}{biased}}$! We can determine $\hat{\sigma}_{\hat{u},unbiased}$ by using the formula $SE_R = \hat{\sigma}_{\hat{u},unbiased} \cdot \sqrt{\frac{N-1}{N-k-1}}$ (i.e. by performing a bias correction), which yields $\hat{\sigma}_{\hat{u},unbiased} \approx 0.6577$. Finally, using $\hat{\sigma}_{\hat{u},unbiased} = \sqrt{1-R^2}\,\hat{\sigma}_y$ we find that $R^2 \approx 0.4491$ (which was the correct answer).
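The arithmetic can be checked in a few lines using the quantities quoted in the post ($SSR = 39.3601$, $\hat{\sigma}_y = 0.8861$, $N - 1 = 91$); this is just a numeric verification sketch, not part of the original answer.

```python
import math

# Quantities quoted in the post
SSR = 39.3601        # residual sum of squares
sigma_y = 0.8861     # sample standard deviation of the dependent variable
N = 92               # so SST uses N - 1 = 91

SST = (N - 1) * sigma_y**2   # total sum of squares, ~71.45
R2 = 1 - SSR / SST           # ~0.4491

# Consistency check: sqrt(1 - R^2) * sigma_y recovers sigma_u ~ 0.6577
sigma_u = math.sqrt(1 - R2) * sigma_y
print(round(R2, 4), round(sigma_u, 4))
```

Both numbers match the values quoted in the post, which confirms that the square root belongs in the relation between the residual standard deviation and $R^2$.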
29,539
Finding MLE and MSE of $\theta$ where $f_X(x\mid\theta)=\theta x^{−2} I_{x\geq\theta}(x)$
This question is now old enough to give a full succinct solution confirming your calculations. Using standard notation for order statistics, the likelihood function here is: $$\begin{aligned} L_\mathbf{x}(\theta) &= \prod_{i=1}^n f_X(x_i|\theta) \\[6pt] &= \prod_{i=1}^n \frac{\theta}{x_i^2} \cdot \mathbb{I}(x_i \geqslant \theta) \\[6pt] &\propto \prod_{i=1}^n \theta \cdot \mathbb{I}(x_i \geqslant \theta) \\[12pt] &= \theta^n \cdot \mathbb{I}(0 < \theta \leqslant x_{(1)}). \\[6pt] \end{aligned}$$ This function is strictly increasing over the range $0 < \theta \leqslant x_{(1)}$ so the MLE is: $$\hat{\theta} = x_{(1)}.$$ Mean-squared-error of MLE: Rather than deriving the distribution of the estimator, it is quicker in this case to derive the distribution of the estimation error. Define the estimation error as $T \equiv \hat{\theta} - \theta$ and note that it has distribution function: $$\begin{aligned} F_T(t) \equiv \mathbb{P}(\hat{\theta} - \theta \leqslant t) &= 1-\mathbb{P}(\hat{\theta} > \theta + t) \\[6pt] &= 1-\prod_{i=1}^n \mathbb{P}(X_i > \theta + t) \\[6pt] &= 1-(1-F_X(\theta + t))^n \\[6pt] &= \begin{cases} 0 & & \text{for } t < 0, \\[6pt] 1 - \Big( \frac{\theta}{\theta + t} \Big)^n & & \text{for } t \geqslant 0. \\[6pt] \end{cases} \end{aligned}$$ Thus, the density has support over $t \geqslant 0$, where we have: $$\begin{aligned} f_T(t) \equiv \frac{d F_T}{dt}(t) &= - n \Big( - \frac{\theta}{(\theta + t)^2} \Big) \Big( \frac{\theta}{\theta + t} \Big)^{n-1} \\[6pt] &= \frac{n \theta^n}{(\theta + t)^{n+1}}. 
\\[6pt] \end{aligned}$$ Assuming that $n>2$, the mean-squared error of the estimator is therefore given by: $$\begin{aligned} \text{MSE}(\hat{\theta}) = \mathbb{E}(T^2) &= \int \limits_0^\infty t^2 \frac{n \theta^n}{(\theta + t)^{n+1}} \ dt \\[6pt] &= n \theta^n \int \limits_0^\infty \frac{t^2}{(\theta + t)^{n+1}} \ dt \\[6pt] &= n \theta^n \int \limits_\theta^\infty \frac{(r-\theta)^2}{r^{n+1}} \ dr \\[6pt] &= n \theta^n \int \limits_\theta^\infty \Big[ r^{-(n-1)} - 2 \theta r^{-n} + \theta^2 r^{-(n+1)} \Big] \ dr \\[6pt] &= n \theta^n \Bigg[ -\frac{r^{-(n-2)}}{n-2} + \frac{2 \theta r^{-(n-1)}}{n-1} - \frac{\theta^2 r^{-n}}{n} \Bigg]_{r = \theta}^{r \rightarrow \infty} \\[6pt] &= n \theta^n \Bigg[ \frac{\theta^{-(n-2)}}{n-2} - \frac{2 \theta^{-(n-2)}}{n-1} + \frac{\theta^{-(n-2)}}{n} \Bigg] \\[6pt] &= n \theta^2 \Bigg[ \frac{1}{n-2} - \frac{2}{n-1} + \frac{1}{n} \Bigg] \\[6pt] &= \theta^2 \cdot \frac{n(n-1) - 2n(n-2) + (n-1)(n-2)}{(n-1)(n-2)} \\[6pt] &= \theta^2 \cdot \frac{n^2 - n - 2n^2 + 4n + n^2 - 3n + 2}{(n-1)(n-2)} \\[6pt] &= \frac{2\theta^2}{(n-1)(n-2)}. \\[6pt] \end{aligned}$$
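The closed form $\text{MSE}(\hat{\theta}) = 2\theta^2/((n-1)(n-2))$ can be checked by simulation. Since $F_X(x) = 1 - \theta/x$ for $x \geqslant \theta$, inverse-CDF sampling gives $X = \theta/U$ with $U \sim \text{Uniform}(0,1)$. This is an illustrative Monte Carlo sketch, not part of the derivation.

```python
import random

# Monte Carlo check of MSE(theta_hat) = 2 theta^2 / ((n-1)(n-2)).
random.seed(1)
theta, n, reps = 1.0, 10, 200_000

sq_errors = []
for _ in range(reps):
    # 1 - random.random() lies in (0, 1], avoiding division by zero
    sample = [theta / (1.0 - random.random()) for _ in range(n)]
    theta_hat = min(sample)               # the MLE x_(1) derived above
    sq_errors.append((theta_hat - theta) ** 2)

mc_mse = sum(sq_errors) / reps
exact = 2 * theta**2 / ((n - 1) * (n - 2))   # = 1/36 for n = 10
print(mc_mse, exact)
```

With $n = 10$ the exact value is $2/72 \approx 0.0278$, and the simulated mean squared error should land within simulation noise of it.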
29,540
Correlation of non-stationary time series
Find correlation between two time series. Theory and practice (R) is a good place to start your education. Note the discussion that points to the flaw of interpreting (not computing!) correlation coefficients when you have auto-correlated data, as you do. This problem was recognized for time series as early as 1926 by Yule in his presidential address to the Royal Statistical Society, and nearly 100 years later we have Google https://www.google.com/trends/correlate/tutorial and tons of others promoting the erroneous interpretation (i.e. using standard significance testing!) of time series correlations.
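The spurious-correlation problem is easy to see by simulation: sample correlations between independent random walks are spread over almost the whole interval $[-1, 1]$, while correlations between independent iid series cluster tightly near zero, so a standard significance test calibrated for iid data is wildly miscalibrated for the walks. This is an illustrative sketch, not from the answer.

```python
import random
import statistics

def corr(x, y):
    """Pearson correlation, plain-Python sketch."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def random_walk(n):
    out, s = [], 0.0
    for _ in range(n):
        s += random.gauss(0, 1)
        out.append(s)
    return out

random.seed(42)
n, reps = 200, 300
r_iid = [corr([random.gauss(0, 1) for _ in range(n)],
              [random.gauss(0, 1) for _ in range(n)]) for _ in range(reps)]
r_walk = [corr(random_walk(n), random_walk(n)) for _ in range(reps)]

# iid pairs: correlations tightly clustered near 0 (sd ~ 1/sqrt(n)).
# Independent random walks: "correlations" spread far wider.
print(statistics.stdev(r_iid), statistics.stdev(r_walk))
```

Every pair here is independent by construction, yet the random-walk correlations are often large in magnitude; that is exactly the Yule/"nonsense correlations" phenomenon the answer refers to.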
29,541
comparing groups in repeated measures FE models, with a nested error component, estimated using plm
The following code implements the practice of putting an interaction between the Female dummy and year. The F test at the bottom tests your null $\beta_{Female} = \beta_{Male}$. The t-statistic from the plm output tests your null $\beta_{Female:year=1.5}=\beta_{Male:year=1.5}$. In particular, for year=1.5, the p-value is 0.32. 
library(plm)        # Use plm 
library(car)        # Use F-test in command linearHypothesis 
library(tidyverse) 
data(egsingle, package = 'mlmRev') 
dta <- egsingle %>% 
  mutate(Female = recode(female, .default = 0L, `Female` = 1L)) 
plm1 <- plm(math ~ Female * (year), data = dta, 
            index = c('childid', 'year', 'schoolid'), model = 'within') 
# Output from `summary(plm1)` --- I deleted a few lines to save space. 
# Coefficients: 
#                 Estimate Std. Error t-value Pr(>|t|) 
# year-1.5          0.8842     0.1008    8.77   <2e-16 *** 
# year-0.5          1.8821     0.1007   18.70   <2e-16 *** 
# year0.5           2.5626     0.1011   25.36   <2e-16 *** 
# year1.5           3.1680     0.1016   31.18   <2e-16 *** 
# year2.5           3.9841     0.1022   38.98   <2e-16 *** 
# Female:year-1.5  -0.0918     0.1248   -0.74     0.46 
# Female:year-0.5  -0.0773     0.1246   -0.62     0.53 
# Female:year0.5   -0.0517     0.1255   -0.41     0.68 
# Female:year1.5   -0.1265     0.1265   -1.00     0.32 
# Female:year2.5   -0.1465     0.1275   -1.15     0.25 
# --- 
xnames <- names(coef(plm1))  # a vector of all independent variables' names in 'plm1' 
# Use 'grepl' to construct a vector of logic values that is TRUE if the variable 
# name starts with 'Female:' at the beginning. This is generic; to pick up 
# every variable that starts with 'year' at the beginning, just write 
# 'grepl('^year+', xnames)'. 
picked <- grepl('^Female:+', xnames) 
linearHypothesis(plm1, xnames[picked]) 
# Hypothesis: 
# Female:year - 1.5 = 0 
# Female:year - 0.5 = 0 
# Female:year0.5 = 0 
# Female:year1.5 = 0 
# Female:year2.5 = 0 
# 
# Model 1: restricted model 
# Model 2: math ~ Female * (year) 
# 
#   Res.Df Df Chisq Pr(>Chisq) 
# 1   5504 
# 2   5499  5  6.15       0.29
29,542
Why LSTM performs worse in information latching than vanilla recurrent neuron network
There is a bug in your code: the first half of your constructed examples are positive and the rest are negative, but keras does not shuffle before splitting the data into train and validation sets, which means all of the validation set is negative and the training set is biased towards positive; that is why you got strange results such as 0 accuracy (worse than chance). In addition, I tweaked some parameters (such as the learning rate, number of epochs, and batch size) to make sure training always converged. Finally, I ran only for 5 and 100 time steps to save on computation. 
Curiously, the LSTM doesn't train properly, although the GRU does almost as well as the RNN. I tried a slightly more difficult task: in positive sequences, the sign of the first element and an element halfway through the sequence is the same (both +1 or both -1); in negative sequences, the signs are different. I was hoping that the additional memory cell in the LSTM would benefit here. It ended up working better than the RNN, but only marginally, and the GRU wins out for some reason. 
I don't have a complete answer to why the RNN does better than the LSTM on the simple task. I think it must be that we haven't found the right hyperparameters to properly train the LSTM, in addition to the fact that the problem is easy for a simple RNN. Possibly, a model with so few parameters is also more prone to getting stuck in a local minimum. 
The modified code
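The fix for the bug is to shuffle the examples and labels together before handing them to keras, since `validation_split` simply slices off the last fraction of the arrays without shuffling. A minimal stdlib sketch of the idea (the toy data here is hypothetical, not the original experiment's):

```python
import random

# keras' validation_split takes the *last* fraction of the arrays without
# shuffling, so data ordered by class (first half positive, second half
# negative) yields an all-negative validation set.  Shuffle jointly first:
n = 1000
data = [([1.0], 1) for _ in range(n // 2)] + \
       [([-1.0], 0) for _ in range(n // 2)]   # sorted labels: triggers the bug

random.seed(0)
random.shuffle(data)                     # shuffle (x, y) pairs together

labels = [y for _, y in data]
val_labels = labels[int(0.8 * n):]       # the slice validation_split would take

# Before shuffling this slice was all 0s; now both classes are represented.
print(sum(val_labels) / len(val_labels))
```

After the shuffle the validation slice contains roughly half of each class, so validation accuracy becomes a meaningful number again.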
29,543
Why LSTM performs worse in information latching than vanilla recurrent neuron network
I don't know if it will make a difference, but I'd use:

    out = Dense(1, activation='sigmoid', ...

and

    model.compile(loss='binary_crossentropy', ...

for a binary classification problem.
29,544
Feature selection on a Bayesian hierarchical generalized linear model
The way I'd tackle (1) would involve a spike-and-slab model, something like:

$\beta_{g,k} = z_{k}m_{g,k}$
$z_k \sim Bern(p)$
$m_{g,k} \sim N(\mu, \Sigma)$
$\mu, \Sigma \sim NIW_{v_0}(\mu_0, V_0^{-1})$

This:

Retains the flexibility on the $\beta$'s from the NIW prior on $\mu, \Sigma$.
Models selection of variables for all groups at once.
Is easily extensible by adding a sub-index for group to $z_{g,k}$ and having a common beta prior for each location $k$.

Of course, I think this is the kind of problem where there are a number of valid approaches.
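A forward draw from this prior takes only a few lines; the sketch below uses invented sizes and a diagonal $\Sigma$ in place of the NIW draw, just to show how the shared indicators zero out whole columns of $\beta$ across all groups:

```python
import numpy as np

rng = np.random.default_rng(1)

G, K = 5, 8   # groups and candidate features (illustrative sizes)
p = 0.5       # prior inclusion probability for each feature

# z_k ~ Bern(p): one global inclusion indicator per feature k,
# shared by all groups -- "selection of variables for all groups at once".
z = rng.binomial(1, p, size=K)

# m_{g,k} ~ N(mu, Sigma): a diagonal Sigma stands in for the NIW draw here.
mu = np.zeros(K)
sigma = np.ones(K)
m = rng.normal(mu, sigma, size=(G, K))

# beta_{g,k} = z_k * m_{g,k}: excluded features are exactly zero in every group.
beta = z * m
```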
29,545
Feature selection on a Bayesian hierarchical generalized linear model
Feature selection is not a great goal to have in an analysis. Unless all the predictors are uncorrelated with each other and your sample size is immense, the data will be unable to reliably tell you the answer. Model specification is more important than model selection. Details are in my RMS Course Notes. But shrinkage, without feature selection (e.g., ridge or $L_{2}$ penalized maximum likelihood estimation) can be a good idea. Hierarchical Bayesian models are even better because they allow for statistical inference in the shrunken model whereas we lose most of the inferential tools in the frequentist world after shrinking.
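For reference, ridge ($L_2$-penalized least squares) has a closed form, which makes the shrinkage-without-selection point easy to see numerically: every coefficient is pulled toward zero, but none is set exactly to zero. A small Python sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy design: several predictors, a couple of which truly matter.
n, k = 50, 5
X = rng.normal(size=(n, k))
beta_true = np.array([1.0, 0.5, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

def ridge(X, y, lam):
    """Closed-form L2-penalized least squares: (X'X + lam*I)^{-1} X'y."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

beta_ols = ridge(X, y, 0.0)     # lam = 0 recovers ordinary least squares
beta_ridge = ridge(X, y, 10.0)  # lam > 0 shrinks every coefficient toward 0
```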
29,546
Linear Regression: How to favour less "sensitive" parameters?
The only way to obtain an rmse that has two local minima is for the residuals of model and data to be nonlinear. Since one of these, the model, is linear (in 2D), the other, i.e., the $y$ data, must be nonlinear either with respect to the underlying tendency of the data or the noise function of that data, or both. Therefore, a better model, a nonlinear one, would be the starting point for investigating the data. Moreover, without knowing something more about the data, one cannot say with any certainty what regression method should be used. I can offer that Tikhonov regularization, or the related ridge regression, would be a good way to address the OP's question. However, what smoothing factor should be used would depend on what one is trying to obtain by modelling. The assumption here appears to be that the least rmse makes the best model, as we do not have a regression goal (other than OLS, which is THE "go to" default method most often used when a physically defined regression target is not even conceptualized). So, what is the purpose of performing this regression, please? Without defining that purpose, there is no regression goal or target and we are just finding a regression for cosmetic purposes.
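The first claim can be checked numerically: when the model is linear in its parameter, the sum of squares is quadratic in that parameter, so the rmse profile has a single minimum no matter what the data look like. A quick sketch (data invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Any data at all: for a model linear in its parameter b, the SSE is a
# quadratic in b, so sqrt(SSE/n) -- the rmse profile -- is unimodal.
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(size=100)

grid = np.linspace(-5, 5, 401)
rmse = np.array([np.sqrt(np.mean((y - b * x) ** 2)) for b in grid])

# Count strict interior local minima on the grid: there can be only one.
interior = (rmse[1:-1] < rmse[:-2]) & (rmse[1:-1] < rmse[2:])
n_local_minima = int(interior.sum())
```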
29,547
When to use Deming regression
To address part of your concerns here: Deming regression appears to offer a poor fit in plot panel B, but this is because the plot is incorrect. A quick way to assess whether this has been done correctly is to look at the X & Y values along the Deming regression line. Any DL-P value in panel A should have a corresponding CAL-P value that is identical in both panels (NOT true for OLS, and the fundamental difference between them). But in these plots, where DL-P = 20, CAL-P in panel A is ~15 and in panel B ~27. The error appears to be that the Deming regression line has been drawn by just swapping the CAL-P and DL-P terms in the equation. The equation for panel A is:

    CAL-P = 0.75 + 0.71*DL-P

Rearranging, this implies that the equation for panel B should be:

    DL-P = (CAL-P - 0.75) / 0.71

And NOT:

    DL-P = 0.75 + 0.71*CAL-P  (which is what has been plotted)
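A small numeric check of the rearrangement, using the panel-A equation above, reproduces both readings:

```python
# Panel A Deming fit: CAL-P = 0.75 + 0.71 * DL-P.
dl_p = 20.0

# The correct panel-B line (the inverse of panel A) gives the same CAL-P
# as panel A at DL-P = 20 -- about 15.
cal_p_correct = 0.75 + 0.71 * dl_p      # 14.95

# Incorrectly swapping the variables (DL-P = 0.75 + 0.71 * CAL-P) and
# reading off CAL-P at DL-P = 20 gives about 27 instead -- exactly the
# discrepancy visible between the two panels.
cal_p_swapped = (dl_p - 0.75) / 0.71    # ~27.1
```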
29,548
Alternatives to L1, L2 and Dropout generalization
Given that it's financial data, it's likely that the feature distributions in your train and validation sets are different - a phenomenon known as covariate shift - and neural networks don't tend to play well with this. Having different feature distributions can cause overfitting even if the network is relatively small. Given that l1 and l2 don't help things, I suspect other standard regularization measures like adding noise to inputs/weights/gradients probably won't help, but it might be worth a try. I would be tempted to try a classification algorithm which is less affected by the absolute magnitudes of features, such as gradient boosted trees.
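Before reaching for more regularization, it may be worth quantifying the shift directly; one crude but serviceable check is the per-feature standardized mean difference between the train and validation sets. A sketch with synthetic data (the threshold mentioned in the comment is a rule of thumb, not a formal test):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy illustration: train and validation features drawn from different
# distributions (a shift in mean), mimicking covariate shift.
train = rng.normal(loc=0.0, size=(500, 3))
val = rng.normal(loc=1.0, size=(500, 3))

def shift_scores(a, b):
    """Per-feature standardized mean difference between two samples.
    Values well above ~0.1-0.2 suggest that feature's distribution differs."""
    pooled_sd = np.sqrt((a.var(axis=0) + b.var(axis=0)) / 2)
    return np.abs(a.mean(axis=0) - b.mean(axis=0)) / pooled_sd

scores = shift_scores(train, val)
```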
29,549
Moment/mgf of cosine of directional vectors?
Hey Yaroslav, you really do not have to hurry accepting my answer on MO and are more than welcome to ask further details :). Since you reformulated the question in 3-dim I can see exactly what you want to do. In the MO post I thought you only needed to calculate the largest cosine between two random variables. Now the problem seems tougher.

First, we calculate the normalized Gaussian $\frac{X}{\|X\|}$, which is not a trivial job since it actually has a name, the "projected normal distribution", because we can rewrite the multivariate normal density of $X$ in terms of its polar coordinates $(\|X\|,\frac{X}{\|X\|})=(r,\boldsymbol{\theta})$. The marginal density for $\boldsymbol{\theta}$ can be obtained as $$\int_{\mathbb{R}^{+}}f(r,\boldsymbol{\theta})dr$$

An important instance is that in which $x$ has a bivariate normal distribution $N_2(\mu,\Sigma)$, in which $\|x\|^{-1}x$ is said to have a projected normal (or angular Gaussian or offset normal) distribution. [Mardia&Peter] p.46

In this step we can obtain distributions $\mathcal{PN}_{k}$ for $\frac{X}{\|X\|}\perp\frac{Y}{\|Y\|}$, and hence their joint density $(\frac{X}{\|X\|},\frac{Y}{\|Y\|})$ due to independence. For a concrete density function of the projected normal distribution, see [Mardia&Peter] Chap. 10, [2] Equation (4), or [1]. (Notice that in [2] they also assume a special form of covariance matrix $\Sigma=\left(\begin{array}{cc} \Gamma & \gamma\\ \gamma' & 1 \end{array}\right)$.)

Second, since we have already obtained their joint density, their inner product can be readily derived using the transformation formula $$\left(\frac{X}{\|X\|},\frac{Y}{\|Y\|}\right)\mapsto\frac{X}{\|X\|}\cdot\frac{Y}{\|Y\|}$$ Also see [3]. As long as we have computed the density, the second moment is only a problem of integration.

References:
[Mardia&Peter] Mardia, Kanti V., and Peter E. Jupp. Directional statistics. Vol. 494. John Wiley & Sons, 2009.
[1] Wang, Fangpo, and Alan E. Gelfand.
"Directional data analysis under the general projected normal distribution." Statistical Methodology 10.1 (2013): 113-127.
[2] Hernandez-Stumpfhauser, Daniel, F. Jay Breidt, and Mark J. van der Woerd. "The general projected normal distribution of arbitrary dimension: modeling and Bayesian inference." Bayesian Analysis (2016). https://projecteuclid.org/download/pdfview_1/euclid.ba/1453211962
[3] Moment generating function of the inner product of two gaussian random vectors
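For the isotropic case (a general $\Sigma$ needs the projected-normal machinery above), the moments of the cosine are easy to check by Monte Carlo: normalized isotropic Gaussians are uniform on the sphere, and by symmetry $E[\cos^2]=1/d$ for two independent directions. A sketch:

```python
import numpy as np

rng = np.random.default_rng(5)

# Monte Carlo check of the second moment of the cosine between two
# independent isotropic normalized Gaussians in d dimensions:
# by symmetry E[cos] = 0 and E[cos^2] = 1/d (here d = 3).
d, n = 3, 200_000
X = rng.normal(size=(n, d))
Y = rng.normal(size=(n, d))
u = X / np.linalg.norm(X, axis=1, keepdims=True)
v = Y / np.linalg.norm(Y, axis=1, keepdims=True)

cos = np.sum(u * v, axis=1)
second_moment = np.mean(cos ** 2)   # should be close to 1/3
```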
29,550
What is the function that is being optimized in word2vec?
How are the words inputted into a Word2Vec model? In other words, what part of the neural network is used to derive the vector representations of the words? See Input vector representation vs output vector representation in word2vec What is the objective function which is being minimized? The original word2vec papers are notoriously unclear on some points pertaining to the training of the neural network (Why do so many publishing venues limit the length of paper submissions?). I advise you look at {1-4}, which answer this question. References: {1} Rong, Xin. "word2vec parameter learning explained." arXiv preprint arXiv:1411.2738 (2014). https://arxiv.org/abs/1411.2738 {2} Goldberg, Yoav, and Omer Levy. "word2vec Explained: deriving Mikolov et al.'s negative-sampling word-embedding method." arXiv preprint arXiv:1402.3722 (2014). https://arxiv.org/abs/1402.3722 {3} TensorFlow's tutorial on Vector Representations of Words {4} Stanford CS224N: NLP with Deep Learning by Christopher Manning | Winter 2019 | Lecture 2 – Word Vectors and Word Senses. https://youtu.be/kEMJRjEdNzM?t=1565 (mirror)
29,551
What is the function that is being optimized in word2vec?
How are the words inputted into a Word2Vec model? In other words, what part of the neural network is used to derive the vector representations of the words?

As we can see from the above diagram, the words "Hope" and "Set" are indexed as 1's in the vector, and then the $W_{3*5}$ matrix is used to derive the vector representation of the words.

What part of the neural network are the context vectors pulled from?

Word embedding vectors are pulled from the $W_{3*5}$ matrix, and context vectors are pulled from the $W'_{5*3}$ matrix.

What is the objective function which is being minimized?

The objective function is the cross entropy comparing the predicted probabilities and the actual targets. There are two features in Word2Vec to speed things up:

Skip-gram Negative Sampling (SGNS): It changes the softmax over the whole vocabulary into a multilabel classification (multiple binary softmax functions) over one right target and a few negatives sampled randomly; then, instead of updating all weights, only a small part of the weights is updated in each backpropagation pass.

Hierarchical Softmax: Only the nodes along the path in the Huffman tree from the root to the word are considered [2].
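The SGNS objective for a single (center, target) pair can be written out in a few lines of numpy — the embeddings below are random, purely to show the shape of the computation:

```python
import numpy as np

rng = np.random.default_rng(6)

def sgns_loss(v_center, u_target, u_negatives):
    """Skip-gram negative-sampling loss for one (center, target) pair:
    -log sigma(u_o . v_c) - sum_k log sigma(-u_k . v_c).
    Only the target and the K sampled negatives enter the loss,
    instead of a softmax over the whole vocabulary."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    pos = np.log(sigmoid(u_target @ v_center))
    neg = np.sum(np.log(sigmoid(-(u_negatives @ v_center))))
    return -(pos + neg)

# Tiny made-up embeddings: dimension 4, five sampled negatives.
dim, K = 4, 5
v_c = rng.normal(size=dim)
u_o = rng.normal(size=dim)
u_neg = rng.normal(size=(K, dim))

loss = sgns_loss(v_c, u_o, u_neg)
```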
29,552
Mixed effects model with splines
As mdewey says, refit the model without the REML estimation method. As the warning says, comparisons are not meaningful when you have different fixed-effects structures. The next issue is that the models are not nested, so the F-test presumably does not make sense. You could look at the information criteria. Both favor fitLME2.
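The information criteria themselves are just penalized (ML, not REML) log-likelihoods; for instance AIC, sketched here with invented likelihood values purely to show the comparison:

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2*logLik; lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical ML log-likelihoods and parameter counts for the two
# non-nested spline models (numbers invented for illustration only).
aic_1 = aic(log_likelihood=-520.3, n_params=6)   # -> 1052.6
aic_2 = aic(log_likelihood=-515.1, n_params=8)   # -> 1046.2

preferred = "fitLME2" if aic_2 < aic_1 else "fitLME1"
```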
29,553
Asymptotic distribution of censored samples from $\exp(\lambda)$
Since $\lambda$ is just a scale factor, without loss of generality choose units of measurement that make $\lambda=1$, making the underlying distribution function $F(x)=1-\exp(-x)$ with density $f(x)=\exp(-x)$. From considerations paralleling those at Central limit theorem for sample medians, $X_{(m)}$ is asymptotically Normal with mean $F^{-1}(p)=-\log(1-p)$ and variance $$\operatorname{Var}(X_{(m)}) = \frac{p(1-p)}{n f(-\log(1-p))^2} = \frac{p}{n(1-p)}.$$ Due to the memoryless property of the exponential distribution, the variables $(X_{(m+1)}, \ldots, X_{(n)})$ act like the order statistics of a random sample of $n-m$ draws from $F$, to which $X_{(m)}$ has been added. Writing $$Y = \frac{1}{n-m}\sum_{i=m+1}^n X_{(i)}$$ for their mean, it is immediate that the mean of $Y$ is the mean of $F$ (equal to $1$) and the variance of $Y$ is $1/(n-m)$ times the variance of $F$ (also equal to $1$). The Central Limit Theorem implies the standardized $Y$ is asymptotically Standard Normal. Moreover, because $Y$ is conditionally independent of $X_{(m)}$, we simultaneously have the standardized version of $X_{(m)}$ becoming asymptotically Standard Normal and uncorrelated with $Y$. That is, $$\left(\frac{X_{(m)} + \log(1-p)}{\sqrt{p/(n(1-p))}}, \frac{Y - X_{(m)} - 1}{\sqrt{n-m}}\right)\tag{1}$$ asymptotically has a bivariate Standard Normal distribution. The graphics report on simulated data for samples of $n=1000$ ($500$ iterations) and $p=0.95$. A trace of positive skewness remains, but the approach to bivariate normality is evident in the lack of relationship between $Y-X_{(m)}$ and $X_{(m)}$ and the closeness of the histograms to the Standard Normal density (shown in red dots). The covariance matrix of the standardized values (as in formula $(1)$) for this simulation was $$\pmatrix{0.967 & -0.021 \\ -0.021 & 1.010},$$ comfortably close to the unit matrix which it approximates. 
The R code that produced these graphics is readily modified to study other values of $n$, $p$, and simulation size.

    n <- 1e3
    p <- 0.95
    n.sim <- 5e3
    #
    # Perform the simulation.
    # X_m will be in the first column and Y in the second.
    #
    set.seed(17)
    m <- floor(p * n)
    X <- apply(matrix(rexp(n.sim * n), nrow = n), 2, sort)
    X <- cbind(X[m, ], colMeans(X[(m+1):n, , drop=FALSE]))
    #
    # Display the results.
    #
    par(mfrow=c(2,2))
    plot(X[,1], X[,2], pch=16, col="#00000020",
         xlab=expression(X[(m)]), ylab="Y", main="Y vs X",
         sub=paste("n =", n, "and p =", signif(p, 2)))
    plot(X[,1], X[,2]-X[,1], pch=16, col="#00000020",
         xlab=expression(X[(m)]), ylab=expression(Y - X[(m)]),
         main="Y-X vs X", sub="Loess smooth shown")
    lines(lowess(X[,2]-X[,1] ~ X[,1]), col="Red", lwd=3, lty=1)
    x <- (X[,1] + log(1-p)) / sqrt(p/(n*(1-p)))
    hist(x, main="Standardized X", freq=FALSE, xlab="Value")
    curve(dnorm(x), add=TRUE, col="Red", lty=3, lwd=2)
    y <- (X[,2] - X[,1] - 1) * sqrt(n-m)
    hist(y, main="Standardized Y-X", freq=FALSE, xlab="Value")
    curve(dnorm(x), add=TRUE, col="Red", lty=3, lwd=2)
    par(mfrow=c(1,1))
    round(var(cbind(x,y)), 3) # Should be close to the unit matrix
Asymptotic distribution of censored samples from $\exp(\lambda)$
Since $\lambda$ is just a scale factor, without loss of generality choose units of measurement that make $\lambda=1$, making the underlying distribution function $F(x)=1-\exp(-x)$ with density $f(x)=\
Asymptotic distribution of censored samples from $\exp(\lambda)$

Since $\lambda$ is just a scale factor, without loss of generality choose units of measurement that make $\lambda=1$, making the underlying distribution function $F(x)=1-\exp(-x)$ with density $f(x)=\exp(-x)$. From considerations paralleling those at Central limit theorem for sample medians, $X_{(m)}$ is asymptotically Normal with mean $F^{-1}(p)=-\log(1-p)$ and variance $$\operatorname{Var}(X_{(m)}) = \frac{p(1-p)}{n f(-\log(1-p))^2} = \frac{p}{n(1-p)}.$$

Due to the memoryless property of the exponential distribution, the variables $(X_{(m+1)}, \ldots, X_{(n)})$ act like the order statistics of a random sample of $n-m$ draws from $F$, to which $X_{(m)}$ has been added. Writing $$Y = \frac{1}{n-m}\sum_{i=m+1}^n X_{(i)}$$ for their mean, it is immediate that the conditional mean of $Y$ is $X_{(m)}$ plus the mean of $F$ (equal to $1$) and the conditional variance of $Y$ is $1/(n-m)$ times the variance of $F$ (also equal to $1$). The Central Limit Theorem implies the standardized $Y - X_{(m)}$ is asymptotically Standard Normal. Moreover, because $Y - X_{(m)}$ is independent of $X_{(m)}$, we simultaneously have the standardized version of $X_{(m)}$ becoming asymptotically Standard Normal and uncorrelated with $Y - X_{(m)}$. That is, $$\left(\frac{X_{(m)} + \log(1-p)}{\sqrt{p/(n(1-p))}},\ \sqrt{n-m}\left(Y - X_{(m)} - 1\right)\right)\tag{1}$$ asymptotically has a bivariate Standard Normal distribution.

The graphics report on simulated data for samples of $n=1000$ ($500$ iterations) and $p=0.95$. A trace of positive skewness remains, but the approach to bivariate normality is evident in the lack of relationship between $Y-X_{(m)}$ and $X_{(m)}$ and the closeness of the histograms to the Standard Normal density (shown in red dots). The covariance matrix of the standardized values (as in formula $(1)$) for this simulation was $$\pmatrix{0.967 & -0.021 \\ -0.021 & 1.010},$$ comfortably close to the unit matrix which it approximates.

The R code that produced these graphics is readily modified to study other values of $n$, $p$, and simulation size.

n <- 1e3
p <- 0.95
n.sim <- 5e3
#
# Perform the simulation.
# X_m will be in the first column and Y in the second.
#
set.seed(17)
m <- floor(p * n)
X <- apply(matrix(rexp(n.sim * n), nrow = n), 2, sort)
X <- cbind(X[m, ], colMeans(X[(m+1):n, , drop=FALSE]))
#
# Display the results.
#
par(mfrow=c(2,2))
plot(X[,1], X[,2], pch=16, col="#00000020",
     xlab=expression(X[(m)]), ylab="Y",
     main="Y vs X", sub=paste("n =", n, "and p =", signif(p, 2)))
plot(X[,1], X[,2]-X[,1], pch=16, col="#00000020",
     xlab=expression(X[(m)]), ylab=expression(Y - X[(m)]),
     main="Y-X vs X", sub="Loess smooth shown")
lines(lowess(X[,2]-X[,1] ~ X[,1]), col="Red", lwd=3, lty=1)
x <- (X[,1] + log(1-p)) / sqrt(p/(n*(1-p)))
hist(x, main="Standardized X", freq=FALSE, xlab="Value")
curve(dnorm(x), add=TRUE, col="Red", lty=3, lwd=2)
y <- (X[,2] - X[,1] - 1) * sqrt(n-m)
hist(y, main="Standardized Y-X", freq=FALSE, xlab="Value")
curve(dnorm(x), add=TRUE, col="Red", lty=3, lwd=2)
par(mfrow=c(1,1))
round(var(cbind(x,y)), 3) # Should be close to the unit matrix
Show estimate converges to percentile through order statistics
The variance of $Y$ is not finite. This is because an alpha-stable variable $X$ with $\alpha=3/2$ (a Holtsmark distribution) does have a finite expectation $\mu$ but its variance is infinite. If $Y$ had a finite variance $\sigma^2$, then by exploiting the independence of the $X_i$ and the definition of variance we could compute $$\eqalign{ \sigma^2 = \operatorname{Var}(Y) &= \mathbb{E}(Y^2) - \mathbb{E}(Y)^2 \\ &= \mathbb{E}(X_1^2X_2^2X_3^2) - \mathbb{E}(X_1X_2X_3)^2 \\ &= \mathbb{E}(X^2)^3 - \left(\mathbb{E}(X)^3\right)^2 \\ &= \left(\operatorname{Var}(X) + \mathbb{E}(X)^2\right)^3 - \mu^6 \\ &= \left(\operatorname{Var}(X) + \mu^2\right)^3 - \mu^6. }$$ This cubic equation in $\operatorname{Var}(X)$ has at least one real solution (and up to three solutions, but no more), implying $\operatorname{Var}(X)$ would be finite--but it's not. This contradiction proves the claim. Let's turn to the second question. Any sample quantile converges to the true quantile as the sample grows large. The next few paragraphs prove this general point. Let the associated probability be $q=0.01$ (or any other value between $0$ and $1$, exclusive). Write $F$ for the distribution function, so that $Z_q=F^{-1}(q)$ is the $q^{\text{th}}$ quantile. All we need to assume is that $F^{-1}$ (the quantile function) is continuous. This assures us that for any $\epsilon\gt 0$ there are probabilities $q_-\lt q$ and $q_+\gt q$ for which $$F(Z_q - \epsilon) = q_-,\quad F(Z_q + \epsilon) = q_+,$$ and that as $\epsilon\to 0$, the limit of the interval $[q_-, q_+]$ is $\{q\}$. Consider any iid sample of size $n$. The number of elements of this sample that are less than $Z_{q_-} = Z_q - \epsilon$ has a Binomial$(q_-, n)$ distribution, because each element independently has a chance $q_-$ of being less than $Z_{q_-}$. The Central Limit Theorem (the usual one!) 
implies that for sufficiently large $n$, the number of elements less than $Z_{q_-}$ is given by a Normal distribution with mean $nq_-$ and variance $nq_-(1-q_-)$ (to an arbitrarily good approximation). Let the CDF of the standard Normal distribution be $\Phi$. The chance that this quantity exceeds $nq$ therefore is arbitrarily close to $$1-\Phi\left(\frac{nq - nq_-}{\sqrt{nq_-(1-q_-)}}\right) = 1-\Phi\left(\sqrt{n}\frac{q - q_-}{\sqrt{q_-(1-q_-)}}\right).$$ Because the argument of $\Phi$ on the right hand side is a fixed positive multiple of $\sqrt{n}$, it grows arbitrarily large as $n$ grows. Since $\Phi$ is a CDF, its value approaches arbitrarily close to $1$, showing the limiting value of this probability is zero. In words: in the limit, it is almost surely the case that fewer than $nq$ of the sample elements are less than $Z_{q_-}$. An analogous argument proves it is almost surely the case that at least $nq$ of the sample elements are not greater than $Z_{q_+}$. Together, these imply the $q$ quantile of a sufficiently large sample is extremely likely to lie between $Z_q-\epsilon$ and $Z_q+\epsilon$. That's all we need in order to know that simulation will work. You may choose any desired degree of accuracy $\epsilon$ and confidence level $1-\alpha$ and know that for a sufficiently large sample size $n$, the order statistic closest to $nq$ in that sample will have a chance of at least $1-\alpha$ of being within $\epsilon$ of the true quantile $Z_q$. Having established that a simulation will work, the rest is easy. Confidence limits can be obtained from limits for the Binomial distribution and then back-transformed. Further explanation (for the $q=0.50$ quantile, but generalizing to all quantiles) can be found in the answers at Central limit theorem for sample medians. The $q=0.01$ quantile of $Y$ is negative. Its sampling distribution is highly skewed. To reduce the skew, this figure shows a histogram of the logarithms of the negatives of the $0.01$ quantiles from 1,000 simulated samples of $n=300$ values of $Y$. 
library(stabledist)
n <- 3e2
q <- 0.01
n.sim <- 1e3
Y.q <- replicate(n.sim, {
  Y <- apply(matrix(rstable(3*n, 3/2, 0, 1, 1), nrow=3), 2, prod) - 1
  log(-quantile(Y, 0.01))
})
m <- median(-exp(Y.q))
hist(Y.q, freq=FALSE,
     main=paste("Histogram of the", q, "quantile of Y for", n.sim, "iterations"),
     xlab="Log(-Y_q)",
     sub=paste("Median is", signif(m, 4), "Negative log is", signif(log(-m), 4)),
     cex.sub=0.8)
abline(v=log(-m), col="Red", lwd=2)
Lucid explanation for "numerical stability of matrix inversion" in ridge regression and its role in reducing overfit
In the linear model $Y=X\beta + \epsilon$, assuming uncorrelated errors with mean zero and $X$ having full column rank, the least squares estimator $(X^TX)^{-1}X^TY$ is an unbiased estimator for the parameter $\beta$. However, this estimator can have high variance, for example when two of the columns of $X$ are highly correlated. The penalty parameter $\lambda$ makes $\hat{w}$ a biased estimator of $\beta$, but it decreases its variance. Also, $\hat{w}$ is the posterior expectation of $\beta$ in a Bayesian regression with a $N(0,\frac{1}{\lambda}I)$ prior on $\beta$. In that sense, we include some information into the analysis that says the components of $\beta$ ought not be too far from zero. Again, this leads us to a biased point estimate of $\beta$ but reduces the variance of the estimate. In a setting where $X$ is high dimensional, say $N \approx p$, the least squares fit will match the data almost perfectly. Although unbiased, this estimate will be highly sensitive to fluctuations in the data, because in such high dimensions there will be many points with high leverage. In such situations the sign of some components of $\hat{\beta}$ can be determined by a single observation. The penalty term has the effect of shrinking these estimates towards zero, which can reduce the MSE of the estimator by reducing the variance. Edit: In my initial response I provided a link to a relevant paper and in my haste I removed it. Here it is: http://www.jarad.me/stat615/papers/Ridge_Regression_in_Practice.pdf
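A small simulation makes the bias and variance claims concrete. This is only an illustrative sketch (Python with NumPy; the design matrix, the true $\beta$, the noise level, and $\lambda=10$ are all invented): with two nearly collinear columns, least squares is unbiased but wildly variable, while the ridge estimator $(X^TX+\lambda I)^{-1}X^Ty$ is slightly biased with far smaller variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 50, 10.0
beta = np.array([1.0, 1.0])

# Fixed design with two highly correlated columns.
x1 = rng.normal(size=n)
X = np.column_stack([x1, x1 + 0.05 * rng.normal(size=n)])

def ols(X, y):
    return np.linalg.solve(X.T @ X, X.T @ y)

def ridge(X, y, lam):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Repeatedly redraw the noise and record both estimators.
b_ols, b_ridge = [], []
for _ in range(2000):
    y = X @ beta + rng.normal(size=n)
    b_ols.append(ols(X, y))
    b_ridge.append(ridge(X, y, lam))
b_ols, b_ridge = np.array(b_ols), np.array(b_ridge)

# Ridge means are pulled somewhat below the true value 1 (bias),
# but the component variances are orders of magnitude smaller.
print("OLS   mean:", b_ols.mean(axis=0),   "var:", b_ols.var(axis=0))
print("ridge mean:", b_ridge.mean(axis=0), "var:", b_ridge.var(axis=0))
```

Each coefficient's OLS variance is large because the columns are nearly identical; their sum, however, is well determined, which is exactly the poorly identified direction that the ridge penalty shrinks.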
Lucid explanation for "numerical stability of matrix inversion" in ridge regression and its role in reducing overfit
Numerical stability and overfitting are in some sense related but different issues.

The classic OLS problem: Consider the classic least squares problem: $$\operatorname*{minimize}(\text{over $\mathbf{b}$}) \quad(\mathbf{y}-X\mathbf{b})^T(\mathbf{y}-X\mathbf{b}) $$ The solution is the classic $\hat{\mathbf{b}} = (X'X)^{-1}(X'\mathbf{y})$. An idea is that by the law of large numbers: $$ \lim_{n \rightarrow \infty} \frac{1}{n} X'X \rightarrow \mathrm{E}[\mathbf{x}\mathbf{x}'] \quad \quad \quad \lim_{n \rightarrow \infty} \frac{1}{n} X'\mathbf{y} \rightarrow \mathrm{E}[\mathbf{x}y]$$ Hence the OLS estimate $\hat{\mathbf{b}}$ also converges to $\mathrm{E}[\mathbf{x}\mathbf{x}']^{-1}\mathrm{E}[\mathbf{x}y]$. (In linear algebra terms, this is the linear projection of random variable $y$ onto the linear span of random variables $x_1, x_2, \ldots, x_k$.)

Problems? Mechanically, what can go wrong?

(1) For small samples, our sample estimates of $\mathrm{E}[\mathbf{x}\mathbf{x}']$ and $\mathrm{E}[\mathbf{x}y]$ may be poor.

(2) If columns of $X$ are collinear (either due to inherent collinearity or small sample size), the problem will have a continuum of solutions! The solution is not unique. This occurs if $\mathrm{E}[\mathbf{x}\mathbf{x}']$ is rank deficient. It also occurs if $X'X$ is rank deficient due to a small sample size relative to the number of regressors.

Problem (1) can lead to overfitting, as the estimate $\hat{\mathbf{b}}$ starts reflecting patterns in the sample that aren't there in the underlying population. The estimate may reflect patterns in $\frac{1}{n}X'X$ and $\frac{1}{n}X'\mathbf{y}$ that don't actually exist in $\mathrm{E}[\mathbf{x}\mathbf{x}']$ and $\mathrm{E}[\mathbf{x}y]$.

Problem (2) means a solution isn't unique. Imagine we're trying to estimate the price of individual shoes, but pairs of shoes are always sold together. This is an ill-posed problem, but let's say we're doing it anyway. We may believe the left shoe price plus the right shoe price equals \$50, but how can we come up with individual prices? Is setting the left shoe price $p_l = 45$ and the right shoe price $p_r = 5$ ok? How can we choose from all the possibilities?

Introducing an $L_2$ penalty: Now consider: $$\operatorname*{minimize}(\text{over }\mathbf{b})\quad (\mathbf{y}-X\mathbf{b})^T(\mathbf{y}-X\mathbf{b}) + \lambda\|\mathbf{b}\|^2 $$ This may help us with both types of problems. The $L_2$ penalty pushes our estimate of $\mathbf{b}$ towards zero. This functions effectively as a Bayesian prior that the distribution over coefficient values is centered around $\mathbf{0}$. That helps with overfitting: our estimate will reflect both the data and our initial belief that $\mathbf{b}$ is near zero.

$L_2$ regularization also allows us to find a unique solution to ill-posed problems. If we know the left and right shoe prices total \$50, the solution that also minimizes the $L_2$ norm is to choose $p_l = p_r = 25$.

Is this magic? No. Regularization isn't the same as adding data that would actually allow us to answer the question. $L_2$ regularization in some sense adopts the view that if you lack data, choose estimates closer to $0$.
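The shoe example can be checked numerically. A minimal sketch (Python with NumPy; nothing here comes from the thread beyond the \$50 example itself): the normal equations are singular, the minimum-norm solution splits the price evenly, and ridge with a small penalty lands essentially on the same unique answer.

```python
import numpy as np

# One "observation": a pair of shoes sold together for $50.
X = np.array([[1.0, 1.0]])   # columns: left price, right price
y = np.array([50.0])

# OLS is ill-posed here: any (p_l, p_r) with p_l + p_r = 50 fits exactly.
# The minimum-L2-norm solution (the lambda -> 0+ limit of ridge) splits evenly.
print(np.linalg.pinv(X) @ y)          # [25. 25.]

# Ridge picks a unique answer for any lambda > 0, shrunk slightly toward zero:
# each price comes out as 50 / (2 + lambda).
for lam in (1.0, 0.01):
    b = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
    print(lam, b)
```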
Random overlapping intervals
|----------------||----------------|-----------------------------------|----------------||----------------|
$x_0 - l/2\ \ \ \ \ x_0 \ \ \ \ \ \ \ \ \ \ x_0 + l/2\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ x_0+L - l/2 \ \ \ \ x_0+L \ \ \ \ x_0 + L + l/2$
The probability that a point $x \in [x_0, x_0 + L]$ is occupied by a single dropped bar is
$x \in [x_0, x_0 + l/2): \ P_{o} = \frac{1}{L}(x-x_0+l/2)$
$x \in [x_0 + l/2, x_0 + L - l/2]: \ P_{o} = \frac{l}{L}$
$x \in (x_0 + L - l/2, x_0 + L]: \ P_{o} = \frac{1}{L}(-x+x_0+l/2+L)$.
Correspondingly, the probability that it is empty is $P_{e} = 1 - P_o$. The probability that a given point is still empty after $n$ dropped bars is $P_e^{n}$, and the probability that it is occupied is $P_{o,n} = 1 - (1-P_o)^n = 1- (1-\frac{nP_o}{n})^n \approx 1 - e^{-nP_o}$ for large $n$. Then the mean occupied length in $[x_0,x_0 + L]$ after $n$ random "bar drops" is $\langle D \rangle = L\langle P_{o,n} \rangle = \int_{x_0}^{x_0+L} P_{o,n}\,dx$.
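The final formula can be checked by simulation. A sketch (Python with NumPy; $x_0$, $L$, $l$, $n$, the grid resolution, and the replication count are arbitrary choices): since the exponential step is a large-$n$ approximation of $1-(1-P_o)^n$, the simulated mean occupied length agrees with the formula closely but not exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
x0, L, l, n = 0.0, 10.0, 1.0, 20
grid = np.linspace(x0, x0 + L, 1001)

# Single-drop occupation probability P_o(x): the three cases above are the
# pointwise minimum of the two edge ramps and the interior constant l/L.
Po = np.minimum.reduce([
    (grid - x0 + l / 2) / L,          # left edge
    np.full_like(grid, l / L),        # interior
    (x0 + L + l / 2 - grid) / L,      # right edge
])
# <D> ~= integral of (1 - exp(-n * P_o)) dx, computed as a grid average.
D_formula = (1 - np.exp(-n * Po)).mean() * L

# Simulate: n bar centers uniform on [x0, x0 + L]; a grid point is occupied
# when some center lies within l/2 of it.
def occupied_length(centers):
    covered = (np.abs(grid[:, None] - centers[None, :]) < l / 2).any(axis=1)
    return covered.mean() * L

D_sim = np.mean([occupied_length(rng.uniform(x0, x0 + L, n))
                 for _ in range(1000)])
print(D_formula, D_sim)
```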
How do you calculate the standard error of $R^2$
One easy and robust estimator of the standard error of $R^2$ is bootstrapping. Obtain bootstrap samples of your data set (say there are $n$ observations) by sampling $n$ observations from your data with replacement, $B$ times (e.g., $B = 1,000$). For each bootstrap sample $b = 1, 2, \ldots, B$, compute $R^2_b$ (the $R^2$ estimate for the $b$th bootstrap sample). As a result, you will have $B$ estimates of $R^2$ which already incorporate the sampling variability which you are trying to estimate. The bootstrap estimate of $SE(R^2)$ is simply the standard deviation of $R^2_1, R^2_2, \ldots, R^2_B$, $$\hat{SE}(R^2) = \sqrt{\frac{1}{B-1} \sum_{b=1}^B \left[R^2_b - \left(B^{-1} \sum_{b'=1}^B R^2_{b'}\right)\right]^2}.$$ For more information, see, e.g., the Wikipedia page on bootstrapping and the excellent introductory text An Introduction to the Bootstrap by Efron & Tibshirani.
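The recipe above can be sketched in a few lines. This is an illustrative Python/NumPy version (the data set, the OLS model, and $B = 1{,}000$ are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data set: one predictor plus an intercept (made-up numbers).
n = 80
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(scale=2.0, size=n)
X = np.column_stack([np.ones(n), x])

def r_squared(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Resample the rows of (X, y) with replacement B times; the sample SD of
# the B bootstrap R^2 values is the standard-error estimate.
B = 1000
r2_boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)
    r2_boot[b] = r_squared(X[idx], y[idx])

print("R^2 =", r_squared(X, y), " SE(R^2) =", r2_boot.std(ddof=1))
```

The same loop works for any model for which you can recompute $R^2$ on a resampled data set.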
How do you calculate the standard error of $R^2$
I noticed that the MBESS package in R has the Variance.R2 function: [It is a] function to determine the variance of the squared multiple correlation coefficient given the population squared multiple correlation coefficient, sample size, and the number of predictors. That said, it's not quite what you are after, because it assumes a given population r-squared value. If anything, the sample adjusted r-squared would be a better estimate of the population r-squared.
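For reference, the adjusted $r$-squared mentioned in the last sentence is the usual small-sample correction $1-(1-R^2)\,\frac{n-1}{n-p-1}$, with $n$ observations and $p$ predictors. A minimal sketch (the numbers are made up):

```python
def adjusted_r2(r2, n, p):
    """Usual adjustment: 1 - (1 - R^2) * (n - 1) / (n - p - 1),
    where n is the sample size and p the number of predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Example: R^2 = 0.40 from n = 50 observations and p = 5 predictors.
print(adjusted_r2(0.40, 50, 5))   # 0.3318..., noticeably below 0.40
```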
Survival rate trends in case-control studies
From the above there are a few possibilities for the Cox model:
SEPARATE MODELS FOR EACH TIME-PERIOD: Use one observation for each person; calculate observation time (regardless of when censoring/death occurred during follow-up) and then calculate the hazard ratio for each period. Then compare the hazard ratios directly.
CALCULATE THE RELATIVE CHANGE IN HAZARD IN SMOKERS AND NON-SMOKERS SEPARATELY: one observation per person; calculate observation time (regardless of when censoring/the event occurs) and then use all patients (from 1995 to 2014) in the model, use time period as a categorical variable, and set one of the periods as the reference value.
COUNTING PROCESS FORMULATION: this sounds appealing, but I'm not sure how to use survival time, start-stop intervals, and calendar year.
Survival rate trends in case-control studies
Although it's dangerous to read too much into the cryptic comments of a reviewer, I would guess that the objection has to do with whether the censoring is informative. Interpretation of survival models typically is based on the assumption that an individual censored at time $T$ is representative of all subjects who survive to time $T$ after entry into the study. (Wording adapted from this introduction to survival analysis.) Then the censoring is non-informative. In your analysis, however, those who were censored were those who survived through 2014. If you think that there had been a change in excess risk of death due to smoking over the previous 20 years (or even if there were parallel changes in death rates for both groups), then those censored individuals may not be representative of those who survived for the same time but entered the study earlier. Under your hypothesis, the censoring might be informative. It's possible that the details of the design of your analysis avoided this problem but that wasn't clear in the manuscript as reviewed. Or perhaps the reviewer didn't like the study for some additional reasons and found this to be a way to reject it that the editor wouldn't question. Nevertheless, this does seem to be a potential objection to the way you analyzed these data and you should make sure that it is handled properly. (This is beyond my personal expertise; others on this site might have pointers on how to proceed. A more precise title to this question, with more details on the study design and analysis, might get more helpful answers.) It's not clear to me from your question and clarifying comment that the Cox analyses are adding anything useful to simple modeling of death rates per year (or over 2-year intervals). Plus, your hypothesis seems to imply that hazards are not proportional over time between non-smokers and smokers, the basis of standard Cox analyses. 
If you are interested in the difference of death rates between smokers and non-smokers as a function of calendar year, that's the most straightforward measure to model (although you might have to take into account the presumed enrichment of non-smokers in your study sample as their matched smoking counterparts die).
29,562
Why do Elo rating system use wrong update rule?
The probability of drawing, as opposed to having a decisive result, is not specified in the Elo system. Instead a draw is considered - both in expected performance and in match outcome - half a win and half a loss. An example from the Elo page on Wikipedia: "A player's expected score is his probability of winning plus half his probability of drawing. Thus an expected score of 0.75 could represent a 75% chance of winning, 25% chance of losing, and 0% chance of drawing. On the other extreme it could represent a 50% chance of winning, 0% chance of losing, and 50% chance of drawing." The probability of drawing, as I said, is not specified, and this leads to a simple two-outcome update rule, $R_A^\prime = R_A + K(S_A - E_A)$, in which $S_A=1 \cdot (n_w + 0.5 \cdot n_d ) + 0 \cdot (0.5 \cdot n_d + n_l)$, so, after a single match, $S_A=1$ (win), $S_A=0.5$ (draw, as half a win), or $S_A=0$ (loss). Like Elo, the Glicko system does not model draws but makes an update as the average of a win and a loss (per player). Instead, in the TrueSkill ranking system, "draws are modelled by assuming that the performance difference in a particular game is small. Hence, the chance of drawing only depends on the difference of the two player's playing strength. However, empirical findings in the game of chess show that draws are more likely between professional players than beginners. Hence, chance of drawing also seems to depend on the skill level." This approach requires different specific modeling for every game (and TrueSkill is applied to a few Microsoft Xbox games), so it's suitable for Elo and Glicko (designed just for chess), but not for rankade, our multipurpose ranking system.
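A minimal sketch of this two-outcome update rule, in Python (the K-factor of 32 and the 400-point logistic scale are the conventional chess values; the function names are mine):

```python
def expected_score(r_a, r_b):
    """Expected score for player A: P(win) + 0.5 * P(draw)."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, s_a, k=32):
    """New rating for A after one match with outcome s_a in {1, 0.5, 0}."""
    return r_a + k * (s_a - expected_score(r_a, r_b))

print(elo_update(1500, 1500, 0.5))            # 1500.0 -- equal players, a draw changes nothing
print(round(elo_update(1500, 1700, 0.5), 1))  # the lower-rated player gains points from a draw
```

Note that the draw enters only through $S_A = 0.5$; no draw probability appears anywhere in the update.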
29,563
Alternate weighting schemes for random effects meta-analysis: missing standard deviations
If you meta-analyze mean differences with weights of $n$ instead of $1/\text{SE}^2$ (inverse variance) - assuming groups of equal size are being compared - this gets you an appropriate average effect estimate under the assumption that variability is the same across studies. I.e. the weights would be proportional to the ones you would use if the standard errors were all exactly $2\hat{\sigma}/\sqrt{n}$ for a standard deviation $\sigma$ that is assumed to be identical across trials. You will no longer get a meaningful overall standard error or confidence interval for your overall estimate, though, because you are throwing away the information $\hat{\sigma}$ on the sampling variability. Also note that if groups are not of equal size, $n$ is not the correct weight, because the standard error for the difference of the means of two normal distributions is $\sqrt{\sigma^2_1/n_1 + \sigma^2_2/n_2}$ and this only simplifies to $2\sigma/\sqrt{n}$ if $n_1=n_2=n/2$ (plus $\sigma=\sigma_1=\sigma_2$). You could of course impute the missing standard errors under the assumption that $\sigma$ is the same across the studies: studies without a reported standard error are then assumed to have the same underlying variability as the average of the studies for which you do know it, which is easy to do. Another thought is that using untransformed US dollars or US dollars per unit might or might not be problematic. Sometimes it can be desirable to use e.g. a log-transformation to meta-analyze and then to back-transform afterwards.
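To make the two weighting schemes concrete, here is a small Python sketch with made-up study numbers (not from the question). It shows that inverse-variance weighting yields both a pooled estimate and a pooled standard error, whereas sample-size weighting yields only a point estimate:

```python
# Hypothetical study results: mean differences, standard errors, sample sizes.
effects = [1.2, 0.8, 1.5]
ses = [0.30, 0.20, 0.50]
ns = [100, 220, 40]

# Inverse-variance weights: w_i = 1 / SE_i^2 (fixed-effect pooling).
w = [1 / se ** 2 for se in ses]
est_iv = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
se_iv = (1 / sum(w)) ** 0.5          # pooled SE, available only with SE-based weights

# Sample-size weights: a point estimate only; no pooled SE is recoverable.
est_n = sum(n * e for n, e in zip(ns, effects)) / sum(ns)

print(round(est_iv, 3), round(se_iv, 3), round(est_n, 3))
```

With these numbers the two point estimates happen to be close, precisely because the reported SEs are roughly consistent with a common underlying $\sigma$.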
29,564
Alternate weighting schemes for random effects meta-analysis: missing standard deviations
It would be useful to have more details on your dataset in general, and your meta-analytic estimates in particular. In addition, it would be interesting to know what the averages and SDs of the complete studies you are including are. Having said that, my pragmatic approach would be, as you hint, to use sample-size weighting (why inverse?), but remember this will be at best a hypothesis-generating meta-analysis, whose greatest strength will be pinpointing the drawbacks of the primary studies. Here are some useful references on the potential use of sample weighting in meta-analysis: http://faculty.cas.usf.edu/mbrannick/papers/conf/SIOP08Wts.doc https://www.meta-analysis.com/downloads/Meta%20Analysis%20Fixed%20vs%20Random%20effects.pdf http://epm.sagepub.com/content/70/1/56.abstract
29,565
The intuition behind the different scoring rules
One place where log scoring may be inappropriate: the comparison of human forecasters (who may tend to overstate their confidence). Log scoring strongly penalizes very overconfident wrong predictions. A wrong prediction that was made with 100% confidence gets an infinite penalty. For example, suppose a commentator says "I am 100% sure that Smith will win the election," and then Smith loses the election. Under log scoring, the average score of all the commentator's predictions is now permanently stuck at $-\infty$, the worst possible. It should be possible to distinguish that somebody who has made a single wrong 100% confidence prediction is a better forecaster than somebody who makes them all the time.
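The $-\infty$ penalty is easy to demonstrate; a small Python sketch (the probabilities are made up):

```python
import math

def log_score(p_outcome):
    """Log score: ln of the probability the forecaster gave to what actually happened."""
    return math.log(p_outcome) if p_outcome > 0 else float("-inf")

# Three well-calibrated calls, then one 100%-confident miss
# (the forecaster gave probability 0 to the outcome that occurred).
scores = [log_score(p) for p in [0.9, 0.8, 0.95, 0.0]]
print(sum(scores) / len(scores))   # -inf: the average never recovers
```

No number of subsequent good forecasts can pull the average back from $-\infty$, which is exactly the objection raised above.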
29,566
The intuition behind the different scoring rules
Log: the expected surprisal of the prediction when we discover the actual value.
Brier: $L^2$, RMSE, OLS. However, the fact that $p=2$ is the only value which turns the $L^p$ norm into a proper scoring rule detracts from this intuition.
Sphere: the cosine of the angle between the prediction vector $(p,1-p)$ and the outcome vector $(0,1)$ or $(1,0)$. Note that the angle itself is not a proper scoring rule, which also detracts from the intuition.
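All three rules can be written down directly for a binary forecast. A small Python sketch, positively oriented (larger is better); note that sign and scaling conventions for the Brier score vary between sources, so this is one common choice rather than the definitive form:

```python
import math

def all_scores(p, outcome):
    """Log, Brier, and spherical scores for a binary forecast p = P(event=1)."""
    q = p if outcome == 1 else 1 - p                 # probability given to what happened
    log_s = math.log(q)                              # surprisal (negated)
    brier = -((p - outcome) ** 2 + ((1 - p) - (1 - outcome)) ** 2)   # negated L2 distance
    sphere = q / math.sqrt(p ** 2 + (1 - p) ** 2)    # cosine with the outcome vector
    return log_s, brier, sphere

print(all_scores(0.8, 1))   # higher is better for all three
```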
29,567
Justification for feature selection by removing predictors with near zero variance
Based on my experience, I often remove near-zero variance predictors (or predictors which have only one value), since they are considered to have little predictive power. In some cases such predictors can also cause numerical problems and make a model crash. This can occur either due to division by zero (if standardization is performed on the data) or due to numerical precision issues. The paper (http://www.jstatsoft.org/v28/i05/paper) provides some reasoning, though without rigorously proving it, on pages 3 and 4. An example dealing with near-zero variance predictors that I found useful is: https://tgmstat.wordpress.com/2014/03/06/near-zero-variance-predictors/ Hope this helps.
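The division-by-zero failure mode is easy to reproduce; a plain-Python sketch (toy numbers):

```python
# A zero-variance column makes z-scoring (standardization) divide by zero.
col = [5.0, 5.0, 5.0, 5.0]
mean = sum(col) / len(col)
sd = (sum((x - mean) ** 2 for x in col) / len(col)) ** 0.5
print(sd)   # 0.0
try:
    z = [(x - mean) / sd for x in col]
except ZeroDivisionError as err:
    print("standardization fails:", err)
```

A near-zero variance column survives this particular check on the full data, but can hit the same error on a resampled subset where it happens to become constant.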
29,568
Justification for feature selection by removing predictors with near zero variance
As I understand it, a zero-variance variable is one whose values are all the same constant, and a near-zero variance (NZV) variable is one where almost all values are constant and only a few differ from that constant. I cannot give a definitive answer, but I think that there is a reasonable logic to the recommendation of removing such variables. (Although I can appreciate the logic, I think there might be a better solution, which I give at the end of my answer.) Let us take for granted that we agree that zero-variance variables are bad, whether or not an algorithm automatically drops them. The question is whether NZV variables are also so bad that they ought to be dropped. Let's leave aside for the moment the nontrivial question of how near zero the variance must be before dropping is warranted--I will return to this issue shortly. Suppose we leave an NZV variable in the analysis, and suppose its variance is very near zero. (Again, let's hold off on quantifying how much "very near zero" actually means--just stay with me for the concept.) Many typical machine learning workflows involve splitting an original dataset into several subsets. (Machine learning workflows are relevant here because that is Max Kuhn's context in the book you cited.) For example, we might start with a dataset of 10000 lines. Then we split it into a 70% training set (7000 lines) and a 30% test set (3000 lines). Then, to compare and select among multiple models that are being trained, we might employ 10-fold cross-validation on the training set, which involves splitting the training set such that there are ten overlapping subsets of 6300 lines each. With all these splits, it is quite plausible that a variable that has near-zero variance ends up having zero variance in one or more of the 6300-line cross-validation samples. That would be a problem. Some algorithms might crash on such cross-validation subsets. 
Other algorithms might automatically drop the problematic variable, but that would also be a problem, because then the ten cross-validation folds would no longer have identical sets of variables, so the models created would not be comparable. Because of these potential problems, it makes sense to identify in advance which variables are so near zero variance that they could end up with zero variance under various data splits. That said, the question of "how near zero is too near" is challenging, as it varies widely across datasets. As several comments have indicated, systematically removing NZV variables based on any criterion risks eliminating some variables that might have a strong relationship with the target outcome. I would suggest instead running the analysis anyway, even if there are some NZV variables, and then dealing with any problems as they come up. If the scenario I raised above never comes up (that is, no subset of the data ends up with zero variance), then there is no problem. However, if that situation does come up, then one possible solution would be to stratify the cross-validation folds by the NZV variable to ensure that every fold has some of the few minority cases. In short, whereas NZV variables are a legitimate concern, there might be better ways of dealing with the problem that do not risk eliminating these potentially valuable predictors.
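A small simulation shows how easily a random fold can end up with zero variance. The numbers are made up (a binary flag with 4 positives among 1000 rows; folds of 100 rows), not taken from any particular dataset:

```python
import random

random.seed(0)
# A near-zero-variance flag: 4 positive rows among 1000.
y = [1] * 4 + [0] * 996

zero_var_folds = 0
n_splits = 1000
for _ in range(n_splits):
    random.shuffle(y)          # a fresh random split each time
    fold = y[:100]             # one 100-row fold of a 10-fold-style split
    if len(set(fold)) == 1:    # the fold contains only the constant value
        zero_var_folds += 1

# Analytically: (900/1000)(899/999)(898/998)(897/997), about 66% of folds.
print(zero_var_folds / n_splits)
```

Stratifying the folds by the flag, as suggested above, would force each fold to receive some of the 4 minority rows and drive this fraction to zero.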
29,569
Are data handling errors already 'priced in' to statistical analysis?
I second the suggestion of @Aksakal: If measurement error is seen by the analyst as potentially important, it can and should be modeled explicitly as part of the data-generating process. I see several considerations that argue against the introduction of a generic correction factor based on, e.g., the age of the data set. First, age may be a very poor proxy for the degree of data deterioration. The technology of duplication, compression, and conservation, and the degree of effort and care that went into verifying correct transcription, are apparently the important factors. Some ancient texts (e.g., The Bible) have been conserved for centuries with apparently zero degradation. Your VHS example, while legitimate, is actually unusual, in that each duplication event always introduces error and there are no easy ways to check for and correct transcription errors -- if one uses cheap, widely available technologies for duplication and storage. I expect that one could lower the degree of introduced errors substantially through investments in more expensive systems. This last point is more general: data conservation and propagation are economic activities. The quality of transmission depends greatly on the resources deployed. These choices will in turn depend on the perceived importance of the data to whoever is doing the duplicating and transmitting. Economic considerations apply to the analyst, as well. There are always more factors you can take into account when doing your analysis. Under what conditions will data transcription errors be substantial enough, and important enough, that they are worth taking into account? My hunch is: such conditions are not common. Moreover, if potential data degradation is seen as important enough to account for in your analysis, then it is probably important enough to make the effort to model the process explicitly, rather than insert a generic "correction" step. 
Finally, there is no need to develop such a generic correction factor de novo. There exists already a substantial body of statistical theory and practice for analyzing data sets for which measurement error is seen as important. In sum: it's an interesting thought. But I don't think it should spur any changes in analytic practice.
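As a toy illustration of why explicit modeling beats a generic correction factor: classical (random, additive) measurement error in a predictor attenuates a regression slope by the reliability ratio $\sigma_x^2/(\sigma_x^2+\sigma_e^2)$ - a structured effect that no age-based fudge factor would capture. A Python sketch with simulated data (all numbers invented for illustration):

```python
import random

random.seed(1)
n = 100_000
x = [random.gauss(0, 1) for _ in range(n)]
y = [2.0 * xi + random.gauss(0, 1) for xi in x]     # true slope is 2
x_noisy = [xi + random.gauss(0, 1) for xi in x]     # degraded copy: error variance 1

def ols_slope(xs, ys):
    """Simple-regression slope: cov(x, y) / var(x)."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

print(round(ols_slope(x, y), 2))        # close to 2.0
print(round(ols_slope(x_noisy, y), 2))  # close to 1.0: attenuated by 1/(1+1)
```

A model that represents the error process directly recovers the attenuation factor; a generic downward or upward "age correction" applied to the estimate could not.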
29,570
Checking the proportional odds assumption holds in an ordinal logistic regression using polr function
The dependent variable has 8 ordered levels, so in the graph to test the proportional odds assumption you should see 8 different symbols for every independent variable. You see only 2 symbols for every independent variable probably because you have chosen too narrow an interval for the values of the x-axis. If my conjecture is correct, you just need to use a wider interval for the values of the x-axis. Try this code:
par(mfrow=c(1,1))
plot(t, which=1:8, pch=1:8, xlab='logit', main=' ', xlim=range(s[,3:9]))
29,571
Checking the proportional odds assumption holds in an ordinal logistic regression using polr function
So, I found this via googling around and I think an answer still might be useful for that reason. I think the mistake is in

sf <- function(y) {
  c('VC>=1' = qlogis(mean(FG1_val_cat >= 1)),
    'VC>=2' = qlogis(mean(FG1_val_cat >= 2)),
    'VC>=3' = qlogis(mean(FG1_val_cat >= 3)),
    'VC>=4' = qlogis(mean(FG1_val_cat >= 4)),
    'VC>=5' = qlogis(mean(FG1_val_cat >= 5)),
    'VC>=6' = qlogis(mean(FG1_val_cat >= 6)),
    'VC>=7' = qlogis(mean(FG1_val_cat >= 7)),
    'VC>=8' = qlogis(mean(FG1_val_cat >= 8)))
}

where you use FG1_val_cat rather than y. Using the example from Harrell's Regression Modeling Strategies:

library(Hmisc)
getHdata(support)
support <- support[complete.cases(support[, c("sfdm2", "adlsc", "sex", "age", "meanbp")]), ]
sfdm <- as.integer(support$sfdm2) - 1

sf1 <- function(y) {
  c(' Y ≥ 1 ' = qlogis(mean(sfdm >= 1)),
    ' Y ≥ 2 ' = qlogis(mean(sfdm >= 2)),
    ' Y ≥ 3 ' = qlogis(mean(sfdm >= 3)))
}
sf2 <- function(y) {
  c(' Y ≥ 1 ' = qlogis(mean(y >= 1)),
    ' Y ≥ 2 ' = qlogis(mean(y >= 2)),
    ' Y ≥ 3 ' = qlogis(mean(y >= 3)))
}

s1 <- summary(sfdm ~ adlsc + sex + age + meanbp, fun = sf1, data = support)
s2 <- summary(sfdm ~ adlsc + sex + age + meanbp, fun = sf2, data = support)

plot(s1, which = 1:3, pch = 1:3, xlab = 'logit', main = ' ', width.factor = 1.4, cex.lab = 0.75)
plot(s2, which = 1:3, pch = 1:3, xlab = 'logit', main = ' ', width.factor = 1.4, cex.lab = 0.75)
29,572
Testing for statistical differences of quantile regression line slopes
I would approach this similar to how I would approach it in "regular" linear regression (OLS): model the two variables and their interaction, and do inference on the interaction term. The interpretation of the interaction term is that it is the difference in slopes between the two levels of the group variable, and this is the same whether the approach is OLS or quantile regression. (The exact interpretation changes, since quantile regression models conditional quantiles instead of conditional means, but the idea of a line with a slope is the same.) $$ \mathbb E[y\vert x_1, x_2] = \beta_0 + \beta_1x_1 + \beta_2x_2 + \beta_3x_1x_2 $$ Instead of fitting this model with OLS, fit it with quantile regression. Then you get a point estimate for $\beta_3$, the difference in slopes between the two groups in $x_1$. Then you get into how to test for significance or create a confidence interval. I like the idea of testing via confidence interval and examining if $0$ is in the confidence interval. The quantreg package in R has a number of methods for calculating confidence intervals. I would do it with bootstrap. If you truly need a p-value, perhaps a permutation test of $H_0: \beta_3 = 0$ would work for you.
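As an illustration of this design (not the quantreg implementation — this is a rough pure-NumPy median-regression fit via iteratively reweighted least squares, on made-up data), here is how the interaction column and a percentile bootstrap for $\beta_3$ fit together:

```python
import numpy as np

def lad_fit(X, y, iters=50, eps=1e-6):
    """Median (tau = 0.5) regression via iteratively reweighted least
    squares -- a crude stand-in for a proper quantile-regression solver."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS starting values
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)
        XtW = X.T * w
        beta = np.linalg.solve(XtW @ X, XtW @ y)
    return beta

rng = np.random.default_rng(7)
n = 1_000
x1 = rng.uniform(0, 1, n)        # continuous predictor
x2 = rng.integers(0, 2, n)       # group indicator
# true slope difference between the groups is 2.0
y = 1.0 + 1.0 * x1 + 0.5 * x2 + 2.0 * x1 * x2 + rng.laplace(0.0, 0.3, n)

# intercept, x1, x2, interaction -- beta_hat[3] estimates the slope difference
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
beta_hat = lad_fit(X, y)

# percentile bootstrap CI for the interaction term beta_3
boot = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    boot.append(lad_fit(X[idx], y[idx])[3])
lo, hi = np.percentile(boot, [2.5, 97.5])
```

If $0$ falls outside $(lo, hi)$, the slopes differ at that quantile; in R, quantreg's rq with the same formula does this properly.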
29,573
Testing for statistical differences of quantile regression line slopes
Couldn't you just run a pooled quantile regression y = beta*x + delta*(x*groupDummy)? Seems easier to me, and presumably your favorite stats package will give you a confidence interval for delta. But I may be missing something.
29,574
Tweedie p parameter Interpretation
The Generalized Linear Models with Examples in R book by Peter Dunn and Gordon Smyth contains an illuminating discussion of Tweedie distributions. If I may be so blunt, and summarize your excellent question: What is the relation of $p$ and the underlying Poisson-Gamma model for a Tweedie distribution with $1 < p < 2$? As you already note in your question, the Tweedie distribution with $1 < p < 2$ can be understood as a Poisson-Gamma model. To make it more concrete what this means, let's assume that $$ N \sim \text{Pois}(\lambda^{*}) $$ and $$ z_i \sim \text{Gam}(\mu^{*}, \phi^{*}); $$ the observed $y$ is $$ y = \sum_{i = 1}^{N}{z_i}. $$ Dunn & Smyth give an example for this model, where $N$ is the number of insurance claims and $z_i$ is the average cost for each claim. In that case the model would describe the total insurance payout. The relation of $p$ to the parameters of the Poisson and Gamma distributions is \begin{equation} \begin{aligned} \lambda^{*} &= \frac{\mu^{2-p}}{\phi (2-p)} \\ \mu^{*} &= (2 - p)\phi\mu^{p - 1} \\ \phi^{*} &= (2 - p)(p - 1) \phi^2\mu^{2(p - 1)}, \end{aligned} \end{equation} where $\mu$ and $\phi$ are the mean and dispersion parameters from the generalized linear model definition.
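A quick simulation sketch (hypothetical parameter values; reading $\phi^{*}$ above as the variance of each gamma term, which makes the moments work out) confirms that the compound Poisson-Gamma construction reproduces the Tweedie mean $\mu$ and variance $\phi\mu^p$:

```python
import numpy as np

mu, phi, p = 2.0, 1.0, 1.5   # Tweedie mean, dispersion, power (1 < p < 2); made-up values

lam   = mu**(2 - p) / (phi * (2 - p))                    # Poisson rate  lambda*
mu_g  = (2 - p) * phi * mu**(p - 1)                      # gamma mean    mu*
var_g = (2 - p) * (p - 1) * phi**2 * mu**(2 * (p - 1))   # gamma variance (phi* read as Var(z))

# convert to the standard shape/scale gamma parameterization
shape, scale = mu_g**2 / var_g, var_g / mu_g

rng = np.random.default_rng(1)
n = 200_000
N = rng.poisson(lam, n)                       # number of "claims" per observation
# a sum of N iid Gamma(shape, scale) variables is Gamma(N * shape, scale)
y = np.where(N > 0, rng.gamma(np.maximum(N, 1) * shape, scale), 0.0)

sample_mean, sample_var = y.mean(), y.var()
# Tweedie theory: E[Y] = mu and Var[Y] = phi * mu**p
```

Note the point mass at zero (observations with $N = 0$), which is exactly why the $1 < p < 2$ Tweedie is popular for total-payout data.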
29,575
Classification algorithms that return confidence?
Three questions: "How do we define confidence in machine learning and how to generate it (if not generated automatically by scikit-learn)?" Here's a great summary on different types of confidence measures in machine learning. The specific metric used depends on which algorithm/model you generate. "How can I classify new instances but then choose only those with the highest confidence?" and "What should I change in this approach if I had more than 2 potential classes?" Here's a quick script that you can play around with that expands on what you started with, so you can see how to handle an arbitrary number of classes and find the likely classes for each predicted example. I like numpy and pandas (which you're likely using if you're using sklearn).

from sklearn import neighbors
import pandas as pd
import numpy as np

number_of_classes = 3   # number of possible classes
number_of_features = 2  # number of features for each example
train_size = 20         # number of training examples
predict_size = 5        # number of examples to predict

# Generate a random 2-variable training set with random classes assigned
X = np.random.randint(100, size=(train_size, number_of_features))
y = np.random.randint(number_of_classes, size=train_size)

# initialize and train a NearestNeighbors classifier
knn = neighbors.KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)

# values to predict classes for
predict = np.random.randint(100, size=(predict_size, number_of_features))
print("generated examples to predict:\n", predict, "\n")

# predict class probabilities for each class for each value and convert to DataFrame
probs = pd.DataFrame(knn.predict_proba(predict))
print("all probabilities:\n", probs, "\n")

for c in range(number_of_classes):
    likely = probs[probs[c] > 0.5]
    print("class" + str(c) + " probability > 0.5:\n", likely)
    print("indexes of likely class" + str(c) + ":", likely.index.tolist(), "\n")
29,576
Classification algorithms that return confidence?
If you want a confidence measure for a classification result, there are two ways. The first is to use a classifier that outputs a probabilistic score, like logistic regression; the second is to use calibration, e.g. for an SVM or a CART tree. You can find related modules in scikit-learn.
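For intuition, here is a hand-rolled sketch of Platt scaling (a logistic fit on raw scores, which is one of the ideas behind scikit-learn's calibration module) on made-up SVM-style scores; a real analysis should use the library, this is just to show what calibration does:

```python
import numpy as np

def platt_scale(scores, labels, lr=0.1, steps=2000):
    """Fit p(y=1|s) = sigmoid(a*s + b) by gradient descent on log-loss,
    i.e. map raw decision scores to calibrated probabilities."""
    a, b = 1.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        grad = p - labels                      # derivative of log-loss wrt the logit
        a -= lr * np.mean(grad * scores)
        b -= lr * np.mean(grad)
    return a, b

rng = np.random.default_rng(3)
# toy raw decision scores (e.g. an SVM margin): positives score higher on average
pos = rng.normal(1.0, 1.0, 500)
neg = rng.normal(-1.0, 1.0, 500)
scores = np.concatenate([pos, neg])
labels = np.concatenate([np.ones(500), np.zeros(500)])

a, b = platt_scale(scores, labels)

def calibrated(s):
    """Calibrated probability of the positive class for a raw score s."""
    return 1.0 / (1.0 + np.exp(-(a * s + b)))
```

After fitting, thresholding calibrated(s) at, say, 0.8 gives the "only keep confident predictions" behaviour from the question.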
29,577
Understanding Singular Value Decomposition in the context of LSI
Matrix factorization using SVD decomposes the input matrix into three parts: The left singular vectors $U$. The first column of this matrix specifies on which axis the rows of the input matrix vary the most. In your case, the first column tells you which words vary together the most. The singular values $D$. These are scalings, relative to each other. If the first value of $D$ is twice as big as the second, it means that the first singular vector (in $U$ and $V^T$) explains twice as much variation as the second singular vector. The right singular vectors $V^T$. The first row of this matrix specifies on which axis the columns of the input matrix vary the most. In your case, the first row tells you which documents vary together the most. When words or documents vary together it indicates that they are similar. For example, if the word doctor occurs more often in a document, the words nurse and hospital also occur more often. This is shown by the first scaled left singular vector, the first column of $WordSim$. You can validate this result by looking at the input data. Notice that when nurse does occur, hospital also occurs, and when it does not occur, hospital also does not occur.
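A tiny worked example (made-up term-document counts) shows this concretely: the medically themed words load together on the first singular vector while the unrelated word does not.

```python
import numpy as np

# toy term-document counts: rows = words, columns = documents
words = ["doctor", "nurse", "hospital", "car"]
X = np.array([[2, 1, 0, 0],
              [1, 2, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 2, 1]], dtype=float)

U, D, Vt = np.linalg.svd(X, full_matrices=False)

word_sim = U * D        # scaled left singular vectors (the "WordSim" matrix)
doc_sim = Vt.T * D      # scaled right singular vectors, same idea for documents

# the first axis is the direction of greatest co-variation among the rows:
# doctor/nurse/hospital load on it together, "car" does not
first_axis = U[:, 0]
```

Because doctor, nurse and hospital only ever appear in the same documents, they share one axis of variation, and "car" (which never co-occurs with them) contributes essentially nothing to it.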
29,578
Relationship between paired t test and simple mixed model
I think the problem is the way the paired t test is computed. Try this:

t.test(all_by_sub$var[all_by_sub$condition==1], all_by_sub$var[all_by_sub$condition==0], paired=TRUE)

This gives:

data: all_by_sub$var[all_by_sub$condition == 1] and all_by_sub$var[all_by_sub$condition == 0]
t = 2.0529, df = 37, p-value = 0.0472
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 0.0009400428 0.1435297005
sample estimates:
mean of the differences
             0.07223487

Obviously, t.test computes x minus y in the paired t test. That's why the signs of the estimate and the t value were reversed. Beyond that, both the estimate (0.072 vs. 0.082) and the t value (2.05 vs. 2.19) of the mixed model are very close to the results of the t test:

            Estimate Std. Error      df t value Pr(>|t|)
(Intercept)    0.118      0.028 106.772   4.159    0.000
condition      0.082      0.037 462.992   2.192    0.029
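The sign-flip behaviour is easy to verify from the definition of the paired t statistic; here is a small sketch with simulated data (numbers hypothetical, shaped loosely like the 38-subject, two-condition setup):

```python
import numpy as np

def paired_t(x, y):
    """Paired t statistic: a one-sample t test on the differences x - y."""
    d = np.asarray(x) - np.asarray(y)
    n = d.size
    return d.mean() / (d.std(ddof=1) / np.sqrt(n))

rng = np.random.default_rng(11)
subj = rng.normal(0.0, 1.0, 38)                      # per-subject random intercepts
cond0 = subj + rng.normal(0.0, 0.2, 38)              # condition 0
cond1 = subj + 0.3 + rng.normal(0.0, 0.2, 38)        # condition 1, shifted upward

t_10 = paired_t(cond1, cond0)   # like t.test(cond1, cond0, paired=TRUE)
t_01 = paired_t(cond0, cond1)   # swapping the argument order only flips the sign
```

So which condition is "x" and which is "y" determines the sign of both the estimate and the t value, exactly as in the R output above.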
29,579
Which distribution to use to model web page read time?
It looks like a Weibull distribution is appropriate, as seen here
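If you want to sanity-check a Weibull fit on read-time data, one simple sketch (simulated data with made-up parameters) uses the classical Weibull-plot linearization, $\ln(-\ln(1-F(t))) = k\ln t - k\ln\lambda$: if the data are Weibull, this plot is a straight line whose slope is the shape parameter.

```python
import numpy as np

rng = np.random.default_rng(5)
# hypothetical "time on page" data; here simulated from a known Weibull
true_shape, true_scale = 1.5, 30.0
t = true_scale * rng.weibull(true_shape, 5_000)

t_sorted = np.sort(t)
n = t_sorted.size
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median-rank plotting positions

# regression on the linearized Weibull plot: slope = shape k, intercept = -k*ln(scale)
slope, intercept = np.polyfit(np.log(t_sorted), np.log(-np.log(1 - F)), 1)

est_shape = slope
est_scale = np.exp(-intercept / slope)
```

For real read times you would also inspect the plot itself: strong curvature would argue against the Weibull (e.g. in favour of a log-normal).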
29,580
Estimating the size of an intersection of multiple sets by using a sample of one set
If your set $A_0$ has repeated elements (i.e., it is actually a multiset), the size of the intersection will be overestimated by your procedure, because your scaling factor uses the number of elements sampled and not the number of unique "types" sampled. You can correct the estimate by computing the factor as the ratio of the number of unique elements in your random sample to the number of unique elements in the full set $A_0$.
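Here is a small sketch of that corrected estimator (toy data with a known answer; the names are made up):

```python
import random

def estimate_intersection(a0, others, sample_size, seed=0):
    """Estimate |A0 n A1 n ...| from a sample of A0.
    a0 may be a multiset (a list with repeats); the scaling factor uses
    counts of UNIQUE elements, per the correction described above."""
    rng = random.Random(seed)
    sample = rng.sample(a0, sample_size)          # sample of the (multi)set A0
    uniq_sample = set(sample)
    hits = sum(1 for e in uniq_sample
               if all(e in s for s in others))    # sampled uniques present in every other set
    factor = len(uniq_sample) / len(set(a0))      # unique-to-unique scaling ratio
    return hits / factor

# toy example with a known answer
a0 = list(range(10_000)) * 2                      # multiset: every element appears twice
others = [set(range(5_000, 20_000)), set(range(0, 15_000))]
true = 5_000                                      # |{5000..9999}|
est = estimate_intersection(a0, others, sample_size=8_000)
```

Using len(sample) instead of len(uniq_sample) in the factor is exactly the bug that inflates the estimate when duplicates are present.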
29,581
Estimating the size of an intersection of multiple sets by using a sample of one set
As Innuo points out, my problem was because of duplicates in my sampled set $A_0$, which caused factor in my pseudocode to be too low, which in turn caused the final extrapolation of z to be too high because it was generated via the inverse of factor. Removing duplicates solved this problem, and now the algorithm generates a delta vs. sample size plot more along the lines of what I'd expect (the lines indicate the margin of error at a 95% confidence level for that sample size against the total population):
29,582
Explaining Kalman filters in state space models
I think what you say is correct, and I do not think it is messy. A way of phrasing it would be to say that the Kalman filter is an error-correction algorithm that modifies predictions in the light of the discrepancies with current observations. This correction is made in your step 4) using the gain matrix $A_t$.
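A scalar sketch makes the predict-then-correct cycle explicit (all values hypothetical); the gain $k$ below plays the role of $A_t$, weighting the discrepancy between the prediction and the new observation:

```python
import numpy as np

def kalman_1d(zs, x0=0.0, p0=1.0, q=1e-4, r=0.5):
    """Scalar Kalman filter for a (nearly) constant state:
    predict, then correct the prediction by gain * innovation."""
    x, p = x0, p0
    out = []
    for z in zs:
        # predict step (random-walk state model): uncertainty grows by q
        p = p + q
        # correct step: gain trades prediction uncertainty against noise r
        k = p / (p + r)              # the gain, the scalar analogue of A_t
        x = x + k * (z - x)          # error correction: prediction + gain * discrepancy
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(2)
true_state = 3.0
zs = true_state + rng.normal(0.0, np.sqrt(0.5), 200)   # noisy observations
est = kalman_1d(zs)
```

Each pass through the loop is one cycle of the four steps in the question: propagate the state, propagate the uncertainty, compute the gain, correct.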
29,583
Monte Carlo Integration for non-square integrable functions
You could just use other scale/dispersion measures, such as the interquartile range, which are not affected by tail asymptotics and thus by square integrability, with the added benefit that they are generally more robust anyway. Obviously one should apply them to a resampling/bootstrap of the mean estimator, not directly to the raw output from MC sampling of the function before averaging. You can also look at L-estimators in general and adapt one of them to merge these two steps into one for performance, but mentally the two distributions should not be confused, even though the estimator PDF will naturally inherit some characteristics (including, perhaps, lack of square integrability).
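As a sketch, take $f(u)=1/\sqrt{u}$ on $(0,1)$, whose integral is 2 but whose square is not integrable: bootstrap the MC mean and summarize its spread with the IQR instead of a variance-based standard error (which is meaningless here).

```python
import numpy as np

rng = np.random.default_rng(9)
n = 100_000
u = rng.uniform(0.0, 1.0, n)
f = 1.0 / np.sqrt(u)      # integral over (0,1) is 2, but f**2 is NOT integrable

mc_estimate = f.mean()    # still converges: f itself is integrable

# bootstrap the mean estimator, then use the IQR of the bootstrap
# distribution as a robust spread measure
boot_means = np.array([
    rng.choice(f, size=n, replace=True).mean() for _ in range(200)
])
q25, q75 = np.percentile(boot_means, [25, 75])
iqr = q75 - q25
```

Note the bootstrap is applied to the averaged estimator, not to the raw $f(u_i)$ values, in line with the distinction drawn above.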
29,584
How do I incorporate an innovative outlier at observation 48 in my ARIMA model?
If $$Y(t) = [\theta/\phi][A(t)+\text{IO}(t)]$$ then $$Y^\text{*}(t) = [\theta/\phi][A(t)] + [\theta/\phi][\text{IO}(t)].$$ If $$\theta = 1\ \ \text{and}\ \ \phi = [1-.5B]$$ for example ... then, since $1/(1-.5B) = 1 + .5B + .25B^2 + \cdots$, $$Y^\text{*}(t) = [1/(1-.5B)][A(t)] \\ \quad\quad\quad\quad+ \text{IO}(t) + .5\cdot \text{IO}(t-1) + .25\cdot \text{IO}(t-2) + .125\cdot \text{IO}(t-3)+\cdots\,.$$ If for example the estimate of the IO effect is $10.0$, then $$Y^{*}(t) = [1/(1-.5B)][A(t)] \\ \quad\quad\quad\quad+ 10\cdot \text{IO}(t) + 5\cdot \text{IO}(t-1) + 2.5\cdot \text{IO}(t-2) + 1.25\cdot \text{IO}(t-3)+\cdots\,,$$ where the indicator variable for $\text{IO}$ is 0 or 1. In this way you can see that the impact of the anomaly is not only instantaneous but has memory. Software like AUTOBOX (which I am familiar with) does not identify IO effects but rather AO effects, and would identify a sequence of anomalies with values 10, 5, 2.5, 1.25, ... starting at period $t$. The user, upon seeing this rare event, could restate the transfer for the AO intervention with a dynamic structure $[w(B)/d(B)]$ rather than a pure numerator structure $[w(B)]$, yielding the same result as if an IO effect had been incorporated. Anytime you incorporate memory, be it a result of a differencing operator or ARMA structure, it is a tacit admission of ignorance due to omitted causal series. This is also true of the need to incorporate deterministic intervention series such as Pulses/Level Shifts, Seasonal Pulses or Local Time Trends. These dummy variables are a needed proxy for omitted deterministic user-specified causal variables. Oftentimes all you have is the series of interest and, given the qualifiers that I have spelled out, you can forecast the future based upon the past in total ignorance of the exact nature of the data being analyzed. The only problem is you are using the rear-view mirror to predict the road ahead ... a dangerous thing indeed.
To stand up and declare that the forecasts are based solely on the past of the series, some proxy ARIMA stuff and some proxy deterministic stuff is quite silly, BUT in the absence of knowledge of the true causals it can be useful. As G.E.P. Box said, "all models are wrong, but some are useful". After the data was posted ... a reasonable model is a (1,1,0), and the AO anomalies were identified at periods 39, 41, 47, 21 and 69 (not period 48). The residuals from this model appear to be free of evident structure, AND the five AO values are an optimal representation of activity not in the history of the time series. I would think that the ACF of the OP's over-differenced model would reflect model inadequacy. Here is the model. Again there is no R code delivered, as the problem or opportunity is in the realm of model identification/revision/validation. Finally, a plot of the actual/fitted and forecasted series.
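The decaying, sign-alternating footprint described above — 10, -5, 2.5, -1.25, ... — can be sketched numerically. This is a minimal illustration (the function name is made up; it is not AUTOBOX output): the effect of a single innovative outlier of size 10 under an AR(1) with coefficient -0.5 is just size × coef^k at lag k.

```python
# Illustrative sketch (not AUTOBOX output): the sequence of apparent
# anomalies a single innovative outlier of size `size` leaves on
# Y(t), Y(t+1), ... under an AR(1) filter with coefficient `ar_coef`.
def io_footprint(size, ar_coef, horizon):
    """Psi-weights of the IO: size * ar_coef**k for k = 0, 1, 2, ..."""
    return [size * ar_coef ** k for k in range(horizon)]

print(io_footprint(10.0, -0.5, 4))  # [10.0, -5.0, 2.5, -1.25]
```

This reproduces the geometric-decay sequence the answer says an AO-based detector would flag as a run of anomalies.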
29,585
(interacting) MCMC for multimodal posterior
First of all I would recommend looking for a better method, or at least a method with a more in-depth description, since the "Distributed Markov chain Monte Carlo" from the paper you are referring to doesn't seem to be clearly stated. The advantages and disadvantages are not well explored. There is a method that showed up on arXiv quite recently called "Wormhole Hamiltonian Monte Carlo"; I would recommend checking it. Going back to the paper you reference, the remote proposal $R_{i}(\theta_{i})$ is very vaguely described. In the application part it is described as a "maximum likelihood Gaussian over the preceding t/2 samples". Maybe this means that you average the last t/2 values of the $i^{th}$ chain? A bit hard to guess with the poor description given in the reference. [UPDATE:] Interaction between several chains and the application of this idea to sample from a posterior distribution can be found in parallel MCMC methods, for example here. However, running several chains and forcing them to interact may not fit the multimodal posterior: for example, if there is a very pronounced region where most of the posterior distribution is concentrated, the interaction of the chains may even worsen things by sticking to that specific region and not exploring other, less pronounced, regions/modes. So, I would strongly recommend looking for MCMC specifically designed for multimodal problems. And if you want to create another/new method, then after you know what is available on the "market", you can create a more efficient method.
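One hedged reading of that vague description — and it is only a guess, nothing below comes from the cited paper — is to fit a maximum-likelihood Gaussian to the most recent half of a chain's history and draw the remote proposal from it:

```python
import numpy as np

# Hypothetical sketch of "maximum likelihood Gaussian over the preceding
# t/2 samples"; the function name and details are illustrative guesses,
# not taken from the cited paper.
def remote_proposal(chain, rng):
    recent = np.asarray(chain[len(chain) // 2:])  # preceding t/2 samples
    mu, sigma = recent.mean(), recent.std()       # ML Gaussian fit
    return rng.normal(mu, sigma)                  # draw the remote proposal

rng = np.random.default_rng(0)
chain = rng.normal(3.0, 1.0, size=200).tolist()   # a fake chain history
proposal = remote_proposal(chain, rng)
print(proposal)
```

Whether this matches the authors' intent is exactly the ambiguity the answer complains about.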
29,586
(interacting) MCMC for multimodal posterior
You should try MultiNest: https://arxiv.org/pdf/0809.3437.pdf https://github.com/JohannesBuchner/MultiNest It's a Bayesian inference engine that will give you parameter samples for a multimodal distribution. The GitHub link contains the MultiNest source code, which you compile and install as per the instructions. It also has a Python wrapper that's easier to use. The example codes have a prior section, which serves to constrain your parameters, and a likelihood section, which contains your likelihood. The settings file contains all your settings, and the chains folder contains the MultiNest output after fitting; it will give you samples of your parameters.
29,587
(interacting) MCMC for multimodal posterior
This appears to be a difficult and ongoing problem in computational stats. However, there are a couple of less state-of-the-art methods that should work ok. Say you have already found several distinct modes of the posterior, you are happy that these are the most important modes, and the posterior around these modes is reasonably normal. Then you can calculate the Hessian at these modes (say, using optim in R with hessian=T) and approximate the posterior as a mixture of normals (or t distributions). See pp. 318-319 in Gelman et al. (2003), "Bayesian Data Analysis", for details. Then you can use the normal/t-mixture approximation as the proposal distribution in an independence sampler to obtain samples from the full posterior. Another idea, that I haven't tried, is Annealed Importance Sampling (Radford Neal, 1998, link here).
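That recipe can be sketched end to end in Python for a toy 1-D bimodal target (the answer's R route would use optim with hessian=T; here scipy's BFGS inverse-Hessian plays the same role). Everything below — the target, starting points, and sample count — is made up for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Toy target: equal mixture of N(-3, 1) and N(3, 1), up to a constant.
def log_post(x):
    return np.logaddexp(-0.5 * (x + 3) ** 2, -0.5 * (x - 3) ** 2)

# Step 1: Laplace (local Gaussian) approximation at each known mode,
# starting slightly off-mode so BFGS builds a curvature estimate.
modes = []
for start in (-2.5, 3.5):
    res = minimize(lambda v: -log_post(v[0]), x0=[start])
    sd = np.sqrt(res.hess_inv[0, 0])  # inverse Hessian ~ local variance
    modes.append((res.x[0], sd))

# Step 2: independence Metropolis with the normal-mixture proposal.
rng = np.random.default_rng(1)

def log_q(x):  # log density of the equal-weight mixture proposal
    terms = [-0.5 * ((x - mu) / sd) ** 2 - np.log(sd) for mu, sd in modes]
    return np.logaddexp.reduce(terms)

x, draws = 0.0, []
for _ in range(5000):
    mu, sd = modes[rng.integers(len(modes))]
    y = rng.normal(mu, sd)  # propose from the mixture
    if np.log(rng.uniform()) < (log_post(y) - log_q(y)) - (log_post(x) - log_q(x)):
        x = y
    draws.append(x)

frac_right = np.mean(np.array(draws) > 0)
print(frac_right)  # both modes get visited, so this lands near 0.5
```

Because the mixture proposal already covers both modes, the independence sampler jumps between them freely — the failure mode of a plain random-walk Metropolis on this target.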
29,588
(interacting) MCMC for multimodal posterior
What about trying a new MCMC method for multimodality, the repulsive-attractive Metropolis algorithm (http://arxiv.org/abs/1601.05633)? This multimodal sampler works with a single tuning parameter, like a Metropolis algorithm, and is simple to implement.
29,589
Predicting football match winner based only on the outcome of previous matches between the two teams
What about improving your dataset by also taking into consideration some data about matches against a common opponent? Example: TeamA vs TeamC: 1-0 TeamB vs TeamC: 2-0 => "infer" the fake outcome: TeamA vs TeamB: 1-2 Furthermore, in my opinion this kind of data is better than the data that you proposed, because last year's teams are often very different teams.
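A toy sketch of that augmentation (the data structure and function name are made up for illustration): when A and B never met, pair their scores against a common opponent C to synthesize a fake A-vs-B result.

```python
# Toy sketch (illustrative names): results maps (team, opponent) to
# (goals_for, goals_against) for matches actually played.
results = {("TeamA", "TeamC"): (1, 0), ("TeamB", "TeamC"): (2, 0)}

def infer_fake_outcome(a, b, results):
    # find a common opponent c and pair A's and B's goals scored against c
    for (x, c), (goals_x, _) in results.items():
        if x != a:
            continue
        for (y, c2), (goals_y, _) in results.items():
            if y == b and c2 == c:
                return (goals_x, goals_y)
    return None  # no common opponent found

print(infer_fake_outcome("TeamA", "TeamB", results))  # (1, 2)
```

This reproduces the answer's example: TeamA vs TeamB inferred as 1-2 from their results against TeamC.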
29,590
Bayesian estimation of Dirichlet distribution parameters
To calculate the density of any conjugate prior see here. However, you don't need to evaluate the conjugate prior of the Dirichlet in order to perform Bayesian estimation of its parameters. Just average the sufficient statistics of all the samples, which are the vectors of log-probabilities of the components of your observed categorical distribution parameters. These average sufficient statistics are the expectation parameters of the maximum likelihood Dirichlet fitting the data $(\chi_i)_{i=1}^n$. To go from expectation parameters to source parameters, say $(\alpha_i)_{i=1}^n$, you need to solve using numerical methods: \begin{align} \chi_i = \psi(\alpha_i) - \psi\left(\sum_j\alpha_j\right) \qquad \forall i \end{align} where $\psi$ is the digamma function. To answer your first question, a mixture of Dirichlets is not Dirichlet because, for one thing, it can be multimodal.
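That numerical step can be sketched with SciPy. The true alpha and sample size below are made up for illustration; the point is that averaging log-samples gives the expectation parameters, and a root-finder inverts the digamma system:

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import psi  # the digamma function

# Illustrative sketch: recover Dirichlet source parameters alpha from the
# averaged sufficient statistics chi_i = mean of log(x_i) over the samples.
rng = np.random.default_rng(0)
true_alpha = np.array([2.0, 3.0, 5.0])
samples = rng.dirichlet(true_alpha, size=200_000)
chi = np.log(samples).mean(axis=0)  # expectation parameters

def moment_equations(alpha):
    # chi_i = psi(alpha_i) - psi(sum_j alpha_j) for every i
    return psi(alpha) - psi(alpha.sum()) - chi

alpha_hat = fsolve(moment_equations, x0=np.ones_like(chi))
print(alpha_hat)  # close to [2, 3, 5]
```

With enough samples the recovered alpha_hat matches the generating parameters up to Monte Carlo error.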
29,591
SVM regression with longitudinal data
This is an interesting question and I did some quick research. The OP asked about regression for continuous data. But the paper cited by @Vikram only works for classification. Lu, Z., Kaye, J., & Leen, T. K. (2009). Hierarchical Fisher Kernels for Longitudinal Data. In Advances in Neural Information Processing Systems. A related paper for regression I found is the following. Technical details can be found in Section 2.3. Seok, K. H., Shim, J., Cho, D., Noh, G. J., & Hwang, C. (2011). Semiparametric mixed-effect least squares support vector machine for analyzing pharmacokinetic and pharmacodynamic data. Neurocomputing, 74(17), 3412-3419. No public software is found but the authors claimed ease of use at the end of the paper. The main advantage of the proposed LS-SVM ... is that regression estimators can be easily computed by softwares solving a simple linear equation system. This makes it easier to apply the proposed approach to the analysis of repeated measurement data in practice. To elaborate a bit more, there are two approaches for regression analysis using SVM (support vector machine): support vector regression (SVR) [Drucker, Harris; Burges, Christopher J. C.; Kaufman, Linda; Smola, Alexander J.; and Vapnik, Vladimir N. (1997); "Support Vector Regression Machines", in Advances in Neural Information Processing Systems 9, NIPS 1996, 155–161] least squares support vector machine (LS-SVM) [Suykens, Johan A. K.; Vandewalle, Joos P. L.; Least squares support vector machine classifiers, Neural Processing Letters, vol. 9, no. 3, Jun. 1999, pp. 293–300.] The aforementioned Seok et al. (2011) adopted the LS-SVM approach.
29,592
SVM regression with longitudinal data
Yes, this is possible. Except that for longitudinal data, using a Fisher kernel works better than RBF or linear ones. A similar setting to yours is given in this NIPS paper: http://research.microsoft.com/pubs/147234/NIPS08.pdf
29,593
Confidence and prediction intervals of linear regression model
I understand some of your questions but others are not clear. Let me answer and state some facts, and maybe that will clear up all of your confusion. The fit you have is remarkably good, so the confidence intervals should be very tight. There are two types of confidence regions that can be considered. The simultaneous region is intended to cover the entire true regression function with the given confidence level. The other type, which is what you are looking at, consists of the confidence intervals for the fitted regression points. They are only intended to cover the fitted value of y at the given value(s) of the covariate(s). They are not intended to cover y values at other values of the covariates. In fact, if the intervals are very tight, as they should be in your case, they will not cover many (if any) of the data points as you get away from the fixed value(s) of the covariate(s). For that type of coverage you need the simultaneous confidence curves (upper and lower bound curves). Now it is true that if you predict a y at a given value of a covariate, and you want the same confidence level for the prediction interval as you used for the confidence interval for y at that value of the covariate, the interval will be wider. The reason is that the model tells you there will be added variability because a new y will have its own independent error that must be accounted for in the interval. That error component does not enter into the estimates based on the data used in the fit.
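That widening can be seen directly in the textbook formulas for simple linear regression. A hand-rolled sketch on simulated data (made up for illustration, no claim about the OP's model):

```python
import numpy as np

# The standard error for predicting a NEW observation adds the new error's
# own variance (the "+1" below) to the standard error of the fitted mean,
# so the prediction interval is always wider than the confidence interval.
rng = np.random.default_rng(42)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 3.0 * x + rng.normal(0.0, 1.0, size=x.size)

n = x.size
sxx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx  # OLS slope
b0 = y.mean() - b1 * x.mean()                       # OLS intercept
s2 = np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2)     # residual variance

x0 = 5.0  # covariate value at which we build both intervals
leverage = 1.0 / n + (x0 - x.mean()) ** 2 / sxx
se_mean = np.sqrt(s2 * leverage)          # SE of the fitted mean at x0
se_pred = np.sqrt(s2 * (1.0 + leverage))  # adds the new y's own error
print(se_mean < se_pred)  # True
```

Multiplying either standard error by the same t critical value turns it into the corresponding interval half-width, which makes the "wider" claim concrete.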
29,594
Gigantic kurtosis?
Give heavy-tail Lambert W x F or skewed Lambert W x F distributions a try (disclaimer: I am the author). In R they are implemented in the LambertW package. Related posts: What's the distribution of these data? How to transform data to normality? One advantage over a Cauchy or Student-t distribution with fixed degrees of freedom is that the tail parameters can be estimated from the data -- so you can let the data decide what moments exist. Moreover, the Lambert W x F framework allows you to transform your data and remove skewness / heavy tails. It is important to note, though, that OLS does not require Normality of $y$ or $X$. However, for your EDA it might be worthwhile. Here is an example of Lambert W x Gaussian estimates applied to equity fund returns. library(fEcofin) ret <- ts(equityFunds[, -1] * 100) plot(ret) The summary metrics of the returns are similar (not as extreme) to those in the OP's post. data_metrics <- function(x) { c(mean = mean(x), sd = sd(x), min = min(x), max = max(x), skewness = skewness(x), kurtosis = kurtosis(x)) } ret.metrics <- t(apply(ret, 2, data_metrics)) ret.metrics ## mean sd min max skewness kurtosis ## EASTEU 0.1300 1.538 -18.42 12.38 -1.855 28.95 ## LATAM 0.1206 1.468 -6.06 5.66 -0.434 4.21 ## CHINA 0.0864 0.911 -4.71 4.27 -0.322 5.42 ## INDIA 0.1515 1.502 -12.72 14.05 -0.505 15.22 ## ENERGY 0.0997 1.187 -5.00 5.02 -0.271 4.48 ## MINING 0.1315 1.394 -7.72 5.69 -0.692 5.64 ## GOLD 0.1098 1.855 -10.14 6.99 -0.350 5.11 ## WATER 0.0628 0.748 -5.07 3.72 -0.405 6.08 Most series show clearly non-Normal characteristics (strong skewness and/or large kurtosis). Let's Gaussianize each series using a heavy-tailed Lambert W x Gaussian distribution (= Tukey's h) with a method of moments estimator (IGMM). 
library(LambertW) ret.gauss <- Gaussianize(ret, type = "h", method = "IGMM") colnames(ret.gauss) <- gsub(".X", "", colnames(ret.gauss)) plot(ts(ret.gauss)) The time series plots show far fewer extreme observations and also more stable variation over time (not constant though). Computing the metrics again on the Gaussianized time series yields: ret.gauss.metrics <- t(apply(ret.gauss, 2, data_metrics)) ret.gauss.metrics ## mean sd min max skewness kurtosis ## EASTEU 0.1663 0.962 -3.50 3.46 -0.193 3 ## LATAM 0.1371 1.279 -3.91 3.93 -0.253 3 ## CHINA 0.0933 0.734 -2.32 2.36 -0.102 3 ## INDIA 0.1819 1.002 -3.35 3.78 -0.193 3 ## ENERGY 0.1088 1.006 -3.03 3.18 -0.144 3 ## MINING 0.1610 1.109 -3.55 3.34 -0.298 3 ## GOLD 0.1241 1.537 -5.15 4.48 -0.123 3 ## WATER 0.0704 0.607 -2.17 2.02 -0.157 3 The IGMM algorithm achieved exactly what it was set forth to do: transform the data to have kurtosis equal to $3$. Interestingly, all time series now have negative skewness, which is in line with most financial time series literature. It is important to point out here that Gaussianize() operates only marginally, not jointly (analogously to scale()). Simple bivariate regression To consider the effect of Gaussianization on OLS, consider predicting "EASTEU" return from "INDIA" returns and vice versa. Even though we are looking at same-day returns between $r_{EASTEU, t}$ on $r_{INDIA,t}$ (no lagged variables), it still provides value for a stock market prediction given the 6h+ time difference between India and Europe. layout(matrix(1:2, ncol = 2, byrow = TRUE)) plot(ret[, "INDIA"], ret[, "EASTEU"]) grid() plot(ret.gauss[, "INDIA"], ret.gauss[, "EASTEU"]) grid() The left scatterplot of the original series shows that the strong outliers did not occur on the same days, but at different times in India and Europe; other than that it is not clear if the data cloud in the center supports no correlation or negative/positive dependency. 
Since outliers strongly affect variance and correlation estimates, it is worthwhile to look at the dependency with heavy tails removed (right scatterplot). Here the patterns are much more clear and the positive relation between India and Eastern Europe market becomes apparent. # try these models on your own mod <- lm(EASTEU ~ INDIA * CHINA, data = ret) mod.robust <- rlm(EASTEU ~ INDIA, data = ret) mod.gauss <- lm(EASTEU ~ INDIA, data = ret.gauss) summary(mod) summary(mod.robust) summary(mod.gauss) Granger causality A Granger causality test based on a $VAR(5)$ model (I use $p = 5$ to capture the week effect of daily trades) for "EASTEU" and "INDIA" rejects "no Granger causality" for either direction. library(vars) mod.vars <- vars::VAR(ret[, c("EASTEU", "INDIA")], p = 5) causality(mod.vars, "INDIA")$Granger ## ## Granger causality H0: INDIA do not Granger-cause EASTEU ## ## data: VAR object mod.vars ## F-Test = 3, df1 = 5, df2 = 3000, p-value = 0.02 causality(mod.vars, "EASTEU")$Granger ## ## Granger causality H0: EASTEU do not Granger-cause INDIA ## ## data: VAR object mod.vars ## F-Test = 4, df1 = 5, df2 = 3000, p-value = 0.003 However, for the Gaussianized data the answer is different! Here the test can not reject H0 that "INDIA does not Granger-cause EASTEU", but still rejects that "EASTEU does not Granger-cause INDIA". So the Gaussianized data supports the hypothesis that European markets drive markets in India the following day. 
mod.vars.gauss <- vars::VAR(ret.gauss[, c("EASTEU", "INDIA")], p = 5) causality(mod.vars.gauss, "INDIA")$Granger ## ## Granger causality H0: INDIA do not Granger-cause EASTEU ## ## data: VAR object mod.vars.gauss ## F-Test = 0.8, df1 = 5, df2 = 3000, p-value = 0.5 causality(mod.vars.gauss, "EASTEU")$Granger ## ## Granger causality H0: EASTEU do not Granger-cause INDIA ## ## data: VAR object mod.vars.gauss ## F-Test = 2, df1 = 5, df2 = 3000, p-value = 0.06 Note that it is not clear to me which one is the right answer (if any), but it's an interesting observation to make. Needless to say, this entire causality testing is contingent on the $VAR(5)$ being the correct model -- which it most likely is not; but I think it serves well for illustration.
Gigantic kurtosis?
What is needed is a probability distribution model that better fits the data. Sometimes there are no defined moments. One such distribution is the Cauchy distribution. Although the Cauchy distribution has a well-defined median (its location parameter), it has no mean and no higher moments. What this means is that when one collects data, actual measurements crop up that look like outliers but are genuine measurements. For example, if one has two normal distributions F and G, each with mean zero, the ratio F/G has no first moment; it is a Cauchy distribution. So we happily collect data, and it looks fine, like 5, 3, 9, 6, 2, 4, and we calculate a mean that looks stable; then all of a sudden we get a value of -32739876 and our mean becomes meaningless. Note, however, that the median is still 4, stable. Such is the way with long-tailed distributions. Find a more appropriate long-tailed distribution for your data, use the statistical measures that distribution implies, and your problem will go away.

Edit: You might try Student's t-distribution with 2 degrees of freedom. That distribution has longer tails than the normal distribution; its skewness and kurtosis do not exist, but its mean and variance are defined, i.e., stable.

Next edit: One possibility might be to use Theil regression. Anyway, it's a thought, because Theil will work well no matter what the tails look like. Theil can be done as MLR (multiple linear regression using median slopes). I have never done Theil for histogram data fitting, but I have used Theil with a jackknife variant to establish confidence intervals. The advantage of doing that is that Theil does not care what the distribution shapes are, and the answers are generally less biased than with OLS, because OLS is typically used even when there is problematic variance on the independent axis. Not that Theil is totally unbiased; it is a median slope. The answers have a different meaning as well: Theil finds the best agreement between the dependent and independent variables, whereas OLS finds the least-error predictor of the dependent variable, and the latter is not always the question we want answered.
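To see the instability numerically, here is a small base-R sketch (purely illustrative; the seed and sample sizes are made up): the running mean of standard Cauchy draws keeps jumping around, while the running median settles near 0.

```r
# Illustrative only: running mean vs. running median of Cauchy draws.
set.seed(42)
x <- rcauchy(2000)              # standard Cauchy = ratio of two N(0,1) variables
checkpoints <- c(100, 500, 1000, 2000)
for (k in checkpoints) {
  cat(sprintf("n = %4d   mean = %10.2f   median = %6.3f\n",
              k, mean(x[1:k]), median(x[1:k])))
}
```

The law of large numbers does not apply without a first moment, so the mean column can jump by orders of magnitude between checkpoints, while the median column barely moves.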
How can I generate a plot similar to that produced by plot.bugs and plot.jags from an mcmc.list? [closed]
Since there are no answers, I will at least post what I have gotten so far. The as.bugs.array function in the R2WinBUGS package was created for this purpose. According to the documentation (?as.bugs.array):

Function converting results from Markov chain simulations, that might not be from BUGS, to bugs object. Used mainly to display results with plot.bugs.

Thus, it is possible to obtain a plot from LINE.out in your example, although it does not plot the correct variables:

plot(as.bugs.array(sims.array = as.array(LINE.out)))

It will take a little more work to determine the correct way to transform LINE.out; the LINE.samples object from example(jags.samples) may be an easier place to start.
How can I generate a plot similar to that produced by plot.bugs and plot.jags from an mcmc.list? [closed]
The following seems to work for me:

require(R2jags)
m <- jags(data = d, inits = i, pars, n.iter = 1000, n.chains = 3,
          model.file = "foo.txt", DIC = FALSE)
m <- autojags(m)
plot(m)

Here is a reproducible example:

example(jags)
plot(jagsfit)
Is there a method to estimate distribution parameters given only quantiles?
I don't know what was in the other post, but I have a response. One can look at the order statistics, which represent specific quantiles of the distribution: the $k$'th order statistic, $X_{(k)}$, is an estimate of the $100 \cdot k/n$'th quantile of the distribution. There is a well-known paper in Technometrics (1960) by Shanti Gupta that shows how to estimate the shape parameter of a gamma distribution using the order statistics. See this link: http://www.jstor.org/discover/10.2307/1266548
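As a complement to the order-statistics approach, here is a generic sketch (not Gupta's method) of least-squares quantile matching with base R's optim: given a few reported quantiles, find the parameters whose theoretical quantiles match them best. The gamma target and the "reported" quantiles below are made up for illustration.

```r
# Hypothetical setup: we are told the 25th/50th/75th percentiles of the data
# and want the gamma (shape, rate) whose quantiles best match them.
p     <- c(0.25, 0.50, 0.75)
q_obs <- qgamma(p, shape = 2, rate = 0.5)    # pretend these were reported

obj <- function(par) {                       # squared quantile mismatch
  if (any(par <= 0)) return(Inf)
  sum((qgamma(p, shape = par[1], rate = par[2]) - q_obs)^2)
}
fit <- optim(c(1, 1), obj)                   # Nelder-Mead from a rough start
fit$par                                      # should land near (2, 0.5)
```

With as many quantiles as parameters this is exact matching; with more quantiles than parameters it becomes an overdetermined least-squares fit, which is usually the more robust choice.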
What is the difference between serial correlation and having a unit root?
A simpler explanation can be this: if you have an AR(1) process $$y_t = \rho y_{t-1} + \epsilon_t,$$ where $\epsilon_t$ is white noise, then testing for autocorrelation is $H_{0;\mbox{AC}}: \rho=0$ (and you can run OLS which behaves properly under the null), while testing for the unit root is $H_{0;\mbox{UR}}: \rho=1$. Now, with the unit root, the process is non-stationary under the null, and OLS utterly fails, so you have to go into the Dickey-Fuller trickery of taking the differences and such.
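A quick base-R illustration of the difference (the simulation settings are made up): OLS on the lagged series recovers $\rho$ in the stationary case, while under a unit root the estimate sits near 1 and its sampling distribution is non-standard, which is why the Dickey-Fuller machinery is needed.

```r
# Illustration: OLS estimate of rho for a stationary AR(1) vs. a random walk.
set.seed(1)
n <- 1000
e <- rnorm(n)
y_ar <- as.numeric(filter(e, 0.5, method = "recursive"))  # AR(1), rho = 0.5
y_rw <- cumsum(e)                                          # rho = 1 (unit root)

rho_hat <- function(y) unname(coef(lm(y[-1] ~ 0 + y[-length(y)])))
rho_hat(y_ar)   # close to 0.5
rho_hat(y_rw)   # close to 1, but standard OLS inference does not apply here
```

The point estimate for the random walk is fine as a number; it is the t-statistic for $H_0: \rho = 1$ that does not follow the usual distribution, hence the special Dickey-Fuller critical values.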
What is the difference between serial correlation and having a unit root?
If you have, say, an autoregressive process and you look at what is called the characteristic polynomial, that polynomial has complex roots (some or all may be real). If all the roots lie inside the unit circle, the process is stationary; otherwise it is non-stationary. (Equivalently, in the convention that writes the lag polynomial $1 - \phi_1 z - \dots - \phi_p z^p$, all of its roots must lie outside the unit circle.) A test for unit roots looks to see whether the specific process is stationary based on the observed data (parameters unknown). A test for serial correlation is entirely different: it looks at the autocorrelation function, testing whether or not all correlations are zero (sometimes referred to as a test for white noise). The answer to the second question is that different problems require different tests. I don't understand what your book is describing; I see these tests as tests on individual time series, and I don't see where independent and dependent variables enter into it.
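For a concrete check, base R's polyroot can compute the roots directly. Here I use the lag-polynomial convention, under which an AR($p$) is stationary when all roots of $1 - \phi_1 z - \dots - \phi_p z^p$ lie outside the unit circle (the mirror image of the statement for the reversed polynomial). The AR(2) coefficients are made up for illustration.

```r
# Stationarity check for an assumed AR(2): y_t = 1.2 y_{t-1} - 0.5 y_{t-2} + e_t
phi <- c(1.2, -0.5)
z   <- polyroot(c(1, -phi))   # roots of 1 - 1.2 z + 0.5 z^2
Mod(z)                        # both moduli equal sqrt(2) > 1, so stationary
```

If either modulus were exactly 1, the process would have a unit root; moduli below 1 (in this convention) correspond to an explosive process.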