If you can't do it orthogonally, do it raw (polynomial regression)

To give a naive assessment of the situation:
Generally: suppose you have two different systems of basis functions $\{p_n\}_{n=1}^\infty$ and $\{\tilde{p}_n\}_{n=1}^\infty$ for some function (Hilbert) space, usually $L_2([a,b])$, i.e. the space of all square-integrable functions on $[a,b]$.
This means that each of the two bases can be used to explain each element of $L_2([a,b])$, i.e. for $y \in L_2([a,b])$ you have for some coefficients $\theta_n$ and $\tilde{\theta}_n \in \mathbb{R}$, $n=1,2,\dots$ (in the $L_2$-sense):
$$ \sum_{n=1}^\infty \tilde{\theta}_n \tilde{p}_n = y= \sum_{n=1}^\infty \theta_n p_n.$$
However, on the other hand, if you truncate both sets of basis functions at some number $k<\infty$, i.e. you take
$$\{p_n\}_{n=1}^k$$ as well as $$\{\tilde{p}_n\}_{n=1}^k,$$ these truncated sets of basis functions are very likely to describe "different parts" of $L_2([a,b])$.
However, here in the special case where one basis, $\{\tilde{p}_n\}_{n=1}^\infty$, is just an orthogonalization of the other basis, $\{p_n\}_{n=1}^\infty$, the overall prediction of $y$ will be the same for each truncated model ($\{p_n\}_{n=1}^k$ and their orthogonalized counterparts describe the same $k$-dimensional subspace of $L_2([a,b])$).
But each individual basis function from the two "different" bases will yield a different contribution to this prediction (obviously, as the functions/predictors are different!), resulting in different $p$-values and coefficients.
Hence, in terms of prediction there is (in this case) no difference.
From a computational point of view, a model matrix consisting of orthogonal basis functions has nice numerical properties for the least-squares estimator.
At the same time, from the statistical point of view, the orthogonalization results in uncorrelated estimates, since $\operatorname{var}(\hat{\tilde{\theta}}) = \sigma^2 I$ under the standard assumptions.
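As a quick numerical illustration of the claims above (a Python sketch with made-up data; any polynomial degree would do), the raw and QR-orthogonalized bases yield different coefficients but identical fitted values, because they span the same subspace:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 50)
y = np.sin(2.0 * x) + rng.normal(0.0, 0.1, 50)

# Raw polynomial basis 1, x, x^2, x^3 (columns of a Vandermonde matrix)
X = np.vander(x, 4, increasing=True)
# An orthonormal basis spanning the same column space (QR orthogonalization)
Q, _ = np.linalg.qr(X)

beta_raw, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_orth, *_ = np.linalg.lstsq(Q, y, rcond=None)

# The coefficient vectors differ, but the fitted values coincide.
fitted_raw = X @ beta_raw
fitted_orth = Q @ beta_orth
```

The same holds for any invertible change of basis, not just orthogonalization; orthogonality only adds the numerical and inferential conveniences mentioned above.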
The natural question arises whether there is a best truncated basis system. However, the answer is neither simple nor unique and depends, for example, on the definition of the word "best", i.e. what you are trying to achieve.
Regression models for log transformed data without multiplicative error

I illustrate five options to fit a model here. The assumption for all of them is that the relationship is actually $y = a \cdot x^b$ and we only need to decide on the appropriate error structure.
1.) First the OLS model $\ln{y} = a + b\cdot\ln{x}+\varepsilon$, i.e., a multiplicative error after back-transformation.
fit1 <- lm(log(y) ~ log(x), data = DF)
I would argue that this is actually an appropriate error model as you clearly have increasing scatter with increasing values.
2.) A non-linear model $y = a\cdot x^b+\varepsilon$, i.e., an additive error.
fit2 <- nls(y ~ a * x^b, data = DF, start = list(a = exp(coef(fit1)[1]), b = coef(fit1)[2]))
3.) A Generalized Linear Model with Gaussian distribution and a log link function. We will see that this is actually the same model as 2 when we plot the result.
fit3 <- glm(y ~ log(x), data = DF, family = gaussian(link = "log"))
4.) A non-linear model as in 2, but with a variance function $s^2(y) = \exp(2\cdot t \cdot y)$, which adds an additional parameter.
library(nlme)
fit4 <- gnls(y ~ a * x^b, params = list(a ~ 1, b ~ 1),
data = DF, start = list(a = exp(coef(fit1)[1]), b = coef(fit1)[2]),
weights = varExp(form = ~ y))
5.) A GLM with a gamma distribution and a log link.
fit5 <- glm(y ~ log(x), data = DF, family = Gamma(link = "log"))
Now let's plot them:
plot(y ~ x, data = DF)
curve(exp(predict(fit1, newdata = data.frame(x = x))), col = "green", add = TRUE)
curve(predict(fit2, newdata = data.frame(x = x)), col = "black", add = TRUE)
curve(predict(fit3, newdata = data.frame(x = x), type = "response"), col = "red", add = TRUE, lty = 2)
curve(predict(fit4, newdata = data.frame(x = x)), col = "brown", add = TRUE)
curve(predict(fit5, newdata = data.frame(x = x), type = "response"), col = "cyan", add = TRUE)
legend("topleft", legend = c("OLS", "nls", "Gauss GLM", "weighted nls", "Gamma GLM"),
col = c("green", "black", "red", "brown", "cyan"),
lty = c(1, 1, 2, 1, 1))
I hope these fits persuade you that you actually should use a model that allows larger variance for larger values. Even the model where I fit a variance function agrees on that. If you use the non-linear model or the Gaussian GLM you place undue weight on the larger values.
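A minimal sketch of why option 1 behaves well under a multiplicative error (Python, with invented values for $a$, $b$ and the noise level): simulating $y = a\,x^b\,e^\varepsilon$ with log-normal error and fitting OLS on the log scale recovers both parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true = 2.0, 0.75          # made-up true parameters
x = rng.uniform(1.0, 100.0, 500)

# Multiplicative log-normal error: scatter grows with the mean of y
y = a_true * x**b_true * np.exp(rng.normal(0.0, 0.3, x.size))

# OLS on the log scale: log y = log a + b * log x + eps  (analogue of fit1)
X = np.column_stack([np.ones_like(x), np.log(x)])
coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
a_hat, b_hat = np.exp(coef[0]), coef[1]
```

Note that $\exp$ of the intercept estimates $a$ here because the error is centred on the log scale.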
Finally, you should consider carefully whether the assumed relationship is the correct one. Is it supported by scientific theory?
Prove that $E(X^n)^{1/n}$ is non-decreasing for non-negative random variables

Write $p$ in place of $n$ to emphasize it can be any positive real number, rather than just an integer as suggested by "$n$".
Let's go through some standard preliminary transformations to simplify subsequent calculations. It makes no difference to the result to rescale $X$. The result is trivial if $X$ is almost everywhere zero, so assume $\mathbb{E}(X)$ is nonzero, whence $\mathbb{E}(X^p)$ also is nonzero for all $p$. Now fix $p$ and divide $X$ by $\mathbb{E}(X^p)^{1/p}$ so that $$\mathbb{E}(X^p) = 1\tag{1},$$ with no loss of generality.
Here's how the reasoning might proceed when you're trying to figure it out the first time and you're trying not to work too hard. I will leave detailed justifications of each step to you.
The expression $\mathbb{E}(X^p)^{1/p}$ is nondecreasing if and only if its logarithm is nondecreasing. That log is differentiable and therefore is nondecreasing if and only if its derivative is non-negative. Exploiting $(1)$ we may compute (by differentiating within the expectation) this derivative as
$$\frac{d}{dp}\log\left( \mathbb{E}(X^p)^{1/p} \right) = -\frac{1}{p^2}\log\mathbb{E}(X^p) + \frac{\mathbb{E}(X^p \log X)}{p\,\mathbb{E}(X^p)} = \frac{1}{p^2}\mathbb{E}(X^p \log(X^p)).$$
Writing $Y=X^p$, the right hand side is non-negative if and only if $$\mathbb{E}(Y\log(Y)) \ge 0.$$ But this is an immediate consequence of Jensen's Inequality applied to the function $f(y) = y\log(y)$ (continuous on the nonnegative reals and differentiable on the positive reals), because differentiating twice shows $$f^{\prime\prime}(y) = \frac{1}{y}\gt 0$$ for $y\gt 0$, whence $f$ is a convex function on the non-negative reals, yielding
$$\mathbb{E}(Y \log Y) = \mathbb{E}(f(Y)) \ge f\left(\mathbb{E}(Y)\right) = f(1) = 0,$$
QED.
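For intuition, here is a quick empirical check (Python, with an arbitrarily chosen non-negative distribution). Since the empirical distribution of a finite sample is itself a probability distribution, the inequality holds exactly for the sample power means, with no Monte Carlo tolerance needed:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.gamma(2.0, 1.0, 10_000)  # any non-negative sample will do

# E(X^p)^(1/p) under the empirical distribution, for increasing p
ps = [0.5, 1.0, 2.0, 3.0, 5.0]
norms = [np.mean(x**p) ** (1.0 / p) for p in ps]

# The sequence of norms is non-decreasing in p.
```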
Edit
Edward Nelson provides a wonderfully succinct demonstration. As a matter of (standard) notation, define $||x||_p = \mathbb{E}(|x|^p)^{1/p}$ for $1 \lt p \lt \infty$ (and $||x||_\infty = \sup |x|$). Upon observing that the function $f(x) = |x|^p$ is convex, he applies Jensen's Inequality to conclude
$$|\mathbb{E}(x)|^p \le \mathbb{E}(|x|^p).$$
Here is the rest of the demonstration in his own words:
Applied to $|x|$ this gives $$||x||_1 \le ||x||_p,$$ and applied to $|x|^r$, where $1 \le r \lt \infty$, this gives $$||x||_r \le ||x||_{rp},$$ so that $||x||_p$ is an increasing function of $p$ for $1 \le p \le \infty$.
Reference
Edward Nelson, Radically Elementary Probability Theory. Princeton University Press (1987): p. 5.
Preprocess categorical variables with many values [duplicate]

There are multiple questions here, and some of them were asked & answered earlier. First, the question about computation taking a long time: there are multiple methods to deal with that, see https://stackoverflow.com/questions/3169371/large-scale-regression-in-r-with-a-sparse-feature-matrix and the paper by Maechler and Bates.
But it might well be that the problem is with the modeling: I am not so sure that the usual methods of treating categorical predictor variables really give sufficient guidance when you have categorical variables with very many levels, see this site for the tag [many-categories]. There are certainly many things one could try; one could be (if this is a good idea for your example I cannot know, you didn't tell us your specific application) a kind of hierarchical categorical variable(s), inspired by the system used in biological classification, see https://en.wikipedia.org/wiki/Taxonomy_(biology). There an individual (plant or animal) is classified first to Domain, then Kingdom, Phylum, Class, Order, Family, Genus and finally Species. So for each level in the classification you could create a factor variable. If your levels are, say, products sold in a supermarket, you could create a hierarchical classification starting with [foodstuff, kitchenware, other], then foodstuff could be classified as [meat, fish, vegetables, cereals, ...] and so on. Just a possibility.
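A tiny sketch of the idea (Python; the products and hierarchy labels are invented for illustration): each observation gets one factor column per depth of the hierarchy, instead of a single factor with very many levels:

```python
# Hypothetical mapping from a many-level factor (product) to coarser
# hierarchy levels; the names here are made up for illustration.
hierarchy = {
    "salmon":  ("foodstuff", "fish"),
    "beef":    ("foodstuff", "meat"),
    "oats":    ("foodstuff", "cereals"),
    "spatula": ("kitchenware", "utensils"),
}

products = ["salmon", "spatula", "beef", "oats"]

# One factor column per hierarchy depth: few levels each, so the
# design matrix stays manageable.
level1 = [hierarchy[p][0] for p in products]
level2 = [hierarchy[p][1] for p in products]
```

The coarser columns can then be used alone, or alongside the fine-grained factor, depending on how much pooling across levels you want.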
Orthogonal to the last idea, you could try the fused lasso, see Principled way of collapsing categorical variables with many categories, which could be seen as a way of collapsing the levels into larger groups entirely based on the data, not on a prior organization of the levels as implied by my proposal of a hierarchical organization of the levels.
Preprocess categorical variables with many values [duplicate]

Think about the following problem. You have a huge matrix (with, let's say, 1000 rows and 1000 columns). In each cell of this matrix you have one or no value. You need to create a predictive model that predicts the value in a cell given by the row ID and column ID.
The described problem has the same structure as yours: as input you have only categorical variables (row ID and column ID are categorical), and each categorical variable has many possible values (the number of rows and the number of columns).
How is this problem solved? One standard way to solve it is matrix factorization. You basically assign a numerical vector to each row and each column, and then you calculate the value in a cell by applying a function to the vectors corresponding to the selected row and column. For example, in the case of Non-Negative Matrix Factorization this function is just the scalar product of the row-vector and the column-vector.
So, if you want to apply the same approach to your problem, you need to map each value of each categorical variable into a numerical vector. And then you use these vectors as inputs to your model-function and as output you get your predictions.
The exact mapping from categorical variables to vectors and / or shape of the function are decided by the model-training.
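A minimal sketch of such a model (Python; the matrix, learning rate and vector length are made up): each row ID and column ID is mapped to a short numerical vector, the cell value is predicted by their scalar product, and the vectors are learned by plain SGD on the observed cells:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy matrix; 0 marks a missing cell whose value we want to predict.
M = np.array([[5.0, 3.0, 0.0],
              [4.0, 0.0, 1.0],
              [1.0, 1.0, 5.0]])
obs = M > 0

k = 2                                   # length of the numerical vectors
U = 0.1 * rng.standard_normal((3, k))   # one vector per row ID
V = 0.1 * rng.standard_normal((3, k))   # one vector per column ID

# Plain SGD on squared error over the observed cells; the prediction for
# cell (i, j) is the scalar product U[i] . V[j].
lr = 0.05
for _ in range(2000):
    for i, j in zip(*np.nonzero(obs)):
        err = M[i, j] - U[i] @ V[j]
        gu, gv = lr * err * V[j], lr * err * U[i]
        U[i] += gu
        V[j] += gv

pred = U @ V.T  # predictions for every cell, including the missing ones
```

The same learned-vector idea generalizes beyond two categorical inputs: each level of each categorical variable gets its own vector, and the model-function combines them.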
Another way to approach your problem is inspired by collaborative filtering. To predict a value for a given row and column you need to find similar rows and columns and get values from them. Basically, in your case this translates to a sort of k-NN (nearest neighbor) approach. Use the values of the categorical variables to find rows with similar values of the categorical variables. Then take the values of the targets from the "neighbors" and combine them (for example by averaging them, maybe with weights proportional to the similarity measure).
How to sample a truncated multinomial distribution?

If I understand you correctly, you want to sample $x_1,\dots,x_k$ values from a multinomial distribution with probabilities $p_1,\dots,p_k$ such that $\sum_i x_i = n$; however, you want the distribution to be truncated so that $a_i \le x_i \le b_i$ for all $x_i$.
I see three solutions (none as elegant as in the non-truncated case):
Accept-reject. Sample from the non-truncated multinomial; accept the sample if it fits the truncation boundaries, otherwise reject and repeat the process. It is fast, but can be very inefficient.
rtrmnomReject <- function(R, n, p, a, b) {
x <- t(rmultinom(R, n, p))
x[apply(a <= x & x <= b, 1, all) & rowSums(x) == n, ]
}
Direct simulation. Sample in a fashion that resembles the data-generating process, i.e. sample a single marble from a random urn and repeat this process until you have sampled $n$ marbles in total; but once you deplete the marbles in a given urn ($x_i$ is already equal to $b_i$), stop drawing from that urn. I implemented this in the script below.
# single draw from truncated multinomial with a,b truncation points
rtrmnomDirect <- function(n, p, a, b) {
k <- length(p)
repeat {
pp <- p # reset pp
x <- numeric(k) # reset x
repeat {
if (sum(x<b) == 1) { # if only a single category is left
x[x<b] <- x[x<b] + n-sum(x) # fill this category with remainder
break
}
i <- sample.int(k, 1, prob = pp) # sample x[i]
x[i] <- x[i] + 1
if (x[i] == b[i]) pp[i] <- 0 # if x[i] is filled do
# not sample from it
if (sum(x) == n) break # if we picked n, stop
}
if (all(x >= a)) break # if all x>=a sample is valid
# otherwise reject
}
return(x)
}
Metropolis algorithm. Finally, the third and most efficient approach is to use the Metropolis algorithm. The algorithm is initialized by using direct simulation (but can be initialized differently) to draw the first sample $X_1$. In the following steps, iteratively, a proposal value $y = q(X_{i-1})$ is accepted as $X_i$ with probability $f(y)/f(X_{i-1})$, otherwise the value $X_{i-1}$ is taken in its place, where $f(x) \propto \prod_i p_i^{x_i}/x_i!$. As a proposal I used a function $q$ that takes $X_{i-1}$ and randomly moves between 0 and step cases from one category to another.
# draw R values
# 'step' parameter defines magnitude of jumps
# for Meteropolis algorithm
# 'init' is a vector of values to start with
rtrmnomMetrop <- function(R, n, p, a, b,
step = 1,
init = rtrmnomDirect(n, p, a, b)) {
k <- length(p)
if (length(a)==1) a <- rep(a, k)
if (length(b)==1) b <- rep(b, k)
# approximate target log-density
lp <- log(p)
lf <- function(x) {
if(any(x < a) || any(x > b) || sum(x) != n)
return(-Inf)
sum(lp*x - lfactorial(x))
}
step <- max(2, step+1)
# proposal function
q <- function(x) {
idx <- sample.int(k, 2)
u <- sample.int(step, 1)-1
x[idx] <- x[idx] + c(-u, u)
x
}
tmp <- init
x <- matrix(nrow = R, ncol = k)
ar <- 0
for (i in 1:R) {
proposal <- q(tmp)
prob <- exp(lf(proposal) - lf(tmp))
if (runif(1) < prob) {
tmp <- proposal
ar <- ar + 1
}
x[i,] <- tmp
}
structure(x, acceptance.rate = ar/R, step = step-1)
}
The algorithm starts at $X_1$ and then wanders around the different regions of the distribution. It is obviously faster than the previous ones, but you need to remember that if you use it to sample a small number of cases, then you could end up with draws that are close to each other. Another problem is that you need to decide on the step size, i.e. how big the jumps made by the algorithm should be -- too small may lead to moving slowly, too big may lead to making too many invalid proposals and rejecting them. You can see an example of its usage below. On the plots you can see: marginal densities in the first row, traceplots in the second row, and plots showing subsequent jumps for pairs of variables.
n <- 500
a <- 50
b <- 125
p <- c(1,5,2,4,3)/15
k <- length(p)
x <- rtrmnomMetrop(1e4, n, p, a, b, step = 15)
cmb <- combn(1:k, 2)
par.def <- par(mfrow=c(4,5), mar = c(2,2,2,2))
for (i in 1:k)
hist(x[,i], main = paste0("X",i))
for (i in 1:k)
plot(x[,i], main = paste0("X",i), type = "l", col = "lightblue")
for (i in 1:ncol(cmb))
plot(jitter(x[,cmb[1,i]]), jitter(x[,cmb[2,i]]),
type = "l", main = paste(paste0("X", cmb[,i]), collapse = ":"),
col = "gray")
par(par.def)
The problem with sampling from this distribution is that it describes a very inefficient sampling strategy in general. Imagine that $p_1 \ne \dots \ne p_k$ while $a_1 = \dots = a_k$ and $b_1 = \dots = b_k$, with the $a_i$'s close to the $b_i$'s: in such a case you sample the categories with different probabilities, but expect similar frequencies in the end. In the extreme case, imagine a two-categorical distribution where $p_1 \gg p_2$ and $a_1 \ll a_2$, $b_1 \ll b_2$; here you are waiting for a very rare event to happen (a real-life example of such a distribution would be a researcher who repeats sampling until he finds a sample consistent with his hypothesis, so it has more to do with cheating than with random sampling).
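A small numerical illustration of this inefficiency (a Python sketch with made-up numbers): with two categories, very unequal probabilities, and truncation bounds forcing near-equal counts, plain accept-reject wastes almost all of its draws:

```python
import numpy as np

rng = np.random.default_rng(4)

# Made-up extreme case: p strongly favors category 1, but the truncation
# bounds force both counts to lie in [8, 12] out of n = 20 draws.
n, p = 20, [0.9, 0.1]
a, b = np.array([8, 8]), np.array([12, 12])

draws = rng.multinomial(n, p, size=100_000)
accepted = ((draws >= a) & (draws <= b)).all(axis=1)
rate = accepted.mean()  # fraction of usable samples is tiny
```

The exact acceptance probability here is $P(8 \le X_2 \le 12)$ for $X_2 \sim \mathrm{Bin}(20, 0.1)$, which is on the order of $10^{-4}$, so almost every draw is rejected.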
The distribution is much less problematic if you define it as in Rukhin (2007, 2008), where you sample $np_i$ cases for each category, i.e. sample proportionally to the $p_i$'s.
Rukhin, A. L. (2007). Normal order statistics and sums of geometric random variables in treatment allocation problems. Statistics & Probability Letters, 77(12), 1312-1321.
Rukhin, A. L. (2008). Stopping Rules in Balanced Allocation Problems: Exact and Asymptotic Distributions. Sequential Analysis, 27(3), 277-292.
If I understand you correctly, you want to sample $x_1,\dots,x_k$ values from multinomial distribution with probabilities $p_1,\dots,p_k$ such that $\sum_i x_i = n$, however you want the distribution to be truncated so $a_i \le x_i \le b_i$ for all $x_i$.
I see three solutions (neither as elegant as in non-truncated case):
Accept-reject. Sample from non-truncated multinomial, accept the sample if it fits
the truncation boundaries, otherwise reject and repeat the
process. It is fast, but can be very inefficient.
rtrmnomReject <- function(R, n, p, a, b) {
x <- t(rmultinom(R, n, p))
x[apply(a <= x & x <= b, 1, all) & rowSums(x) == n, ]
}
Direct simulation. Sample in fashion that resembles
the data-generating process, i.e.
sample single marble from a random urn and repeat this process until
you sampled $n$ marbles in total, but as you deploy the total number
of marbles from given urn ($x_i$ is already equal to $b_i$) then stop
drawing from such urn. I implemented this in a script below.
# single draw from truncated multinomial with a,b truncation points
rtrmnomDirect <- function(n, p, a, b) {
  k <- length(p)
  repeat {
    pp <- p           # reset pp
    x <- numeric(k)   # reset x
    repeat {
      if (sum(x < b) == 1) {                 # if only a single category is left
        x[x < b] <- x[x < b] + n - sum(x)    # fill this category with the remainder
        break
      }
      i <- sample.int(k, 1, prob = pp)       # sample x[i]
      x[i] <- x[i] + 1
      if (x[i] == b[i]) pp[i] <- 0           # if x[i] is filled, do not sample from it
      if (sum(x) == n) break                 # if we picked n, stop
    }
    if (all(x >= a)) break                   # if all x >= a the sample is valid
    # otherwise reject
  }
  return(x)
}
Metropolis algorithm. Finally, the third and most
efficient approach would be to use the Metropolis algorithm.
The algorithm is initialized by using direct simulation
(but can be initialized differently) to draw the first sample $X_1$.
In the following steps, iteratively: a proposal value $y = q(X_{i-1})$
is accepted as $X_i$ with probability $f(y)/f(X_{i-1})$,
otherwise the value $X_{i-1}$ is taken in its place,
where $f(x) \propto \prod_i p_i^{x_i}/x_i!$. As a proposal I used a
function $q$ that takes the value $X_{i-1}$, randomly picks between 0
and step cases, and moves them from one category to another.
# draw R values
# 'step' parameter defines the magnitude of jumps for the Metropolis algorithm
# 'init' is a vector of values to start with
rtrmnomMetrop <- function(R, n, p, a, b,
                          step = 1,
                          init = rtrmnomDirect(n, p, a, b)) {
  k <- length(p)
  if (length(a) == 1) a <- rep(a, k)
  if (length(b) == 1) b <- rep(b, k)
  # approximate target log-density
  lp <- log(p)
  lf <- function(x) {
    if (any(x < a) || any(x > b) || sum(x) != n)
      return(-Inf)
    sum(lp*x - lfactorial(x))
  }
  step <- max(2, step+1)
  # proposal function
  q <- function(x) {
    idx <- sample.int(k, 2)
    u <- sample.int(step, 1) - 1
    x[idx] <- x[idx] + c(-u, u)
    x
  }
  tmp <- init
  x <- matrix(nrow = R, ncol = k)
  ar <- 0
  for (i in 1:R) {
    proposal <- q(tmp)
    prob <- exp(lf(proposal) - lf(tmp))
    if (runif(1) < prob) {
      tmp <- proposal
      ar <- ar + 1
    }
    x[i,] <- tmp
  }
  structure(x, acceptance.rate = ar/R, step = step-1)
}
The algorithm starts at $X_1$ and then wanders around the different regions of the distribution. It is obviously faster than the previous ones, but you need to remember that if you use it to sample a small number of cases, then you could end up with draws that are close to each other. Another problem is that you need to decide on the step size, i.e. how big the jumps the algorithm makes should be: too small a step may lead to moving slowly, too big a step to making too many invalid proposals and rejecting them. You can see an example of its usage below. On the plots you can see: marginal densities in the first row, traceplots in the second row, and plots showing subsequent jumps for pairs of variables.
n <- 500
a <- 50
b <- 125
p <- c(1,5,2,4,3)/15
k <- length(p)

x <- rtrmnomMetrop(1e4, n, p, a, b, step = 15)

cmb <- combn(1:k, 2)
par.def <- par(mfrow = c(4,5), mar = c(2,2,2,2))
for (i in 1:k)
  hist(x[,i], main = paste0("X", i))
for (i in 1:k)
  plot(x[,i], main = paste0("X", i), type = "l", col = "lightblue")
for (i in 1:ncol(cmb))
  plot(jitter(x[,cmb[1,i]]), jitter(x[,cmb[2,i]]),
       type = "l", main = paste(paste0("X", cmb[,i]), collapse = ":"),
       col = "gray")
par(par.def)
The problem with sampling from this distribution is that it describes a very inefficient sampling strategy in general. Imagine that $p_1 \ne \dots \ne p_k$ and $a_1 = \dots = a_k$, $b_1 = \dots = b_k$, and the $a_i$'s are close to the $b_i$'s; in such a case you want to sample the categories with different probabilities, but expect similar frequencies in the end. In the extreme case, imagine a two-category distribution where $p_1 \gg p_2$ and $a_1 \ll a_2$, $b_1 \ll b_2$; in such a case you expect something very rare to happen (a real-life example of such a distribution would be a researcher who repeats sampling until he finds a sample that is consistent with his hypothesis, so it has more to do with cheating than with random sampling).
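To see just how inefficient plain rejection can get in the extreme two-category case: with two categories, $x_2 = n - x_1$, so accept-reject keeps a draw exactly when $x_2$ lands in its truncation interval, and the acceptance probability can be computed exactly from the binomial pmf. The sketch below (my own code and illustrative numbers, not from the discussion above) does that with the Python standard library:

```python
from math import comb

def binom_pmf(k, n, p):
    # exact binomial coefficient times floating-point probability powers
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Force a rare event: category 2 has p2 = 0.05, yet the truncation
# requires x2 >= 45 out of n = 100 draws.
n, p2 = 100, 0.05
accept_prob = sum(binom_pmf(k, n, p2) for k in range(45, n + 1))
print(accept_prob)  # vanishingly small; ~1/accept_prob rejected draws per accepted sample
```

With numbers like these the accept-reject loop effectively never terminates, which is exactly the inefficiency described above.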
The distribution is much less problematic if you define it as in Rukhin (2007, 2008), where you sample $np_i$ cases for each category, i.e. sample proportionally to the $p_i$'s.
Rukhin, A. L. (2007). Normal order statistics and sums of geometric random variables in treatment allocation problems. Statistics & Probability Letters, 77(12), 1312-1321.
Rukhin, A. L. (2008). Stopping Rules in Balanced Allocation Problems: Exact and Asymptotic Distributions. Sequential Analysis, 27(3), 277-292.
How to sample a truncated multinomial distribution?
Here is my effort in trying to translate Tim's R code to Python. Since I spent some time understanding this problem and coded the algorithms in Python, I thought I'd share them here in case people are interested.
Accept-Reject algorithm:
import copy
import math
import numpy as np  # imports used by all of the samplers below

def sample_truncated_multinomial_accept_reject(k, pVec, a, b):
    x = list(np.random.multinomial(k, pVec, size=1)[0])
    h = [x[i] >= a[i] and x[i] <= b[i] for i in range(len(x))]
    while sum(h) < len(h):
        x = list(np.random.multinomial(k, pVec, size=1)[0])
        h = [x[i] >= a[i] and x[i] <= b[i] for i in range(len(x))]
    return x
Direct simulation
def truncated_multinomial_direct_sampling_from_urn(k, pVec, a, b):
    n = len(pVec)
    while True:
        pp = pVec
        x = [0 for _ in range(n)]
        while True:
            if sum([x[h] < b[h] for h in range(n)]) == 1:
                indx = [h for h in range(n) if x[h] < b[h]][0]
                x[indx] = k - sum(x)
                break
            i = np.random.choice(n, 1, p=pp)[0]
            x[i] += 1
            if x[i] == b[i]:
                pp = [pp[j]/(1 - pp[i]) for j in range(n)]
                pp[i] = 0
            if sum(x) == k:
                break
        if sum([x[h] < a[h] for h in range(n)]) == 0:
            break
    return x
Metropolis algorithm
def compute_log_function(x, k, pVec, a, b):
    # unnormalized log-density of the truncated multinomial;
    # k is the fixed total number of draws
    below_a = any(x[i] < a[i] for i in range(len(pVec)))
    above_b = any(x[i] > b[i] for i in range(len(pVec)))
    if below_a or above_b or sum(x) != k:
        return float("-inf")
    return np.sum(np.log(pVec)*x - np.array([math.lgamma(h + 1) for h in x]))

def sampling_distribution(original, k, pVec, a, b, step):
    # propose: move between 0 and step-1 cases from one category to another,
    # redrawing until the proposal satisfies the constraints
    while True:
        x = copy.deepcopy(original)
        idx = np.random.choice(len(x), 2, replace=False)
        u = np.random.choice(step, 1)[0]
        x[idx[0]] -= u
        x[idx[1]] += u
        below_a = any(x[i] < a[i] for i in range(len(pVec)))
        above_b = any(x[i] > b[i] for i in range(len(pVec)))
        if not (below_a or above_b or sum(x) != k):
            return x

def sample_truncated_multinomial_metropolis_hasting(k, pVec, a, b, iters, step=1):
    # start the chain from a full valid count vector
    tmp = sample_truncated_multinomial_accept_reject(k, pVec, a, b)
    step = max(2, step)
    for i in range(iters):
        proposal = sampling_distribution(tmp, k, pVec, a, b, step)
        log_prop = compute_log_function(proposal, k, pVec, a, b)
        if log_prop == float("-inf"):
            continue
        prob = np.exp(log_prop - compute_log_function(tmp, k, pVec, a, b))
        if np.random.uniform() < prob:
            tmp = proposal
    return tmp
For a complete implementation of this code please see my GitHub repository at
https://github.com/mohsenkarimzadeh/sampling
How to sample a truncated multinomial distribution?
I have been working on a related problem (particularly, where your lower bounds are 1 and the upper bound is eliminated, and I want to get a pmf rather than a random variate), but it got me thinking about this old question that I'd seen many times in my research.
You can do this by sampling from a series of $K-1$ truncated binomial distributions. The key is that given the count in the first component, say $x_1$ with $n$ draws in total, the remaining components are multinomially distributed with the same probabilities, just with $p_1$ excluded and renormalized, and $n - x_1$ draws. If you write down the pmf for e.g. $X_1 \sim \mathrm{Bin}(n, p_1)$ times the pmf for $X_2 \mid X_1 \sim \mathrm{Bin}(n-x_1, \frac{p_2}{1-p_1})$, you'll find that it reduces to the expected multinomial when expanding the binomial coefficients.
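This factorization is easy to check numerically. The sketch below (my own helper names, standard library only) multiplies the sequential binomial pmfs with renormalized probabilities and compares the result against the multinomial pmf:

```python
from math import comb, factorial, prod

def multinomial_pmf(x, p):
    n = sum(x)
    coef = factorial(n)
    for xi in x:
        coef //= factorial(xi)   # multinomial coefficient via exact integers
    return coef * prod(pi**xi for pi, xi in zip(p, x))

def sequential_binomial_pmf(x, p):
    # P(X1 = x1) * P(X2 = x2 | X1 = x1) * ... with renormalized probabilities
    out, remaining_n, remaining_p = 1.0, sum(x), 1.0
    for xi, pi in zip(x[:-1], p[:-1]):
        q = pi / remaining_p
        out *= comb(remaining_n, xi) * q**xi * (1 - q)**(remaining_n - xi)
        remaining_n -= xi
        remaining_p -= pi
    return out   # the last component is determined, so it contributes a factor of 1

x, p = (3, 5, 2), (0.2, 0.5, 0.3)
print(multinomial_pmf(x, p), sequential_binomial_pmf(x, p))  # identical up to rounding
```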
The key is that when sampling, you need to ensure that the constraints on all components remain satisfiable. When sampling the first component, $x_1 \in [a_1, b_1]$ must of course be satisfied, but so must $n - x_1 \in [\sum_{i=2}^{K} a_i, \sum_{i=2}^{K} b_i]$. This doesn't guarantee that the individual component constraints are satisfied, just that we don't sample too many or too few from the first component to satisfy the remaining ones. Then for the second component it's the same process, but now $n - x_1 - X_2$ must be in the range of the sum from 3 to $K$ of the constraints above. I use lowercase to denote a previously sampled value, upper case for the random variable. So with constraints on both $n - X$ and $X$ in some form at each step, we can combine them into a single constraint on $X$ for the binomial and then sample from the truncated distribution. The question of how to do that is all that's left.
Fortunately, there are two very simple ways to handle sampling from the truncated binomial. Rejection sampling is obviously the easiest: sample from a binomial distribution and reject draws that don't satisfy the constraints. This could be extremely impractical for the truncated multinomial directly, since the rejection rate can become very high depending on how much of the mass is truncated. Doing rejection sampling on the sequence of truncated binomial distributions, however, can avoid much of the difficulty, especially if you sample from the components with the "tightest" constraints first. A significant improvement is that the problem is turned into a univariate discrete sampling problem, so you can simply draw from a uniform distribution and compare to the CDF. To do this exactly, you need to find the appropriate constraints on whichever $X$ you're sampling so that its own constraints are satisfied and the constraints on all following components remain satisfiable; call the lower and upper bounds $c$ and $d$. If $f(k; n, p) = P(X = k)$ is the pmf for a binomial without truncation, you can find
$$ P(c \leq X \leq d) = \sum_{k = c}^{d} f(k; n, p) $$
Or, of course, 1 minus the probability of the constraint not being satisfied. Then the truncated pmf is given by $g(k; n, p) = \frac{f(k; n, p)}{P(c \leq X \leq d)}$ and you can sample directly from that.
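Putting the pieces together, here is a sketch of the whole sequential scheme as described above: at each step the bounds $c, d$ combine $[a_i, b_i]$ with feasibility of the remaining components, the truncated binomial pmf is normalized, and a value is drawn by CDF inversion. The function names and the choice of computing the pmf exactly via math.comb are mine, not from the answer:

```python
import random
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def sample_truncated_multinomial_sequential(n, p, a, b, rng=None):
    # assumes sum(a) <= n <= sum(b), i.e. the constraints are feasible
    rng = rng or random.Random()
    K = len(p)
    x = []
    remaining_n, remaining_p = n, 1.0
    for i in range(K - 1):
        # combine [a_i, b_i] with feasibility of the remaining components:
        # whatever is left over must fit inside [sum a_j, sum b_j] for j > i
        lo = max(a[i], remaining_n - sum(b[i + 1:]))
        hi = min(b[i], remaining_n - sum(a[i + 1:]))
        pi = p[i] / remaining_p            # renormalized success probability
        weights = [binom_pmf(j, remaining_n, pi) for j in range(lo, hi + 1)]
        u = rng.random() * sum(weights)    # CDF inversion on the truncated pmf
        for k, w in zip(range(lo, hi + 1), weights):
            u -= w
            if u <= 0:
                break
        x.append(k)
        remaining_n -= k
        remaining_p -= p[i]
    x.append(remaining_n)                  # last component is forced
    return x
```

Every draw satisfies the truncation constraints by construction, so there is no rejection loop at all.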
If you have some components with very restrictive constraints (the probability of them being satisfied in the non truncated distribution is very small) then you will likely find that the exact sampling from $g$ is much faster than rejection sampling. The flip side is that rejection sampling may be much quicker for components with little restrictions, but by breaking the sampling down into one component at a time you can choose which method to use per component.
It's also worth noting that the truncated distributions on each component are still exchangeable and so Gibbs sampling is a potentially appealing option that has the benefit of simplicity. A full round of sampling is the same amount of computation with Gibbs sampling.
A final word of warning
The truncated distribution of a binomial is not going to be the same as drawing $X = a + \mathrm{Bin}(b-a, p)$. This does not count the combinations correctly. The probability of drawing a 1 from a zero-truncated distribution is $\frac{np(1-p)^{n-1}}{1 - (1-p)^{n}}$, whereas drawing a zero in $n-1$ trials and adding 1 has probability $(1-p)^{n-1}$. The difference is in the way of counting the permutations. Subtracting the sums of the $a_i$ from $n$ and sampling will get you different results, because you're missing the number of ways that $a_i$ balls can be put in each urn while the $n - a_1 - a_2 - \cdots$ remaining balls are chosen for the subsequent sampling, in terms of the ball and urn analogy.
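A quick numeric check of this warning, for a zero-truncated Binomial$(n, p)$ with arbitrary illustrative values of $n$ and $p$:

```python
from math import comb

n, p = 10, 0.3
# P(X = 1) under the zero-truncated Binomial(n, p), i.e. conditioning on X >= 1
pmf1_truncated = n * p * (1 - p)**(n - 1) / (1 - (1 - p)**n)
# P(X = 1) under the naive "shift" construction X = Bin(n - 1, p) + 1
pmf1_shifted = (1 - p)**(n - 1)
# sanity check: the truncated pmf still sums to one over k = 1..n
total = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(1, n + 1)) / (1 - (1 - p)**n)
print(pmf1_truncated, pmf1_shifted)  # clearly different numbers
```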
R squared and higher order polynomial regression
Consider a polynomial:
$$ \beta_0 + \beta_1 x + \beta_2 x^2 + \ldots + \beta_k x^k$$
Observe that the polynomial is non-linear in $x$ but that it is linear in $\boldsymbol{\beta}$. If we're trying to estimate $\boldsymbol{\beta}$, this is linear regression!
$$y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \ldots + \beta_k x_i^k + \epsilon_i$$
Linearity in $\boldsymbol{\beta} = (\beta_0, \beta_1, \ldots, \beta_k)$ is what matters. When estimating the above equation by least squares, all of the results of linear regression will hold.
Let $\mathit{SST}$ be the total sum of squares, $\mathit{SSE}$ be the explained sum of squares, and $\mathit{SSR}$ be the residual sum of squares. The coefficient of determination $R^2$ is defined as:
$$ R^2 = 1 - \frac{\mathit{SSR}}{\mathit{SST}}$$
And the result of linear regression that $\mathit{SST} = \mathit{SSE} + \mathit{SSR}$ gives $R^2$ its familiar interpretation as the fraction of variance explained by the model.
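To make this concrete, here is a small sketch (assuming NumPy is available; the data and names are made up for illustration) that fits a quadratic by ordinary least squares on the design matrix $[1, x, x^2]$ and computes $R^2$ from the definition above:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 50)
y = 1.0 + 2.0 * x - 0.5 * x**2 + rng.normal(0.0, 0.3, x.size)

# design matrix [1, x, x^2]: nonlinear in x, but linear in the coefficients
X = np.vander(x, 3, increasing=True)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

ss_res = np.sum((y - y_hat)**2)      # SSR
ss_tot = np.sum((y - y.mean())**2)   # SST
r2 = 1.0 - ss_res / ss_tot
print(beta, r2)  # beta should be close to (1, 2, -0.5)
```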
SST = SSE + SSR: When is it true and when is it not true?
Let $\hat{y}_i$ be the forecast value of $y_i$ and let $e_i = y_i - \hat{y}_i$ be the residual. Furthermore, let's define the demeaned forecast value as $f_i = \hat{y}_i - \bar{y}$.
Let $\langle ., . \rangle$ denote an inner product. Trivially we have:
\begin{align*}
\langle \mathbf{f} + \mathbf{e}, \mathbf{f} + \mathbf{e} \rangle &= \langle \mathbf{f}, \mathbf{f} \rangle + 2\langle \mathbf{f}, \mathbf{e} \rangle + \langle \mathbf{e}, \mathbf{e} \rangle \\
&= \langle \mathbf{f}, \mathbf{f} \rangle + \langle \mathbf{e}, \mathbf{e} \rangle \quad \quad\text{if $\mathbf{f}$ and $\mathbf{e}$ orthogonal, i.e. their inner product is 0}
\end{align*}
Observe that $\langle \mathbf{a}, \mathbf{b} \rangle = \sum_i a_i b_i$ is a valid inner product. Then we have:
$\langle \mathbf{f} + \mathbf{e}, \mathbf{f} + \mathbf{e} \rangle = \sum_i \left(y_i - \bar{y} \right)^2 $ is the total sum of squares (SST).
$\langle \mathbf{f}, \mathbf{f} \rangle = \sum_i \left(\hat{y}_i - \bar{y} \right)^2$ is the explained sum of squares (SSE).
$\langle \mathbf{e}, \mathbf{e} \rangle = \sum_i \left(y_i - \hat{y}_i \right)^2 $ is the residual sum of squares (SSR).
Thus $SST = SSE + SSR$ is true if the demeaned forecast $\mathbf{f}$ is orthogonal to the residual $\mathbf{e}$. This is true in ordinary least squares linear regression whenever a constant is included in the regression. Another interpretation of ordinary least squares is that you're projecting $\mathbf{y}$ onto the linear span of the regressors, hence the residual is orthogonal to that space by construction. Orthogonality of right hand side variables and residuals is not in general true for forecasts $\hat{y}_i$ obtained in other ways.
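The orthogonality argument is easy to verify numerically (again assuming NumPy; the data are simulated for illustration): with a constant column, $\langle \mathbf{f}, \mathbf{e} \rangle \approx 0$ and the decomposition holds to machine precision, while dropping the constant generally breaks it:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 2.0 + 1.5 * x + rng.normal(size=100)

# OLS with a constant column: residuals orthogonal to the span of the regressors
X = np.column_stack([np.ones_like(x), x])
y_hat = X @ np.linalg.lstsq(X, y, rcond=None)[0]
e = y - y_hat             # residual
f = y_hat - y.mean()      # demeaned forecast
sst = np.sum((y - y.mean())**2)
sse = np.sum(f**2)
ssr = np.sum(e**2)
print(f @ e, sst - (sse + ssr))   # both essentially zero

# drop the constant column and the identity generally fails
X0 = x.reshape(-1, 1)
y_hat0 = X0 @ np.linalg.lstsq(X0, y, rcond=None)[0]
e0 = y - y_hat0
f0 = y_hat0 - y.mean()
print(sst - (np.sum(f0**2) + np.sum(e0**2)))  # far from zero
```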
Preventing Pareto smoothed importance sampling (PSIS-LOO) from failing
For the record, I posted a similar question to the Stan users mailing list, which you can find here. I was answered by one of the authors of the original PSIS-LOO paper and by other contributors of Stan. What follows is my personal summary.
The short answer is that there are no known general methods to prevent PSIS-LOO from failing. If PSIS-LOO fails, it is usually because the model has issues, and fixing it is necessarily left to the user.
Specifically, the reason why PSIS-LOO may fail is usually that one or more LOO distributions are shifted and/or broader than the full posterior, likely due to influential observations, so that the importance sampling weights concentrate on one or a few points.
I was thinking that you could try to adopt some form of parallel posterior tempering approach to solve this issue. The idea is not necessarily wrong, but it was pointed out to me that:
textbook posterior tempering would still require a lot of case-by-case fiddling to find the right temperature level(s), as there is no obvious nor known way to do that (incidentally, for this reason Stan does not include parallel tempering);
if you use more than two temperature levels (as it may be required to have a robust approach), the final computational cost approaches that of K-fold cross validation, or of running MCMC on the problematic LOO distributions.
In short, if PSIS-LOO fails, it seems to be hard to find a method that is as robust and general as the other simple patches; that's why Vehtari, Gelman & Gabry suggested those methods as per the quote I posted in my original question.
How to choose between sign test and Wilcoxon signed-rank test?
I am trying to pick one from these two tests to analyze paired data. Does anyone know any rules of thumb about which one to pick in general?
The signed rank test carries an assumption about symmetry of differences under the null that the sign test need not. (That assumption is necessary in order that the permutations of the signs attached to the unsigned ranks of differences be equally likely.)
On the other hand, if there is near-symmetry in the population and the tail is not very heavy, the signed rank should have more power.
[This should not be taken as advice to choose between them on the basis of the sample; in general that leads to test properties different from the nominal (tests may be biased, actual significance levels are no longer what they appear to be, calculated p-values don't represent true p-values and so on). Instead, where possible, characteristics should be evaluated based on knowledge external to the sample the test is applied to -- whether by subject area knowledge, familiarity with other data sets like this one, sample-splitting, ...]
In my case, the rank sum test has the largest p-value, the sign test is in the middle, and the signed-rank test has the smallest. Therefore, it has more power.
That's not how you decide that a test has more power: a lower p-value in respect of one sample may simply be due to the vagaries of that sample, whereas power is about the behavior across all random samples drawn from the same population.
That is, imagine that you're dealing with some specific situation in which the population of pair-differences is centered somewhat away from 0 (i.e. that $H_0$ is false in a specific way). Then under repeated sampling under the same conditions (including sample size), the power will be the rejection rate for that particular population.
In similar fashion we could calculate the rejection rate for a sequence of populations with different location* of pair-differences and obtain an entire power-curve. Then "higher power" would correspond to the entire power curve (or almost all of it, noting that both should be at the same significance level) for one test laying above the other.
* you could take it to be a median for the present discussion -- while the estimator for the signed rank test is the median of pairwise averages of pair-differences, under the symmetry assumption the location estimator should also be a suitable estimate of median pair difference.
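As a sketch of what "power as a rejection rate" means operationally, the simulation below (my own code, standard library only; the shift, sample size, and replication count are arbitrary) estimates the rejection rate of the exact two-sided sign test under the null and under a location shift of the pair-differences. The same template extends to the signed-rank test:

```python
import random
from math import comb

def sign_test_pvalue(diffs):
    # exact two-sided sign test on paired differences (zeros dropped)
    d = [v for v in diffs if v != 0]
    n = len(d)
    s = sum(v > 0 for v in d)
    m = min(s, n - s)
    tail = sum(comb(n, j) for j in range(m + 1)) / 2**n
    return min(1.0, 2.0 * tail)

def rejection_rate(shift, reps=2000, n=30, alpha=0.05, seed=42):
    # power = long-run rejection rate over repeated samples from one population
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        diffs = [rng.gauss(shift, 1.0) for _ in range(n)]
        rejections += sign_test_pvalue(diffs) < alpha
    return rejections / reps

size = rejection_rate(shift=0.0)   # should sit near (at or below) the nominal 0.05
power = rejection_rate(shift=0.5)  # rejection rate when H0 is false
print(size, power)
```

Comparing entire curves of such rejection rates over a range of shifts, at the same significance level, is what a claim of "higher power" refers to.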
Here's a related question How to choose between t-test or non-parametric test e.g. Wilcoxon in small samples. One of the answers includes a (brief) discussion of the present issue.
How to choose between sign test and Wilcoxon signed-rank test?
I am trying to pick one from these two tests to analyze paired data. Does anyone know any rules of thumb about which one to pick in general?
The signed rank test carries an assumption about symmetry of differences under the null that the sign test need not. (That assumption is necessary in order that the permutations of the signs attached to the unsigned ranks of differences be equally likely.)
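The "equally likely sign permutations" point can be made concrete: under the null with symmetric differences, each of the 2^n assignments of signs to the ranked absolute differences has the same probability, so an exact null distribution of W+ is just an enumeration. A small pure-Python sketch (no ties assumed; the data are made up for illustration):

```python
from itertools import product

def signed_rank_null_pvalue(diffs):
    """Exact one-sided p-value for the Wilcoxon signed-rank statistic W+.
    Under H0 with symmetric differences, every assignment of signs to the
    ranked |differences| is equally likely: 2^n equiprobable outcomes."""
    n = len(diffs)
    # rank the absolute differences (distinct values assumed in this sketch)
    ranks = [sorted(abs(d) for d in diffs).index(abs(d)) + 1 for d in diffs]
    w_obs = sum(r for d, r in zip(diffs, ranks) if d > 0)
    # enumerate all 2^n equally likely sign patterns
    null_ws = [sum(r for s, r in zip(signs, ranks) if s)
               for signs in product([0, 1], repeat=n)]
    return sum(w >= w_obs for w in null_ws) / len(null_ws)

p = signed_rank_null_pvalue([1.2, 0.8, 2.1, -0.4, 1.7, 0.9, 2.6, -0.1])
```

If the symmetry assumption fails, those sign patterns are no longer equiprobable under the null, which is exactly why the sign test (which needs no such assumption) can be preferable.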
26,212 | Latin Hypercube Sampling Asymptotics | Short answer: Yes, in a probabilistic way. It is possible to show that, given any distance $\epsilon>0$, any finite subset $\{x_1,…,x_m\}$ of the sample space and any prescribed ‘tolerance’ $\delta>0$, for suitably large sample sizes we can be sure that the probability that there is a sample point within a distance $\epsilon$ of $x_i$ is $>1-\delta$ for all $i=1,…,m$.
Long answer: I am not aware of any directly relevant citation (but see below). Most of the literature on Latin Hypercube Sampling (LHS) relates to its variance reduction properties. The other issue is, what does it mean to say that the sample size tends to $\infty$? For simple IID random sampling, a sample of size $n$ can be obtained from a sample of size $n-1$ by appending a further independent sample. For LHS I don't think you can do this as the number of samples is specified in advance as part of the procedure. So it appears that you would have to take a succession of independent LHS samples of size $1,2,3,...$.
There also needs to be some way of interpreting 'dense' in the limit as the sample size tends to $\infty$. Density does not seem to hold in a deterministic way for LHS e.g. in two dimensions, you could choose a sequence of LHS samples of size $1,2,3,...$ such that they all stick to the diagonal of $[0,1)^2$. So some kind of probabilistic definition seems necessary.
Let, for every $n$, $X_n=(X_{n1},X_{n2},...,X_{nn})$ be a sample of size $n$ generated according to some stochastic mechanism. Assume that, for different $n$, these samples are independent. Then to define asymptotic density we might require that, for every $\epsilon>0$, and for every $x$ in the sample space (assumed to be $[0,1)^d$), we have $P(min_{1\leq k\leq n} \|X_{nk}-x\|\geq \epsilon)\to0$ (as $n\to \infty$).
If the sample $X_n$ is obtained by taking $n$ independent samples from the $U([0,1)^d)$ distribution ('IID random sampling') then $$P(min_{1\leq k\leq n} \|X_{nk}-x\|\geq \epsilon)=\prod_{k=1}^n P(\|X_{nk}-x\|\geq \epsilon)\leq (1-v_\epsilon 2^{-d})^n \to 0$$ where $v_\epsilon$ is the volume of the $d$-dimensional ball of radius $\epsilon$. So certainly IID random sampling is asymptotically dense.
Now consider the case that the samples $X_n$ are obtained by LHS. Theorem 10.1 in these notes states that the members of the sample $X_n$ are all distributed as $U([0,1)^d)$. However, the permutations used in the definition of LHS (although independent for different dimensions) induce some dependence between the members of the sample ($X_{nk}, k\leq n$), so it is less obvious that the asymptotic density property holds.
Fix $\epsilon\gt 0$ and $x\in [0,1)^d$. Define $P_n=P(min_{1\leq k\leq n} \|X_{nk}-x\|\geq \epsilon)$. We want to show that $P_n\to 0$. To do this, we can make use of Proposition 10.3 in those notes, which is a kind of Central Limit Theorem for Latin Hypercube Sampling. Define $f:[0,1]^d\to\mathbb{R}$ by $f(z)=1$ if $z$ is in the ball of radius $\epsilon$ around $x$, $f(z)=0$ otherwise. Then Proposition 10.3 tells us that $Y_n:=\sqrt n (\hat{\mu}_{LHS}-\mu)\xrightarrow{d} N(0,\Sigma)$ where $\mu=\int_{[0,1]^d} f(z) dz$ and $\hat{\mu}_{LHS}=\frac{1}{n}\sum_{i=1}^n f(X_{ni})$.
Take $L>0$. Eventually, for large enough $n$, we will have $-\sqrt n\mu\lt -L$. So eventually we will have $P_n=P(Y_n=-\sqrt n \mu)\le P(Y_n\lt -L)$. Therefore $\limsup P_n\le \limsup P(Y_n\lt -L)=\Phi(\frac{-L}{\sqrt\Sigma})$, where $\Phi$ is the standard normal cdf. Since $L$ was arbitrary, it follows that $P_n\to 0$ as required.
This proves asymptotic density (as defined above) for both iid random sampling and LHS. Informally, this means that given any $\epsilon$ and any $x$ in the sampling space, the probability that the sample gets to within $\epsilon$ of $x$ can be made as close to 1 as you please by choosing the sample size sufficiently large. It is easy to extend the concept of asymptotic density so as to apply to finite subsets of the sample space - by applying what we already know to each point in the finite subset. More formally, this means that we can show: for any $\epsilon>0$ and any finite subset $\{x_1,...,x_m\}$ of the sample space, $min_{1\leq j\leq m} P(min_{1\leq k\leq n} \|X_{nk}-x_j\|\lt \epsilon)\to 1$ (as $n\to\infty$).
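An LHS sampler itself is only a few lines. Here is a pure-Python sketch, assuming the usual stratified-permutation construction on $[0,1)^d$ (each axis cut into $n$ strata, paired across dimensions by independent random permutations); it also looks at the distance from a fixed point to its nearest sample point for two sample sizes:

```python
import random

def lhs_sample(n, d, rng):
    """One Latin hypercube sample of n points in [0,1)^d: each coordinate
    axis is cut into n equal strata, a point is placed uniformly within
    each stratum, and independent random permutations pair the strata
    across dimensions."""
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(perm[i] + rng.random()) / n for i in range(n)])
    return list(zip(*cols))  # n points, each a d-tuple

def min_dist_to(x, pts):
    """Euclidean distance from x to its nearest point in pts."""
    return min(sum((xi - pi) ** 2 for xi, pi in zip(x, p)) ** 0.5
               for p in pts)

rng = random.Random(0)
x = (0.5, 0.5)
# the nearest sample point to x tends to get closer as n grows,
# consistent with the asymptotic density property proved above
d10 = min_dist_to(x, lhs_sample(10, 2, rng))
d1000 = min_dist_to(x, lhs_sample(1000, 2, rng))
```

Note that each fresh call to `lhs_sample` is an independent LHS design, matching the "succession of independent LHS samples of size 1, 2, 3, ..." framing above.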
26,213 | Latin Hypercube Sampling Asymptotics | I'm not sure if this is quite what you want, but here goes.
You're LHS-sampling $n$ points from $[0,1)^d$, say. We'll argue very informally that, for any $\epsilon>0$, the expected number of empty (hyper)cuboids of size $\epsilon$ in each dimension goes to zero as $n\to\infty$.
Let $m=\lceil 2/\epsilon \rceil$ so that if we divide $[0,1)^d$ uniformly into $m^d$ tiny cuboids -- microcuboids, say -- of width $1/m$ then every width-$\epsilon$ cuboid contains at least one microcuboid. So if we can show that the expected number of unsampled microcuboids is zero, in the limit as $n\to\infty$, then we're done. (Note that our microcuboids are arranged on a regular grid, but the $\epsilon$-cuboids can be in any position.)
The chance of completely missing a given microcuboid with the first sample point is $1-m^{-d}$, independent of $n$, as the first set of $d$ sample coordinates (first sample point) can be chosen freely. Given that the first few sample points have all missed that microcuboid, subsequent sample points will find it harder to miss (on average), so the chance of all $n$ points missing it is less than $(1-m^{-d})^n$.
There are $m^d$ microcuboids in $[0,1)^d$, so the expected number that are missed is bounded above by $m^d(1-m^{-d})^n$ -- because expectations add -- which is zero in the limit as $n\to\infty$.
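To see how quickly the bound $m^d(1-m^{-d})^n$ decays, here is a quick numerical check; the particular values of $\epsilon$, $d$ and $n$ are chosen only for illustration.

```python
from math import ceil

def expected_missed_bound(eps, d, n):
    """Upper bound m^d * (1 - m^-d)^n on the expected number of
    unsampled microcuboids after n points, with m = ceil(2/eps)."""
    m = ceil(2 / eps)
    return m ** d * (1 - m ** (-d)) ** n

# e.g. eps = 0.2 in d = 2 dimensions gives m = 10, i.e. 100 microcuboids
b_small = expected_missed_bound(0.2, 2, n=100)
b_large = expected_missed_bound(0.2, 2, n=5000)
```

The decay is geometric in $n$, so the bound goes to zero fast once $n$ is a modest multiple of $m^d$.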
Updates ...
(1) Here's a picture showing how, for given $\epsilon$, you can pick $m$ large enough so that an $m\times m$ grid of "microcuboids" (squares in this 2-dimensional illustration) is guaranteed to have at least one microcuboid within any $\epsilon\times\epsilon$ sized region. I've shown two "randomly"-chosen $\epsilon\times\epsilon$ regions and have coloured-in purple the two microcuboids that they contain.
(2) Consider any particular microcuboid. It has volume $(1/m)^d$, a fraction $m^{-d}$ of the whole space. So the first LHS sample -- which is the only one chosen completely freely -- will miss it with probability $1-m^{-d}$. The only important fact is that this is a fixed value (we'll let $n\to\infty$, but keep $m$ constant) that's less than $1$.
(3) Now think about the number of sample points $n>m$. I've illustrated $n=6m$ in the picture. LHS works in a fine mesh of these super-tiny $n^{-1}\times n^{-1}$ sized "nanocuboids" (if you will), not the larger $m^{-1}\times m^{-1}$ sized "microcuboids", but actually that's not important in the proof. The proof only needs the slightly hand-waving statement that it gets gradually harder, on average, to keep missing a given microcuboid as you throw down more points. So it was a probability of $1-m^{-d}$ for the first LHS point missing, but less than $(1-m^{-d})^n$ for all $n$ of them missing: that's zero in the limit as $n\to\infty$.
(4) All these epsilons are fine for a proof but aren't great for your intuition. So here are a couple of pictures illustrating $n=10$ and $n=50$ sample points, with the largest empty rectangular area highlighted. (The grid is the LHS sampling grid -- the "nanocuboids" referred to earlier.) It should be "obvious" (in some vague intuitive sense) that the largest empty area will shrink to arbitrarily small size as the number of sample points $n\to\infty$.
26,214 | Understanding the use of logarithms in the TF-IDF logarithm | The aspect emphasised is that the relevance of a term or a document does not increase proportionally with term (or document) frequency. Using a sub-linear function therefore helps dampen down this effect. To that extent, the influence of very large or very small values (e.g. very rare words) is also amortised. Finally, as most people intuitively perceive scoring functions to be somewhat additive, using logarithms turns the probability of different independent terms from the product $P(A, B) = P(A) \, P(B)$ into the additive form $\log(P(A,B)) = \log(P(A)) + \log(P(B))$.
As the Wikipedia article you link notes, the justification of TF-IDF is still not well established; it is/was a heuristic that we want to make rigorous, not a rigorous concept we want to transfer to the real world.
As mentioned by @Anony-Mousse, a very good read on the matter is Robertson's Understanding Inverse Document Frequency:
On theoretical arguments for IDF. It gives a broad overview of the whole framework and attempts to ground TF-IDF methodology to the relevance weighting of search terms.
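To see the dampening numerically, here is one common sub-linear TF-IDF variant in Python. Note the exact formula, $(1 + \log \mathrm{tf}) \cdot \log(N/\mathrm{df})$, is just one of several weighting schemes, so treat it as an illustrative assumption rather than the definition:

```python
from math import log

def tfidf(tf, df, n_docs):
    """One common sub-linear TF-IDF variant: (1 + log tf) * log(N / df).
    Doubling a term's count far less than doubles its weight."""
    w_tf = 1 + log(tf) if tf > 0 else 0.0
    return w_tf * log(n_docs / df)

w1 = tfidf(tf=5, df=10, n_docs=1000)   # moderately frequent term
w2 = tfidf(tf=10, df=10, n_docs=1000)  # same term, twice the count
rare = tfidf(tf=5, df=2, n_docs=1000)  # rarer term, higher idf
```

Here `w2 < 2 * w1` (the sub-linear dampening) while the rarer term still scores higher than the common one at equal count (the idf effect).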
26,215 | Cross validating lasso regression in R | An example on how to do vanilla plain cross-validation for lasso in glmnet on mtcars
data set.
Load data set.
Prepare features (independent variables). They should be of matrix class. The easiest way to convert a df containing categorical variables into a matrix is via model.matrix. Mind you, by default glmnet fits an intercept, so you'd better strip the intercept from the model matrix.
Prepare response (dependent variable). Let's code cars with above average mpg as efficient ('1') and the rest as inefficient ('0'). Convert this variable to factor.
Run cross-validation via cv.glmnet. It will pick up alpha=1 from the default glmnet parameters, which is what you asked for: lasso regression.
By examining the output of cross-validation you may be interested in
at least 2 pieces of information:
lambda, that minimizes cross-validated error. glmnet actually provides 2 lambdas: lambda.min and lambda.1se. It's your judgement call as a practicing statistician which to use.
resulting regularized coefficients.
Please see the R code per the above instructions:
# Load data set
data("mtcars")
# Prepare data set
x <- model.matrix(~.-1, data= mtcars[,-1])
mpg <- ifelse( mtcars$mpg < mean(mtcars$mpg), 0, 1)
y <- factor(mpg, labels = c('notEfficient', 'efficient'))
library(glmnet)
# Run cross-validation
mod_cv <- cv.glmnet(x=x, y=y, family='binomial')
mod_cv$lambda.1se
[1] 0.108442
coef(mod_cv, mod_cv$lambda.1se)
1
(Intercept) 5.6971598
cyl -0.9822704
disp .
hp .
drat .
wt .
qsec .
vs .
am .
gear .
carb .
mod_cv$lambda.min
[1] 0.01537137
coef(mod_cv, mod_cv$lambda.min)
1
(Intercept) 6.04249733
cyl -0.95867199
disp .
hp -0.01962924
drat 0.83578090
wt .
qsec .
vs .
am 2.65798203
gear .
carb -0.67974620
Final comments:
note, the model's output says nothing about statistical significance of the coefficients, only values.
l1 penalizer (lasso), which you asked for, is notorious for instability as evidenced in this blog post and this stackexchange question. A better way could be to cross-validate on alpha too, which would let you decide on proper mix of l1 and l2 penalizers.
an alternative way to do cross-validation could be to turn to caret's train( ... method='glmnet')
and finally, the best way to learn more about cv.glmnet and its defaults coming from glmnet is of course ?glmnet in R's console )))
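The R code above is the real thing; for a language-neutral view of the cross-validation mechanics (folds, per-lambda mean CV error and its standard error, lambda.min vs lambda.1se), here is a Python sketch. The "model" is a deliberately toy shrunken-mean estimator chosen only so the example is self-contained -- it is not what glmnet fits:

```python
import random
from statistics import mean, stdev

def cv_select_lambda(y, lambdas, k=5, seed=0):
    """Outline of what cv.glmnet does: split the data into k folds, fit on
    k-1 folds for each candidate lambda, record validation error per fold,
    then report lambda.min (smallest mean CV error) and lambda.1se (the
    most regularized lambda within one standard error of that minimum)."""
    rng = random.Random(seed)
    idx = list(range(len(y)))
    rng.shuffle(idx)
    folds = [set(idx[i::k]) for i in range(k)]
    errs = {lam: [] for lam in lambdas}
    for fold in folds:
        train = [y[i] for i in range(len(y)) if i not in fold]
        valid = [y[i] for i in fold]
        for lam in lambdas:
            # toy stand-in model: shrink the training mean toward zero
            yhat = mean(train) * len(train) / (len(train) + lam)
            errs[lam].append(mean((v - yhat) ** 2 for v in valid))
    cv_mean = {lam: mean(e) for lam, e in errs.items()}
    cv_se = {lam: stdev(e) / len(folds) ** 0.5 for lam, e in errs.items()}
    lam_min = min(lambdas, key=lambda l: cv_mean[l])
    threshold = cv_mean[lam_min] + cv_se[lam_min]
    lam_1se = max(l for l in lambdas if cv_mean[l] <= threshold)
    return lam_min, lam_1se

rng = random.Random(1)
y = [rng.gauss(2.0, 1.0) for _ in range(100)]
lam_min, lam_1se = cv_select_lambda(y, lambdas=[0.0, 0.5, 1.0, 2.0, 5.0, 10.0])
```

By construction lambda.1se is at least as regularized as lambda.min -- that is the "simpler model within one standard error" heuristic behind cv.glmnet's two reported values.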
26,216 | D'Agostino-Pearson vs. Shapiro-Wilk for normality | The ultimate reason why the Shapiro Wilk is popular has, I think, less to do with Pubmed or NIST but rather with its excellent power in a wide variety of situations of interest (which would in turn lead to wider implementation and hence, popularity); it generally comes out toward the top against a wide variety of non-normal distributions in power comparisons with other possible choices. I wouldn't claim it's the best possible omnibus test of normality, but it's a very solid choice.
If you have ties beyond that due to the ordinary rounding of real numbers to some reasonable number of figures, you can reject normality immediately (say, if your data were counts!).
it can be fairly rare that two samples give identical readings, though they may share commonality to several decimal places.
the occasional such tie -- or a small fraction of ties -- should present no problem for the Shapiro-Wilk.
The Shapiro Wilk is impacted by ties, but a few ties shouldn't be a big issue.
Royston, 1989[1] says:
The Shapiro-Wilk test [...] should not be used if the grouping interval exceeds 0.1 standard deviation units.
That's pretty big. With a normal distribution, a grouping interval of 0.1 s.d. would only produce about 35 unique values out of 100. This is an example of Royston's edge-case at n=100:
One of the values is repeated ten times. That's what he's saying is okay (just).
You need either really tiny sds or pretty heavy rounding to do worse than this.
The same paper suggests a modification for ties in that situation.
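The "0.1 s.d. grouping" figure is easy to check by simulation: round standard-normal draws to a grid of width 0.1 standard deviations and count distinct values. A pure-Python sketch (parameters illustrative only); the average should land in the mid-30s, roughly consistent with the "about 35 unique values out of 100" claim above:

```python
import random

def unique_after_rounding(n=100, interval=0.1, reps=200, seed=0):
    """Average number of distinct values when n standard-normal draws
    are rounded to a grid of the given width (in sd units)."""
    rng = random.Random(seed)
    counts = []
    for _ in range(reps):
        sample = [round(rng.gauss(0, 1) / interval) for _ in range(n)]
        counts.append(len(set(sample)))
    return sum(counts) / reps

avg_unique = unique_after_rounding()
```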
when specifically I should consider switching to the D'Agostino-Pearson (somewhat less favored by some who hold sway for some reason)
If you mean the test based on the skewness and kurtosis, then the reason is obvious enough. It simply doesn't perform quite as well overall. If there are differences in skewness or kurtosis, it's an excellent test, often displaying quite good power, but not every non-normal distribution differs substantively in skewness or kurtosis. Indeed it's a trivial matter to find distinctly non-normal distributions with the same skewness and kurtosis as the normal.
There's an example here which has skewness and kurtosis the same as for the normal, but you can see it's non-normal at a glance! (You may find that post useful more broadly.)
The D'Agostino $K^2$ test has very poor power against those, but Shapiro-Wilk has no trouble with them.
does anyone have a rationale for how similar two values should be in order to be considered ties?
For the statistical issues relevant to ties (as here), usually they're tied if they're exactly equal to as many figures as you have. Of course if you have given many more figures than are meaningful, that may be a different issue.
[1]: Royston, J.P. (1989),
"Correcting the Shapiro-Wilk W for ties,"
Journal of Statistical Computation and Simulation, Volume 31, Issue 4
26,217 | Why is it called "mode" in MAP estimation? | In maximum a posteriori estimation (MAP), we use the maximum of the posterior distribution to derive point estimates of whatever we are interested in. The maximum of a distribution is also called its "mode" (assuming a unimodal distribution).
This is not the mean of the posterior (or any other) distribution. The difference is relevant when the posterior is asymmetric.
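A minimal numerical sketch of that gap (the Beta(2, 5) posterior here is a hypothetical example, not one from the original answer):

```python
import numpy as np

# Hypothetical illustration: a right-skewed Beta(2, 5) posterior, written as an
# unnormalized density so the mode can be found by direct maximization.
a, b = 2, 5
grid = np.linspace(0.0, 1.0, 100_001)
density = grid ** (a - 1) * (1 - grid) ** (b - 1)  # unnormalized Beta(a, b) pdf

map_estimate = grid[np.argmax(density)]  # posterior mode: (a-1)/(a+b-2) = 0.2
eap_estimate = a / (a + b)               # posterior mean: a/(a+b) = 2/7

print(map_estimate, eap_estimate)
```

The mode sits at 0.2, while the mean is pulled right to about 0.286 by the long upper tail.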
26,218 | Why is it called "mode" in MAP estimation? | When you calculate the MAP estimate of a parameter, you assume the posterior to be a unimodal distribution, such as a normal or chi-square distribution. For a continuous unimodal distribution, the mode does not mean the most frequent value; it refers to the peak (local maximum) of the density. That is why the MAP estimate is called the mode of the posterior distribution.
26,219 | Why is it called "mode" in MAP estimation? | The mode of the posterior $\arg \max p(\theta|x)$ does not always coincide with the mean of the posterior $\int_{\Theta} \theta p(\theta|x) \mathrm{d}\theta$. It does in some cases, like a Gaussian posterior, but generally it does not. Depending on your field, the latter is called expected a-posteriori (EAP), or simply posterior expectation.
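The mode/mean distinction can be checked against closed forms. A sketch assuming a hypothetical Gamma posterior with shape 3 and rate 2, where the mode is $(\alpha-1)/\beta$ and the mean is $\alpha/\beta$:

```python
import numpy as np

# Hypothetical example: for a Gamma posterior with shape alpha and rate beta,
# the MAP estimate (mode) is (alpha - 1)/beta while the EAP (mean) is alpha/beta.
alpha, rate = 3.0, 2.0
grid = np.linspace(1e-6, 10.0, 1_000_001)
log_density = (alpha - 1) * np.log(grid) - rate * grid  # unnormalized log-pdf

map_est = grid[np.argmax(log_density)]  # close to (alpha - 1)/rate = 1.0
eap_est = alpha / rate                  # exactly alpha/rate = 1.5
```

The two estimates agree only when the posterior is symmetric about its peak; here the mean exceeds the mode because of the right skew.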
26,220 | How to interpret variables that are excluded from or included in the lasso model? | Your conclusion is correct. Think of two aspects:
Statistical power to detect an effect. Unless the power is very high, one can miss even large real effects.
Reliability: having a high probability of finding the right (true) features.
There are at least 4 major considerations:
Is the method reproducible by you using the same dataset?
Is the method reproducible by others using the same dataset?
Are the results reproducible using other datasets?
Is the result reliable?
When one desires to do more than prediction but to actually draw conclusions about which features are important in predicting the outcome, 3. and 4. are crucial.
You have addressed 3. (and for this purpose, 100 bootstraps is sufficient), but in addition to individual feature inclusion fractions we need to know the average absolute 'distance' between a bootstrap feature set and the original selected feature set. For example, what is the average number of features detected from the whole sample that were found in the bootstrap sample? What is the average number of features selected from a bootstrap sample that were found in the original analysis? What is the proportion of times that a bootstrap found an exact match to the original feature set? What is the proportion that a bootstrap was within one feature of agreeing exactly with the original? Two features?
It would not be appropriate to say that any cutoff should be used in making an overall conclusion.
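The bootstrap set-stability summaries described above can be sketched as follows. A simple univariate correlation screen stands in for the lasso selection step, and the data, threshold, and screen are all hypothetical; substitute your actual lasso fit:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)

def select(X, y, thresh=0.2):
    """Features whose absolute correlation with y exceeds thresh
    (a stand-in for the lasso's selected set)."""
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return frozenset(np.flatnonzero(np.abs(r) > thresh))

original = select(X, y)          # feature set selected on the full sample
B = 100
exact = 0
overlaps = []
for _ in range(B):
    idx = rng.integers(0, n, n)  # bootstrap resample of rows
    boot = select(X[idx], y[idx])
    exact += (boot == original)  # exact agreement with the original set
    overlaps.append(len(boot & original))

print(sorted(original), exact / B, np.mean(overlaps))
```

The printed quantities are the original selected set, the proportion of bootstraps matching it exactly, and the average overlap with it, i.e. the kind of distance summaries suggested above.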
Regarding part 4., none of this addresses the reliability of the process, i.e., how close the feature set is to the 'true' feature set. To address that, you might do a Monte-Carlo re-simulation study where you take the original sample lasso result as the 'truth' and simulate new response vectors several hundred times using some assumed error structure. For each re-simulation you run the lasso on the original whole predictor matrix and the new response vector, and determine how close the selected lasso feature set is to the truth that you simulated from. Re-simulation conditions on the entire set of candidate predictors and uses coefficient estimates from the initially fitted model (and in the lasso case, the set of selected predictors) as a convenient 'truth' to simulate from. By using the original predictors one automatically gets a reasonable set of collinearities built into the Monte Carlo simulation.
To simulate new realizations of $Y$ given the original $X$ matrix and the regression coefficients now taken as true, one can use the residual variance and assume normality with mean zero, or, to be even more empirical, save all the residuals from the original fit and take a bootstrap sample from them to add residuals to the known linear predictor $X\beta$ for each simulation. Then the original modeling process is run from scratch (including selection of the optimum penalty) and a new model is developed. For each of 100 or so iterations, compare the new model to the true model you are simulating from.
Again, this is a good check on the reliability of the process -- the ability to find the 'true' features and to get good estimates of $\beta$.
When $Y$ is binary, instead of dealing with residuals, re-simulation involves computing the linear predictor $X\beta$ from the original fit (e.g., using the lasso), taking the logistic transformation, and generating for each Monte Carlo simulation a new $Y$ vector to fit afresh. In R one can say for example
lp <- predict(...) # assuming suitable predict method available, or fitted()
probs <- plogis(lp)
y <- ifelse(runif(n) <= probs, 1, 0)
26,221 | What is an Affine Transformation? | An affine transformation has the form $f(x) = Ax + b$ where $A$ is a matrix and $b$ is a vector (of proper dimensions, obviously).
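A minimal numerical illustration (with a hypothetical $A$ and $b$) of the fact that such a map preserves ratios along lines, e.g. it sends midpoints to midpoints:

```python
import numpy as np

# Hypothetical A and b: an affine map f(x) = Ax + b sends the midpoint of two
# points to the midpoint of their images (ratios along lines are preserved).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
b = np.array([1.0, -2.0])

def f(x):
    return A @ x + b

p0 = np.array([0.0, 0.0])
p1 = np.array([2.0, 2.0])
mid = 0.5 * (p0 + p1)

print(f(mid), 0.5 * (f(p0) + f(p1)))  # both are [4. 1.]
```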
26,222 | What is an Affine Transformation? | An affine transformation (left-multiplication by a matrix, possibly followed by a translation; the pure matrix case is a linear transformation, and for more intuition see the blog post A Geometrical Understanding of Matrices) is parallel-preserving: it only stretches, reflects, rotates (for example via a diagonal or an orthogonal matrix) or shears (via a matrix with off-diagonal elements) a vector, and the same applies to many vectors at once, i.e. a matrix. A "non-affine" transformation (also a type of projective transformation, as explained in the comment by @whuber) may look like the first example in the following diagram:
More generally speaking affine transformation has the following three properties:
straight lines preserved
parallel lines preserved
ratios of lengths along lines preserved (midpoints preserved)
26,223 | What is an Affine Transformation? | In your comment the interview question you were asked was "give an example of statistical distribution, other than normal distribution, which is closed under affine transformation".
The example to which the question refers is the fact that if you have a normally distributed random variable $X$, say $X\sim \text{N}(\mu,\sigma^2)$, then an affine transformation is also normally distributed $aX+b \sim \text{N}(a\mu+b,a^2\sigma^2)$.
The terminology in Statistics for distributions which are 'closed under affine transformation' is $\textbf{location-scale family}$.
One example which would answer the question is the continuous uniform distribution. If $X\sim U[\alpha,\beta]$ and $Y= aX+b$ with $a>0$, then
$$Y\sim U[a\alpha+b,\, a\beta+b].$$
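A quick simulation check of the uniform example, with hypothetical $a$ and $b$:

```python
import numpy as np

# Hypothetical a and b: if X ~ U[0, 1] and Y = aX + b with a > 0,
# then Y should look like U[b, a + b].
rng = np.random.default_rng(1)
a, b = 3.0, -1.0
x = rng.uniform(0.0, 1.0, size=100_000)
y = a * x + b

print(y.min(), y.max(), y.mean())  # near b = -1, a + b = 2, and midpoint 0.5
```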
26,224 | What is an Affine Transformation? | So I look here: http://mathworld.wolfram.com/AffineTransformation.html
It is a rotation. All points on a line stay on the same line.
Per @Luca:
It can have scaling, shear, translation as well. No bending. Straight lines are always straight.
26,225 | Does SurveyMonkey ignore the fact that you get a non-random sample? | The short answer is yes: Survey Monkey ignores exactly how you obtained your sample. Survey Monkey is not smart enough to assume that what you have gathered isn't a convenience sample, but virtually every Survey Monkey survey is a convenience sample. This creates massive discrepancy in exactly what you're estimating which no amount of sheer sampling can/will eliminate. On one hand you could define a population (and associations therein) you would obtain from a SRS. On the other, you could define a population defined by your non-random sampling, the associations there you can estimate (and the power rules hold for such values). It's up to you as a researcher to discuss the discrepancy and let the reader decide exactly how valid the non-random sample could be in approximating a real trend.
As a point, there are inconsistent uses of the term bias. In probability theory, the bias of an estimator is defined by $\mbox{Bias}_n = \mathbb{E}[\hat{\theta}_n] - \theta$. However, an estimator can be biased but consistent, so that the bias "vanishes" in large samples, such as the bias of maximum likelihood estimates of the standard deviation of normally distributed RVs, i.e. $\hat{\theta} \rightarrow_p \theta$. Estimators which don't have vanishing bias (e.g. $\hat{\theta} \not\to_p \theta$) are called inconsistent in probability theory. Study design experts (like epidemiologists) have picked up a bad habit of calling inconsistency "bias". In this case, it's selection bias or volunteer bias. It's certainly a form of bias, but inconsistency implies that no amount of sampling will ever correct the issue.
In order to estimate population level associations from convenience sample data, you would have to correctly identify the sampling probability mechanism and use inverse probability weighting in all of your estimates. In very rare situations does this make sense. Identifying such a mechanism is next to impossible in practice. A time that it can be done is in a cohort of individuals with previous information who are approached to fill out a survey. Nonresponse probability can be estimated as a function of that previous information, e.g. age, sex, SES, ... Weighting gives you a chance to extrapolate what results would have been in the non-responder population. Census is a good example of the involvement of inverse probability weighting for such analyses.
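A toy sketch of inverse probability weighting with a known response mechanism (every number here is hypothetical):

```python
import numpy as np

# Hypothetical population: older members have higher outcomes AND respond more
# often, so the naive respondent mean is inconsistent for the population mean.
rng = np.random.default_rng(2)
N = 200_000
old = rng.random(N) < 0.5                  # covariate driving both outcome and response
outcome = np.where(old, 10.0, 0.0) + rng.normal(size=N)
p_respond = np.where(old, 0.8, 0.2)        # response mechanism, assumed known
responded = rng.random(N) < p_respond

naive = outcome[responded].mean()          # biased: over-represents the old
w = 1.0 / p_respond[responded]             # inverse probability weights
ipw = np.sum(w * outcome[responded]) / np.sum(w)

print(naive, ipw)  # naive lands near 8; the IPW estimate is near the true mean 5
```

The naive respondent mean stays biased no matter how many respondents are collected, while the weighted estimate is consistent because the weights undo the unequal response probabilities.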
26,226 | Likelihood Ratio for two-sample Exponential distribution | If memory serves, it appears you have forgotten something in your LR statistic.
The likelihood function under the null is
$$L_{H_0} = \theta^{-n_1-n_2}\cdot \exp\left\{-\theta^{-1}\left(\sum x_i+\sum y_i\right)\right\}$$
and the MLE is
$$\hat \theta_0 = \frac {\sum x_i+\sum y_i}{n_1+n_2} = w_1\bar x +w_2 \bar y, \;\; w_1=\frac {n_1}{n_1+n_2},\;w_2=\frac {n_2}{n_1+n_2}$$
So $$L_{H_0}(\hat \theta_0) = (\hat \theta_0)^{-n_1-n_2}\cdot e^{-n_1-n_2}$$
Under the alternative, the likelihood is
$$L_{H_1} = \theta_1^{-n_1}\cdot \exp\left\{-\theta_1^{-1}\left(\sum x_i\right)\right\}\cdot \theta_2^{-n_2}\cdot \exp\left\{-\theta_2^{-1}\left(\sum y_i\right)\right\}$$
and the MLE's are
$$\hat \theta_1 = \frac {\sum x_i}{n_1} = \bar x, \qquad \hat \theta_2 = \frac {\sum y_i}{n_2} = \bar y$$
So
$$L_{H_1}(\hat \theta_1,\,\hat \theta_2) = (\hat \theta_1)^{-n_1}(\hat \theta_2)^{-n_2}\cdot e^{-n_1-n_2}$$
Consider the ratio
$$\frac {L_{H_1}(\hat \theta_1,\,\hat \theta_2)}{L_{H_0}(\hat \theta_0)} = \frac {(\hat \theta_0)^{n_1+n_2}}{(\hat \theta_1)^{n_1}(\hat \theta_2)^{n_2}}=\left(\frac {\hat \theta_0}{\hat \theta_1}\right)^{n_1} \cdot \left(\frac {\hat \theta_0}{\hat \theta_2}\right)^{n_2}$$
$$= \left(w_1 + w_2 \frac {\bar y}{\bar x}\right)^{n_1} \cdot \left(w_1\frac {\bar x}{\bar y} + w_2 \right)^{n_2}$$
The sample means are independent, so I believe that you can now finish this.
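As a sanity check on the algebra, the simplified expression can be compared with the ratio computed directly from the log-likelihoods on simulated (hypothetical) data:

```python
import numpy as np

# Numeric check on hypothetical data: the simplified ratio derived above must
# agree with L_H1(th1, th2) / L_H0(th0) computed directly from the likelihoods.
rng = np.random.default_rng(3)
n1, n2 = 8, 13
x = rng.exponential(scale=2.0, size=n1)
y = rng.exponential(scale=3.0, size=n2)

th1, th2 = x.mean(), y.mean()              # unrestricted MLEs
th0 = (x.sum() + y.sum()) / (n1 + n2)      # pooled MLE under H0
w1, w2 = n1 / (n1 + n2), n2 / (n1 + n2)

def loglik(theta, data):
    # exponential log-likelihood, mean parameterization
    return -data.size * np.log(theta) - data.sum() / theta

direct = np.exp(loglik(th1, x) + loglik(th2, y)
                - loglik(th0, x) - loglik(th0, y))
simplified = (w1 + w2 * th2 / th1) ** n1 * (w1 * th1 / th2 + w2) ** n2

print(np.isclose(direct, simplified))  # True
```

The ratio is always at least 1, since the unrestricted maximum can never fall below the restricted one.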
26,227 | Likelihood Ratio for two-sample Exponential distribution | The likelihood function given the sample $\mathbf x=(x_1,\ldots,x_{n_1},y_1,\ldots,y_{n_2})$ is given by
\begin{align}
L(\theta_1,\theta_2)&=\frac{1}{\theta_1^{n_1}\theta_2^{n_2}}\,\exp\left[-\frac{1}{\theta_1}\sum_{i=1}^{n_1} x_i-\frac{1}{\theta_2}\sum_{i=1}^{n_2}y_i\right]\mathbf1_{\mathbf x>0},\quad\theta_1,\theta_2>0.
\end{align}
The LR test criterion for testing $H_0:\theta_1=\theta_2$ against $H_1:\theta_1\ne \theta_2$ is of the form
\begin{align}
\lambda(\mathbf x)&=\frac{\sup\limits_{\theta_1=\theta_2}L(\theta_1,\theta_2)}{\sup\limits_{\theta_1,\theta_2}L(\theta_1,\theta_2)}
=\frac{L(\hat\theta,\hat\theta)}{L(\hat\theta_1,\hat\theta_2)},
\end{align}
where $\hat\theta$ is the MLE of $\theta_1=\theta_2$ under $H_0$, and $\hat\theta_i$ is the unrestricted MLE of $\theta_i$ for $i=1,2$.
It is easily verified that
$$\left(\hat\theta_1,\hat\theta_2\right)=(\bar x,\bar y)$$
and $$\hat\theta=\frac{n_1\bar x+n_2\bar y}{n_1+n_2}.$$
After some simplification, the LRT criterion takes the following form:
\begin{align}
\lambda(\mathbf x)&=\underbrace{\text{constant}}_{>0}\left(\frac{n_1\bar x}{n_1\bar x+n_2\bar y}\right)^{\!\!n_1}\left(\frac{n_2\bar y}{n_1\bar x+n_2\bar y}\right)^{\!\!n_2}
\\&=\text{constant}\cdot\,t^{n_1}(1-t)^{n_2},\quad\text{ where }t=\frac{n_1\bar x}{n_1\bar x+n_2\bar y}
\\&=g(t),\,\text{say.}
\end{align}
Studying the nature of the function $g$, we see that $$g'(t)\gtrless 0\;\iff\; t\lessgtr \frac{n_1}{n_1+n_2}.$$
Now since $2n_1\overline X/\theta_1\sim \chi^2_{2n_1}$ and $2n_2\overline Y/\theta_2\sim \chi^2_{2n_2}$ are independently distributed, we have
$$\frac{\overline X}{\overline Y}\stackrel{H_0}{\sim}F_{2n_1,2n_2}.$$
Define $$v=\frac{n_1\overline x}{n_2\overline y},$$
so that $$t=\frac{v}{v+1},$$ an increasing function of $v$.
Therefore,
$$\lambda(\mathbf x)<c \iff v<c_1\quad\text{ or }\quad v>c_2,$$
where $c_1,c_2$ can be found from some size restriction and the fact that, under $H_0$,
$$\frac{n_2}{n_1}\,v\sim F_{2n_1,2n_2}.$$
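For instance, one simple size restriction is to put $\alpha/2$ of probability in each tail of the null $F$ distribution; a sketch using `scipy.stats.f` with hypothetical sample sizes (the exact LRT cutoffs come from the $g(t)$ equation and differ slightly from equal tails):

```python
from scipy.stats import f

# Equal-tailed choice of c1 and c2 at size alpha, using the null distribution
# (n2/n1) v = xbar/ybar ~ F(2 n1, 2 n2).  Sample sizes here are hypothetical.
n1, n2, alpha = 10, 12, 0.05
lo = f.ppf(alpha / 2, 2 * n1, 2 * n2)      # lower quantile for xbar/ybar
hi = f.ppf(1 - alpha / 2, 2 * n1, 2 * n2)  # upper quantile for xbar/ybar

c1, c2 = (n1 / n2) * lo, (n1 / n2) * hi    # back on the v scale
print(c1, c2)  # reject H0 when v < c1 or v > c2
```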
26,228 | What is the minimum number of data points required for kernel density estimation?
In the book "Density Estimation for Statistics and Data Analysis" (Bernard W. Silverman, CRC, 1986) there is a chapter, "Required sample size for given accuracy", where the sample size required to get the relative MSE at zero not greater than 0.1 is given. I enclose the table presented there.
26,229 | Violin plots interpretation
A violin plot is just a histogram (or more often a smoothed variant like a kernel density) turned on its side and mirrored. Any textbook that teaches you how to interpret histograms should give you the intuition you seek. Edit per Nick Cox's suggestion: Freedman, Pisani, Purves, Statistics covers histograms.
As far as interpreting them in a more formal way, the whole point of graphing the distribution is to see things that statistical tests might be fooled by.
One thing I like to do with violin plots is add lines for the median, mean, etc. Sometimes I'll superimpose a boxplot so I can see even more in the way of summary statistics.
At the very least, you should be able to pick out any gross deviations in the first few moments (mean, dispersion, skewness, kurtosis) as well as bimodality and outliers.
26,230 | Comparison negative binomial model and quasi-Poisson
I see the quasi-Poisson as a technical fix; it allows you to estimate, as an additional parameter, the dispersion parameter $\phi$. In the Poisson, $\phi = 1$ by definition. If your data are under- or overdispersed relative to that, the standard errors of the model coefficients are biased. By estimating $\hat{\phi}$ at the same time as estimating the other model coefficients, you can provide a correction to the model standard errors, and hence their test statistics and associated $p$-values. This is just a correction to the model assumptions.
The negative binomial is a more direct model for the overdispersion; it assumes that the data-generating process is, or can be approximated by, a negative binomial.
The quasi-Poisson also introduces a whole pile of practical issues, such as not having a true likelihood, and hence losing the whole stack of useful tools for model selection, like the likelihood ratio test, AIC, etc. (I know there is something called QAIC, but R's glm(), for example, won't give you it.)
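For concreteness, the correction itself is simple: $\hat\phi$ is commonly estimated as the Pearson chi-square statistic divided by the residual degrees of freedom, and the quasi-Poisson standard errors are the Poisson ones inflated by $\sqrt{\hat\phi}$. A minimal sketch (the counts and fitted means below are invented for illustration):

```python
import math

def quasi_poisson_dispersion(y, mu, n_params):
    """Estimate phi as the Pearson chi-square statistic divided by the
    residual degrees of freedom (n observations - fitted parameters)."""
    pearson = sum((yi - mi) ** 2 / mi for yi, mi in zip(y, mu))
    return pearson / (len(y) - n_params)

# Hypothetical counts and fitted means from an intercept-only Poisson fit:
y = [2, 8, 4, 2]
mu = [4.0, 4.0, 4.0, 4.0]
phi = quasi_poisson_dispersion(y, mu, 1)  # 2.0 here, i.e. overdispersed
se_inflation = math.sqrt(phi)             # multiply the Poisson SEs by this
```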
26,231 | How to predict new data with spline/smooth regression
The way the prediction is computed is like this:
From the original fit, you have knot locations spread through the range of mean_radius in your training data. Together with the degree of the B-spline basis (cubic by default in mboost), these knot locations define the shape of your B-spline basis functions. The default in mboost is to have 20 interior knots, which define 24 cubic B-spline basis functions (don't ask...). Let's call these basis functions $B_j(x); j=1,\dots,24$.
The effect of your covariate $x$ (mean_radius) is represented simply as
$$
f(x) = \sum^{24}_j B_j(x) \theta_j
$$
This is a very neat trick, because it reduces the hard problem of estimating the unspecified function $f(x)$ to the much simpler problem of estimating linear regression weights $\theta_j$ associated with a collection of synthetic covariates $B_j(x)$.
Prediction then is not that complicated:
Given the estimated coefficients $\hat \theta_j$, we need to evaluate the $B_j(\cdot);\; j=1,\dots,24$ for the prediction data $x_{new}$. For that, all we need are the knot locations that define the basis functions for the original data. We then get the predicted values as
$$
\hat f(x_{new}) = \sum^{24}_j B_j(x_{new}) \hat\theta_j.
$$
Since boosting is an iterative procedure, the estimated coefficients at the stop iteration $m_{stop}$ are actually the sum of the coefficient updates in
iterations $1, \dots, m_{stop}$. If you really want to get a grip on the details, take a look at the output you get from
bbs(rnorm(100))$dpp(rep(1,100))$predict,
and go explore from there.
For example,
with(environment(bbs(rnorm(100))$dpp(rep(1,100))$predict), newX)
calls
with(environment(bbs(rnorm(100))$dpp(rep(1,100))$predict), Xfun)
to evaluate the $B_j(\cdot)$ on $x_{new}$.
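The two displayed equations can be made concrete with a toy implementation (a self-contained sketch of the mechanism, not mboost's actual internals; the knots and coefficients are made up): keep the training knots, evaluate the basis at the new points, and take the weighted sum.

```python
def bspline_basis(x, knots, degree, j):
    """Cox-de Boor recursion for the j-th B-spline basis function B_j(x)."""
    if degree == 0:
        return 1.0 if knots[j] <= x < knots[j + 1] else 0.0
    left = right = 0.0
    if knots[j + degree] > knots[j]:
        left = ((x - knots[j]) / (knots[j + degree] - knots[j])
                * bspline_basis(x, knots, degree - 1, j))
    if knots[j + degree + 1] > knots[j + 1]:
        right = ((knots[j + degree + 1] - x)
                 / (knots[j + degree + 1] - knots[j + 1])
                 * bspline_basis(x, knots, degree - 1, j + 1))
    return left + right

def predict_spline(x_new, knots, degree, theta):
    """hat f(x_new) = sum_j B_j(x_new) * theta_j, reusing the training knots."""
    return sum(bspline_basis(x_new, knots, degree, j) * t
               for j, t in enumerate(theta))

# Toy setup: 12 equally spaced knots give 12 - 3 - 1 = 8 cubic basis functions.
# With all coefficients equal to 1, partition of unity makes the prediction
# exactly 1 wherever the basis has full support (here, on [3, 8)).
knots = list(range(12))
theta = [1.0] * 8
y_hat = predict_spline(5.5, knots, 3, theta)
```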
26,232 | Finding the variance of the estimator for the maximum likelihood for the Poisson distribution
$T$ is distributed... as a Poisson variable scaled by $n$. Hence the variance of $T$ is $1/n^2 \times n\beta$.
26,233 | Finding the variance of the estimator for the maximum likelihood for the Poisson distribution
Remember that
$$
\mathbb{Var}\left(\sum_{i=1}^n a_i X_i\right) = \sum_{i=1}^n a_i^2\,\mathbb{Var}(X_i) + 2 \sum_{1\leq i<j\leq n} a_i\,a_j\,\mathbb{Cov}(X_i, X_j) \, ,
$$
always. But, if the $X_i$'s are independent, what is the value of $\mathbb{Cov}(X_i, X_j)$? That's all you need to answer the question.
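As a numerical sanity check (not part of the original answer): for i.i.d. Poisson($\beta$) data the MLE is the sample mean, and by the identity above with $a_i = 1/n$ and zero covariances, its variance is $\beta/n$. A quick simulation using Knuth's textbook Poisson sampler:

```python
import math
import random
import statistics

def poisson_draw(beta, rng):
    """Knuth's method: count uniform draws until their running
    product falls below exp(-beta)."""
    limit, k, p = math.exp(-beta), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(42)
beta, n, reps = 4.0, 50, 10000
# The MLE for each simulated sample is the sample mean; across many
# replications its variance should be close to beta / n = 0.08.
mles = [sum(poisson_draw(beta, rng) for _ in range(n)) / n
        for _ in range(reps)]
var_mle = statistics.variance(mles)
```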
26,234 | Robust MCMC estimator of marginal likelihood?
How about annealed importance sampling? It has much lower variance than regular importance sampling. I've seen it called the "gold standard", and it's not much harder to implement than "normal" importance sampling. It's slower in the sense that you have to make a bunch of MCMC moves for each sample, but each sample tends to be very high-quality so you don't need as many of them before your estimates settle down.
The other major alternative is sequential importance sampling. My sense is that it's also fairly straightforward to implement, but it requires some familiarity with sequential Monte Carlo (AKA particle filtering), which I lack.
Good luck!
Edited to add: It looks like the Radford Neal blog post you linked to also recommends Annealed Importance Sampling. Let us know if it works well for you.
26,235 | Robust MCMC estimator of marginal likelihood?
This might help in shedding some light on marginal distribution calculation. Also, I would recommend using the power posteriors method introduced by Friel and Pettitt. This approach seems quite promising, although it has some limitations. Or you could use a Laplace approximation of the posterior distribution by a normal distribution: if the histogram from MCMC looks symmetric and normal-like, then this could be quite a good approximation.
26,236 | Quadratic weighted kappa versus linear weighted kappa
For four categories, the following linear and quadratic weights would be used. These tables can be read by indexing one rater by row and the other rater by column. For instance, raters would earn 0.33 "agreement credit" if one rater assigned the item to Pass2 (row 3) and the other assigned it to Fail (column 1). This is more than the 0.00 that would be awarded using nominal (i.e., identity) weights.
\begin{array} {|c|c|c|c|c|}
\hline
Linear& \text{Fail} & \text{Pass1} & \text{Pass2} & \text{Excel}\\
\hline
\text{Fail} & 1.00 & 0.67 & 0.33 & 0.00 \\
\hline
\text{Pass1} & 0.67 & 1.00 & 0.67 & 0.33 \\
\hline
\text{Pass2} & 0.33 & 0.67 & 1.00 & 0.67 \\
\hline
\text{Excel} & 0.00 & 0.33 & 0.67 & 1.00 \\
\hline
\end{array}
\begin{array} {|c|c|c|c|c|}
\hline
Quadratic & \text{Fail} & \text{Pass1} & \text{Pass2} & \text{Excel}\\
\hline
\text{Fail} & 1.00 & 0.89 & 0.56 & 0.00 \\
\hline
\text{Pass1} & 0.89 & 1.00 & 0.89 & 0.56 \\
\hline
\text{Pass2} & 0.56 & 0.89 & 1.00 & 0.89 \\
\hline
\text{Excel} & 0.00 & 0.56 & 0.89 & 1.00 \\
\hline
\end{array}
To choose between linear and quadratic weights, ask yourself if the difference between being off by 1 vs. 2 categories is the same as the difference between being off by 2 vs. 3 categories. With linear weights, the penalty is always the same (e.g., 0.33 credit is subtracted for each additional category). However, this is not the case for quadratic weights, where penalties begin mild then grow harsher.
\begin{array} {|c|c|c|c|}
\hline
\text{Difference} & \text{Linear} & \text{Quadratic} \\
\hline
0 & 1.00 & 1.00 \\
\hline
1 & 0.67 & 0.89 \\
\hline
2 & 0.33 & 0.56 \\
\hline
3 & 0.00 & 0.00 \\
\hline
\end{array}
Also, in case anyone is interested, here are the formulas for both:
$$
\text{Linear: } w_i = 1 - \frac{i}{k-1}
$$
$$
\text{Quadratic: } w_i = 1 - \frac{i^2}{(k-1)^2}
$$
where $i$ is the difference between categories and $k$ is the total number of categories.
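As a sanity check on the tables above, the two weight formulas can be evaluated directly (a small illustration, not from the original answer):

```python
def linear_weight(i, k):
    """w_i = 1 - i / (k - 1)"""
    return 1 - i / (k - 1)

def quadratic_weight(i, k):
    """w_i = 1 - i^2 / (k - 1)^2"""
    return 1 - i ** 2 / (k - 1) ** 2

k = 4  # Fail, Pass1, Pass2, Excel
linear = [round(linear_weight(i, k), 2) for i in range(k)]       # [1.0, 0.67, 0.33, 0.0]
quadratic = [round(quadratic_weight(i, k), 2) for i in range(k)]  # [1.0, 0.89, 0.56, 0.0]
```

These reproduce the "Difference" table row by row: linear penalties fall in equal steps, quadratic ones start mild and grow harsher.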
26,237 | Computation of likelihood when $n$ is very large, so likelihood gets very small?
This is a common problem with computation of likelihoods for all manner of models; the kinds of things that are commonly done are to work on logs, and to use a common scaling factor that brings the values into a more reasonable range.
In this case, I'd suggest:
Step 1: Pick a fairly "typical" $\theta$, $\theta_0$. Divide the formula for both numerator and denominator of the general term by the numerator for $\theta = \theta_0$, in order to get something that will be much less likely to underflow.
Step 2: work on the log scale, this means that the numerator is an exp of sums of differences of logs, and the denominator is a sum of exp of sums of differences of logs.
NB: If any of your p's are 0 or 1, pull those out separately and don't take logs of those terms; they're easy to evaluate as is!
[In more general terms this scaling-and-working-on-the-log-scale can be seen as taking a set of log-likelihoods, $l_i$ and doing this: $\log(\sum_i e^{l_i})= c+\log(\sum_i e^{l_i−c})$. An obvious choice for $c$ is to make the largest term 0, which leaves us with: $\log(\sum_i e^{l_i})= \max_i(l_i)+\log(\sum_i e^{l_i−\max_i(l_i)})$. Note that when you have a numerator and denominator you could use the same $c$ for both, which will then cancel. In the above, that corresponds to taking the $\theta_0$ with the highest log-likelihood.]
The usual terms in the numerator will tend to be more moderate in size, and so in many situations the numerator and denominator are both relatively reasonable.
If there are a range of sizes in the denominator, add up the smaller ones before adding the larger ones.
If only a few terms dominate heavily, you should focus your attention on making the computation for those relatively accurate.
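The bracketed identity above is the standard log-sum-exp trick; a minimal sketch (the example log-likelihood values are made up):

```python
import math

def log_sum_exp(log_terms):
    """Stable log(sum_i exp(l_i)) = max_i l_i + log(sum_i exp(l_i - max_i l_i))."""
    m = max(log_terms)
    return m + math.log(sum(math.exp(l - m) for l in log_terms))

def normalized_weights(log_liks):
    """Ratios exp(l_g) / sum_g exp(l_g), computed entirely on the log
    scale so that neither numerator nor denominator ever underflows."""
    log_denom = log_sum_exp(log_liks)
    return [math.exp(l - log_denom) for l in log_liks]

# exp(-5000) underflows to 0.0 in double precision, yet the normalized
# ratios are perfectly representable:
weights = normalized_weights([-5000.0, -5001.0, -5010.0])
```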
26,238 | Computation of likelihood when $n$ is very large, so likelihood gets very small?
Try capitalizing on the properties of logarithms and summation rather than taking the product of decimal numbers. Following the summation, just use the anti-log to put it back into your more natural form. I think something like this should do the trick:
$\frac{\exp\left(\sum_{i}^{n}\left(y_{i}\log(p_{i})+(1-y_{i})\log(1-p_{i})\right)\right)}{\sum_{g}\exp\left(\sum_{i}^{n}y_{i}\log(p_{i})+(1-y_{i})\log(1-p_{i})\right)}$
26,239 | How do I find correlation measure between two nominal variables?
There are a bunch of measures of nominal-nominal association.
There's the phi coefficient, the contingency coefficient (which I think applies to square tables, so perhaps not suitable for you), Cramer's V coefficient, the lambda coefficient, and the uncertainty coefficient. There are no doubt still more.
Many of them turn out to be a function of the chi-square statistic.
(If you have one or more ordinal variables, there are many other coefficients that are suitable for that situation.)
This wikipedia page lists the ones I mention.
I believe SPSS can compute the ones that I think match your rectangular nominal-vs-nominal situation - at least I am certain in the case of phi and Cramer's V and the lambda coefficient:
(Tables from here and here)
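Since many of these coefficients are functions of the chi-square statistic, one of them is easy to sketch from scratch: Cramér's V is $\sqrt{\chi^2 / (n(\min(r,c)-1))}$ for an $r \times c$ table. A small illustration (the tables below are invented):

```python
import math

def cramers_v(table):
    """Cramér's V for an r x c contingency table of counts."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_tot[i] * col_tot[j] / n
            chi2 += (obs - expected) ** 2 / expected
    return math.sqrt(chi2 / (n * (min(len(table), len(table[0])) - 1)))

# Perfect association gives V = 1, independence gives V = 0:
v_perfect = cramers_v([[10, 0], [0, 10]])
v_none = cramers_v([[5, 5], [5, 5]])
```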
26,240 | How do I find correlation measure between two nominal variables?
If you want more insight into the associations, you can fit a loglinear model to these data. (Analyze > Loglinear > General) or GENLOG, for starters.
26,241 | List of graph layout algorithms
Spring-Electric Force Directed Placement algorithm as explained in Efficient and High Quality Force-Directed Graph Drawing by Yifan Hu.
Buchheim Tree Drawing
Spring/Repulsion Model
Stress Majorization
Spectral Layout Algorithm
and many more with Julia code here
I am trying to write some of it using Java. There is a paper titled Graph Drawing and Analysis Library and Its Domain-Specific Language for Graphs’ Layout Specifications by
Renata Vaderna, Željko Vuković, Igor Dejanović, and Gordana Milosavljević in which they compare their library with other libraries like JUNG.
There is enough code there to get you started.
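To give a flavor of the spring/repulsion family, here is a deliberately naive force-directed layout in the Fruchterman-Reingold style (a sketch under simplifying assumptions: O(n²) all-pairs repulsion, fixed step size, no cooling schedule or convergence test):

```python
import math
import random

def force_directed_layout(edges, n, iters=200, k=1.0, step=0.05, seed=0):
    """Toy spring-electric layout: every node pair repels with force
    k^2/d, every edge attracts with force d^2/k; positions are nudged
    by a fixed small step each iteration."""
    rng = random.Random(seed)
    pos = [[rng.random(), rng.random()] for _ in range(n)]
    for _ in range(iters):
        disp = [[0.0, 0.0] for _ in range(n)]
        for i in range(n):  # pairwise repulsion
            for j in range(i + 1, n):
                dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[i][0] += f * dx / d
                disp[i][1] += f * dy / d
                disp[j][0] -= f * dx / d
                disp[j][1] -= f * dy / d
        for u, v in edges:  # attraction along edges
            dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            disp[u][0] -= f * dx / d
            disp[u][1] -= f * dy / d
            disp[v][0] += f * dx / d
            disp[v][1] += f * dy / d
        for i in range(n):
            pos[i][0] += step * disp[i][0]
            pos[i][1] += step * disp[i][1]
    return pos

# On a 3-node path 0-1-2, the layout should push the endpoints farther
# apart than either endpoint is from the middle node:
layout = force_directed_layout([(0, 1), (1, 2)], 3)
```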
26,242 | List of graph layout algorithms
If you are interested in the algorithms themselves rather than software which will just do it (of which there are many), check out some of the papers of Yifan Hu, which give a nice introduction to certain types of algorithms (not exhaustive).
Algorithms for visualizing large networks
Efficient and high quality graph drawing
26,243 | List of graph layout algorithms | Gibson, Faith, and Vickers wrote a paper comparing different relational graph layout techniques, analyzing where they excel and where they falter. While it doesn't contain pseudocode, it provides a good overview that you can then use to search for specific algorithms.
26,244 | List of graph layout algorithms | You can start with Wikipedia; the R package igraph has several algorithms that might provide nice leads/references, including layout.random, layout.circle, layout.sphere, layout.fruchterman.reingold, layout.kamada.kawai, layout.spring, layout.reingold.tilford, layout.fruchterman.reingold.grid, layout.lgl, layout.svd, and layout.norm.
26,245 | Influential residual vs. outlier | The stattrek site seems to have a much better description of outliers and influential points than your textbook but you've only quoted a short passage that may be misleading. I don't have that particular book so I cannot examine it in context. Keep in mind though, that the textbook passage you quoted says, "potentially". It's not exclusive either. Keeping those points in mind, stattrek and your book don't necessarily disagree. But it does appear that your book is misleading in the sense that it implies (from this short passage) that the only difference between outliers and influential points is whether they deviate on x or y axis. That is incorrect.
The "rule" for outliers varies depending on context. The rule you cite is just a rule of thumb and, yes, not really designed for regression. There are a few ways to use it. It might be easier to visualize if you imagine multiple y-values at each x and examine the residuals. Typical textbook regression examples are too simple to see how that outlier rule might work, and in most real cases it is quite useless. Hopefully, in real life, you collect much more data. If it's necessary to apply the quantile rule for outliers to a regression problem, then they should be providing data for which it is appropriate.
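As a rough illustration of applying that rule of thumb to residuals rather than to the raw y-values, here is a numpy sketch (all data simulated; the planted outlier and the seed are invented for demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2 + 0.5 * x + rng.normal(0, 1, 200)
y[0] += 8                                   # plant one vertical outlier

# Fit simple least squares and look at the residuals, not the raw y's
slope, intercept = np.polyfit(x, y, 1)
resid = y - (intercept + slope * x)

# The 1.5*IQR rule of thumb, applied to the residuals
q1, q3 = np.percentile(resid, [25, 75])
iqr = q3 - q1
flagged = (resid < q1 - 1.5 * iqr) | (resid > q3 + 1.5 * iqr)
```

With the planted point shifted by eight error standard deviations the rule flags it; on clean textbook-sized samples the same rule is often uninformative, as the answer notes.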
26,246 | Influential residual vs. outlier | I agree with John. Here are a few more points. An influential observation is (strictly) one that influences the parameter estimates. A small deviation in the Y value gives a big change in the estimated beta parameter(s). In simple regression of 1 variable against another, influential observations are precisely those whose X value is distant from the mean of the X's. In multiple regression (several independent variables), the situation is more complex. You have to look at the diagonal of the so-called hat matrix $X(X'X)^{-1}X'$, and regression software will give you this. Google "leverage".
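To make the leverage point concrete, here is a small numpy sketch (the x values are made up): the diagonal of the hat matrix $X(X'X)^{-1}X'$ is largest for the design point farthest from the mean of the X's.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 20.0])     # last x is far from the mean
X = np.column_stack([np.ones_like(x), x])    # design matrix with intercept

# Leverages are the diagonal of the hat matrix H = X (X'X)^{-1} X'
H = X @ np.linalg.inv(X.T @ X) @ X.T
leverage = np.diag(H)
```

The leverages sum to the number of parameters (here 2), a standard check on the computation.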
Influence is a function of the design points (the X values), as your textbook states.
Note that influence is power. In a designed experiment, you want influential X values, assuming you can measure the corresponding Y value accurately. You get more bang for the buck that way.
To me, an outlier is basically a mistake - that is, an observation that does not follow the same model as the rest of the data. This may occur because of a data collection error, or because that particular subject was unusual in some way.
I don't much like stattrek's definition of an outlier for several reasons. Regression is not symmetric in Y and X. Y is modelled as a random variable and the X's are assumed to be fixed and known. Weirdness in the Y's is not the same as weirdness in the X's. Influence and outliership mean different things. Influence, in multiple regression, is not detected by looking at residual plots. A good description of outliers and influence for the single variable case should set you up to understand the multiple case as well.
I dislike your textbook even more, for the reasons given by John.
Bottom line, influential outliers are dangerous. They need to be examined closely and dealt with.
26,247 | Constrained Regression in R: coefficients positive, sum to 1 and non-zero intercept | You just need to play around a little with the matrices involved. Add the intercept to X:
XX <- cbind(1,X)
Recalculate the D matrix used in solve.QP() (I prefer working directly with this to avoid calling solve()):
Dmat <- t(XX)%*%XX
Recalculate d with the new XX:
dd <- t(Y)%*%XX
Change the constraint matrix by adding a zero column, since you seem to not have any constraints on the intercept (right?):
Amat <- t(cbind(0,rbind(1,diag(3))))
And finally:
solve.QP(Dmat = Dmat, factorized = FALSE, dvec = dd, Amat = Amat, bvec = b, meq = 1)
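For a quick cross-check, here is a sketch of a related fit in plain numpy on simulated data (all names and numbers invented): when the nonnegativity constraints end up slack at the solution, the sum-to-one constraint alone can be absorbed by substituting the last coefficient, w3 = 1 - w1 - w2, and running ordinary least squares on the transformed problem. This is not the solve.QP approach, just a way to sanity-check it.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
true_w = np.array([0.2, 0.3, 0.5])                  # slopes sum to 1
y = 0.7 + X @ true_w + rng.normal(0, 0.05, size=100)

# Substitute w3 = 1 - w1 - w2 so the equality constraint holds by construction:
#   y - x3 = b0 + w1*(x1 - x3) + w2*(x2 - x3)
Z = np.column_stack([np.ones(len(y)), X[:, 0] - X[:, 2], X[:, 1] - X[:, 2]])
b0, w1, w2 = np.linalg.lstsq(Z, y - X[:, 2], rcond=None)[0]
w = np.array([w1, w2, 1.0 - w1 - w2])               # recovered slope vector
```

If any recovered slope comes out negative, the inequality constraints are active and you do need the full quadratic program.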
26,248 | How to estimate the accuracy of an integral? | For simplicity assume f(x) >= 0 for all x in [a,b] and we know M such that f(x) < M for all x in [a,b]. Then the integral I of f over [a,b] can be enclosed in the rectangle with width b-a and height M. The integral of f is the proportion of the rectangle that falls under the function f multiplied by M(b-a). Now if you pick points in the rectangle at random and count a point as a success if it falls under the curve and as a failure otherwise, you have set up a Bernoulli trial. The sample fraction of points inside is a binomial proportion and hence has mean p and variance p(1-p)/n, where n is the number of points taken. Hence you can construct a confidence interval for p, and since I = p M(b-a), a confidence interval for I also, since for the estimate I^ = p^ M(b-a), Var(I^) = M$^2$(b-a)$^2$ p(1-p)/n. So to use statistics to determine the smallest n for which the integral is accurate enough, you could specify an upper limit S on the variance of I^. Note p(1-p)/n <= 1/(4n) for every 0 <= p <= 1. So set S = M$^2$(b-a)$^2$/(4n), or n = smallest integer > M$^2$(b-a)$^2$/(4S).
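A minimal numpy sketch of this hit-or-miss scheme (the integrand sin on [0, pi], with M = 1, is chosen purely for illustration):

```python
import numpy as np

def hit_or_miss(f, a, b, M, n, rng):
    """Estimate I = integral of f over [a, b], assuming 0 <= f < M there."""
    xs = rng.uniform(a, b, n)
    ys = rng.uniform(0, M, n)
    p_hat = np.mean(ys < f(xs))              # binomial proportion of "hits"
    I_hat = p_hat * M * (b - a)
    # Var(I^) = M^2 (b-a)^2 p(1-p)/n, bounded using p(1-p) <= 1/4
    var_bound = (M * (b - a)) ** 2 / (4 * n)
    return I_hat, var_bound

rng = np.random.default_rng(0)
I_hat, var_bound = hit_or_miss(np.sin, 0.0, np.pi, 1.0, 100_000, rng)

# Smallest n whose worst-case variance bound meets a target S
S = 1e-4
n_needed = int(np.ceil((1.0 * np.pi) ** 2 / (4 * S)))
```

Here the true value is 2, and n_needed comes straight from the closing formula with the worst-case bound p(1-p) <= 1/4.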
26,249 | How to estimate the accuracy of an integral? | This is a non-trivial question that involves issues like total variation of $f$ and its reasonable multivariate extensions. The Stanford statistician Art Owen has worked on this using randomized quasi-Monte Carlo techniques. The regular Monte Carlo allows direct estimation of the accuracy of the integral, but each individual evaluation is not that accurate. Quasi-Monte Carlo produces more accurate estimates, but it is a fully deterministic technique, and as such does not allow to estimate the variance of your result. He showed how to combine the two approaches, and his paper is very lucid, so I won't try to reproduce it here.
A supplementary reading to this would of course be the Niederreiter (1992) monograph.
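One way to see the flavor of randomized quasi-Monte Carlo without any special libraries is a randomly shifted lattice rule (a Cranley-Patterson rotation), sketched below on an invented 1-D integrand: each random shift yields a deterministic low-discrepancy estimate, and the spread across shifts supplies the variance estimate that plain QMC lacks.

```python
import numpy as np

def shifted_lattice_estimates(f, n, n_shifts, rng):
    """Randomized QMC on [0, 1]: the lattice {i/n} under independent random
    shifts modulo 1; returns one integral estimate per shift."""
    base = np.arange(n) / n                          # deterministic lattice
    shifts = rng.uniform(size=n_shifts)
    return np.array([np.mean(f((base + s) % 1.0)) for s in shifts])

rng = np.random.default_rng(0)
ests = shifted_lattice_estimates(lambda u: u**2, n=512, n_shifts=20, rng=rng)
I_hat = ests.mean()                                  # unbiased for 1/3
se = ests.std(ddof=1) / np.sqrt(len(ests))           # error estimate across shifts
```

In higher dimensions the lattice generating vector matters; randomized scrambling of Sobol or Halton points plays the same role.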
26,250 | How do I compute class probabilities in caret package using 'glmnet' method? | I suspect your y is of class numeric and is not an R factor. You can look at the documentation for glmnet directly,
y: response variable. Quantitative for ‘family="gaussian"’ or
‘family="poisson"’ (non-negative counts). For
‘family="binomial"’ should be either a **factor with two
levels, or a two-column matrix of counts or proportions**.
(emphasis is mine.)
or check it with the following toy example:
library(caret)
data(iris)
iris.sub <- subset(iris, Species %in% c("setosa", "versicolor"))
train(iris.sub[,1:4], factor(iris.sub$Species), method='glmnet',
trControl=trainControl(classProbs=TRUE)) # work
train(iris.sub[,1:4], as.numeric(iris.sub$Species), method='glmnet',
trControl=trainControl(classProbs=TRUE)) # 'cannot compute class probabilities for regression'
26,251 | SVM rbf kernel - heuristic method for estimating gamma | First of all, there is no reason (except computational cost) not to use your whole dataset. As long as you don't use label information, there is no reason not to use all the information you can get from your data.
Why are quantiles of the distance a good heuristic? The solution of an SVM problem is a linear combination of the RBF kernels that sit on the support vectors $\sum_i y_i \alpha_i \exp(-\gamma ||x - x_i||^2)$. During the learning phase, the optimization adapts the $\alpha_i$ to maximize the margin while retaining correct classification.
Now, there are two extreme cases for the choice of $\gamma$:
Imagine the $\gamma$ is very small, which means that the RBF kernel is very wide. Let us assume that it is so wide that the RBF kernel is still sufficiently positive for every data point of the dataset. This will probably give the optimizer a hard job since changing the value of a single $\alpha_i$ will change the decision function on all datapoints because the kernel is too wide.
The other extreme situation is when $\gamma$ is large, which means that the RBF kernel is very narrow. When changing the $\alpha_i$ for that datapoint, the decision function of the SVM will basically change for that datapoint only. This means that probably all training vectors will end up as support vectors. This is clearly not desirable.
To see that the heuristic is a good choice, one must realize that a certain value of $\gamma$ determines a boundary for the RBF kernel in which the kernel will be larger than a certain value (like the one-$\sigma$-quantile for the Normal distribution). By choosing the $\gamma$ according to quantiles on the pairwise distances you make sure that a certain percentage of the datapoints lies within that boundary. Therefore, if you change the $\alpha_i$ for a datapoint you will in fact only affect the decision function for a certain percentage of datapoints which is what you want. How that percentage should be chosen depends on the learning problem, but you avoid changing the decision function for all or only one datapoint.
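The heuristic itself is a few lines of numpy (data simulated; the choice q = 0.5 is just the popular median variant): set $\gamma$ from a quantile of the pairwise squared distances, so that the kernel value at that quantile distance equals $e^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                        # unlabeled data only

# Pairwise squared distances over distinct pairs
sq = np.sum(X**2, axis=1)
D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
d2 = np.maximum(D2[np.triu_indices_from(D2, k=1)], 0.0)

# gamma from the q-th quantile: exp(-gamma * d2_q) = exp(-1)
q = 0.5
gamma = 1.0 / np.quantile(d2, q)

# Sanity check: roughly a fraction q of pairs sit "inside the boundary",
# i.e. have kernel value above exp(-1)
frac_inside = np.mean(np.exp(-gamma * d2) > np.exp(-1.0))
```

No label information is used, so the whole dataset (or any subsample, to save cost) can feed the heuristic.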
26,252 | SVM rbf kernel - heuristic method for estimating gamma | Yeah! You're describing the so-called "median trick".
I really like the intuition behind the answer above. I also think it's easier to understand the problem of choosing $\gamma$ by thinking of it as the inverse of the variance of the RBF, à la
\begin{equation}
\gamma = \frac{1}{2 \sigma^2}
\end{equation}
so that the RBF becomes
\begin{equation}
\phi(x) = e^{-\frac{\|x-x_i\|^2}{2 \sigma^2}}
\end{equation}
Now it's clear that the problem of searching for a good $\gamma$ is essentially the same as looking for a good variance for a Gaussian function (minus a scaling factor).
To do this we turn to variance estimators, but instead of computing variance via the average squared distance from some $x_i$ like $\mathbb{E}[(x-x_i)^2]$, we compute quantiles on that squared distance.
As the poster above said, using quantiles gives us control over how many data points lie within one (or two, or three...) standard deviations of our Gaussian function.
26,253 | Beyond Fisher kernels | You are right about the three issues you raise, and your interpretation is exactly right.
People have looked at other directions to build kernels from probabilistic models:
Moreno et al. propose a Kullback-Leibler-based kernel, although when this satisfies Mercer's conditions was not well understood back when I looked at this problem.
Jebara et al. propose inner product in the space of distributions. This paper sounds a lot like what you're after: you can download it here.
I read them a while back (2008), not sure how that area has evolved the past few years.
There are also non-probabilistic ways to do so; people in Bioinformatics have looked at dynamic programming types of things in the space of strings and so on. These things are not always PSD and have problems of their own.
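To give the flavor of the inner-product-of-distributions idea, here is a sketch for the simplest case (univariate Gaussians with invented parameters): the expected likelihood kernel K(p, q) = integral of p(x) q(x) dx has a closed form, the normal density of mu1 evaluated under mean mu2 and variance var1 + var2, which a brute-force quadrature confirms.

```python
import numpy as np

def npdf(x, mu, var):
    """Univariate normal density."""
    return np.exp(-(x - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def expected_likelihood_kernel(mu1, var1, mu2, var2):
    """K(p, q) = integral of p(x) q(x) dx for two Gaussians (closed form)."""
    return npdf(mu1, mu2, var1 + var2)

mu1, var1, mu2, var2 = 0.0, 1.0, 1.5, 0.5            # invented parameters
k_closed = expected_likelihood_kernel(mu1, var1, mu2, var2)

# Brute-force check on a fine grid (both densities vanish far outside it)
dx = 1e-4
grid = np.arange(-15.0, 15.0, dx)
k_numeric = np.sum(npdf(grid, mu1, var1) * npdf(grid, mu2, var2)) * dx
```

Jebara et al.'s probability product kernels generalize this by raising the densities to a power rho before integrating.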
26,254 | Help with SEM modeling (OpenMx, polycor) | You must have uncovered a bug in polycor, which you would want to report to John Fox. Everything runs fine in Stata using my polychoric package:
. polychoric *
Polychoric correlation matrix
A1 A2 A3 A4 A5 B1 B2 B3 C1 D1 E1
A1 1
A2 .34544812 1
A3 .39920225 .19641726 1
A4 .09468652 .04343741 .31995685 1
A5 .30728339 -.0600463 .24367634 .18099061 1
B1 .01998441 -.29765985 .13740987 .21810968 .14069473 1
B2 -.19808738 .17745687 -.29049459 -.21054867 .02824307 -.57600551 1
B3 .17807109 -.18042045 .44605383 .40447746 .18369998 .49883132 -.50906364 1
C1 -.35973454 -.33099295 -.19920454 -.14631621 -.36058235 .00066762 -.05129489 -.11907687 1
D1 -.3934594 -.21234022 -.39764587 -.30230591 -.04982743 -.09899428 .14494953 -.5400759 .05427906 1
E1 -.13284936 .17703745 -.30631236 -.23069382 -.49212315 -.26670382 .24678619 -.47247566 .2956692 .28645516 1
For the latent variables that are measured with a single indicator (C, D, E), you need to fix the variance of the indicator in the continuous version of it, as otherwise the scale of the latent variable is not identified. Given that with binary/ordinal responses, it is fixed anyway to 1 with (ordinal) probit-type links, it probably means that you would have to postulate that your latent is equivalent to the observed indicator, or you have to postulate the standardized loading. This essentially makes your model equivalent to a CFA model where you have latent factors A and B measured with {A1-A5, C1, D1, E1} and {B1-B3, C1, D1, E1}, respectively.
26,255 | Convert SAS NLMIXED code for zero-inflated gamma regression to R | Having spent some time on this code, it appears to me as though it basically:
1) Does a logistic regression with right hand side b0_f + b1_f*x1 and y > 0 as a target variable,
2) For those observations for which y > 0, performs a regression with right hand side b0_h + b1_h*x1, a Gamma likelihood and link=log,
3) Also estimates the shape parameter of the Gamma distribution.
It maximizes the likelihood jointly, which is nice, because you only have to make the one function call. However, the likelihood separates anyway, so you don't get improved parameter estimates as a result.
Here is some R code which makes use of the glm function to save programming effort. This may not be what you'd like, as it obscures the algorithm itself. The code certainly isn't as clean as it could / should be, either.
McLerran <- function(y, x)
{
z <- y > 0
y.gt.0 <- y[y>0]
x.gt.0 <- x[y>0]
m1 <- glm(z~x, family=binomial)
m2 <- glm(y.gt.0~x.gt.0, family=Gamma(link=log))
list("p.ygt0"=m1,"ygt0"=m2)
}
# Sample data
x <- runif(100)
y <- rgamma(100, 3, 1) # Not a function of x (coef. of x = 0)
b <- rbinom(100, 1, 0.5*x) # p(y==0) is a function of x
y[b==1] <- 0
foo <- McLerran(y,x)
summary(foo$ygt0)
Call:
glm(formula = y.gt.0 ~ x.gt.0, family = Gamma(link = log))
Deviance Residuals:
Min 1Q Median 3Q Max
-2.08888 -0.44446 -0.06589 0.28111 1.31066
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.2033 0.1377 8.737 1.44e-12 ***
x.gt.0 -0.2440 0.2352 -1.037 0.303
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for Gamma family taken to be 0.3448334)
Null deviance: 26.675 on 66 degrees of freedom
Residual deviance: 26.280 on 65 degrees of freedom
AIC: 256.42
Number of Fisher Scoring iterations: 6
The shape parameter for the Gamma distribution is equal to 1 / the dispersion parameter for the Gamma family. Coefficients and other stuff which you might like to access programmatically can be accessed on the individual elements of the return value list:
> coefficients(foo$p.ygt0)
(Intercept) x
2.140239 -2.393388
Prediction can be done using the output of the routine. Here's some more R code that shows how to generate expected values and some other information:
# Predict expected value
predict.McLerren <- function(model, x.new)
{
x <- as.data.frame(x.new)
colnames(x) <- "x"
x$x.gt.0 <- x$x
pred.p.ygt0 <- predict(model$p.ygt0, newdata=x, type="response", se.fit=TRUE)
pred.ygt0 <- predict(model$ygt0, newdata=x, type="response", se.fit=TRUE)
p0 <- 1 - pred.p.ygt0$fit
ev <- (1-p0) * pred.ygt0$fit
se.p0 <- pred.p.ygt0$se.fit
se.ev <- pred.ygt0$se.fit
se.fit <- sqrt(((1-p0)*se.ev)^2 + (ev*se.p0)^2 + (se.p0*se.ev)^2)
list("fit"=ev, "p0"=p0, "se.fit" = se.fit,
"pred.p.ygt0"=pred.p.ygt0, "pred.ygt0"=pred.ygt0)
}
And a sample run:
> x.new <- seq(0.05,0.95,length=5)
>
> foo.pred <- predict.McLerren(foo, x.new)
> foo.pred$fit
1 2 3 4 5
2.408946 2.333231 2.201889 2.009979 1.763201
> foo.pred$se.fit
1 2 3 4 5
0.3409576 0.2378386 0.1753987 0.2022401 0.2785045
> foo.pred$p0
1 2 3 4 5
0.1205351 0.1733806 0.2429933 0.3294175 0.4291541
Now for coefficient extraction and the contrasts:
coef.McLerren <- function(model)
{
temp1 <- coefficients(model$p.ygt0)
temp2 <- coefficients(model$ygt0)
names(temp1) <- NULL
names(temp2) <- NULL
retval <- c(temp1, temp2)
names(retval) <- c("b0.f","b1.f","b0.h","b1.h")
retval
}
contrast.McLerren <- function(b0_f, b1_f, b2_f, b0_h, b1_h, b2_h)
{
(1-(1 / (1 + exp(-b0_f -b1_f))))*(exp(b0_h + b1_h)) - (1-(1 / (1 + exp(-b0_f -b2_f))))*(exp(b0_h + b2_h))
}
> coef.McLerren(foo)
b0.f b1.f b0.h b1.h
2.0819321 -1.8911883 1.0009568 0.1334845
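A quick way to convince yourself that the expected value used in predict.McLerren — (1 - p0) times the Gamma mean — is right is a small simulation. A sketch in Python for illustration (p0, shape and scale are arbitrary made-up values):

```python
import random

random.seed(1)

p0, shape, scale = 0.3, 3.0, 1.0   # arbitrary illustration values
n = 200_000

# Zero-inflated gamma draws: zero with probability p0, Gamma otherwise
draws = [0.0 if random.random() < p0 else random.gammavariate(shape, scale)
         for _ in range(n)]

sample_mean = sum(draws) / n
theory_mean = (1 - p0) * shape * scale   # E[Y] = (1 - p0) * mu
print(sample_mean, theory_mean)          # both close to 2.1
```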
26,256 | How to choose the number of splits in rpart()? | The convention is to use the best tree (lowest cross-validated relative error) or the smallest (simplest) tree within one standard error of the best tree. The best tree is in row 8 (7 splits), but the tree in row 7 (6 splits) does effectively the same job (xerror for the tree in row 7 = 0.21761, which is within (smaller than) the xerror of the best tree plus one standard error, xstd: 0.21076 + 0.042196 = 0.252956) and is simpler, hence the 1 standard error rule would select it.
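The 1-SE rule itself is easy to automate: take the minimum xerror, add its xstd, and choose the simplest tree whose xerror is below that threshold. A sketch in Python for illustration (only the row-7 and row-8 numbers come from the printout discussed above; the other cptable rows are invented):

```python
# Hypothetical cptable columns: number of splits, xerror, xstd.
# The last two rows use the values quoted in the answer; the rest are invented.
nsplit = [0,    1,    2,    3,    4,    5,    6,       7]
xerror = [1.00, 0.62, 0.45, 0.35, 0.29, 0.26, 0.21761, 0.21076]
xstd   = [0.050, 0.050, 0.048, 0.046, 0.045, 0.044, 0.043, 0.042196]

best = min(range(len(xerror)), key=lambda i: xerror[i])
threshold = xerror[best] + xstd[best]          # 0.21076 + 0.042196 = 0.252956

# simplest tree whose cross-validated error is within one SE of the best
one_se = next(i for i in range(len(xerror)) if xerror[i] <= threshold)

print(nsplit[best], nsplit[one_se])   # 7 splits is best, 6 splits by the 1-SE rule
```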
26,257 | Significance of initial transition probabilities in a hidden Markov model | Baum-Welch is an optimization algorithm for computing the maximum-likelihood estimator. For hidden Markov models the likelihood surface may be quite ugly, and it is certainly not concave. With good starting points the algorithm may converge faster and towards the MLE.
To predict hidden states with the Viterbi algorithm you need the transition probabilities; if you already know them, there is no need to re-estimate them using Baum-Welch. The re-estimation is computationally more expensive than the prediction.
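To make the point concrete, here is a minimal Viterbi decoder, in Python for illustration: the transition matrix A is just an input, nothing gets re-estimated. All probabilities in the toy example are made up.

```python
def viterbi(obs, pi, A, B):
    """Most likely state path given known parameters (no re-estimation).

    pi: initial state probabilities, A: transition matrix,
    B: emission probabilities B[state][symbol]."""
    S = range(len(pi))
    delta = [pi[s] * B[s][obs[0]] for s in S]   # best path prob ending in s
    back = []
    for o in obs[1:]:
        prev = delta
        back.append([max(S, key=lambda r: prev[r] * A[r][s]) for s in S])
        delta = [prev[back[-1][s]] * A[back[-1][s]][s] * B[s][o] for s in S]
    # backtrack from the best final state
    path = [max(S, key=lambda s: delta[s])]
    for bp in reversed(back):
        path.append(bp[path[-1]])
    return path[::-1]

# Toy 2-state example (all probabilities invented)
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.5, 0.5], [0.1, 0.9]]
print(viterbi([0, 0, 1], pi, A, B))   # [0, 0, 0]
```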
26,258 | Significance of initial transition probabilities in a hidden Markov model | Some of the materials concerning initial estimates for HMMs are given in
Lawrence R. Rabiner (February 1989). "A tutorial on Hidden Markov Models and selected applications in speech recognition". Proceedings of the IEEE 77 (2): 257–286. doi:10.1109/5.18626 (Section V.C)
You can also take a look at the Probabilistic modeling toolkit for Matlab/Octave, especially the hmmFitEm function, where you can provide your own initial parameters for the model or just use the 'nrandomRestarts' option.
While using 'nrandomRestarts', the first model (at the init step) uses:
Fit a mixture of Gaussians via MLE/MAP (using EM) for continuous data;
Fit a mixture of products of discrete distributions via MLE/MAP (using EM) for discrete data;
the second, third, ... models (at the init step) use randomly initialized parameters and as a result converge more slowly, with mostly lower log-likelihood values.
26,259 | NaN p-value when using R's goodfit on binomial data | You have zero frequencies in observed counts. That explains the NaNs in your data. If you look at the test.gof object, you'll see that:
table(test.gof$observed)
0 1 2 3 4 5 7 8 10
56 5 3 2 5 1 1 2 1
you have 56 zeros. Anyway, IMHO this question is for http://stats.stackexchange.com.
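To see where the NaNs come from mechanically: the Pearson statistic sums (observed - expected)^2 / expected, and for cells where the fitted count is (numerically) zero the term is 0/0 = NaN or x/0 = Inf. A sketch of that arithmetic in Python for illustration:

```python
import math

def pearson_x2(observed, expected):
    """Pearson chi-squared statistic, mimicking what happens with zero cells."""
    total = 0.0
    for o, e in zip(observed, expected):
        if e == 0.0:
            # 0/0 -> nan; anything nonzero / 0 -> inf
            total += float("nan") if o == 0 else float("inf")
        else:
            total += (o - e) ** 2 / e
    return total

print(pearson_x2([5, 3], [4.2, 3.8]))              # finite, well-behaved
print(pearson_x2([5, 1], [4.2, 0.0]))              # inf: an "impossible" cell was observed
print(math.isnan(pearson_x2([5, 0], [4.2, 0.0])))  # True: 0/0 gives NaN, hence a NaN p-value
```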
26,260 | NaN p-value when using R's goodfit on binomial data | Would you be happier with a surgically altered goodfit object?
> idx <- which(test.gof$observed != 0)
> idx
[1] 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 49 50
> test.gof$par$size <- length( idx-1)
> test.gof$fitted <- test.gof$fitted[idx]
> test.gof$count <- test.gof$count[idx]
> test.gof$observed <- test.gof$observed[idx]
> summary(test.gof)
Goodness-of-fit test for binomial distribution
X^2 df P(> X^2)
Pearson Inf 75 0.0000000
Likelihood Ratio 21.48322 19 0.3107244
Warning message:
In summary.goodfit(test.gof) : Chi-squared approximation may be incorrect
26,261 | NaN p-value when using R's goodfit on binomial data | Try plotting it. You'll get a better idea of what's going on. As mentioned before, you're getting NaN because you're passing 0 frequencies to chisq.test()
test.gof <- goodfit(counts, type="binomial", par=list(size=length(counts), prob=0.5))
plot(test.gof)
## doesn't look so good
test.gof <- goodfit(counts, type="binomial", par=list(size=length(counts)))
plot(test.gof)
## looks a little more clear
26,262 | Is it worthwhile to publish at the refereed wiki StatProb.com? [closed] | As long as the sponsors of the site are committed to keeping the site running, it would be premature to declare it 'dead.' It is not out of the question that StatProb.com may experience a revival in the future. In judging the longevity of a resource like StatProb.com, the short-term trends are irrelevant. Instead, the right questions to ask are:
Is the principle behind a site like StatProb.com a sound one? Is the idea of a free access peer-reviewed encyclopedia an idea that will grow in relevance over time, or diminish?
If the answer to the first question is "Yes", then is it likely that an alternative to the site will arise?
I think the answer to the first question is Yes. The field of statistics is rapidly growing and the demand for online statistical answers is growing, as evidenced by this site (stats.SE). The value of online encyclopedias has been proven by the success of Wikipedia. Yet because Wikipedia is open to everyone, peer-reviewed alternatives to Wikipedia will eventually be needed.
As a site like StatProb.com gains more articles, it will gain more users, and as it gains more users, it will increase its public profile. As it increases its public profile, more researchers will be interested in contributing to the site. That StatProb.com is off to a slow start gives no indication of where it may one day end up.
I think the answer to the second question is No, because Springer.com has taken the lead in the online academic publishing world and it seems unlikely that it will give up that lead. Any prospective competitor to StatProb.com will need a strong advantage to compensate for the brand-name recognition that Springer possesses.
I checked the site, and recently (5/11) a new article has appeared on 'Strong Mixing Conditions.' As long as the site has the name of Springer attached to it, it will have some credibility in the academic world (whether it deserves it or not!) and a smart researcher can take advantage of this credibility. I imagine it would be a useful place to write background information for you or a colleague to cite in your own papers. I will keep the site StatProb in mind as a potential resource to that end, and I upvoted this question for making me aware of the site as a potential resource for my own academic career.
26,263 | Is it worthwhile to publish at the refereed wiki StatProb.com? [closed] | Maybe this question is now answered by the fate of the site. The site no longer exists, and its content has wound up at The Encyclopedia of Mathematics.
26,264 | Comparing a mixed model (subject as random effect) to a simple linear model (subject as a fixed effect) | This is to add to @ocram's answer because it is too long to post as a comment. I would treat A ~ B + C as your null model so you can assess the statistical significance of a D-level random intercept in a nested model setup. As ocram pointed out, regularity conditions are violated when $H_0: \sigma^2 = 0$, and the likelihood ratio test statistic (LRT) will not necessarily be asymptotically distributed $\chi^2$. The solution I was taught was to bootstrap the LRT (whose bootstrap distribution will likely not be $\chi^2$) parametrically and compute a bootstrap p-value like this:
library(lme4)
my_modelB <- lm(formula = A ~ B + C)
lme_model <- lmer(A ~ B + C + (1|D), data=my_data, REML=F)
lrt.observed <- as.numeric(2*(logLik(lme_model) - logLik(my_modelB)))
nsim <- 999
lrt.sim <- numeric(nsim)
for (i in 1:nsim) {
y <- unlist(simulate(my_modelB))
nullmod <- lm(y ~ B + C)
altmod <- lmer(y ~ B + C + (1|D), data=my_data, REML=F)
lrt.sim[i] <- as.numeric(2*(logLik(altmod) - logLik(nullmod)))
}
mean(lrt.sim > lrt.observed) #pvalue
The proportion of bootstrapped LRTs more extreme than the observed LRT is the p-value.
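One small refinement worth knowing: including the observed statistic in the reference set, p = (1 + #{LRT* >= LRT}) / (B + 1), avoids reporting a bootstrap p-value of exactly zero. The counting step, sketched in Python for illustration:

```python
def boot_pvalue(lrt_sim, lrt_obs):
    # (1 + #{simulated >= observed}) / (B + 1): never exactly zero
    exceed = sum(1 for t in lrt_sim if t >= lrt_obs)
    return (1 + exceed) / (len(lrt_sim) + 1)

print(boot_pvalue([0.1, 0.5, 2.3, 3.9], 2.0))   # (1 + 2) / 5 = 0.6
```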
26,265 | Comparing a mixed model (subject as random effect) to a simple linear model (subject as a fixed effect) | I am not totally sure what model is fitted when you use the lme function. (I guess the random effect is supposed to follow a normal distribution with zero mean?) However, the linear model is a special case of the mixed model when the variance of the random effect is zero. Although some technical difficulties exist (because $0$ is on the boundary of the parameter space for the variance) it should be possible to test $H_0: \sigma^2 = 0$ vs $H_1: \sigma^2 > 0$...
EDIT
In order to avoid confusion: The test mentioned above is sometimes used to decide whether or not the random effect is significant... but not to decide whether or not it should be transformed into a fixed effect.
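For completeness (this goes beyond the original answer): for a single variance component, the LRT under $H_0$ is asymptotically a 50:50 mixture of $\chi^2_0$ and $\chi^2_1$ (Self and Liang, 1987), so one simply halves the naive $\chi^2_1$ p-value. A sketch in Python, using the identity $P(\chi^2_1 > x) = \operatorname{erfc}(\sqrt{x/2})$:

```python
import math

def boundary_pvalue(lrt):
    # p-value under the 0.5*chi2_0 + 0.5*chi2_1 mixture for testing
    # H0: variance = 0 (one variance component on the boundary)
    return 0.5 * math.erfc(math.sqrt(lrt / 2.0))

print(round(boundary_pvalue(2.706), 3))   # ~0.05, half the chi2_1 tail of ~0.10
```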
26,266 | Lasso fitting by coordinate descent: open-source implementations? [closed] | I have a MATLAB and C/C++ implementation here.
Let me know if you find it useful.
26,267 | Lasso fitting by coordinate descent: open-source implementations? [closed] | You can also take a look at lasso4j which is an open source Java implementation of Lasso for linear regression. It is a port of the glmnet package to pure Java.
26,268 | Lasso fitting by coordinate descent: open-source implementations? [closed] | Here's a GPL implementation of L1-regularized logistic regression but via the interior-point method rather than coordinate descent.
26,269 | Lasso fitting by coordinate descent: open-source implementations? [closed] | I wrote the R package lassoshooting. It is written in C and the code is split into two files, where one of them is for the R interface and the other is more standalone (ccd_common.c).
See https://github.com/tabenius/lassoshooting
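For readers curious what these packages implement, the coordinate-descent ("shooting") update is a soft-threshold applied one coordinate at a time. The base-R sketch below is my own minimal illustration of that idea (standardized columns, no active-set or warm-start tricks), not code taken from any of the packages above:

```r
soft <- function(z, g) sign(z) * pmax(abs(z) - g, 0)  # soft-threshold operator

# Minimize 0.5 * ||y - X b||^2 + lambda * ||b||_1 by cyclic coordinate descent
lasso_cd <- function(X, y, lambda, sweeps = 100) {
  b <- rep(0, ncol(X))
  for (s in 1:sweeps) {
    for (j in seq_along(b)) {
      r <- y - X[, -j, drop = FALSE] %*% b[-j]   # partial residual without coordinate j
      b[j] <- soft(sum(X[, j] * r), lambda) / sum(X[, j]^2)
    }
  }
  b
}

set.seed(42)
X <- scale(matrix(rnorm(200 * 5), 200, 5))
y <- drop(X %*% c(2, 0, -1, 0, 0) + rnorm(200))
round(lasso_cd(X, y, lambda = 50), 2)  # large true effects survive; noise coordinates typically shrink to 0
```

The update for each coordinate is the exact one-dimensional lasso solution given the others, which is why no step size is needed.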
26,270 | Approximating $Pr[n \leq X \leq m]$ for a discrete distribution | This is an interesting question, which doesn't really have a good solution. There are a few different ways of tackling this problem.
Assume an underlying distribution and match moments - as suggested in the answers by @ivant and @onestop. One downside is that the multivariate generalisation may be unclear.
Saddlepoint approximations. In this paper:
Gillespie, C.S. and Renshaw, E. An improved saddlepoint approximation. Mathematical Biosciences, 2007.
We look at recovering a pdf/pmf when given only the first few moments. We found that this approach works when the skewness isn't too large.
Laguerre expansions:
Mustapha, H. and Dimitrakopoulosa, R. Generalized Laguerre expansions of multivariate probability densities with moments. Computers & Mathematics with Applications, 2010.
The results in this paper seem more promising, but I haven't coded them up.
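As a concrete sketch of the first approach (assume a family and match moments), here is a two-moment normal fit with a continuity correction; the Poisson comparison is my own example, not from the answer:

```r
# Approximate Pr(n <= X <= m) for an integer-valued X from its mean and sd,
# using a normal fit with continuity correction
approx_prob <- function(n, m, mu, sigma) {
  pnorm(m + 0.5, mu, sigma) - pnorm(n - 0.5, mu, sigma)
}

# Example: X ~ Poisson(5), so mu = 5, sigma = sqrt(5)
approx_prob(3, 7, mu = 5, sigma = sqrt(5))   # about 0.736
ppois(7, 5) - ppois(2, 5)                    # exact value, about 0.742
```

With only two moments the fit ignores skewness and kurtosis entirely, which is exactly the limitation the saddlepoint and Laguerre approaches above try to address.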
26,271 | Approximating $Pr[n \leq X \leq m]$ for a discrete distribution | Fitting a distribution to data using the first four moments is exactly what Karl Pearson devised the Pearson family of continuous probability distributions for (maximum likelihood is much more popular these days of course). Should be straightforward to fit the relevant member of that family then use the same type of continuity correction as you give above for the normal distribution.
I assume you must have a truly enormous sample size? Otherwise sample estimates of skewness and especially kurtosis are often hopelessly imprecise, as well as being highly sensitive to outliers. In any case, I highly recommend you have a look at L-moments as an alternative; they have several advantages over ordinary moments for fitting distributions to data.
26,272 | Approximating $Pr[n \leq X \leq m]$ for a discrete distribution | You could try to use a skew normal distribution and see if the excess kurtosis for your particular data sets is sufficiently close to the excess kurtosis of the distribution for the given skewness. If it is, you can use the skew normal distribution cdf to estimate the probability. If not, you would have to come up with a transformation of the normal/skew pdf similar to the one used for the skew normal distribution, which would give you control over both skewness and excess kurtosis.
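To make the "is the excess kurtosis close enough for the given skewness" check concrete, the skew normal's skewness and excess kurtosis have closed forms in the shape parameter $\alpha$. The base-R sketch below is my own, using the standard parameterization:

```r
# Skewness and excess kurtosis of a skew normal with shape parameter alpha
sn_moments <- function(alpha) {
  delta <- alpha / sqrt(1 + alpha^2)
  m <- delta * sqrt(2 / pi)              # mean of the standardized variate
  v <- 1 - m^2                           # its variance
  c(skewness        = (4 - pi) / 2 * m^3 / v^(3 / 2),
    excess_kurtosis = 2 * (pi - 3) * m^4 / v^2)
}

sn_moments(5)  # skewness about 0.85, excess kurtosis about 0.71
# As alpha -> Inf the attainable maxima are only about 0.995 and 0.87,
# so data far outside that range cannot be matched by a skew normal
```

This also makes the answer's caveat explicit: the skew normal traces out a one-parameter curve in the (skewness, kurtosis) plane, so most data sets will miss it and need the more flexible transformation mentioned above.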
26,273 | Disadvantages of using a regression loss function in multi-class classification | Squared error used for classification problems is called the Brier score and, like log-loss, it is a strictly proper scoring rule, i.e. it leads to producing well-calibrated probabilities. It is perfectly fine to use squared error as a loss function for classification.
This issue was studied by Hui and Belkin (2020), who conclude:
We argue that there is little compelling empirical or theoretical
evidence indicating a clear-cut advantage to the cross-entropy loss.
Indeed, in our experiments, performance on nearly all non-vision tasks
can be improved, sometimes significantly, by switching to the square
loss. Furthermore, training with square loss appears to be less
sensitive to the randomness in initialization. We posit that training
using the square loss for classification needs to be a part of best
practices of modern deep learning on equal footing with cross-entropy.
You may notice in Section 5 of the paper some technical considerations that the authors found to improve training.
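For concreteness (my own toy numbers, not from the paper), the multi-class Brier score is just the mean squared difference between one-hot labels and predicted class probabilities, computed here next to the cross-entropy on the same predictions:

```r
# Predicted class probabilities for 3 samples, 3 classes (rows sum to 1)
P <- rbind(c(0.8, 0.1, 0.1),
           c(0.2, 0.7, 0.1),
           c(0.3, 0.3, 0.4))
Y <- diag(3)  # one-hot truth: sample i belongs to class i

brier   <- mean(rowSums((P - Y)^2))        # multi-class Brier score
logloss <- -mean(log(P[cbind(1:3, 1:3)]))  # cross-entropy on the true classes

c(brier = brier, logloss = logloss)  # about 0.247 and 0.499
```

Both scores improve as probability mass moves onto the correct class, which is the sense in which both are proper scoring rules.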
Check also the why sum of squared errors for logistic regression not used and instead maximum likelihood estimation is used to fit the model? and What is happening here, when I use squared loss in logistic regression setting? threads.
26,274 | Disadvantages of using a regression loss function in multi-class classification | The cross-entropy loss gives you the maximum likelihood estimate (MLE), i.e. if you find the minimum of cross-entropy loss you have found the model (from the family of models you consider) that gives the largest probability to your training data; no other model from your family gives more probability to your training data. (A model family might be e.g. the set of all possible weight assignments to some chosen neural network design.)
Being an MLE helps with mathematical reasoning about the properties of your result because there is lots of theory for MLEs.
Also, cross-entropy would be a little faster to compute than the sum of squared error (SSE) loss you mention.
Some people argue that SSE loss is inferior because the loss depends not only on the probability of the correct label under your model but also on the distribution of the probabilities that the model gives to the wrong labels (since it is not linear).
But, as far as deep neural networks are concerned, the real reason why cross-entropy is used most often (not always) for those is that experience shows it very often leads to better results. We just haven't found something truly better yet (that would also be practical).
26,275 | Increasing sample size to obtain width of CI / SE: p-hacking? | As a partial answer to my question, I ran a simulation to see if it led to inappropriate rejection of $H_0$.
library(dplyr)
set.seed(1234)
start_n <- 20
increment_n <- 20
target_se <- 0.05
vec_p <- numeric()
vec_se <- numeric()
vec_n <- numeric()
vec_mean <- numeric()
# H0 true
for (i in 1:1000) {
  y <- rnorm(start_n)
  keep_running <- TRUE
  while (keep_running == TRUE) {
    se <- sd(y) / sqrt(length(y))
    p <- t.test(y)$p.value
    keep_running <- se > target_se
    y <- c(y, rnorm(increment_n))  # note: y is extended once more after the stopping decision
  }
  vec_se <- c(vec_se, se)
  vec_p <- c(vec_p, p)
  vec_n <- c(vec_n, length(y))     # so vec_n overshoots the tested sample size by increment_n
  vec_mean <- c(vec_mean, mean(y))
}
mean(vec_p < 0.05)
table(vec_n)
Which gives:
Type I error rate: 0.045
vec_n
320 340 360 380 400 420 440 460 480 500 520
1 2 17 56 166 242 289 161 55 9 2
(vec_n is the sample size that was reached before the experiment stopped.)
The type I error rate tends to be a touch lower than 0.05, which is explained by @Michael Lew's answer.
26,276 | Increasing sample size to obtain width of CI / SE: p-hacking? | Sampling until a nominated confidence interval width is obtained is technically similar to sequential testing and might be thought by some to be similar to p-hacking, but that does not mean that you should not do it!
If your concern is accurate estimation of the population variance then a 'stop when CI is less than' strategy is going to give you low estimates more often than not, because the sampling is more likely to stop after an observation that lowers the sample standard deviation than after an observation that increases it. However, that bias may be quite small and thus might well be of no practical concern. It will depend on the sample size and thus the nominated CI width. It will be less with a large sample because the large sample will have a relatively stable CI estimate prior to stopping whereas a small sample CI will fluctuate much more with each new observation.
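The size of this downward bias on the standard deviation is easy to probe by simulation; below is a sketch under assumed standard normal data (the threshold and starting sample size are my own choices, not from the answer):

```r
set.seed(1)
stopped_sd <- replicate(2000, {
  y <- rnorm(5)
  # add one observation at a time until the CI half-width (~2 * SE) falls below 0.5
  while (2 * sd(y) / sqrt(length(y)) > 0.5) y <- c(y, rnorm(1))
  sd(y)
})
mean(stopped_sd)  # tends to land slightly below the true SD of 1
```

Rerunning with a tighter threshold (hence a larger stopped sample) should shrink the bias, in line with the point about large samples having a relatively stable CI estimate before stopping.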
If your concern is to accurately estimate the population mean then I don't think there is any issue because your stopping rule is not dependent on the sample mean.
P-hacking is not an all or none phenomenon and procedures that might sometimes be illegitimate may in other circumstances be good practice! It depends on inferential objectives as well as experimental design considerations. See section 3 here: https://link.springer.com/chapter/10.1007/164_2019_286
26,277 | Westfall says, "the proportion of the kurtosis that is determined by the central $\mu\pm\sigma$ range is usually quite small" but is the reverse true? | Answer edited 9/15/2021:
In his answer to the OP, @whuber claims as follows:
For a distribution with kurtosis $\kappa$, the total density within one
SD of the mean lies between $1−1/\kappa$ and $1$, where $\kappa$ is the (non-excess) kurtosis of the distribution.
THIS CLAIM IS FALSE.
The following example shows clearly that @whuber's result is false.
Consider my "Counterexample #1" from here, with $\theta = .001.$ In that counterexample, the kurtosis is $25.5,$ the range $1-1/\kappa$ to $1.0$ is from $0.96$ to $1.0,$ yet the probability within a standard deviation of the mean is $0.5$. These statements are verified by the R code:
th = .001
Z = c(-sqrt(.155/th +1.44), -1.2, -.5, +.5, +1.2, +sqrt(.155/th +1.44))
p = c(th/2, (.5-th)/2, .25, .25, (.5-th)/2, th/2)
sum(p) # The probabilities sum to one so it is a valid pmf
sum(Z*p) # The mean is zero
sum(Z^2*p) # The variance is one
plot(Z, p, type="h", lwd = 4, cex.lab=1.5, cex.axis=1.5,
ylab="Probability")
abline(v=c(-1,1), lty=2, lwd=2) # Shows values within +- 1 sd
k = sum(Z^4*p)
k # Kurtosis is 25.5
range = c(1 - 1/k,1)
range # (.96, 1.0) is the range suggested by @whuber's false theorem
# about probability within a sd of mu
sum(p[abs(Z)<1]) # 0.5 is the actual probability within +- 1sd
Here is a graph of the counterexample distribution. The dashed vertical lines mark the $\mu \pm \sigma$ limits, within which it is clearly visible that there is only $0.50$ probability.
You can also illustrate the counterexample using a reproducible data set and summary statistics. The following R code generates $1000000$ samples from the counterexample distribution, a large enough sample size so that the "bias corrections" are negligible. The estimated kurtosis is $26.02$, the range $(1 - 1/26.02, 1)$, within which the central probability is supposed to lie, is $(.96,1)$, yet the estimated central probability is $0.4999$.
set.seed(12345)
N = 1000000
Data = sample(Z, N, p, replace = TRUE)
xbar = mean(Data)
s = sd(Data)
library(moments)
ku = kurtosis(Data)
ku
c(1-1/ku, 1) # @whuber's false claim of central probability range
sum( Data >= xbar -s & Data <= xbar +s )/N # Actual central probability
It is amusing to see just how spectacularly @whuber's result does fail. In my counterexample #1 family of distributions, the kurtosis can tend to infinity, implying, according to @whuber's "result," that the central probability approaches $1.0$. But instead, the central probability stays constant at $0.5$!
One does not need to construct fancy counterexamples to illustrate such spectacular failure of @whuber's claim. Consider the common $T_\nu$ distribution, the Student T distribution with degrees of freedom parameter $\nu$. For $\nu > 4$, its mean is zero, its variance is $\sigma^2 = \nu/(\nu -2)$, and its (non-excess) kurtosis is $\kappa = 6/(\nu-4) +3$. In the range $4 < \nu \le 5$, the kurtosis ranges from $9$ to $\infty$, while the probability within $\pm \sigma$ can be calculated numerically, in R notation, as
pt(sigma, nu) - pt(-sigma,nu)
The following R code and resulting graph shows the range claimed by @whuber (dashed black lines), along with the actual central probability (solid red line).
nu = seq(4.0001, 4.9999, .0001)
sigma = sqrt(nu/(nu-2))
kurt = 6/(nu-4) + 3
Cent.Prob = pt(sigma, nu) - pt(-sigma, nu)
Upper.Bound = rep(1, length(nu))
Lower.Bound = 1 - 1/kurt
plot(nu, Cent.Prob, ylim = c(.6,1), type="l", col="red",
ylab="Central Probability", xlab = "degrees of freedom")
points(nu, Upper.Bound, type="l", lty=2)
points(nu, Lower.Bound, type="l", lty=2)
Again, there is a spectacular failure of @whuber's claim, in that the claim implies the central probability must be essentially $1.0$ (for $\nu \approx 4$), when in fact it is far less (around $0.77$).
Thus, @whuber's claim is false: The central probability need not lie in @whuber's stated range. In fact, as my Counterexample #1 shows, the central probability need not increase at all with larger kurtosis.
Here are two results that shed additional light on the relation of kurtosis to the center.
Theorem 1. Consider a random variable $X$ (includes data via the empirical distribution) that has, wlog, mean = 0, variance = 1, and finite fourth moment. Now, create a new random variable $X'$ by replacing the mass/density of $p_X$ within $0 \pm 1$ arbitrarily, but maintaining $E(X')=0$ and $Var(X')=1.$ Then the difference between the maximum and minimum kurtosis statistics over all such replacements is less than 0.25.
Theorem 2. Consider a random variable $X$ as in Theorem 1. Now, create a new random variable $X'$ by replacing the mass/density of $p_X$ outside of $0 \pm 1$ arbitrarily, but maintaining $E(X')=0$ and $Var(X')=1$ in such replacements. Then the difference between the maximum and minimum kurtosis statistics over all such replacements is unbounded (i.e., infinite).
Thus, the effect of moving mass near the center has at most a very small effect on kurtosis, while the effect of moving mass in the tails has an infinite effect.
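A minimal numeric illustration of this asymmetry (my own three-point construction, in the same spirit as the counterexamples discussed in this answer): put mass $p$ at $\pm z$ and the rest at 0, choose $z$ so the variance is 1, and watch the kurtosis blow up as the tail mass shrinks while the central mass barely changes:

```r
# Symmetric three-point variable: mass p at +/- z, mass 1 - 2p at 0
kurt_3pt <- function(p) {
  z <- 1 / sqrt(2 * p)   # this choice makes the mean 0 and the variance 1
  2 * p * z^4            # fourth moment = kurtosis, equals 1 / (2 * p)
}
sapply(c(0.1, 0.01, 0.001), kurt_3pt)  # 5, 50, 500
# while the central mass 1 - 2p only moves from 0.8 to 0.998
```

The kurtosis is driven entirely by the ever-more-extreme tail points, exactly the "tail leverage" interpretation given below.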
While one is trying to prove a theorem that proves that the center somehow is related to kurtosis, it is very helpful to know in advance what counterexamples may exist to such a theorem.
Good counterexamples are given here.
"Counterexample #1" shows a family of distributions in which the kurtosis increases to infinity, while the mass inside $\mu \pm \sigma$ stays a constant 0.5.
"Counterexample #2" shows a family of distributions where the mass within $\mu \pm \sigma$ increases to 1.0, yet the kurtosis decreases to its minimum.
So the often-made assertion that kurtosis measures “concentration of mass in the center” is obviously wrong.
Many people think that higher kurtosis implies “more probability in the tails.” This is not true either: Counterexample #1 shows that you can have higher kurtosis with less tail probability when the tails extend.
Instead, kurtosis precisely measures tail leverage. See
How the kurtosis value can determine the unhealthy event
and
In comparison with a standard gaussian random variable, does a distribution with heavy tails have higher kurtosis? . | Westfall says, "the proportion of the kurtosis that is determined by the central $\mu\pm\sigma$ rang | Answer edited 9/15/2021:
In his answer to the OP, @whuber claims as follows:
For a distribution with kurtosis $\kappa$, the total density within one
SD of the mean lies between $1−1/\kappa$ and $1$, w | Westfall says, "the proportion of the kurtosis that is determined by the central $\mu\pm\sigma$ range is usually quite small" but is the reverse true?
Answer edited 9/15/2021:
In his answer to the OP, @whuber claims as follows:
For a distribution with kurtosis $\kappa$, the total density within one
SD of the mean lies between $1−1/\kappa$ and $1$, where $\kappa$ is the (non-excess) kurtosis of the distribution.
THIS CLAIM IS FALSE.
The following example shows clearly that @whuber's result is false.
Consider my "Counterexample #1" from here, with $\theta = .001.$ In that counterexample, the kurtosis is $25.5,$ the range $1-1/\kappa$ to $1.0$ is from $0.96$ to $1.0,$ yet the probability within a standard deviation of the mean is $0.5$. These statements are verified by the R code:
th = .001
Z = c(-sqrt(.155/th +1.44), -1.2, -.5, +.5, +1.2, +sqrt(.155/th +1.44))
p = c(th/2, (.5-th)/2, .25, .25, (.5-th)/2, th/2)
sum(p) # The probabilities sum to one so it is a valid pmf
sum(Z*p) # The mean is zero
sum(Z^2*p) # The variance is one
plot(Z, p, type="h", lwd = 4, cex.lab=1.5, cex.axis=1.5,
ylab="Probability")
abline(v=c(-1,1), lty=2, lwd=2) # Shows values within +- 1 sd
k = sum(Z^4*p)
k # Kurtosis is 25.5
range = c(1 - 1/k,1)
range # (.96, 1.0) is the range suggested by @whuber's false theorem
# about probability within a sd of mu
sum(p[abs(Z)<1]) # 0.5 is the actual probability within +- 1sd
Here is a graph of the counterexample distribution. The dashed vertical lines mark the $\mu \pm \sigma$ limits, within which it is clearly visible that there is only $0.50$ probability.
You can also illustrate the counterexample using a reproducible data set and summary statistics. The following R code generates $1000000$ samples from the counterexample distribution, a large enough sample size so that the "bias corrections" are negligible. The estimated kurtosis is $26.02$, the range $(1 - 1/26.02, 1)$, within which the central probability is supposed to lie, is $(.96,1)$, yet the estimated central probability is $0.4999$.
set.seed(12345)
N = 1000000
Data = sample(Z, N, p, replace = TRUE)
xbar = mean(Data)
s = sd(Data)
library(moments)
ku = kurtosis(Data)
ku
c(1-1/ku, 1) # @whuber's false claim of central probability range
sum( Data >= xbar -s & Data <= xbar +s )/N # Actual central probability
It is amusing to see just how spectacularly @whuber's result does fail. In my counterexample #1 family of distributions, the kurtosis can tend to infinity, implying, according to @whuber's "result," that the central probability approaches $1.0$. But instead, the central probability stays constant at $0.5$!
One does not need to construct fancy counterexamples to illustrate such spectacular failure of @whuber's claim. Consider the common $T_\nu$ distribution, the Student T distribution with degrees of freedom parameter $\nu$. For $\nu > 4$, its mean is zero, its variance is $\sigma^2 = \nu/(\nu -2)$, and its (non-excess) kurtosis is $\kappa = 6/(\nu-4) +3$. In the range $4 < \nu \le 5$, the kurtosis ranges from $9$ to $\infty$, while the probability within $\pm \sigma$ can be calculated numerically, in R notation, as
pt(sigma, nu) - pt(-sigma,nu)
The following R code and resulting graph shows the range claimed by @whuber (dashed black lines), along with the actual central probability (solid red line).
nu = seq(4.0001, 4.9999, .0001)
sigma = sqrt(nu/(nu-2))
kurt = 6/(nu-4) + 3
Cent.Prob = pt(sigma, nu) - pt(-sigma, nu)
Upper.Bound = rep(1, length(nu))
Lower.Bound = 1 - 1/kurt
plot(nu, Cent.Prob, ylim = c(.6,1), type="l", col="red",
ylab="Central Probability", xlab = "degrees of freedom")
points(nu, Upper.Bound, type="l", lty=2)
points(nu, Lower.Bound, type="l", lty=2)
Again, there is a spectacular failure of @whuber's claim, in that the claim implies the central probability must be essentially $1.0$ (for $\nu \approx 4$), when in fact it is far less (around $0.77$).
Thus, @whuber's claim is false: The central probability need not lie in @whuber's stated range. In fact, as my Counterexample #1 shows, the central probability need not increase at all with larger kurtosis.
Here are two results that shed additional light on the relation of kurtosis to the center.
Theorem 1. Consider a random variable $X$ (includes data via the empirical distribution) that has, wlog, mean = 0, variance = 1, and finite fourth moment. Now, create a new random variable $X'$ by replacing the mass/density of $p_X$ within $0 \pm 1$ arbitrarily, but maintaining $E(X')=0$ and $Var(X')=1.$ Then the difference between the maximum and minimum kurtosis statistics over all such replacements is less than 0.25.
Theorem 2. Consider a random variable $X$ as in Theorem 1. Now, create a new random variable $X'$ by replacing the mass/density of $p_X$ outside of $0 \pm 1$ arbitrarily, but maintaining $E(X')=0$ and $Var(X')=1$ in such replacements. Then the difference between the maximum and minimum kurtosis statistics over all such replacements is unbounded (i.e., infinite).
Thus, the effect of moving mass near the center has at most a very small effect on kurtosis, while the effect of moving mass in the tails has an infinite effect.
While one is trying to prove a theorem that proves that the center somehow is related to kurtosis, it is very helpful to know in advance what counterexamples may exist to such a theorem.
Good counterexamples are given here.
"Counterexample #1" shows a family of distributions in which the kurtosis increases to infinity, while the mass inside $\mu \pm \sigma$ stays a constant 0.5.
"Counterexample #2" shows a family of distributions where the mass within $\mu \pm \sigma$ increases to 1.0, yet the kurtosis decreases to its minimum.
So the often-made assertion that kurtosis measures “concentration of mass in the center” is obviously wrong.
Many people think that higher kurtosis implies “more probability in the tails.” This is not true either: Counterexample #1 shows that you can have higher kurtosis with less total tail probability, provided the tails are extended.
Instead, kurtosis precisely measures tail leverage. See
How the kurtosis value can determine the unhealthy event
and
In comparison with a standard gaussian random variable, does a distribution with heavy tails have higher kurtosis?
26,278 | Westfall says, "the proportion of the kurtosis that is determined by the central $\mu\pm\sigma$ range is usually quite small" but is the reverse true? | For a distribution with kurtosis $\kappa,$ the total density within one SD of the mean lies between $1-1/\kappa$ and $1.$
The intuition behind this is twofold: (1) kurtosis is most heavily influenced by extreme values and (2) it is more influenced by such extremes than the standard deviation. The question concerns what can be said about the proportion of "non-extreme" values based on the value $\kappa$ of the kurtosis alone, where "non-extreme" is taken to be within one standard deviation of the mean. The statement above shows that the kurtosis does determine some non-trivial limits on that proportion. It is the tightest such result that can be expressed in general: in the demonstrations below I offer two families of distributions (the simplest possible: each is supported on just three values) showing how the bounds can be approached arbitrarily closely (or even attained) for any kurtosis.
Let's begin with some simplifications. Because kurtosis is a standardized central moment, we may (with no loss of generality) restrict our study to zero-mean, unit-variance ("standardized") distributions. In these cases the kurtosis equals the (raw) fourth moment and the probability of a non-extreme value is the probability of the interval $[-1,1].$
The power-mean inequality asserts the fourth root of $\kappa$ is no less than the standard deviation. Thus, $\kappa \ge 1.$
Maximizing the chance of non-extreme values, given $\kappa$
Consider a standardized "trinomial" distribution supported on the values $-1, b,$ and $a$ with probabilities $p-r,$ $1-p,$ and $r,$ respectively where $|b|\le 1$ and $a \gt 1.$ Because probabilities are non-negative, $0 \le r \le p \le 1.$ The figure displays three examples with various values of the kurtosis.
Our conditions on the moments yield three equations:
$$\left\{\begin{aligned}
0 &= (p-r)(-1) + (1-p)(b) + r(a) &= r-p + (1-p)b + ra &\quad \text{Mean is }0 \\
1 &= (p-r)(-1)^2 + (1-p)(b)^2 + r(a)^2 &= p-r + (1-p)b^2 + ra^2 &\quad \text{Variance is } 1 \\
\kappa &= (p-r)(-1)^4 + (1-p)(b)^4 + r(a)^4 &= p-r + (1-p)b^4 + ra^4 &\quad \text{Kurtosis.}
\end{aligned}\right.$$
Because the chance of a non-extreme value is $(p-r) + (1-p) = 1-r,$ we seek to minimize $r$ given $\kappa,$ subject to these constraints.
No matter what $\kappa$ might be, the supremum of these chances is $1,$ because the infimum of $r$ is zero.
To show this, we need to demonstrate that for arbitrarily small values of $r,$ there are values of $p,$ $b,$ and $a$ satisfying the three constraints. Because $r$ can be taken to be arbitrarily small--it is, in effect, an infinitesimal--let's solve these equations to first order in $r.$ That's straightforward to do, giving
$$a \cong \left(\frac{\kappa - 1}{r}\right)^{1/4},\quad b \cong 1 - \left(\frac{a^2-1}{2}\right)r,\quad p = \frac{r(1+a) + b}{b+1}.$$
The value of $a$ grows large as $r$ diminishes, but in such a way that $ra$ and $ra^2$ continue to grow small with $r.$ When $r$ is sufficiently small, then, $0\lt r\lt p\lt 1$ and (clearly) $b \lt 1$ and $a\gt 1,$ assuring all constraints are satisfied. This completes the demonstration that $r$ can be made arbitrarily small for any $\kappa \gt 1.$
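These approximate solutions are easy to check numerically (a sketch; the choices $\kappa = 5$ and $r = 10^{-4}$ are arbitrary): the mean constraint holds exactly, the variance and kurtosis hold to first order in $r,$ and the chance of a non-extreme value, $1-r,$ is nearly $1.$

```python
import numpy as np

kappa, r = 5.0, 1e-4                   # target kurtosis; tiny mass at the top atom
a = ((kappa - 1) / r) ** 0.25
b = 1 - (a ** 2 - 1) / 2 * r
p = (r * (1 + a) + b) / (b + 1)

vals = np.array([-1.0, b, a])
probs = np.array([p - r, 1 - p, r])    # all in (0, 1) for small enough r
mean = probs @ vals
var = probs @ vals ** 2
kurt = probs @ vals ** 4
print(mean, var, kurt, 1 - r)          # ~0, ~1, ~kappa, and nearly 1
```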
The figure displays numerical solutions for a range of $\kappa.$
Minimizing the chance of non-extreme values, given $\kappa$
This example is inspired by the proof of Chebyshev's Inequality. Recall the proof proceeds by bounding $|x|^s$ below by $a^s$ times the indicator of the event $|x| \ge a$ for some constant $a,$ whence
$$\mu_s = E[|X|^s]\ \ge\ E[a^s \mathcal{I}(|X|\ge a)] = a^s \Pr(|X|\ge a).$$
This is Chebyshev's Inequality.
We may limit the search for a minimum to symmetric distributions. This is because we may take any standardized distribution with distribution function $F$ and symmetrize it to the distribution function $x\to (F(x) + 1 - F(-x))/2$ without changing any of the constraints or the objective function.
Suppose, then, $F$ is any discrete symmetric standardized distribution with kurtosis $\kappa \gt 1.$ If the event $(-\sqrt{\kappa},\sqrt{\kappa})$ has any positive probability let it do so at the values $\pm a.$ Change $F$ by zeroing the probability at $\pm a$ and compensate by increasing the probabilities at $\pm\sqrt{\kappa}$ by $a^2/(2\kappa)$ and increasing the probability at $0$ by $1 - a^2/\kappa.$ A quick calculation establishes this preserves the variance, kurtosis, and symmetry of $F,$ because the change in any moment of order $s$ is
$$\delta \mu_s = 2p\left[(a^2/\kappa)(\sqrt{\kappa}^s) - a^s\right] = 2pa^2\left[\kappa^{s/2 - 1} - a^{s-2}\right]$$
and this equals zero when $a^{s-2} = \kappa^{s/2-1},$ which always includes the solutions $s=2$ and $s=4.$
Consequently, after applying this operation to every one of the (at most countable) values with positive probability in the interval $(-\sqrt{\kappa}, \sqrt{\kappa}),$ we may assume all the probability within that interval (equal to $1-r,$ say) is concentrated at $0.$ Chebyshev's inequality (applied to the $s=4^\text{th}$ moment) tells us this probability must be at least $1-1/(\kappa^{1/4})^4 = 1-1/\kappa.$ Consequently $1-r \ge 1 - 1/\kappa.$ This minimum is attained (as seen in the proof of Chebyshev's inequality) by the trinomial distribution assigning probabilities $1/(2\kappa)$ to the values $\pm \sqrt{\kappa}$ and all the remaining probability to $0.$ Here are plots of three such distributions.
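The extremal trinomial distribution is easy to verify directly (a quick sketch): for each $\kappa$ it is standardized, has kurtosis $\kappa,$ and puts exactly $1 - 1/\kappa$ of its mass inside $[-1, 1].$

```python
import numpy as np

rows = []
for kappa in (2.0, 3.0, 10.0):
    s = np.sqrt(kappa)
    vals = np.array([-s, 0.0, s])
    probs = np.array([1 / (2 * kappa), 1 - 1 / kappa, 1 / (2 * kappa)])
    mean, var = probs @ vals, probs @ vals ** 2
    kurt = probs @ vals ** 4
    central = probs[1]            # P(|X| <= 1): all central mass sits at 0
    rows.append((kappa, mean, var, kurt, central))
    print(kappa, mean, var, kurt, central, 1 - 1 / kappa)
```

The central probability attains the lower bound $1 - 1/\kappa$ exactly, as claimed.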
Finally, the discrete distributions are dense within the space of all distributions (this says nothing other than the graph of any CDF may be arbitrarily well approximated by at most a countable set of points strategically positioned along it). Because all the functionals involved (moments and probabilities) are continuous properties of the distribution, these results hold for all distributions.
26,279 | Do-Calculus for Causal Diagram 7.5 from "The Book of Why" (napkin problem) | I answered this once on twitter, I can reproduce the answer here.
Derivation (graphs licensing each step are provided below).
$$
\begin{align}
P(y|do(x)) &= P(y|do(x), do(z)) \qquad &\text{Rule 3: $(Y \perp\!\!\!\perp Z|X)_{G_\overline{XZ}}$}\\
&= P(y |x, do(z)) \qquad &\text{Rule 2: $(Y\perp\!\!\!\perp X)_{G_{\overline{Z}\underline{X}}}$}\\
&= \frac{P(y, x|do(z))}{P(x|do(z))} \qquad &\text{Def. of conditional probability}\\
&= \frac{\sum_{w}P(y, x|z, w)P(w)}{\sum_{w}P(x|z,w)P(w)}\qquad &\text{Backdoor using with $W$: $(\{Y, X\}\perp\!\!\!\perp Z|W)_{G_{\underline{Z}}}$}
\end{align}
$$
What does each step of the derivation mean in plain English?
Step 1 simply states that, when $X$ is held fixed (by intervention),
manipulating $Z$ has no effect on $Y$;
Step 2 states that, when $Z$
is held fixed (by intervention), there is no confounding between $X$
and $Y$;
Step 3 is just applying the definition of conditional
probability; and, finally,
Step 4 notes that adjusting for $W$ is
sufficient to identify the causal effect of $Z$ on $X$ and $Y$. This
is because $W$ blocks all confounding paths from $Z$ to $X$ and $Y$
(backdoor paths). So we can use vanilla backdoor adjustment here.
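To make the derivation concrete, here is a small numerical check (my own sketch, with made-up binary-variable parameters, not from the original answer): in a structural model for the napkin graph — $U_1$ confounds $W$ and $X$, $U_2$ confounds $W$ and $Y$, and $W \to Z \to X \to Y$ — the final adjustment formula reproduces $P(y|do(x))$ computed directly from the truncated factorization, for either choice of $z$.

```python
from itertools import product

# Hypothetical parameters for a binary napkin SCM:
# exogenous U1, U2; U1 -> {W, X}, U2 -> {W, Y}, and W -> Z -> X -> Y.
pU1, pU2 = 0.3, 0.6
pW = {(0, 0): 0.2, (0, 1): 0.7, (1, 0): 0.4, (1, 1): 0.9}   # P(W=1 | u1, u2)
pZ = {0: 0.25, 1: 0.8}                                      # P(Z=1 | w)
pX = {(0, 0): 0.3, (0, 1): 0.6, (1, 0): 0.5, (1, 1): 0.85}  # P(X=1 | z, u1)
pY = {(0, 0): 0.1, (0, 1): 0.4, (1, 0): 0.45, (1, 1): 0.75} # P(Y=1 | x, u2)

def bern(p1, v):
    return p1 if v == 1 else 1 - p1

ORDER = ["u1", "u2", "w", "z", "x", "y"]
joint = {}
for u1, u2, w, z, x, y in product((0, 1), repeat=6):
    joint[(u1, u2, w, z, x, y)] = (bern(pU1, u1) * bern(pU2, u2)
        * bern(pW[(u1, u2)], w) * bern(pZ[w], z)
        * bern(pX[(z, u1)], x) * bern(pY[(x, u2)], y))

def P(**ev):
    """Observational probability of the event {variable: value}."""
    return sum(pr for cfg, pr in joint.items()
               if all(cfg[ORDER.index(v)] == val for v, val in ev.items()))

# Ground truth via the truncated factorization: Y's parents are X and U2.
truth = sum(bern(pU2, u2) * pY[(1, u2)] for u2 in (0, 1))

def napkin_formula(zval):
    """P(y=1 | do(x=1)) from the adjustment formula above, for a fixed z."""
    num = sum(P(y=1, x=1, z=zval, w=w) / P(z=zval, w=w) * P(w=w) for w in (0, 1))
    den = sum(P(x=1, z=zval, w=w) / P(z=zval, w=w) * P(w=w) for w in (0, 1))
    return num / den

print(truth, napkin_formula(0), napkin_formula(1))  # all three agree
```

Note that the result is the same whichever value of $z$ is held fixed, as the derivation requires.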
Modified Graphs
26,280 | Do-Calculus for Causal Diagram 7.5 from "The Book of Why" (napkin problem) | In the following github repo, we provide Python software that assigns random probabilities to the nodes of the Napkin Bayesian Network. The software then does the marginalizations necessary to calculate P(y|x) and the right hand side of the Adjustment Formula given by Cinelli above. We find that they are always equal, no matter what the random probabilities are. Hence, Cinelli's Adjustment Formula reduces to P(y|do(x))=P(y|x)
https://github.com/rrtucci/napkin-do-calc
26,281 | Randomly Sample M samples from N numbers with replacement, how to estimate N? | This is a standard statistical inference problem involving the classical occupancy distribution (see e.g., O'Neill 2019). Since $R$ is the number of repeated balls, the number of distinct balls selected in the sample is given by:
$$K = N-R \ \sim \ \text{Occ}(N, M).$$
The probability mass function for this random variable is:
$$p(K=k|N,M) = \frac{(N)_k \cdot S(M,k)}{N^M} \cdot \mathbb{I}(1 \leqslant k \leqslant \min(M,N)),$$
where the values $S(M,k)$ are the Stirling numbers of the second kind and $(N)_k$ are the falling factorials. The classical occupancy distribution has been subject to a great deal of analysis in the statistical literature, including analysis of statistical inference for the size parameter $N$ (see e.g., Harris 1968). The form of this distribution and its moments is known, so deriving the MLE or MOM estimators is a relatively simple task.
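This mass function is easy to verify numerically (a small Python sketch with arbitrary $N$ and $M$): computing the Stirling numbers by their standard recurrence, the probabilities sum to one.

```python
from functools import lru_cache

@lru_cache(None)
def stirling2(m, k):
    """Stirling numbers of the second kind, S(m, k)."""
    if m == k:
        return 1
    if k == 0 or k > m:
        return 0
    return k * stirling2(m - 1, k) + stirling2(m - 1, k - 1)

def falling(n, k):
    """Falling factorial (n)_k = n (n-1) ... (n-k+1)."""
    out = 1
    for i in range(k):
        out *= n - i
    return out

def occ_pmf(k, n, m):
    """P(K = k) for the classical occupancy distribution Occ(n, m)."""
    if not (1 <= k <= min(m, n)):
        return 0.0
    return falling(n, k) * stirling2(m, k) / n ** m

n, m = 5, 8
total = sum(occ_pmf(k, n, m) for k in range(0, m + 1))
print(total)  # 1.0 up to rounding
```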
Maximum-likelihood estimator (MLE): Since the size parameter is an integer, we can find the MLE using discrete calculus. For any value $1 \leqslant k \leqslant \min(M,N)$ the forward difference of the probability mass function with respect to $N$ can be written as:
$$\begin{align}
\Delta_N p(k)
&\equiv p(K=k|N+1,M) - p(K=k|N,M) \\[10pt]
&= \frac{(N+1)_k \cdot S(M,k)}{(N+1)^M} - \frac{(N)_k \cdot S(M,k)}{N^M} \\[6pt]
&= S(M,k) \bigg[ \frac{(N+1)_k}{(N+1)^M} - \frac{(N)_k}{N^M} \bigg] \\[6pt]
&= S(M,k) \cdot \frac{(N)_{k}}{(N+1)^M} \bigg[ \frac{N+1}{N-k+1} - \Big( \frac{N+1}{N} \Big)^M \ \bigg] \\[6pt]
\end{align}$$
Thus, if we observe $K=k$ then the maximum-likelihood-estimator (MLE) is given by:
$$\hat{N}_\text{MLE} = \max \bigg \{ N \in \mathbb{N} \ \Bigg| \ \frac{N+1}{N-k+1} < \Big( \frac{N+1}{N} \Big)^M \bigg \}.$$
(There may be cases where the MLE is not unique, since we can also use the $\leqslant$ instead of $<$ in the inequality in this equation.) Here is a simple function in R to compute the MLE and an example when the input values are fairly large.
MLE.Occ.n <- function(m, k) {
n <- k
while ((n+1)/(n-k+1) >= (1+1/n)^m) { n <- n+1 }
n }
MLE.Occ.n(m = 1000, k = 649)
[1] 1066
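The same search ports directly to other languages; here is a Python sketch of the identical loop (it reproduces the value above):

```python
def mle_occ_n(m, k):
    """MLE of N given M = m draws and K = k distinct balls, by the
    forward-difference criterion derived above (a port of MLE.Occ.n)."""
    n = k
    while (n + 1) / (n - k + 1) >= (1 + 1 / n) ** m:
        n += 1
    return n

print(mle_occ_n(1000, 649))  # 1066, matching the R output above
```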
Estimation using method-of-moments: The first four moments of the classical occupancy distribution are given in O'Neill (2019) (Section 2). The expected number of different balls is:
$$\mathbb{E}(K) = N \Bigg[ 1 - \Big( 1-\frac{1}{N} \Big)^M \Bigg].$$
Thus, if we observe $K=k$ then the method-of-moments estimator will approximately solve the implicit equation:
$$\log \hat{N}_\text{MOM}^* - \log k + \text{log1mexp} \Bigg[ - M \log \Big( 1-\frac{1}{\hat{N}_\text{MOM}^*} \Big) \Bigg] = 0.$$
You can solve this equation numerically to obtain a real value $\hat{N}_\text{MOM}^*$ and then use one of the two surrounding integers as $\hat{N}_\text{MOM}$ (these each give slight over- and under-estimates for the true expected value and you can then pick between these using some appropriate method --- e.g., rounding to the nearest integer). Here is a function in R to compute the method-of-moment estimator. As can be seen, it gives the same result as the MLE in the present example.
MOM.Occ.n <- function(m, k) {
FF <- function(n) { log(n) - log(k) + VGAM::log1mexp(-m*log(1-1/n)) }
UPPER <- m*k/(m-k)
n.real <- uniroot(f = FF, lower = k, upper = UPPER)$root
round(n.real, 0) }
MOM.Occ.n(m = 1000, k = 649)
[1] 1066
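Since the function FF above is zero exactly when $N[1 - (1 - 1/N)^M] = k$ (the log1mexp call only stabilizes the computation), the same root can be found with a plain bisection; a Python sketch:

```python
def mom_occ_n(m, k):
    """Method-of-moments estimate: solve n*(1 - (1 - 1/n)**m) = k by bisection."""
    f = lambda n: n * (1 - (1 - 1 / n) ** m) - k
    lo, hi = float(k), m * k / (m - k)   # same bracket as the R code above
    for _ in range(200):                 # f is increasing in n on this bracket
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return round((lo + hi) / 2)

print(mom_occ_n(1000, 649))  # 1066, agreeing with the MLE
```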
26,282 | Randomly Sample M samples from N numbers with replacement, how to estimate N? | I think your likelihood expression has reversed $x=R$ and $m=M$ in $S_2(x,m)$ but no matter - this is a constant with respect to $N$ and so can be ignored. What you want is the integer $N$ which maximises $\frac{N!}{N^M \; (N-R)!}$. So you want the largest $N$ where $\frac{N!}{N^M \; (N-R)!} \ge \frac{(N-1)!}{(N-1)^M \; (N-1-R)!} $, i.e. where $N\left(\frac{N-1}{N}\right)^M\ge N-R$, though I doubt this has a simple closed form for $N$.
Another possible approach using a method of moments might be to consider a particular ball: the probability it is never selected is $\left(\frac{N-1}{N}\right)^M$, so the expected number of balls never selected is $N\left(\frac{N-1}{N}\right)^M$ and the expected number selected at least once is $N - N\left(\frac{N-1}{N}\right)^M$. If you see $R$ distinct balls from $M$ attempts then you could try to solve $R= N - N\left(\frac{N-1}{N}\right)^M$ for $N$. This is essentially the same equation as the likelihood approach, though without the rounding down.
Solving this would not be easy, but in some cases you could use the approximation $\left(\frac{N-1}{N}\right)^M \approx e^{-M/N}$ in which case you might consider $$\hat N\approx \dfrac{M}{\frac{M}{R}+ W\left(-\frac MRe^{-M/R}\right)}$$ where $W$ is the Lambert W function. (When $M \gg R$ the denominator is almost $\frac MR$ so $\hat N$ is very slightly more than $R$, as one might expect.)
As an illustration, if $M=100$ and $R=50$ then direct calculation would eventually give you $\hat N \approx 62.41$ while the suggested approximation could give you $\hat N\approx 62.75$. The likelihood approach would say $\hat N \le 62.41$ so round this down to $\hat N =62$.
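Both numbers are easy to reproduce (a sketch; the Lambert W here is a hand-rolled Newton iteration for the principal branch, to avoid a library dependency):

```python
import math

M, R = 100, 50

# Direct: solve R = N - N*((N-1)/N)**M for N by bisection.
f = lambda n: n - n * ((n - 1) / n) ** M - R
lo, hi = float(R), 10.0 * R          # f is increasing and changes sign here
for _ in range(100):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
direct = (lo + hi) / 2

def lambert_w0(z):
    """Principal branch of the Lambert W function by Newton iteration."""
    w = 0.0
    for _ in range(50):
        ew = math.exp(w)
        w -= (w * ew - z) / (ew * (1 + w))
    return w

# The suggested approximation.
t = M / R
approx = M / (t + lambert_w0(-t * math.exp(-t)))
print(round(direct, 2), round(approx, 2))  # ≈ 62.41 and 62.75
```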
26,283 | Randomly Sample M samples from N numbers with replacement, how to estimate N? | I think you would need another constraint. As described, it would only be possible to estimate a lower bound on the number. There could be any number of balls.
I think you needed to specify that each ball in the bag has a unique number.
26,284 | Kernel density estimation and boundary bias | If you know the boundaries, then one approach mentioned in Silverman's great little book (Density Estimation for Statistics and Data Analysis) is the "reflection technique". One simply reflects the data about the boundary (or boundaries). (This is what @NickCox mentioned in his comment.)
# Generate numbers from a uniform distribution
set.seed(12345)
N <- 10000
x <- runif(N)
# Reflect the data at the two boundaries
xReflected <- c(-x, x, 2-x)
# Construct density estimate
d <- density(xReflected, from=0, to=1)
plot(d$x, 3*d$y, ylab="Probability density", xlab="x", ylim=c(0,1.1), las=1)
Note that in this case we end up with 3 times the number of data points, so we need to multiply the output of the density function by 3.
Below is an animated display of 100 simulations (as above) but with the true density and the two estimated densities (one from the original data and one from the reflected data). That there is bias near the boundaries is pretty clear when using density with just the original data.
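The same idea in Python, as a sketch with a hand-rolled Gaussian KDE (the fixed bandwidth and seed are arbitrary choices): without reflection the estimate at the boundary x = 0 is about half the true density; with reflection it is close to the truth.

```python
import numpy as np

rng = np.random.default_rng(12345)
x = rng.uniform(size=10_000)         # true density is 1 on [0, 1]

def gauss_kde(data, points, h):
    """Plain Gaussian kernel density estimate evaluated at `points`."""
    u = (np.asarray(points, float)[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

h = 0.05                             # fixed bandwidth, for illustration only
x_reflected = np.concatenate([-x, x, 2 - x])

plain = gauss_kde(x, [0.0], h)[0]                     # ~0.5: boundary bias
corrected = 3 * gauss_kde(x_reflected, [0.0], h)[0]   # ~1.0 (3x for 3n points)
print(plain, corrected)
```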
Kernel density estimation and boundary bias
I do not know if it is interesting (given the original question and the answers it already received), but I would like to suggest an alternative method. It could maybe be useful to somebody in the future as well (I hope at least) :-).
If you worry about boundary effects of your density smoothing method, I would suggest using P-splines (see Eilers and Marx, 1996 - the authors specifically talk about boundary bias in density smoothing in par. 8). Quoting Eilers and Marx,
the P-spline density smoother is not troubled by
boundary effects, as for instance kernel smoothers
are.
In general, P-splines combine B-splines and finite difference penalties. The density smoothing problem is a special case of GLM. So we just need to parameterize our smoothing problem accordingly.
To answer the original question I will consider data grouped in a histogram fashion. I will indicate with $y_{i}$ the count (but the reasoning can be adapted to the density case as well) of observations falling in the bin/bar $u_{i}$. To smooth these data I will use the following ingredients:
the smoother: Whittaker smoother (special case of P-splines, where the basis is the identity matrix)
first order difference penalty
IWLS algorithm to maximize my penalized likelihood (eq 36 in the reference)
$$
L = \sum_{i} y_{i} \log \mu_{i} - \sum_{i} \mu_{i} - \lambda \sum_{i} (\Delta^{(1)} \eta_{i})^{2}
$$
with $\mu_{i} = \exp(\eta_{i})$.
The results are produced by the code below for a fixed value of $\lambda$ (I left some comments to make it easier to read, I hope). As you will notice from the results, the $\lambda$ parameter regulates the smoothness of the final estimates. For a very high $\lambda$ we obtain a pretty flat line.
# Simulate data
set.seed(1)
N = 10000
x = runif(N)
# Construct histograms
his = hist(x, breaks = 50, plot = F)
X = his$counts
u = his$mids
# Prepare basis (I-mat) and penalty (1st difference)
B = diag(length(X))
D1 = diff(B, diff = 1)
lambda = 1e6 # fixed but can be selected (e.g. AIC)
P = lambda * t(D1) %*% D1
# Smooth
tol = 1e-8
eta = log(X + 1)
for (it in 1:20)
{
mu = exp(eta)
z = X - mu + mu * eta
a = solve(t(B) %*% (c(mu) * B) + P, t(B) %*% z)
etnew = B %*% a
de = max(abs(etnew - eta))
cat('Crit', it, de, '\n')
if(de < tol) break
eta = etnew
}
# Plot
plot(u, exp(eta), ylim = c(0, max(X)), type = 'l', col = 2)
lines(u, X, type = 'h')
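For readers outside R, the same penalized IWLS scheme can be sketched in Python. This is a rough NumPy port of the code above (not the author's code; variable names are mine), with the same identity basis, first-order difference penalty, and fixed $\lambda$:

```python
import numpy as np

# Simulate data and bin it, as in the R code above
rng = np.random.default_rng(1)
counts, _ = np.histogram(rng.uniform(size=10_000), bins=50)
n = len(counts)

# Whittaker smoother: identity basis B = I, first-order difference penalty
D1 = np.diff(np.eye(n), axis=0)
lam = 1e6                       # fixed; could be selected (e.g. by AIC)
P = lam * D1.T @ D1

# IWLS for the penalized Poisson log-likelihood
eta = np.log(counts + 1.0)
for _ in range(50):
    mu = np.exp(eta)
    z = counts - mu + mu * eta                   # working response (B = I)
    eta_new = np.linalg.solve(np.diag(mu) + P, z)
    if np.max(np.abs(eta_new - eta)) < 1e-8:     # converged
        eta = eta_new
        break
    eta = eta_new

smooth = np.exp(eta)            # smoothed expected counts per bin
```

Because the constant function lies in the null space of the first-difference penalty, the fitted values preserve the total count at convergence, no matter how large $\lambda$ is.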
To conclude, I hope my suggestion is clear enough and answers (at least partially) the original question.
How do I decide when to use MAPE, SMAPE and MASE for time series analysis on stock forecasting
You are forecasting for stock control, so you need to think about setting safety amounts. In my opinion, a quantile forecast is far more important in this situation than a forecast of some central tendency (which the accuracy KPIs you mention assess).
You essentially have two or three possibilities.
Directly forecast high quantiles of your unknown future distribution. There are more and more papers on this. I'll attach some below.
Regarding your question, you can assess the quality of quantile forecasts using hinge loss functions, which are also used in quantile regression. Take a look at the papers by Ehm et al. (2016) and Gneiting (2011) below.
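As a minimal sketch of the hinge (pinball) loss for a $\tau$-quantile forecast (plain Python; the quantile level and the numbers are made up for illustration):

```python
def pinball_loss(y, q_hat, tau):
    """Pinball (quantile/hinge) loss of a tau-quantile forecast q_hat
    against the realized value y."""
    return max(tau * (y - q_hat), (tau - 1.0) * (y - q_hat))

# Under-forecasting a high quantile is penalized tau/(1 - tau) times more
# heavily than over-forecasting by the same amount, e.g. for the 90% quantile:
low = pinball_loss(10.0, 8.0, 0.9)    # forecast 2 units too low  -> loss 1.8
high = pinball_loss(10.0, 12.0, 0.9)  # forecast 2 units too high -> loss 0.2
```

Averaging this loss over many (forecast, actual) pairs and comparing methods is the quantile analogue of comparing MAEs.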
Forecast some central tendency, e.g., the conditional expectation, plus higher moments as necessary, and combine these with an appropriate distributional assumption to obtain quantiles or safety amounts. For instance, you could forecast the conditional mean and the conditional variance and use a normal or negative-binomial distribution to set target service levels.
In this case, you can use a forecast accuracy KPI that is consistent with the measure of central tendency you are forecasting for. For instance, if you try to forecast the conditional expectation, you can assess it using the MSE. Or you could forecast the conditional median and assess this using the MAE, wMAPE or MASE. See Kolassa (2020) on why this sounds so complicated. And you will still need to assess whether your forecasts of higher moments (e.g., the variance) are correct. It is probably best to directly evaluate the quantiles this approach yields by the methods discussed above.
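A toy sketch of this mean-plus-variance route (Python standard library only; all numbers are made-up assumptions, and a normal demand distribution is assumed purely for illustration):

```python
from statistics import NormalDist

# Assumed point forecasts of the conditional mean and sd of demand
mu_hat, sigma_hat = 120.0, 15.0
service_level = 0.95                      # target cycle service level

z = NormalDist().inv_cdf(service_level)   # standard normal quantile, ~1.645
order_up_to = mu_hat + z * sigma_hat      # mean forecast + safety amount
```

With low-volume count data, a negative-binomial quantile would replace the normal one in the last step.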
Forecast full predictive densities, from which you can derive all quantiles you need. This is what I argue for in Kolassa (2016).
You can evaluate predictive densities using proper scoring rules. See Kolassa (2016) for details and pointers to literature. The problem is that these are far less intuitive than the point forecast error measures discussed above.
What are the shortcomings of the Mean Absolute Percentage Error (MAPE)? is likely helpful, and also contains more information. If you are forecasting for a single store, I suspect that the MAPE will often be undefined, because of zero demands (that you would need to divide by).
References
(sorry for not nicely formatting these)
Ehm, W.; Gneiting, T.; Jordan, A. & Krüger, F.
Of quantiles and expectiles: consistent scoring functions, Choquet representations and forecast rankings (with discussion).
Journal of the Royal Statistical Society, Series B, 2016 , 78 , 505-562
Gneiting, T.
Quantiles as optimal point forecasts.
International Journal of Forecasting, 2011 , 27 , 197-207
Kolassa, S.
Why the "best" point forecast depends on the error or accuracy measure.
International Journal of Forecasting, 2020 , 36, 208-211
Kolassa, S.
Evaluating Predictive Count Data Distributions in Retail Sales Forecasting.
International Journal of Forecasting, 2016 , 32 , 788-803
The following are more generally on quantile forecasting:
Trapero, J. R.; Cardós, M. & Kourentzes, N.
Quantile forecast optimal combination to enhance safety stock estimation.
International Journal of Forecasting, 2019 , 35 , 239-250
Bruzda, J.
Quantile smoothing in supply chain and logistics forecasting.
International Journal of Production Economics, 2019 , 208 , 122 - 139
Kourentzes, N.; Trapero, J. R. & Barrow, D. K.
Optimising forecasting models for inventory planning.
Lancaster University Management School, Lancaster University Management School, 2019
Ulrich, M.; Jahnke, H.; Langrock, R.; Pesch, R. & Senge, R.
Distributional regression for demand forecasting -- a case study.
2018
Bruzda, J.
Multistep quantile forecasts for supply chain and logistics operations: bootstrapping, the GARCH model and quantile regression based approaches.
Central European Journal of Operations Research, 2018
Independence of Mean and Variance of Discrete Uniform Distributions
jbowman's Answer (+1) tells much of the story. Here is a little more.
(a) For data from a continuous uniform distribution, the sample mean
and SD are uncorrelated, but not independent. The 'outlines' of the plot emphasize the dependence.
Among continuous distributions, independence holds only for
normal.
set.seed(1234)
m = 10^5; n = 5
x = runif(m*n); DAT = matrix(x, nrow=m)
a = rowMeans(DAT)
s = apply(DAT, 1, sd)
plot(a,s, pch=".")
(b) Discrete uniform. Discreteness makes it possible to find a value $a$ of the mean and
a value $s$ of the SD such that $P(\bar X = a) > 0,\, P(S = s) > 0,$
but $P(\bar X = a, S = s) = 0.$
set.seed(2019)
m = 20000; n = 5; x = sample(1:5, m*n, rep=T)
DAT = matrix(x, nrow=m)
a = rowMeans(DAT)
s = apply(DAT, 1, sd)
plot(a,s, pch=20)
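A minimal enumeration (Python; samples of size 2 from the tiny discrete uniform on {1, 2}, chosen by me to keep the state space small) makes the discreteness argument explicit: a mean of 1 and an SD of about 0.707 each occur with positive probability, but never together:

```python
from itertools import product
from statistics import mean, stdev

# All equally likely samples of size 2 from the discrete uniform on {1, 2}
samples = list(product([1, 2], repeat=2))
pairs = {(mean(s), round(stdev(s), 6)) for s in samples}

attainable_means = {m for m, _ in pairs}
attainable_sds = {d for _, d in pairs}

# P(xbar = 1) > 0 and P(S = 0.707107) > 0, but P(xbar = 1, S = 0.707107) = 0:
# a sample mean of 1 forces the sample (1, 1), whose SD is 0.
impossible_jointly = (1.0, 0.707107) not in pairs
```

The same kind of "forbidden combination" occurs throughout the scatterplot of $(\bar x, s)$ for larger discrete samples.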
(c) A rounded normal distribution is not normal. Discreteness causes
dependence.
set.seed(1776)
m = 10^5; n = 5
x = round(rnorm(m*n, 10, 1)); DAT = matrix(x, nrow=m)
a = rowMeans(DAT); s = apply(DAT, 1, sd)
plot(a,s, pch=20)
(d) Further to (a), using the distribution $\mathsf{Beta}(.1,.1)$
instead of $\mathsf{Beta}(1,1) \equiv \mathsf{Unif}(0,1)$
emphasizes the boundaries of the possible values of the sample mean
and SD. We are 'squashing' a 5-dimensional hypercube onto 2-space.
Images of some hyper-edges are clear. [Ref: The figure below is
similar to Fig. 4.6 in Suess & Trumbo (2010), Introduction to Probability Simulation and Gibbs Sampling with R, Springer.]
set.seed(1066)
m = 10^5; n = 5
x = rbeta(m*n, .1, .1); DAT = matrix(x, nrow=m)
a = rowMeans(DAT); s = apply(DAT, 1, sd)
plot(a,s, pch=".")
Addendum per Comment.
Independence of Mean and Variance of Discrete Uniform Distributions
It isn't that the mean and variance are dependent in the case of discrete distributions, it's that the sample mean and variance are dependent given the parameters of the distribution. The mean and variance themselves are fixed functions of the parameters of the distribution, and concepts such as "independence" don't apply to them. Consequently, you are asking the wrong hypothetical questions of yourself.
In the case of the discrete uniform distribution, plotting the results of 20,000 $(\bar{x}, s^2)$ pairs calculated from samples of 100 uniform $(1, 2, \dots, 10)$ variates results in:
which shows pretty clearly that they aren't independent; the higher values of $s^2$ are located disproportionately towards the center of the range of $\bar{x}$. (They are, however, uncorrelated; a simple symmetry argument should convince us of that.)
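That picture can be checked numerically. A small Python sketch (NumPy; the band around the center and its width are arbitrary choices of mine) shows the sample correlation is essentially zero while the conditional mean of $s^2$ still depends on $\bar{x}$:

```python
import numpy as np

rng = np.random.default_rng(0)
# 20,000 samples of 100 draws from the discrete uniform on 1..10
draws = rng.integers(1, 11, size=(20_000, 100))
xbar = draws.mean(axis=1)
s2 = draws.var(axis=1, ddof=1)

# Uncorrelated (by symmetry), so the sample correlation is ~0 ...
corr = np.corrcoef(xbar, s2)[0, 1]

# ... but not independent: s2 runs higher when xbar is near the center 5.5
center = np.abs(xbar - 5.5) < 0.2
gap = s2[center].mean() - s2[~center].mean()   # positive
```

Zero correlation with a positive `gap` is exactly the "uncorrelated but dependent" pattern visible in the scatterplot.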
Of course, an example cannot prove Glen's conjecture in the post you linked to that no discrete distribution exists with independent sample means and variances!
Are there any examples of a variable being normally distributed that is *not* due to the Central Limit Theorem?
To an extent I think this may be a philosophical issue as much as a statistical one.
A lot of naturally occurring phenomena are approximately normally distributed. One can argue
whether the underlying cause of that may be something like the CLT:
Heights of people may be considered as the sum of many smaller causes (perhaps independent, unlikely identically distributed): lengths of various bones, or results of various gene expressions, or results of many dietary influences, or some combination of all of the above.
Test scores may be considered as the sums of scores on many individual test questions (possibly identically distributed, unlikely entirely independent).
Distance a particle travels in one dimension as a result of Brownian motion in a fluid: Motion may be considered abstractly as a random walk resulting from IID random hits by molecules.
One example where the CLT is not necessarily involved is the dispersion of shots around a bull's eye: The distance from the bull's eye can be modeled as a Rayleigh distribution (proportional to the square root of a chi-squared variable with 2 DF) and the counterclockwise angle from the positive horizontal axis can be modeled as uniform on $(0, 2\pi).$ Then after changing from polar to rectangular coordinates, distances in horizontal (x) and vertical (y) directions turn out to be uncorrelated bivariate normal. [This is the essence of the Box-Muller transformation, which you can google.] However, the normal x and y coordinates might be considered as the sum of many small inaccuracies in targeting, which might justify a CLT-related mechanism in the background.
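A quick numerical check of that polar-to-rectangular argument (a Python sketch with NumPy; this is essentially the Box-Muller transformation run forwards):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Rayleigh radius (sqrt of a chi-squared with 2 DF) via the inverse CDF,
# plus an independent uniform angle on (0, 2*pi)
u = 1.0 - rng.uniform(size=n)          # in (0, 1], avoids log(0)
r = np.sqrt(-2.0 * np.log(u))
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)

# Rectangular coordinates: each should be ~ N(0, 1) and uncorrelated
x, y = r * np.cos(theta), r * np.sin(theta)
```

The sample means come out near 0, the variances near 1, and the correlation near 0, as the bivariate-normal claim predicts.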
In a historical sense, the widespread use of normal (Gaussian) distributions instead of double-exponential (Laplace) distributions to model astronomical observations may be partly due to the CLT. In the early days of modeling errors of such observations, there was a debate between Gauss and Laplace, each arguing for his own favorite distribution. For various reasons, the normal model has won out. One can argue that one reason for the eventual success of the normal distribution was mathematical convenience based on normal limits of the CLT. This seems to be true even when it is unclear which family of distributions provides the better fit. (Even now, there are still astronomers who feel that the "one best observation" made by a meticulous, respected astronomer is bound to be a better value than the average of many observations made by presumably less-gifted observers. In effect, they would prefer no intervention at all by statisticians.)
Are there any examples of a variable being normally distributed that is *not* due to the Central Limit Theorem?
Lots of naturally occurring variables are normally distributed. Heights of humans? Size of animal colonies?
Criteria for choosing a mean function for a GP
As noted here
Why is the mean function in Gaussian Process uninteresting?
the mean function is usually not the main focus of the modeling effort, for Gaussian Processes. However, there are cases, such as extrapolation, where we need to use something better than a constant mean function, because otherwise the response of a Gaussian Process with a constant mean function $C$ will revert to just $C+\bar{y}$ "sufficiently far away" from the training data. And "sufficiently far away" can be "very close", if we use a Squared Exponential covariance function, and/or the length-scales which best fit the training data are very small with respect to the "diameter" of the training set.
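This reversion-to-the-mean behaviour is easy to see in a toy sketch (Python with NumPy; the kernel hyperparameters and the constant mean, here just the sample mean, are my own illustrative choices, not a fitted model):

```python
import numpy as np

def se_kernel(a, b, length_scale=0.1):
    """Squared Exponential covariance between 1-D input vectors a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale ** 2)

X = np.linspace(0.0, 1.0, 20)                  # training inputs in [0, 1]
y = np.sin(2.0 * np.pi * X) + 5.0              # training targets around 5

C = y.mean()                                   # constant mean function
K = se_kernel(X, X) + 1e-6 * np.eye(len(X))    # small jitter for stability
alpha = np.linalg.solve(K, y - C)

# Posterior mean far outside the training range: all SE covariances
# vanish, so the prediction collapses back to the constant C ...
x_far = np.array([10.0])
pred_far = (C + se_kernel(x_far, X) @ alpha)[0]

# ... whereas near the data the GP actually interpolates
x_near = np.array([X[0]])
pred_near = (C + se_kernel(x_near, X) @ alpha)[0]
```

With the short length-scale used here, "far away" already starts a few length-scales outside the training interval.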
Excluding the trivial case where the mean function is deterministic (i.e., it's a known function of the inputs but it doesn't depend on the training data, such as for example $\mathbf{c}^T\cdot\mathbf{x}$, with $\mathbf{c}$ a predetermined vector), we have basically two cases:
The mean function is a linear model
This means that the mean function is
$$g(\mathbf{x}\vert\boldsymbol{\beta})=\boldsymbol{\beta}^T\cdot\mathbf{b}(\mathbf{x})$$
where $\boldsymbol{\beta}$ is a vector of unknown parameters, and $\mathbf{b}(\mathbf{x})$ is a fixed set of basis functions, such as for example:
the monomials of maximum degree $p$, i.e., $\{x_1^{\alpha_1}\dots x_d^{\alpha_d}\vert\sum_{i=1}^d\alpha_i\le p\}$
the Fourier monomials in $\mathbb{R}^d$, i.e., $\{\exp(i\mathbf{m}\cdot \mathbf{x})\vert \ \Vert\mathbf{m}\Vert_1\le M\}$
splines (I leave you the pleasure of writing out the multivariate expression)
etc.
In this case, if we choose a Gaussian prior $\boldsymbol{\beta}\sim\mathcal{N}(\mathbf{b},\boldsymbol{\Sigma})$, then the predictive mean vector and covariance matrix still have an analytical expression, just like in the case of the constant mean function. The expression is a bit cumbersome: you can find it as equations (2.41) of C. E. Rasmussen & C. K. I. Williams, Gaussian Processes for Machine Learning. Note: as in the constant mean function case, these analytical expressions are only exact if the covariance function (kernel) is pre-determined (except at most for a nugget term to accommodate noisy observations). If (as it's nearly always the case) the covariance function contains free hyperparameters whose posterior distribution you need to estimate, based on the training data, then you need to use simulation (MCMC) if you want to perform "exact" Bayesian inference.
The mean function is a nonlinear model
This is the case when, for example:
the basis functions are themselves functions of the training data
the number of basis functions depends on the training data
the mean function is not a linear combination of basis functions (e.g., rational functions)
etc.
In this case, the only way to compute the predictive mean and covariance is through simulation, even if the covariance function is prespecified. However, I've never seen a practical application of nonlinear mean functions. I guess that, when the data generating process is so complicated that a linear model for the mean function is inappropriate, you either focus on improving/complicating the covariance function, or you use another statistical model instead of a Gaussian Process (for example, a Bayesian Neural Network).
Criteria for selection
Now that you have the expression, you can either perform the selection based on purely heuristic criteria (e.g., WAIC or cross-validation), or based on prior knowledge. For example, if you know from Physics that for $\Vert \mathbf{x} \Vert_2\to\infty$ your response should be a linear function of the inputs, you will select a mean function which is a linear polynomial; if you know that it must become periodic, you will choose a Fourier basis; etc.
Another possible criterion is interpretability: for obvious reasons, a GP is not the most immediately interpretable model, but if you use a linear mean function, then at least asymptotically, when the effects of the kernel have "died out", you can interpret the coefficients of the linear model as a sort of effect size.
Finally, nonconstant mean functions can be used to show the close relationship between spline models, Generalized Additive Models (GAMs) and Gaussian Processes.
26,292 | Why is $r$ used to denote correlation? | From Pearson's "Notes on the history of correlation"
The title of Galton's R. I. lecture was Typical Laws of Heredity in Man. Here for the first time appears a numerical measure $r$ of what is termed 'reversion' and which Galton later termed 'regression'. This $r$ is the source of our symbol for the correlation coefficient.
This 1877 lecture was also printed in Nature and in the Proceedings of the Royal Institution.
From page 532 in Francis Galton 1877 Typical laws of heredity. Nature vol 15 (via galton.org)
Reversion is expressed by a fractional coefficient of the deviation, which we will write $r$. In the "reverted" parentages (a phrase whose meaning and object have already been explained) $$y = \frac{1}{r c \sqrt{\pi}} \cdot e ^{- \frac{x^2}{r^2c^2}}$$ In short, the population, of which each unit is a reverted parentage, follows the law of deviation, and has modulus, which we will write $c_2$, equal to $r c_1$.
26,293 | How to estimate a calibration curve with bootstrap (R) | After discussing with Prof. Frank Harrell by email, I devised the following procedure for estimating the optimism-corrected calibration curve, partially based on his Tutorial in Biostatistics (Statistics in Medicine, Vol. 15, 361-387 (1996)):
fit a risk prediction model on all data
fit a flexible model (a GAM with spline and logit link) to the model's predicted probabilities vs the outcome, and query the GAM at a grid of predicted probabilities $p=(0.01,0.02,...,0.99)$. This is the apparent calibration curve, which we call $cal_{app}$
draw bootstrap sample with replacement, same size of original data
fit risk prediction model on bootstrap sample
use the bootstrap model to predict probabilities from the bootstrap sample, fit a GAM between the predicted probabilities and the outcome, and query the GAM at the grid of predicted probabilities (let us call these points $cal_{boot}$)
use the bootstrap model to predict probabilities from the original sample, fit a GAM between the predicted probabilities and the outcome, and query the GAM at the grid of predicted probabilities, obtaining a calibration curve ($cal_{orig}$)
compute the optimism at every point $p$ of the grid like so $$Optimism(p)=cal_{boot}(p) - cal_{orig}(p)$$
repeat steps 3-7 some 100 times, average the optimism at each point $p$
compute the optimism corrected calibration like so $$cal_{corr}(p)=cal_{app}(p)-\langle Optimism(p)\rangle$$
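The bookkeeping of steps 7-9 can be sketched in a few lines; here the calibration curves are plain arrays evaluated on the common grid $p$, and the model/GAM fitting of steps 1-6 is deliberately omitted, so the inputs below are placeholders for whatever your fitting code returns:

```python
def optimism_corrected(cal_app, cal_boot_reps, cal_orig_reps):
    """Steps 7-9: per-grid-point optimism, averaged over bootstrap
    replicates, subtracted from the apparent curve.

    cal_app        -- apparent calibration curve on the grid p (length m)
    cal_boot_reps  -- one bootstrap-model-on-bootstrap-sample curve per replicate
    cal_orig_reps  -- one bootstrap-model-on-original-sample curve per replicate
    """
    n_rep = len(cal_boot_reps)
    corrected = []
    for j in range(len(cal_app)):
        # step 7 (optimism per replicate) and step 8 (average over replicates)
        mean_optimism = sum(cb[j] - co[j] for cb, co
                            in zip(cal_boot_reps, cal_orig_reps)) / n_rep
        # step 9: subtract the average optimism from the apparent curve
        corrected.append(cal_app[j] - mean_optimism)
    return corrected
```

With two replicates that are each optimistic by 0.1 everywhere, the apparent curve is shifted down by 0.1 at every grid point.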
Important note: The procedure above is inspired by Harrell's work and my discussion with him, but all errors are mine alone.
26,294 | Is autocorrelation in a supervised learning dataset a problem? | You touch on an issue that has a parallel in the econometric literature. It's called the long-horizon predictability problem. While it's difficult to predict the stock markets and currencies in the short term, some econometric studies have shown that long term returns are "much more predictable" using covariates like dividend yields.
Well, it turns out there is a subtle flaw in these models. Since both the response and the predictors cover an overlapping period, they're highly autocorrelated across horizons, and the data points are not independent.
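The mechanics are easy to reproduce with simulated data: even when one-period returns are i.i.d., overlapping $k$-period returns share $k-1$ terms with their neighbours, so they are strongly autocorrelated by construction (a self-contained sketch, not based on any of the cited studies' data):

```python
import random

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t + 1] - m) for t in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

rng = random.Random(42)
returns = [rng.gauss(0.0, 1.0) for _ in range(5000)]   # i.i.d. one-period returns
k = 12                                                 # horizon in periods
# overlapping k-period returns: consecutive sums share k-1 common terms
overlap = [sum(returns[t:t + k]) for t in range(len(returns) - k + 1)]

# lag1_autocorr(returns) is ~0; lag1_autocorr(overlap) is close to (k-1)/k ~ 0.92
```

So the "long-horizon" series looks smooth and predictable regardless of whether the underlying process is; any test that treats the overlapping observations as independent will be badly miscalibrated.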
Here are a couple of papers I could find in my library. The Berkowitz paper is probably the most devastating on the subject.
A study that shows long-horizon predictability:
Mark, N. C., & Choi, D. Y. (1997). Real exchange-rate prediction over long horizons. Journal of International Economics, 43(1), 29-60.
Criticism and statistical tests:
Berkowitz, J., & Giorgianni, L. (2001). Long-horizon exchange rate predictability?. The Review of Economics and Statistics, 83(1), 81-91.
Boudoukh, J., Richardson, M., & Whitelaw, R. F. (2006). The myth of long-horizon predictability. The Review of Financial Studies, 21(4), 1577-1605.
Richardson, M., & Smith, T. (1991). Tests of financial models in the presence of overlapping observations. The Review of Financial Studies, 4(2), 227-254.
26,295 | Is autocorrelation in a supervised learning dataset a problem? | Let's sketch your problem as:
$$ f(\{X_t: t \leq T \}) = X_{T+1} \tag{1}$$
that is, you are trying to machine learn a function $f(x)$. Your feature set is all the data available until $T$.
In a somehow overloaded notation I wanted to highlight the fact that if we look at $X$ as a stochastic process, it'd be convenient to impose that $X$ is adapted to a filtration (an increasing stream of information) - I'm mentioning filtrations here for completeness' sake.
We can also look at equation $1$ as trying to estimate (here):
$$ E[X_{T+1} | X_T, X_{T-1}, ..] = f(\{X_t: t \leq T \}) $$
In the simplest case that pops in my head - the OLS linear regression - we have:
$$ E[X_{T+1} | X_T, X_{T-1}, ..] = Xb + e $$
I am suggesting this line of thought to bridge statistical learning and classic econometrics.
I am doing so because, no matter how you estimate $E[X_{T+1} | X_T, X_{T-1}, ..]$ (linear regression, random forest, GBMs, ...), you will have to deal with the stationarity of your process $X$, that is: how $E[X_{T+1} | X_T, X_{T-1}, ..]$ behaves in time. There are multiple definitions of stationarity that try to give us a flavor of the time-homogeneity of your stochastic process, i.e. how the mean and variance of the estimator of your expected value behave as you increase the forecasting horizon.
In the worst case scenario, where there is no sort of homogeneity, every $X_t$ is drawn from a different random variable.
In the best case scenario, the draws are i.i.d.
We are in-between the worst and best case scenario: autocorrelation impacts the type of stationarity a stochastic process displays: the autocovariance function $\gamma(h)$, where $h$ is the time gap between two measurements, characterizes weakly stationary processes. The autocorrelation function is the scale-independent version of the autocovariance function (source, source)
If the mean function $m(t)$ is constant and the covariance function $r(s,t)$ is everywhere finite, and depends only on the time difference $\tau = t - s$, the process $\{X(t), t \in T\}$ is called weakly stationary, or covariance stationary (source)
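As a small illustration of the quantities above, the sample autocovariance $\gamma(h)$ and its scale-free version, the sample autocorrelation, can be estimated from a single realization (a minimal sketch; the divide-by-$n$ convention is one common choice):

```python
def sample_autocov(x, h):
    """Sample autocovariance at lag h (biased, divide-by-n convention)."""
    n = len(x)
    m = sum(x) / n
    return sum((x[t] - m) * (x[t + h] - m) for t in range(n - h)) / n

def sample_autocorr(x, h):
    """Scale-independent version: gamma(h) / gamma(0)."""
    return sample_autocov(x, h) / sample_autocov(x, 0)
```

On a perfectly alternating series the estimator behaves as expected: strong negative dependence at lag 1, positive at lag 2.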
The weakly stationary framework should guide you on how to treat your data.
The key takeaway is that you can not put auto-correlation under the rug - you have to deal with it:
You decrease the granularity of the time mesh: you throw away data points (a coarser grid and a lot less data to train your model), but auto-correlation is still biting you on the dispersion of $E[X_{T+1} | X_T, X_{T-1}, ..]$ and you'll see a lot of variance in your cross-validation
You keep the granularity of the time mesh: sampling, chunking and cross-validation are all much more complex. From a model point of view, you'll have to deal with auto-correlation explicitly.
26,296 | Is autocorrelation in a supervised learning dataset a problem? | Let us take a much simpler example. Say I take random draws from U[0,1] where every even draw is identical to its preceding odd draw, and the odd draws are independent.
The sampling variance of the sample mean from such a draw is TWICE the sampling variance of an i.i.d draw (the well known sigma squared/n). So you see how parameter inference immediately suffers.
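A quick simulation confirms this: the mean of the $2n$ duplicated draws equals the mean of the $n$ independent odd draws, so its sampling variance is $\sigma^2/n = 2\sigma^2/(2n)$, twice that of $2n$ i.i.d. draws (a sketch with made-up sample sizes):

```python
import random

def mean_of(xs):
    return sum(xs) / len(xs)

def var_of(xs):
    m = mean_of(xs)
    return sum((v - m) ** 2 for v in xs) / (len(xs) - 1)

rng = random.Random(0)
n_pairs, reps = 50, 4000
dup_means, iid_means = [], []
for _ in range(reps):
    odd = [rng.random() for _ in range(n_pairs)]       # independent odd draws
    dup = [v for v in odd for _ in range(2)]           # each even draw = its odd twin
    iid = [rng.random() for _ in range(2 * n_pairs)]   # same size, fully independent
    dup_means.append(mean_of(dup))
    iid_means.append(mean_of(iid))

ratio = var_of(dup_means) / var_of(iid_means)   # close to 2, up to sampling noise
```

The empirical ratio of the two sampling variances hovers around 2, matching the analytic argument.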
So when you estimate your MSE from the test set, you require twice as many data points in the set to get to the same precision (in terms of variance) as you would if the test sample had independent draws.
So when you tune your hyperparameters to the validation set, you should be less confident in your choice, as more sampling error now clouds your judgement about one choice being better than the other.
26,297 | What are the differences between Dirichlet regression and log-ratio analysis? | Log-ratio methods are a mathematical transform, whereas Dirichlet regression is a particular probabilistic model.
To better understand the difference, let's think about a common probabilistic model applied to log-ratio transformed data. Applying a multivariate normal model to either Additive Log-Ratio (ALR) or Isometric Log-Ratio (ILR) transformed data is equivalent to applying a multivariate logistic-normal model to the original compositional dataset (i.e., the ALR or ILR transform of a logistic-normal distribution is multivariate normal in the transformed space). Note that there are many different statistical models that can be applied to log-ratio transformed compositional data (Dirichlet regression is a single model).
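For concreteness, here is what the Additive Log-Ratio transform looks like: it maps a $D$-part composition to $D-1$ unconstrained coordinates (taking logs against the last part), where a multivariate normal model can then be applied. A minimal sketch, with function names of my own choosing:

```python
import math

def alr(x):
    """Additive log-ratio: (log(x_1/x_D), ..., log(x_{D-1}/x_D)).
    Maps a D-part composition on the simplex to R^(D-1)."""
    return [math.log(xi / x[-1]) for xi in x[:-1]]

def alr_inv(y):
    """Inverse ALR: softmax of (y_1, ..., y_{D-1}, 0) back onto the simplex."""
    w = [math.exp(v) for v in y] + [1.0]
    s = sum(w)
    return [v / s for v in w]
```

The transform is invertible, so nothing is lost: fitting a normal model to `alr(x)` and mapping predictions back with `alr_inv` is exactly the logistic-normal model on the original composition.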
Now a good question becomes: what is the difference between the Dirichlet distribution and the logistic-normal distribution? The Dirichlet distribution (and Dirichlet regression by extension) assumes that the compositional parts (the variables) are independent except for the sum constraint. On the other hand, the logistic-normal distribution allows for covariation between the parts in addition to the sum constraint. In this sense the logistic-normal distribution is a more flexible distribution that is often better able to capture the covariation between variables that may be of interest to a researcher.
That said, the logistic-normal distribution does not allow for complete independence between the parts as the Dirichlet distribution does (although it can get close enough for many approximations).
Again, log-ratio methods are a data transform, not a statistical model. There are many, many different models that can do everything from mixed-effects modeling to hypothesis testing, etc. In addition, both logistic-normal regression and Dirichlet regression can do all the things you are discussing as well. The key difference between logistic-normal regression and Dirichlet regression is whether you want to assume some level of dependence between the variables or you want complete independence between the variables (excluding the dependence that occurs due to the sum constraint of compositional data).
Dirichlet regression - I would do a Google search and find some papers that discuss it. Here is a paper discussing the DirichletReg package for R. This appears to be the whitepaper for that package. With regards to compositional data analysis, I would recommend Modeling and Analysis of Compositional Data by Pawlowsky-Glahn, Egozcue, and Tolosana-Delgado. It is a really wonderful book. For a very applied book, see Analyzing Compositional Data in R by van den Boogaart and Tolosana-Delgado.
26,298 | What are the differences between Dirichlet regression and log-ratio analysis? | What are the main differences in assumptions between these two models? When should you prefer one above the other?
The Dirichlet assumes a negative correlation structure, whereas the LR does not. The Dirichlet is not a member of the linear exponential family, hence it is not robust under model misspecification. This of course applies more to the standard errors of the estimates. So far, I have seen that either method produces a comparable fit.
Are there any "methods" that one topic allows which the other doesn't? My current data set has multiple independent variables (both factors and continuous), and I would like to model both fixed and random effects, and then do parameter estimation, test hypotheses, find confidence intervals, etc.
For random effects, if you know what to do, then go ahead with Dirichlet. Otherwise, you might want to stick with LR.
What are the best resources to learn these two topics from? The log-ratio analysis seems to be the topic of many books, but on the other hand, Dirichlet regression seems to be mainly covered in small lecture notes (20-30 pages).
There is a book, Dirichlet distributions and beyond. And then, there are papers about Dirichlet regressions.
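The negative correlation structure mentioned in the first answer can be seen directly by simulation: because the components of a Dirichlet draw sum to one, every pairwise correlation is negative. A minimal numpy sketch (the concentration parameters `alpha` and the sample size are arbitrary illustrative choices):

```python
# Illustration of the Dirichlet's built-in negative correlation structure:
# components of a composition must sum to 1, so any pair is negatively
# correlated. The concentration parameters alpha are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([2.0, 3.0, 5.0])            # concentration parameters
samples = rng.dirichlet(alpha, size=10_000)  # each row sums to 1

corr = np.corrcoef(samples, rowvar=False)
off_diag = corr[~np.eye(3, dtype=bool)]      # the six off-diagonal entries
print(off_diag)                              # all pairwise correlations < 0
```

This is exactly the constraint that the log-ratio approach sidesteps by modelling ratios of components instead of the components themselves.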
26,299 | Should we normalize before using VarianceThreshold in sklearn? | Yes, one should normalize before using VarianceThreshold. This is necessary to bring all the features to the same scale; otherwise the variance estimates can be misleading between high-valued and low-valued features. VarianceThreshold does not scale by itself, so one must do it beforehand, e.g. with MinMaxScaler or StandardScaler from scikit-learn.
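The workflow from this answer can be sketched as a two-step pipeline: scale first, then threshold on variance. The synthetic data and the 0.01 threshold below are illustrative assumptions, not part of the original answer:

```python
# Sketch of the suggested workflow: scale to a common range, then apply
# VarianceThreshold. Data and the 0.01 threshold are illustrative.
import numpy as np
from sklearn.feature_selection import VarianceThreshold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.normal(0.0, 100.0, 200),  # large-valued, genuinely variable feature
    rng.normal(0.0, 1.0, 200),    # small-valued but also variable feature
    np.full(200, 5.0),            # constant feature -> should be dropped
])

pipe = Pipeline([
    ("scale", MinMaxScaler()),                     # bring features to [0, 1]
    ("select", VarianceThreshold(threshold=0.01))  # drop near-constant columns
])
X_sel = pipe.fit_transform(X)
print(X_sel.shape)  # (200, 2): only the constant column is removed
```

Without the scaling step, the raw variances would differ by four orders of magnitude simply because of the units, and a single threshold could not treat the columns fairly.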
26,300 | Should we normalize before using VarianceThreshold in sklearn? | The features must have the same units, therefore scaling is necessary (for example, reduce the range to [0, 1] with MinMaxScaler).
Standardizing, for example with StandardScaler (removing the mean and scaling to unit variance), is wrong here, because then of course every feature has variance 1.
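This answer's objection is easy to verify numerically: after StandardScaler every column has variance exactly 1, so VarianceThreshold can no longer discriminate between columns, whereas after MinMaxScaler the variances still differ. The synthetic features below (a spread-out uniform column and an almost-constant spike column) are illustrative assumptions:

```python
# Checking the claim: StandardScaler forces every feature's variance to 1,
# while MinMaxScaler preserves meaningful variance differences.
# The two synthetic features are illustrative.
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

rng = np.random.default_rng(42)
X = np.column_stack([
    rng.uniform(0.0, 100.0, 500),                     # spread-out feature
    rng.choice([0.0, 1000.0], 500, p=[0.99, 0.01]),   # almost-constant feature
])

var_std = StandardScaler().fit_transform(X).var(axis=0)
var_mm = MinMaxScaler().fit_transform(X).var(axis=0)
print(var_std)  # ~[1. 1.]: information about spread is gone
print(var_mm)   # clearly unequal: only the first column passes, say, 0.05
```

So a variance threshold applied after StandardScaler would either keep everything or drop everything, which defeats the purpose of the filter.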