Why do we say EM is a partially non-Bayesian method?
EM is based on a demarginalisation of the (standard or observed) likelihood
$$L^\text{o}(\theta|\mathbf x)=\int_{\mathfrak Z} L^\text{c}(\theta|\mathbf x,\mathbf z)\,\text d\mathbf z \tag{1}$$
introducing a latent variable $\mathbf Z$ to simplify the representation of the observed likelihood $L^\text{o}(\theta|\mathbf x)$ as the completed likelihood $L^\text{c}(\theta|\mathbf x,\mathbf z)$, but requiring (pseudo-) inference on $\mathbf Z$ on the side. This inference is somewhat Bayesian in the sense that it uses the conditional distribution of $\mathbf Z$ given $\mathbf X=\mathbf x$ and the (current value of the) parameter $\theta$. Indeed, in the E step of the EM algorithm, a conditional expected log-likelihood is computed
$$Q(\theta^{(t)},\theta|\mathbf x) = \mathbb E_{\theta^{(t)}} [\log L^\text{c}(\theta|\mathbf x,\mathbf Z) |\mathbf x ] \tag{2}$$
where the conditional expectation is against the conditional distribution of $\mathbf Z$ given the observation $\mathbf X=\mathbf x$ and $\theta=\theta^{(t)}$. However, the setting is not Bayesian in that
- while somewhat free, the "prior" distribution on $\mathbf Z$ is constrained by (1);
- there is no prior distribution on $\theta$ in EM, and $\theta$ is never considered as a random variable by the EM algorithm;
- EM results in finding a local mode of the observed likelihood, free from any prior input, and does not produce an inference on $\mathbf Z$.
Another analogy can be found with Gibbs sampling, or more specifically with data augmentation (Tanner & Wong, 1987), in that one iteration of Gibbs sampling looks like one iteration of the EM algorithm:
- simulate $\mathbf z^{(t)}$ from $f(\cdot|\mathbf x,\theta^{(t)})$ versus compute (2) under $f(\cdot|\mathbf x,\theta^{(t)})$, which often amounts to computing $\mathbb E[\mathbf z^{(t)}|\mathbf x,\theta^{(t)}]$ (or even to simulating $\mathbf z^{(t)}$ from $f(\cdot|\mathbf x,\theta^{(t)})$ in the MCEM version of Celeux and Diebolt (1980));
- simulate $\theta^{(t+1)}$ from $\pi(\cdot|\mathbf x,\theta^{(t)})$ versus maximise (2) in $\theta$, which results in $\theta^{(t+1)}$.
As a last remark, let me point out that EM can be used in theory to find a MAP estimator associated with a prior density $\pi(\theta)$ by switching from $Q(\theta^{(t)},\theta|\mathbf x)$ in (2) to
$$Q(\theta^{(t)},\theta|\mathbf x)+\log \pi(\theta)$$
in the E step, to be maximised in the M step. (The argument showing that EM increases the target at each iteration also applies there.)
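To make the E and M steps concrete, here is a minimal Python sketch of EM for a two-component Gaussian mixture with unit component variances (the data, starting values, and function names are invented for illustration; the E step computes the responsibilities that define the expectation in (2), and the M step maximises the resulting $Q$). Note that no prior on $\theta$ appears anywhere, matching the points above; adding $\log\pi(\theta)$ to the quantity maximised in the M step would give the MAP variant.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic data from a two-component mixture with unit variances
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])

def em_mixture(x, mu, w=0.5, iters=50):
    """EM for the mixture w*N(mu[0],1) + (1-w)*N(mu[1],1)."""
    mu = np.asarray(mu, dtype=float)
    for _ in range(iters):
        # E step: conditional distribution of Z given x and theta^(t)
        p1 = w * np.exp(-0.5 * (x - mu[0]) ** 2)
        p2 = (1 - w) * np.exp(-0.5 * (x - mu[1]) ** 2)
        r = p1 / (p1 + p2)              # P(Z_i = 1 | x_i, theta^(t))
        # M step: maximise Q(theta^(t), theta | x) in theta
        w = r.mean()
        mu = np.array([np.sum(r * x) / np.sum(r),
                       np.sum((1 - r) * x) / np.sum(1 - r)])
    return w, mu

w_hat, mu_hat = em_mixture(x, mu=[-1.0, 1.0])
```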
Why do we say EM is a partially non-Bayesian method?
In full Bayesian inference, you go from a prior distribution over parameter values, $P(\theta)$, to a posterior distribution given the data, $P(\theta | x)$. With Expectation Maximisation, you go from a prior distribution to a point estimate $\hat \theta$ of the most probable posterior value of $\theta$ given the data and the prior (the Maximum a Posteriori, or MAP, value). Since this is a point estimate, and not a full posterior distribution, it's not a fully Bayesian method.
Statistics Help: Difference between Differences?
It's a little risky to answer without better understanding your use case, but assuming iid nested data within columns and testing the hypotheses:
$$
H_0: \mu_A - \mu_B = \mu_C - \mu_D, \\
H_1: \mu_A - \mu_B \ne \mu_C - \mu_D,
$$
you could appeal to the central limit theorem and construct a Wald test, comparing
$$
Z = \frac{(\bar{x_A} - \bar{x_B}) - (\bar{x_C} - \bar{x_D})}{\sqrt{s_A^2/n_A +s_B^2/n_B +s_C^2/n_C +s_D^2/n_D}}
$$
to a $N(0, 1)$ distribution (i.e., rejecting at $\alpha=.05$ when $|Z| > 1.96$).
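A sketch of this computation in Python (the four columns, their means, and sample sizes below are invented for illustration; the true difference-in-differences is zero here, matching $H_0$):

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical columns A..D with equal differences (2 - 2 = 0 under H0)
a, b = rng.normal(5.0, 2.0, 80), rng.normal(3.0, 2.0, 90)
c, d = rng.normal(7.0, 2.0, 70), rng.normal(5.0, 2.0, 100)

def diff_in_diff_wald(a, b, c, d):
    """Wald Z statistic for H0: (mu_A - mu_B) = (mu_C - mu_D)."""
    est = (a.mean() - b.mean()) - (c.mean() - d.mean())
    # standard error combines the four per-column variance estimates
    se = np.sqrt(sum(s.var(ddof=1) / s.size for s in (a, b, c, d)))
    return est / se

z = diff_in_diff_wald(a, b, c, d)
reject = abs(z) > 1.96          # two-sided test at alpha = 0.05
```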
Understanding different Monte Carlo approximation notations
Yes, the formula you provide converges to the true answer for an arbitrary probability distribution $g(x)$ given infinitely many sample points. The problem is that you don't want to wait infinitely long. So a more interesting question is whether it is likely to converge to a value close to the true value given a finite number of samples, and here the answer depends on the distribution of $f(x)$ in space. For distributions of $f(x)$ that are more or less uniform on the domain of interest, basic MC sampling works very well. However, if the majority of the mass of $f(x)$ is concentrated in a small region, especially in higher dimensions, basic MC is completely infeasible. This problem is actually relatively frequent in real life, where $f(x)$ is a narrow multidimensional Gaussian: MC sampling over a cube containing that Gaussian is a very bad idea in high dimensions.
In order to solve this problem, people have designed many methods to "sample where it matters". The simplest of these is so-called importance sampling. The idea is that you have prior knowledge of how $f(x)$ might be distributed, and you sample using some compromise between $g(x)$ and that prior distribution, but then you also have to correct the resulting answer to adjust for the fact that you were not sampling exactly from $g(x)$. That is the last formula you have provided. The middle formula I have not seen before.
Finally, importance sampling depends on the prior. Even in the absence of a prior, it is possible to do better than basic MC by adaptively finding the sampling distribution; however, this is an actively researched open problem.
So, to summarize, there are multiple formulas for MC that all work for arbitrary $f(x)$ and $g(x)$ but have different convergence speeds and are thus better or worse in specific scenarios.
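The importance-sampling correction described above can be sketched on a classic case where basic MC fails, a tail probability (the shifted proposal $h=\mathcal N(4,1)$ below is a choice made purely for this illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Target: E_g[f(X)] with g = N(0,1) and f(x) = 1{x > 4}, i.e. P(X > 4).
# Basic MC from g almost never lands in the tail.
plain = np.mean(rng.standard_normal(N) > 4)

# Importance sampling: draw from h = N(4,1), which covers the tail,
# then correct each point by the weight g(x)/h(x).
y = rng.normal(4.0, 1.0, N)
w = np.exp(-0.5 * y**2) / np.exp(-0.5 * (y - 4.0) ** 2)  # g/h, constants cancel
is_est = np.mean((y > 4) * w)
# is_est should sit near the true value P(X > 4) = 3.167e-5,
# while `plain` is dominated by a handful of lucky hits (or none at all)
```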
Understanding different Monte Carlo approximation notations
In probabilistic terms, the Monte Carlo method (or its justification) is called the Law of Large Numbers. The convergence
$$\frac{1}{N} \sum_{i=1}^N f(X_i) \stackrel{\text{a.s.}}{\to} \mathbb E_g[f(X)]\tag{1}$$ does not assume anything but iid-ness of the $X_i$'s and the existence of the expectation.
A more precise characterisation of the convergence requires further properties of the pair $(f,g)$. For instance, the variance of the l.h.s. of (1) goes to zero with $N$ provided the variance $\text{var}_g(f(X))$ exists (in dimension one). The variance then decreases as $\text{O}(1/N)$, i.e. the error decreases as $\text{O}(1/\sqrt{N})$, no matter what the dimension of $X$ and no matter which Monte Carlo method is used.
The second part of the question alludes to other forms of Monte Carlo approximations. They are a consequence of the non-identifiability of the pair $(f,g)$ in the integral $$\mathfrak I=\int_A f(x)g(x)\,\text{d}x$$ which can equally be written as $$\mathfrak I=\int_A \frac{f(x)g(x)}{h(x)}\, h(x)\,\text{d}x$$ for an arbitrary density $h$ with support including $A$ (i.e. positive over $A$). Due to this lack of identifiability, the choice of $h$ is mostly free, and the optimal choice of $h$ is
$$h^\star(x) = \dfrac{|f(x)|g(x)}{\int_A |f(x)|g(x)\text{d}x}$$
as it achieves the minimal variance. This variance is zero when $f$ is non-negative (or non-positive) over the whole set $A$. Obviously, in practice, this choice of $h$ is unavailable but it explains why simulating from $g$ is rarely the optimal choice.
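The zero-variance property of $h^\star$ for non-negative $f$ can be verified on a small discrete example (a sketch; the uniform $g$ and linear $f$ below are invented for illustration, and sampling from $h^\star$ requires knowing $\mathfrak I$, which is exactly why the optimal choice is unavailable in practice):

```python
import numpy as np

rng = np.random.default_rng(0)
xs = np.arange(1, 11)
g = np.full(10, 0.1)        # g uniform on {1,...,10}
f = xs.astype(float)        # f(x) = x >= 0, so I = E_g[f(X)] = 5.5

I = np.sum(f * g)           # the target integral (sum), known here
h_star = f * g / I          # optimal proposal h*(x) = f(x)g(x)/I
h_star /= h_star.sum()      # guard against floating-point drift

draws = rng.choice(xs, size=1000, p=h_star)
# every weighted term f(x)g(x)/h*(x) equals I: zero-variance estimator
terms = f[draws - 1] * g[draws - 1] / h_star[draws - 1]
est = terms.mean()
```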
Estimating Pi using Monte Carlo simulations but without using fractional numbers
Trivial -- but magical:
BBP <- function(n = 13) {
sum(sapply(seq_len(n), function(k) {
((sample.int(8*k+1, 1) <= 4) -
(sample.int(8*k+4, 1) <= 2) -
(sample.int(8*k+5, 1) <= 1) -
(sample.int(8*k+6, 1) <= 1)) / 16^k
})) + (4 - 2/4 - 1/5 - 1/6)
}
As you can see in this R code, only rational arithmetic operations (comparison, subtraction, division, and addition) are performed on the results of a small number of draws of integral values using sample.int. By default, only $13*4=52$ draws are made (of values never greater than $110$) -- but the expected value of the result is $\pi$ to full double-precision!
Here is a sample run of $10,000$ iterations (requiring one second of time):
x <- replicate(1e4, BBP())
mu <- mean(x)
se <- sd(x) / sqrt(length(x))
signif(c(Estimate=mu, SE=se, Z=(mu-pi)/se), 4)
Its output is
Estimate SE Z
3.1430000 0.0004514 2.0870000
In other words, this (random) estimate of $\pi$ is $3.143\pm 0.00045$ and the smallish Z-value of $2.08$ indicates this doesn't deviate significantly from the true value of $\pi.$
This is trivial because, as I hope the code makes obvious, calculations like sample.int(b,1) <= a (when the integer a does not exceed b) are just stupid ways to estimate the rational fractions a/b. Thus, this code estimates the Bailey-Borwein-Plouffe formula
$$\pi = \sum_{k=0}^\infty \frac{1}{16^k}\left(\frac{4}{8k+1}-\frac{2}{8k+4}-\frac{1}{8k+5}-\frac{1}{8k+6}\right)$$
by expressing the $k=0$ term explicitly and sampling all subsequent terms through $k=13.$ Since each term in the formula introduces $4$ additional bits in the binary expansion of $\pi,$ terminating the sampling at this point gives $4*(13)=52$ bits after the binary point, which is slightly more than the maximal $52$ total bits of precision available in the IEEE double precision floats used by R.
Although we could work out the variance analytically, the previous example already gives us a good estimate of it: the standard error was only $0.00045,$ corresponding to a variance of $0.002$ per iteration.
var(x)
[1] 0.002037781
Thus, if you would like to use BBP to estimate $\pi$ to within a standard error of $\sigma,$ you will need approximately $0.002/\sigma^2$ iterations. For example, estimating $\pi$ to six decimal places in this manner will require around two billion iterations (about three days of computation).
One way to reduce the variance (greatly) would be to compute a few more of the initial terms in the BBP sum once and for all, using Monte Carlo simulation only to estimate the least significant bits of the result :-).
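That last suggestion is easy to try. Here is a Python sketch mirroring the R function above, in which the first few terms of the BBP sum are computed exactly and only the remaining terms are sampled (the cutoff `m = 4` and the function names are arbitrary illustrative choices):

```python
import math
import random

def bbp_term(k):
    """Exact k-th term of the Bailey-Borwein-Plouffe series for pi."""
    return (4/(8*k+1) - 2/(8*k+4) - 1/(8*k+5) - 1/(8*k+6)) / 16**k

def bbp_mc(rng, m=4, n=13):
    """Terms 0..m-1 computed exactly; terms m..n estimated by integer draws,
    where randrange(b) < a is an unbiased Bernoulli estimate of a/b."""
    exact = sum(bbp_term(k) for k in range(m))
    noisy = 0.0
    for k in range(m, n + 1):
        noisy += ((rng.randrange(8*k+1) < 4)
                  - (rng.randrange(8*k+4) < 2)
                  - (rng.randrange(8*k+5) < 1)
                  - (rng.randrange(8*k+6) < 1)) / 16**k
    return exact + noisy
```

Because the leading (most variable) terms are now deterministic, the per-iteration variance is dominated by the $k=m$ term, i.e. reduced by a factor of roughly $16^{2m}$ relative to sampling everything.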
Estimating Pi using Monte Carlo simulations but without using fractional numbers
A simple method would be to generate a pair of integers from $[0,M)$ and check whether the point lies inside the quadrant: let the numbers be denoted $m_1, m_2$; if $m_1^2+m_2^2<M^2$, the point is inside the quadrant. If the numbers were continuous, the probability would be $\pi/4$, so increasing $M$ increases the precision of the estimate. Below is a Python one-liner:
import numpy as np

N = int(1e8)
M = int(1e9)
4 * np.mean(np.sum(np.random.randint(0, M, (2, N))**2, axis=0) < M**2)
Estimating Pi using Monte Carlo simulations but without using fractional numbers
To produce the probability $1/\pi$, the following algorithm can be used (Flajolet et al. 2010), which is based on a series expansion by Ramanujan:
1. Set $t$ to 0.
2. Flip two fair coins. If both show heads, add 1 to $t$ and repeat this step. Otherwise, go to step 3.
3. Flip two fair coins. If both show heads, add 1 to $t$ and repeat this step. Otherwise, go to step 4.
4. With probability 5/9, add 1 to $t$. (For example, generate a uniform random integer in [1, 9], and if that integer is 5 or less, add 1 to $t$.)
5. Flip a fair coin $2t$ times, and return 0 if heads showed more often than tails or vice versa. Do this step two more times.
6. Return 1.
Run the algorithm above repeatedly until it returns 1, and let $X$ be the number of runs including the last one; the expected value of $X$ is then $\pi$.
Note that the algorithm doesn't involve fractions at all.
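A direct transcription of these steps into Python, using only coin flips and integer draws in the spirit of the algorithm (the function names are invented for this sketch):

```python
import random

def bernoulli_one_over_pi(rng):
    """One Bernoulli(1/pi) draw following the quoted steps."""
    t = 0
    for _ in range(2):                      # steps 2 and 3: geometric(1/4) counts
        while rng.getrandbits(2) == 0:      # two fair coins both heads: prob 1/4
            t += 1
    if rng.randrange(9) < 5:                # step 4: probability 5/9
        t += 1
    for _ in range(3):                      # step 5, performed three times
        heads = sum(rng.getrandbits(1) for _ in range(2 * t))
        if heads != t:                      # heads beat tails or vice versa
            return 0
    return 1                                # step 6

def estimate_pi(runs, seed=0):
    """Average of X over many repetitions; E[X] = pi."""
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        x = 1                               # number of runs until the first 1
        while bernoulli_one_over_pi(rng) == 0:
            x += 1
        total += x
    return total / runs
```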
See also: https://math.stackexchange.com/questions/4189867/obtaining-irrational-probabilities
REFERENCES:
Flajolet, P., Pelletier, M., Soria, M., "On Buffon machines and numbers", arXiv:0906.5560 [math.PR], 2010.
Variance and asymptotic normality of $\frac{1}{n-1}\sum_{i=1}^{n-1}(x_{i+1}-x_i)^2$, where $X \sim \mathcal{N}(0,1)$
TL;DR: $s(z)$ is asymptotically normal, and its variance is $\frac {12} {n-1}$ according to the CLT for Markov chains. It can be shown that the distribution is a special case of the generalized $\chi^2$ distribution.
Markov Chain approach, asymptotics and variance
The sequence $z_i$ is stationary and $1$-dependent ($z_i$ and $z_k$ are independent whenever $|i-k|>1$), so a CLT for dependent sequences, such as the Markov chain CLT, is applicable. Here's how we apply it.
A difference (or any linear combination) of independent normal r.v.s is itself normal. Knowing that $z_i\sim\mathcal N(0,2)$, i.e. $z_i\sim\sqrt 2\,\mathcal N(0,1)$, we know that $z_i^2\sim 2\,\chi^2_1$ (see the definition of the $\chi^2$ distribution). Thus, $\sigma_z^2=\operatorname{var}[z_i^2]=2^2\times 2=8$.
Markov chain CLT states:
$$\sqrt{n-1}(s(z)-\mu)\sim\mathcal N(0,\sigma^2),$$
where $\mu=E[z_i^2]$ and $\sigma^2 = \sigma_z^2 + 2\sum_{k=1}^\infty \operatorname{cov}( z_{1}^2, z_{1+k}^2)=8+2\times 2=12$.
Hence, asymptotically, $\operatorname{var}[s(z)]\approx\frac{12}{n-1}$.
Here's the proof by simulation (Python):
import numpy as np
n = 51
s = np.mean(np.diff(np.random.randn(10000,n))**2,axis=1)
var_s = np.var(s)   # avoid shadowing the built-in vars()
print(var_s)
print(12/(n-1))
Output:
0.23526746023519335
0.24
Note, that if $z_i^2$ weren't correlated then $s(z)$ would have been from scaled $\chi^2$ distribution with variance $\frac 8 {n-1}$. However, due to overlapping terms $x_i$ in $z_i$ and $z_{i+1}$ we had to apply modified CLT to obtain the asymptotic distribution of $s(z)$.
Acknowledgements:
My initial answer, which I updated a few times already, did not account for correlation, which was pointed out by @Sextus Empiricus. Also, I used this answer for $\operatorname{cov}( z_{1}^2, z_{1+k}^2)$, where the correlation $\rho=\operatorname{corr}[z_i,z_{i+1}]=-1/2$ and we know that correlation disappears between $z_i$ and $z_j$ when $|i-j|>1$.
The distribution
Let's start with a row vector of independent randoms $X'=(x_1,\dots,x_n)$. We get the row vector of differences $Z'$ by applying a Toeplitz matrix $B'$ as follows: $Z'=X'B'$, where
$$B' = \begin{bmatrix} {-1} & 0&\dots & 0 &0\\
1 & -1 & \dots&0 & 0 \\
0 & {1}& \dots & 0& 0\\
\vdots & \vdots & \vdots & \vdots & \vdots \\
0& 0 & \dots & 1 & -1 \\
0& 0 & \dots & 0 & {1}
\end{bmatrix}$$
Your quantity then is a quadratic form
$$s(z)=\frac 1 {n-1} X'B'BX$$
where $B'B$ takes the form of a tridiagonal near-Toeplitz matrix, with diagonal $(1,2,\dots,2,1)$ and off-diagonal entries $-1$ (the Laplacian of a path graph).
Applying the eigendecomposition $B'B=P'\Lambda P$, we have:
$$s(z)=\frac 1 {n-1} X'P'\Lambda PX=\frac 1 {n-1} Y'\Lambda Y$$
where $Y=PX\sim\mathcal N(0,I_{n-1})$, i.e. each $Y_i$ (principal component) is an independent normal.
Hence, $$s(z)=\frac 1 {n-1} \sum_{i=1}^{n-1}\lambda_i Y_i^2$$
where $Y_i^2\sim\chi^2_1$ and $\lambda_i$ are eigenvalues. The eigenvalues of tridiagonal Toeplitz matrices are known to form a sine wave and are easy to find; see "The Eigenproblem of a Tridiagonal P-Toeplitz Matrix" by Gover.
So the distribution can be seen as a linear combination of $\chi^2$ variables or a generalized $\chi^2$ distribution.
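The variance can be cross-checked from this quadratic-form representation: since $\operatorname{var}[Y_i^2]=2$, we have $\operatorname{var}[s(z)]=2\sum_i\lambda_i^2/(n-1)^2$. The following sketch compares this exact value with the asymptotic $12/(n-1)$, assuming (as stated above) that $B'B$ is the tridiagonal matrix with diagonal $(1,2,\dots,2,1)$ and off-diagonal entries $-1$:

```python
import numpy as np

def path_laplacian(n):
    """B'B for the differencing matrix: tridiagonal, diag (1,2,...,2,1), off-diag -1."""
    M = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    M[0, 0] = M[-1, -1] = 1
    return M

n = 51
lam = np.linalg.eigvalsh(path_laplacian(n))
# var[s(z)] = 2 * sum(lambda_i^2) / (n-1)^2 since each Y_i^2 has variance 2
exact_var = 2 * np.sum(lam**2) / (n - 1) ** 2
# exact_var is close to the asymptotic 12/(n-1) = 0.24,
# and agrees with the simulated value 0.235 from the earlier snippet
```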
Miscellaneous
We can define a row vector $V'=(x_1,z_1,\dots,z_{n-1})$; it is obtained by applying a matrix $D'$ to the original observations, $V'=X'D'$, and the matrix $B'$ above is a subset of the columns of $D'$.
We can get the matrix $U'$ that recovers the original vector from $V$ as follows: $X'=V'U'$, with $U'=D'^{-1}$. The matrix $U'$ is upper unit triangular, meaning $u_{ij}=1_{i\ge j}$.
The matrix $A=U'U$, which appears in the quadratic form, has a very interesting structure: $a_{ij} = n+1-\min(i,j)$.
|
Variance and asymptotic normality of $\frac{1}{n-1}\sum_{i=1}^{n-1}(x_{i+1}-x_i)^2$, where $X \sim \
|
TLDR; $s(z)$ is asymptotically normal, and its variance is $\frac {12} {n-1}$ according to CLT for Markov chains. It can be shown that the distribution is a special case of generalized $\chi^2$ distri
|
Variance and asymptotic normality of $\frac{1}{n-1}\sum_{i=1}^{n-1}(x_{i+1}-x_i)^2$, where $X \sim \mathcal{N}(0,1)$
TLDR; $s(z)$ is asymptotically normal, and its variance is $\frac {12} {n-1}$ according to CLT for Markov chains. It can be shown that the distribution is a special case of generalized $\chi^2$ distribution.
Markov Chain approach, asymptotics and variance
The sequence $z_i$ is Markov chain because once you know $z_i$ the value of $z_{i+1}$ doesn't depend on $z_k$ where $k<i$. Therefore, Markov chain CLT is applicable. Here's how we apply it.
The sum, or any linear combination of normal r.v.s, is a r.v. itself. Knowing that $z_i\sim\mathcal N(0,2)$ or $z_i\sim\sqrt 2\space \mathcal N(0,1)$, we know that $z_i^2\sim 2\space\chi^2_1$, see the definition of $\chi^2$ distribution. Thus, $\sigma_z^2=\operatorname{var}[z_i^2]=2^2\times 2=8$.
Markov chain CLT states:
$$\sqrt{n-1}(s(z)-\mu)\sim\mathcal N(0,\sigma^2),$$
where $\mu=E[z_i^2]$ and $\sigma^2 = \sigma_z^2 + 2\sum_{k=1}^\infty \operatorname{cov}( z_{1}^2, z_{1+k}^2)=8+2\times 2=12$.
Hence $\operatorname{var}[s(z)]=\frac{12}{n-1}$
Here's the proof by simulation (Python):
import numpy as np
n = 51
s = np.mean(np.diff(np.random.randn(10000,n))**2,axis=1)
vars = np.var(s)
print(vars)
print(12/(n-1))
Output:
0.23526746023519335
0.24
Note that if the $z_i^2$ weren't correlated, then $s(z)$ would have come from a scaled $\chi^2$ distribution with variance $\frac 8 {n-1}$. However, due to the overlapping terms $x_i$ in $z_i$ and $z_{i+1}$, we had to apply the modified CLT to obtain the asymptotic distribution of $s(z)$.
Acknowledgements:
My initial answer, which I updated a few times already, did not account for correlation, which was pointed out by @Sextus Empiricus. Also, I used this answer for $\operatorname{cov}( z_{1}^2, z_{1+k}^2)$, where the correlation $\rho=\operatorname{corr}[z_i,z_{i+1}]=-1/2$ and we know that correlation disappears between $z_i$ and $z_j$ when $|i-j|>1$.
The distribution
Let's start with a row vector of independent standard normal variables $X'=(x_1,\dots,x_n)$. We get the row vector of differences $Z'$ by applying a Toeplitz matrix $B'$ as follows: $Z'=X'B'$, where
$$B' = \begin{bmatrix} {-1} & 0&\dots & 0 &0\\
1 & -1 & \dots&0 & 0 \\
0 & {1}& \dots & 0& 0\\
\vdots & \vdots & \vdots & \vdots & \vdots \\
0& 0 & \dots & 1 & -1 \\
0& 0 & \dots & 0 & {1}
\end{bmatrix}$$
Your quantity then is a quadratic form
$$s(z)=\frac 1 {n-1} X'B'BX$$
where $B'B$ has the form of a tridiagonal Toeplitz matrix.
Let's apply eigen decomposition $B'B=P'\Lambda P$ then we have:
$$s(z)=\frac 1 {n-1} X'P'\Lambda PX=\frac 1 {n-1} Y'\Lambda Y$$
where $Y=PX\sim\mathcal N(0,I_{n-1})$, i.e. each $Y_i$ (principal component) is an independent normal.
Hence, $$s(z)=\frac 1 {n-1} \sum_{i=1}^{n-1}\lambda_i Y_i^2$$
where $Y_i^2\sim\chi^2_1$ and $\lambda_i$ are the eigenvalues. The eigenvalues of tridiagonal Toeplitz matrices are known to follow a sine function and are easy to find; see "The Eigenproblem of a Tridiagonal P-Toeplitz Matrix" by Gover.
So the distribution can be seen as a linear combination of $\chi^2$ variables or a generalized $\chi^2$ distribution.
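This decomposition is easy to check numerically. Below is a Python sketch (assuming NumPy; the sample sizes are arbitrary) that compares the closed-form cosine eigenvalues with a numerical eigendecomposition of the $(n-1)\times(n-1)$ tridiagonal matrix $\operatorname{tridiag}(-1,2,-1)$ (the covariance matrix of the differences), and verifies by Monte Carlo that $s(z)$ and the weighted sum of $\chi^2_1$ variables have matching variances:

```python
import numpy as np

n = 6  # number of observations x_1, ..., x_n

# (n-1) x (n-1) tridiagonal matrix with 2 on the diagonal and -1 off it:
# the covariance matrix of the differences z_i = x_{i+1} - x_i
M = 2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)

# Numerical eigenvalues vs the closed-form cosine formula (same set of values)
lam_num = np.sort(np.linalg.eigvalsh(M))
lam_cos = np.sort(2 + 2 * np.cos(np.arange(1, n) * np.pi / n))
print(np.allclose(lam_num, lam_cos))  # True

# Monte-Carlo check: s(z) and (1/(n-1)) * sum_i lam_i * Y_i^2 have equal variance
rng = np.random.default_rng(0)
x = rng.standard_normal((200_000, n))
s_z = np.mean(np.diff(x, axis=1) ** 2, axis=1)
y2 = rng.standard_normal((200_000, n - 1)) ** 2
s_y = np.mean(lam_cos * y2, axis=1)
print(np.isclose(np.var(s_z), np.var(s_y), rtol=0.05))  # True
```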
Miscellaneous
We can define a row vector $V'=(x_1,z_1,\dots,z_{n-1})$; it can be obtained by applying a matrix $D'$ to the original observations, $V'=X'D'$, and the matrix $B'$ above is a subset of the columns of $D'$.
We can get the matrix $U'$ that recovers the original vector from $V$ as follows: $X'=V'U'$, and $U'=D'^{-1}$.
The matrix $U'$ is upper unit triangular, meaning $u_{ij}=1_{i\le j}$.
Matrix $A=U'U$, which appears in the quadratic form, has a very interesting form: $a_{ij} = n+1-\min(i,j)$.
|
39,610
|
Variance and asymptotic normality of $\frac{1}{n-1}\sum_{i=1}^{n-1}(x_{i+1}-x_i)^2$, where $X \sim \mathcal{N}(0,1)$
|
Small pedantic note: Below I changed the coefficient into $1/\sqrt{n-1}$ otherwise the limiting distribution will be a degenerate distribution (zero variance). In that case one would also need to subtract the mean of the $z_i^2$. That means, only a scaled and shifted sum like $\sum_{i=1}^{n-1} \frac{(x_{i+1}-x_i)^2-1}{\sqrt{n-1}}$ will approach a normal distribution.
Distribution of $s(\mathbf{z})$ as a linear sum of chi-squared variables
The sum $s(\mathbf{z}) = \frac{1}{\sqrt{n-1}}\sum_{i=1}^{n-1}(z_i)^2$ is similarly distributed as the sum $s(\mathbf{y}) = \frac{1}{\sqrt{n-1}}\sum_{i=1}^{n-1}(y_i)^2$ where the $y_i$ are $n-1$ independent normally distributed variables with variance $\lambda_i = 2 + 2 \cos(\frac{i}{n}\pi)$ $$s(\mathbf{z}) \sim s(\mathbf{y}) \quad \text{where} \quad y_i \sim N\left(0,\lambda_i \right)$$
Consequences:
The variance of $s(\mathbf{z})$ is equal to $1/(n-1)$ times the sum of the variances of the individual terms $y_i^2$ (which relate to scaled $\chi_{(1)}^2$ distributions or to gamma distributions).
For the individual terms which are the squares of normal distributed variables we have
$$\begin{array}{rcl}
\text{var}(y_i^2) &=& 2 \text{var}(y_i)^2 \\
&=&2\left( 2 + 2 \cos\left(\frac{i}{n}\pi\right)\right)^2 \\
&=& 12 + 4\cos\left(2\frac{i}{n}\pi\right) + 16 \cos\left(\frac{i}{n}\pi\right) \end{array}$$
and for the sum
$$\begin{array}{rcl}
\text{var}[s(\mathbf{z})] &=& \frac{1}{n-1} \sum_{i=1}^{n-1}\left(12 + 4\cos\left(2\frac{i}{n}\pi\right) \overbrace{ + 16 \cos\left(\frac{i}{n}\pi\right)}^{\substack{\text{these terms cancel}\\\text{ due to symmetry}}}\right) \\
&=& \frac{1}{n-1} \left( 12(n-1) + 4 \underbrace{\sum_{i=1}^{n-1}\cos\left(2\frac{i}{n}\pi\right)}_{=-1} \right) \\
&=& \frac{12(n-1) -4}{n-1} \;\approx\; 12
\end{array}$$
where we used this to derive that the sum of cosines is equal to -1.
We cannot express the probability density function in closed form, but we can express the cumulants of the distribution, $\kappa_k(s(\mathbf{z}))$, in terms of the cumulants of a single chi-squared variable, $\kappa_k(\chi_{(1)}^2)$. For increasing $n$ the 1st cumulant goes to infinity (so to make the limiting distribution a normal distribution you should not only change the factor $1/(n-1)$ but also subtract the mean), the 2nd cumulant approaches $12$, and the higher-order cumulants approach zero, which means that you approach a normal distribution.
$$\kappa_k(s(\mathbf{z})) = \kappa_k(\chi_{(1)}^2)\frac{1}{\sqrt{n-1}^k} \sum_{i=1}^{n-1} \lambda_i^k \approx \kappa_k(\chi_{(1)}^2)\frac{n-1}{\sqrt{n-1}^k} \int_0^1 ( 2 + 2 \cos(x\pi))^k dx$$
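The cumulant formula can be checked numerically. The Python sketch below (assuming NumPy; `kappa_s` is a helper name introduced here) shows the second cumulant approaching $12$ while the third cumulant shrinks as $n$ grows:

```python
import numpy as np
from math import factorial

def kappa_s(k, n):
    # kappa_k(s(z)) = kappa_k(chi^2_1) * (n-1)^(-k/2) * sum_i lam_i^k,
    # with kappa_k(chi^2_1) = 2^(k-1) * (k-1)!
    lam = 2 + 2 * np.cos(np.arange(1, n) * np.pi / n)
    return 2 ** (k - 1) * factorial(k - 1) * np.sum(lam ** k) / (n - 1) ** (k / 2)

# Second cumulant tends to 12; third cumulant decreases toward 0
for n in (10, 100, 10_000):
    print(n, round(kappa_s(2, n), 3), round(kappa_s(3, n), 3))
```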
Maybe there is a more direct way to use some version of the CLT for a sum of independent variables that only differ by a scaling constant instead of manually computing the cumulants. But I couldn't find one.
Why is it equivalent to a sum of chi-squared variables?
(see also: Show that the distribution of $x'Ax$ is a linear combination of chi-squared variables)
Example for $n=3$
In the case of $n=3$ the $z_1$ and $z_2$ follow a multivariate normal distribution with a negative correlation. Geometrically, it looks like an elongated shape.
n <- 10^4
set.seed(1)
x <- matrix(rnorm(3*n),3)
z1 <- x[2,]-x[1,]
z2 <- x[3,]-x[2,]
plot(z1,z2, xlab = expression(z[1]), ylab = expression(z[2]))
We can express the square in terms of alternative variables $Y_1 = \sqrt{0.5}(Z_1-Z_2) \sim N(0,3)$ and $Y_2 = \sqrt{0.5}(Z_1+Z_2) \sim N(0,1)$
$$Z_1^2 + Z_2^2 = 0.5(Z_1-Z_2)^2 + 0.5(Z_1+Z_2)^2 = Y_1^2 + Y_2^2$$
Note that the $Y_i$ are independent. So the distribution is similar to the distribution of a sum of independent squared normal distributed variables, but with different variance.
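This rotation identity is easy to verify by simulation. The Python sketch below (mirroring the R snippet above) checks that the radial distance is preserved exactly and that the rotated coordinates have variances close to $3$ and $1$ and are uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((100_000, 3))
z1 = x[:, 1] - x[:, 0]
z2 = x[:, 2] - x[:, 1]

# Rotated (independent) coordinates
y1 = np.sqrt(0.5) * (z1 - z2)  # variance 3
y2 = np.sqrt(0.5) * (z1 + z2)  # variance 1

# The rotation preserves the radial distance exactly (algebraic identity)
print(np.allclose(z1**2 + z2**2, y1**2 + y2**2))  # True

# Empirical variances are close to 3 and 1, and y1, y2 are uncorrelated
print(np.var(y1), np.var(y2), np.corrcoef(y1, y2)[0, 1])
```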
Generalized for all $n$
More generally, the $z_i$ follow a multivariate normal distribution (any linear combination of the $z_i$ is a linear combination of the $x_i$, and hence normally distributed).
The variance of each $z_i$, being the difference of two independent standard normal variables, is $2$. The covariance of two neighbouring variables is $-1$ (which you can find with the covariance of sums). So the covariance matrix is:
$$\Sigma = \begin{bmatrix} {2} & -1 & 0 & \dots & 0 &0\\
-1 & 2 & -1 & 0& \dots & 0 \\
0 & -1 & {2}& \dots & 0& 0\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
0 & 0& 0 & \dots & {2} & -1 \\
0 & 0& 0 & \dots & -1 & {2}
\end{bmatrix}$$
In this general case we can do the same as for $n=3$ and restate the dependent $z_i^2$ as a sum of independent squared normal variables $y_i^2$. We use the same geometrical interpretation and rotate the distribution (keeping the radial distance invariant); the distribution of the $z_i$ is equivalent to that of rotated $y_i$, whose variances relate to the eigenvalues of the covariance matrix $\Sigma$. These eigenvalues will be between 0 and 4 (see more about that below).
These eigenvalues follow a cosine function
$$\lambda_i = 2 + 2 \cos(\frac{i}{n}\pi)$$
for $1\leq i\leq n-1$. This can be derived from the general description of the eigenvalues of tridiagonal Toeplitz matrices (as mentioned by Aksakal in the comments; you can see previous edits of this post for an alternative derivation of the relation with cosines).
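Both the cosine formula and the claim that the eigenvalues lie between 0 and 4 can be checked against a numerical eigendecomposition of $\Sigma$ (a Python sketch assuming NumPy; the value of $n$ is arbitrary):

```python
import numpy as np

n = 8  # number of x_i; Sigma is (n-1) x (n-1)
m = n - 1
Sigma = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)

eig = np.sort(np.linalg.eigvalsh(Sigma))
cosine = np.sort(2 + 2 * np.cos(np.arange(1, n) * np.pi / n))

print(np.allclose(eig, cosine))         # True
print(0 < eig.min() and eig.max() < 4)  # True
```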
|
39,611
|
How to create an interval estimate based on being within some percentage of the maximum value of the likelihood function?
|
The likelihood function $L(\theta)$ provides us with both a point estimator and a confidence set estimator for $\theta$. To keep things simple, let $X_1,\ldots, X_n$ be an i.i.d. random sample from a model with pdf $f_\theta$, where $\theta\in \mathbb{R}$.
The Maximum Likelihood Estimator is defined as
$$
\hat \theta = \underset{\theta\in\Theta}{\arg \max}\, L(\theta),
$$
or, under appropriate smoothness conditions, as the solution to the likelihood equation
$$
\frac{d\ell(\theta)}{d\theta} = 0,
$$
where $\ell(\theta) = \log L(\theta)$. Under appropriate conditions, and
if $\theta$ is the true parameter, we have
\begin{align}\label{1}
\Lambda_n(\theta) = -2(\ell(\theta)-\ell(\hat\theta)) &\overset{d}\to \chi_1^2,\quad\quad\text{(*)}
\end{align}
that is to say
$$
\Lambda_n(\theta)\,\,\dot\sim\,\, \chi_1^2
$$
where the symbol "$\,\dot\sim\,$" means "asymptotically as $n\to\infty$ distributed as".
The point here is that $\Lambda_n(\theta)$ provides an asymptotic pivotal quantity, which we can use to do hypothesis testing and build confidence sets. To go straight on to the matter of your post, I'll focus here on the latter.
By definition, a confidence set of level $1-\alpha$ is a random set which traps the true value with a probability no lower than $1-\alpha$. Thus if $C_{1-\alpha}$ is some set s.t.
$$
P_{\theta}(\Lambda_n(\theta)\in C_{1-\alpha})\geq 1-\alpha,\quad\forall\theta,
$$
then the set
$$
\{\theta:\Lambda_n(\theta)\in C_{1-\alpha}\}
$$
forms a (random) confidence set with probability coverage $1-\alpha$. The set $C_{1-\alpha}$ can be determined in different ways, but the usual approach is to use the threshold $\chi_{1,1-\alpha}^2$, and look for values of $\Lambda_n(\theta)$ that are below $\chi_{1,1-\alpha}^2$. Indeed, for every fixed $\theta$,
$$
P_\theta(\Lambda_{n}(\theta) \leq \chi_{1,1-\alpha}^2)\,\, \dot=\,\, 1-\alpha
$$
where $\chi_{1,1-\alpha}^2$ is the upper $\alpha$-level quantile of the $\chi_1^2$ distribution. The usual likelihood-based confidence set is thus
$$
\{\theta:\Lambda_n(\theta)\leq \chi_{1,1-\alpha}^2\}.
$$
The R-example below illustrates this in the case of $f_\theta$ being the Poisson distribution. In the figure, the horizontal dashed line represents the 0.95th quantile of $\chi_1^2$ distribution, and the dashed vertical lines mark the limits of the confidence set, which in this case happens to be an interval.
library(latex2exp)
# fix some observed data
y <- c(5, 4, 1, 0, 0, 1, 1, 2, 1, 1)
# the log-likelihood function
llik <- function(lambda, y){
oo = dpois(y, lambda = lambda, log = TRUE)
return(sum(oo))
}
ybar <- mean(y)
x <- seq(.1, 4, len=100)
ll <- sapply(x, function(x) llik(x,y=y))
llikv <- Vectorize(function(x) llik(x,y=y), "x")
lo <- uniroot(function(x) 2*(llik(ybar, y)-llikv(x))-qchisq(0.95, df=1),
lower = 0.1, upper = ybar)
up <- uniroot(function(x) 2*(llik(ybar, y)-llikv(x))-qchisq(0.95, df=1),
lower = ybar, upper=4)
plot(y = 2*(llik(ybar, y)-ll), x=x,
type="l", ylab=NA,
xlab = expression(theta),
xlim=c(0.1,4))
abline(v = ybar, lwd=2, lty="dotted")
mtext(TeX("$\\widehat{\\theta}$"),side=1, at=1.6,padj = 1)
abline(h = qchisq(0.95, df=1), lwd=2, lty=2, col="gray")
segments(x0=lo$root, x1=lo$root,y0=-1.5,y1=8+qchisq(0.95, df=1), lwd=2, lty=2)
segments(x0=up$root, x1=up$root,y0=-1.5,y1=8+qchisq(0.95, df=1), lwd=2, lty=2)
mtext(TeX("$\\chi_{1,.05}^2$"),
side=1,
at=3,padj = -2.5,las=1,adj=0)
mtext("95% lik. conf. set",
side=1,
at=1,padj = -4.5,las=1,adj=0)
text(x = 0.5, y=50,TeX("$2(l(\\widehat{\\theta})-l(\\theta))$"))
|
39,612
|
How to create an interval estimate based on being within some percentage of the maximum value of the likelihood function?
|
Under some conditions you can show that the MLE of a parameter is asymptotically distributed as $\sqrt n (\hat\theta-\theta)\sim\mathcal N(0,\hat\sigma^2)$, where $\hat\sigma^2$ is the corresponding estimate of the variance. The variance estimate $\hat\sigma^2/n$ can be used to estimate the confidence interval of the parameter estimate $\hat\theta$. The variance estimate is related to the Fisher information of the distribution (likelihood) function.
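As a sketch of this Wald-type interval (Python with SciPy assumed; the Poisson sample is hypothetical), for a Poisson rate the MLE is $\hat\lambda=\bar x$ and the inverse Fisher information gives $\widehat{\operatorname{var}}(\hat\lambda)=\hat\lambda/n$:

```python
import numpy as np
from scipy import stats

# Hypothetical Poisson observations; the MLE of the rate is the sample mean
y = np.array([5, 4, 1, 0, 0, 1, 1, 2, 1, 1])
lam_hat = y.mean()

# Estimated standard error of the MLE: sqrt(lam_hat / n)
# (inverse Fisher information for the Poisson rate)
se = np.sqrt(lam_hat / len(y))

# 95% Wald confidence interval
z = stats.norm.ppf(0.975)
print(lam_hat, (lam_hat - z * se, lam_hat + z * se))
```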
|
39,613
|
How to create an interval estimate based on being within some percentage of the maximum value of the likelihood function?
|
You have likelihood intervals/regions and confidence intervals/regions. They are different.
Confidence intervals stem from the fiducial distribution, which is a different thing from the likelihood function.
If we form a confidence interval based on inverting an $\alpha$-level likelihood ratio test, is that basically what we're doing?
The likelihood ratio test uses a likelihood ratio instead of the likelihood function. The goal is to make a hypothesis test that is most powerful against some alternative hypothesis.
The likelihood ratio is a function of the data (given a null hypothesis $\theta_0$ and alternative hypothesis $\theta_a$) $$\Lambda(x|\theta_0,\theta_a) = \frac{\mathcal{L}(\theta_0|x)}{\mathcal{L}(\theta_a|x)}$$
The likelihood, or relative likelihood, is a function of the distribution parameter (given data $x$): $$\mathcal{L}_{\text{relative}}(\theta|x) = \frac{\mathcal{L}(\theta|x)}{\mathcal{L}(\hat\theta_{ML}|x)}$$ It scales the likelihood function $\mathcal{L}(\theta|x)$ by its maximum $\mathcal{L}(\hat\theta_{ML}|x)$.
The likelihood ratio $\Lambda(x)$ is a way to define a hypothesis test, and that hypothesis test can be used to define a confidence interval. But,
the bounds for the values $\Lambda(x)$ that cover $\alpha\%$ of the distribution $\Lambda(x)$
are different from
the bounds for the values $\theta$ that cover $\alpha\%$ of the distribution $\mathcal{L}_{\text{relative}}(\theta)$
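To make the distinction concrete, here is a small Poisson sketch (Python with SciPy assumed; the particular values of $\theta_0$, $\theta_a$ and $x$ are illustrative): the first function varies with the data for fixed hypotheses, while the second varies with the parameter for fixed data.

```python
from scipy import stats

theta0, theta_a = 1.0, 3.0

def lik_ratio(x):
    # Lambda(x | theta0, theta_a): a function of the data x
    return stats.poisson.pmf(x, theta0) / stats.poisson.pmf(x, theta_a)

def rel_lik(theta, x):
    # L(theta | x) / L(theta_hat | x): a function of theta;
    # for a single Poisson observation the MLE is theta_hat = x
    return stats.poisson.pmf(x, theta) / stats.poisson.pmf(x, x)

print([round(lik_ratio(x), 3) for x in (0, 2, 5)])         # varies over the data
print([round(rel_lik(t, 2), 3) for t in (1.0, 2.0, 3.0)])  # varies over theta
```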
In the question The basic logic of constructing a confidence interval the use of the likelihood function to construct a confidence interval is discussed. A condition for the two being the same is:
When are the two methods the same?
This horizontal vs vertical view gives the same result when the boundaries $U$ and $L$, which bound the intervals in the plot of $\theta$ vs $\hat \theta$, are iso-lines of $f(\hat \theta ; \theta)$. If the boundaries are everywhere at the same height, then in neither of the two directions can you make an improvement.
Or in other words: when $\partial_{\hat\theta}f(\hat \theta ; \theta) = -\partial_{\theta}f(\hat \theta ; \theta)$, the likelihood distribution and the fiducial distribution coincide.
This happens in the case described by utobi. In the limit of sample size $n\to\infty$ the likelihood function becomes equivalent to a shifted chi-square distribution, the bounds of the likelihood interval and the confidence interval (based on highest density) start to coincide, and the likelihood interval (approximately) gains the properties of a confidence interval.
|
39,614
|
How to create an interval estimate based on being within some percentage of the maximum value of the likelihood function?
|
TL;DR I use an example — inference for the rate $\theta$ in a Poisson model — to show how to construct a likelihood interval. I've borrowed this example from the book In All Likelihood: Statistical Modelling And Inference Using Likelihood by Yudi Pawitan.
Say we observe $x = 3$ from $\operatorname{Poisson}(\theta)$. The likelihood interval for the rate $\theta$ is:
$$
\begin{aligned}
\left\{\theta:\frac{L(\theta)}{L(\widehat{\theta})} > c\right\}
\end{aligned}
$$
where $\widehat{\theta}$ is the maximum likelihood estimate (MLE) and the likelihood is scaled so that the maximum is 1, which occurs at $\theta=\widehat{\theta}$; $c$ is a cutoff point (between 0 and 1; to be chosen next). In his answer @SextusEmpiricus refers to $L(\theta) / L(\widehat{\theta})$ as the relative likelihood.
This formalizes the concept of the likelihood "within some percentage of the maximum value" $L(\widehat{\theta})$: the likelihood interval is the set of parameter values $\theta$ with high likelihood. But how can we decide what is "high enough", i.e. how can we calibrate the likelihood?
As @utobi explains, under regularity conditions the likelihood ratio statistic $2\log\left\{L(\widehat{\theta}) / L(\theta)\right\}$ has (approximately) a chi-squared distribution. Therefore, when the likelihood is reasonably regular:
$$
\begin{aligned}
P\left\{\frac{L(\theta)}{L(\widehat{\theta})} > c\right\}
= P\left\{2\log\frac{L(\widehat{\theta})}{L(\theta)} < - 2\log c\right\}
= P\left\{\chi^2_1 < -2\log c\right\}
\end{aligned}
$$
If we want to choose $c$ so that the likelihood interval has approximate $(1-\alpha)$100% coverage probability, then $P(\chi^2_1 < -2\log c) = 1 - \alpha \Leftrightarrow c = \exp\left\{-\frac{1}{2}\chi^2_{1,1-\alpha}\right\}$. For $\alpha = 0.05$ this gives $c \approx 0.147$, i.e. roughly 15% of the maximum likelihood.
Here is a plot of the (scaled) likelihood $L(\theta)/L(\widehat{\theta})$ as well as the likelihood interval calibrated to have 95% coverage probability.
# significance level
alpha <- 0.05
# corresponding cutoff (named `cutoff` rather than `c` to avoid shadowing base::c)
cutoff <- exp(-qchisq(1 - alpha, 1) / 2)
x <- 3
theta <- seq(0, 10, len = 10000)
loglike <- dpois(x, theta, log = TRUE)
# Compute a likelihood interval with (1-alpha)100% coverage.
# See Example 5.14, "In All Likelihood"
likelihood_interval(theta, loglike, alpha)
#> alpha lower upper
#> 0.05 0.746065 7.779286
R code to reproduce the figure and calculate the likelihood interval.
find_cutoff <- function(x, y, x0) {
if (sum(x < x0) < 2) {
return(min(x))
} else {
return(approx(y[x < x0], x[x < x0], xout = 0)$y)
}
}
# Compute likelihood interval for a scalar theta.
# This implementation is based on the program `li.r` for computing likelihood
# intervals which accompanies the book "In All Likelihood" by Yudi Pawitan.
# https://www.meb.ki.se/sites/yudpaw/book/
likelihood_interval <- function(theta, llike, alpha = 0.05) {
# Scale the negative log-likelihood so that the minimum is 0
nllike <- -2 * (llike - max(llike))
# Find the MLE
theta.mle <- mean(theta[nllike == 0])
# Shift by the Chi-squared critical value
nllike <- nllike - qchisq(1 - alpha, 1)
lower <- find_cutoff(theta, nllike, theta.mle)
upper <- -find_cutoff(-theta, nllike, -theta.mle)
data.frame(alpha, lower, upper)
}
# significance level
alpha <- 0.05
# corresponding cutoff (named `cutoff` rather than `c` to avoid shadowing base::c)
cutoff <- exp(-qchisq(1 - alpha, 1) / 2)
# An observation from Poisson(theta). We will do inference on theta.
x <- 3
theta <- seq(0, qpois(0.999, x), len = 10000)
# Compute a likelihood interval with (1-alpha)100% coverage.
# See Example 5.14, "In All Likelihood"
loglike <- dpois(x, theta, log = TRUE)
like.ci <- likelihood_interval(theta, loglike, alpha)
like.ci # approximate confidence interval based on the likelihood ratio statistic
# Scale the Poisson likelihood, so that the maximum is 1 at the MLE theta = x.
like <- exp(loglike - dpois(x, x, log = TRUE))
plot(
  theta, like,
  xlab = "θ", ylab = "L(θ)", ylim = c(0, 1),
  type = "l", xaxs = "i", yaxs = "i", xaxt = "n"
)
axis(1, at = c(0, 2, 3, 4, 6, 8, 10), labels = c(0, 2, "", 4, 6, 8, 10))
abline(h = cutoff, col = "gray62")
mtext("θ = 3", side = 1, at = x, padj = 1)
segments(
  like.ci$lower, cutoff, like.ci$upper, cutoff,
  col = "#2297E6", lwd = 3
)
title(
  paste(
    "95% likelihood interval [blue segment] for Poisson rate θ on observing x =", x
  ),
  font.main = 1
)
|
39,615
|
Is there ever a statistical reason NOT to use Satterthwaite's method to account for unequal variances?
|
For 2-sample t tests. For two-sample t tests, I think it is now standard practice
to use the Welch two-sample t test, unless there is strong
prior evidence (say, from data of the same type) that population
variances are equal. In some statistical software packages the
Welch test is the default 2-sample t test, so that one must
specifically request the pooled version of the test if desired.
(For example, I know that the Welch test is the default in
both R and Minitab. I believe some other statistical software
programs show P-values for both tests.)
The Welch two-sample t test uses the
Satterthwaite DF, which is often smaller than the DF $n_1 + n_2 - 2$ of the pooled 2-sample t test (never larger). This means that the power of the Welch 2-sample t test is somewhat smaller than the power of the pooled test, often not enough smaller to matter for practical purposes. But some statisticians do make an exception to standard practice when sample sizes are very small and sample standard deviations are similar.
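For reference, the Satterthwaite DF are given by $\nu = \left(s_1^2/n_1 + s_2^2/n_2\right)^2 \Big/ \left[\frac{(s_1^2/n_1)^2}{n_1-1} + \frac{(s_2^2/n_2)^2}{n_2-1}\right]$. A small illustration (Python, with made-up sample summaries) showing the Welch DF falling below $n_1+n_2-2$:

```python
def satterthwaite_df(s1, n1, s2, n2):
    """Welch-Satterthwaite approximate DF for a two-sample t test."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    return (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))

# Hypothetical sample SDs and sizes, chosen only for illustration
df_welch = satterthwaite_df(s1=7, n1=5, s2=3, n2=10)
df_pooled = 5 + 10 - 2
print(round(df_welch, 2), df_pooled)  # 4.75 13
```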
For one-way ANOVA. However, the Satterthwaite (or Welch) ANOVA, implemented in R as oneway.test, is relatively new, and there has not been the same level of scrutiny of the Satterthwaite ANOVA as there has been of the Satterthwaite 2-sample t test. A couple of limited simulation studies I have seen and my own experience have made me feel comfortable using the Satterthwaite ANOVA by default. But I don't think one can say yet that it is 'standard practice' to use the Satterthwaite ANOVA.
At this point, I would have to admit that strong preference for
the Satterthwaite one-way ANOVA is still a matter of personal opinion (even if fairly widespread). So we may see other answers here voicing different opinions.
Addendum: In response to a Comment, here is an example of simulation investigating
the behavior of the Welch ANOVA.
The two-sample pooled t test is known to behave badly
if sample sizes differ and the population from which the smaller sample was chosen has a larger variance than the other population. Specifically, if population means are the same, the true significance level can be inflated considerably.
Here we use simulation to investigate the behavior of a standard ANOVA (assuming equal population variances) in an analogous situation and compare it with the behavior of the Welch ANOVA in the same situation. In particular, we use sample sizes 5, 10, and 15, and respective population SDs
7, 3, and 1.
To make sure we assess precisely the versions of ANOVA implemented in R, we simulate 100,000 datasets, run both ANOVAs in R, and look at the 200,000 resulting P-values.
Because R formats the full ANOVA output even though we only use the P-value in each case, the code is inefficient and runs slowly.
set.seed(2020)
m = 10^5; pv.e = pv.w = numeric(m)
for(i in 1:m){
x1 = rnorm( 5, 50, 7)
x2 = rnorm(10, 50, 3)
x3 = rnorm(15, 50, 1)
x = c(x1,x2,x3)
g = as.factor(rep(1:3, c(5,10,15)))
pv.w[i] = oneway.test(x~g)$p.val
pv.e[i] = summary(aov(x~g))[[1]][1,5]
}
mean(pv.e <= .05)
[1] 0.2496
mean(pv.w <= .05)
[1] 0.05673
Quite wrongly assuming equal population variances, the standard ANOVA has an actual rejection rate of about 25%
for a test intended to be at the 5% level. This could
lead to massive false 'discovery' of population differences, where there are none.
By contrast, the Welch ANOVA has a rejection rate of
about 5.7% where the 5% level is intended. Not a perfect
result in this problematic situation, but a great improvement over the catastrophic result of the standard ANOVA.
Below are histograms of simulated P-values for the two tests.
Under the null hypothesis, the P-value of a test with a continuous test statistic should be standard uniform (with bars roughly the height of the green line).
|
39,616
|
How does LightGBM deal with incremental learning (and concept drift)?
|
LightGBM will add more trees if we update it through continued training (e.g. through BoosterUpdateOneIter). Assuming we use refit we will be using existing tree structures to update the output of the leaves based on the new data. It is faster than re-training from scratch, since we do not have to re-discover the optimal tree structures. Nevertheless, please note that almost certainly it will have worse performance (on the combined old and new data) than doing a full retrain from scratch on them.
Any online learning algorithm will be designed to adapt to changes. That said, LightGBM's performance will depend on the training parameters we will use and how we will validate our predictions (e.g. how much we care to disregard previous data points). Assuming we properly train our booster, without having a relevant baseline (e.g. a ridge regression trained in an incremental manner) it does not make sense to say "LightGBM is good (or bad)" for dealing with concept drift.
|
39,617
|
What is the intuition behind what makes dice coefficient handle imbalanced data?
|
Dice score measures the relative overlap between the prediction and the ground truth (twice the intersection divided by the combined size of the two regions, closely related to intersection over union). It has the same value for small and large objects both: Did you guess a half of the object correctly? Great, your loss is 1/2. I don't care if the object was 10 or 1000 pixels large.
On the other hand, cross-entropy is evaluated on individual pixels, so large objects contribute more to it than small ones, which is why it requires additional weighting to avoid ignoring minority classes.
A problem with dice is that it can have high variance. Getting a single pixel wrong in a tiny object can have the same effect as missing nearly a whole large object, thus the loss becomes highly dependent on the current batch. I don't know details about the generalized dice, but I assume it helps fight this problem.
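A minimal sketch of this size-invariance (plain Python, binary masks as sets of pixel indices; not from any particular library): predicting a same-sized region that covers half of the object scores 0.5 whether the object has 10 or 1000 pixels.

```python
def dice(pred, truth):
    """Dice coefficient: twice the intersection over the combined size."""
    return 2 * len(pred & truth) / (len(pred) + len(truth))

small_truth, small_pred = set(range(10)), set(range(5, 15))
large_truth, large_pred = set(range(1000)), set(range(500, 1500))
print(dice(small_pred, small_truth), dice(large_pred, large_truth))  # 0.5 0.5
```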
|
39,618
|
What is the intuition behind what makes dice coefficient handle imbalanced data?
|
Dealing with a two-class problem is intuitive: it calculates the overlap between foreground regions (while the background is not of interest). In a multi-class scenario, when provided with probability maps on one end and one-hot encoded labels on the other, it effectively performs multiple two-class problems. In practice, one would calculate a vector of dice scores per class, take its mean and return 1 − mean. (I have attached the implementation in the latest source code of DeepMedic for reference)
Here, scores for each class are calculated independently of their relative sizes and hence contribute fairly to the mean score.
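A hedged sketch of that scheme (plain Python, not the actual DeepMedic implementation): per-class Dice scores are computed independently, so a small class contributes to the mean on equal footing with the large background class.

```python
def dice_per_class(pred, truth, n_classes):
    """One Dice score per class for two flat label maps."""
    scores = []
    for k in range(n_classes):
        p = {i for i, v in enumerate(pred) if v == k}
        t = {i for i, v in enumerate(truth) if v == k}
        if p or t:  # skip classes absent from both maps
            scores.append(2 * len(p & t) / (len(p) + len(t)))
    return scores

truth = [0] * 90 + [1] * 8 + [2] * 2            # background-heavy labels
pred = [0] * 90 + [1] * 4 + [0] * 4 + [2] * 2   # half of class 1 missed
scores = dice_per_class(pred, truth, n_classes=3)
loss = 1 - sum(scores) / len(scores)
```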
|
39,619
|
Why does the likelihood function of a binomial distribution not include the combinatorics term? [duplicate]
|
We often don’t care about the likelihood, just the value for which the likelihood is maximized.
When you use the likelihood function to find a maximum likelihood estimator, you get the same point giving the maximum whether you include constants out front or not. Sure, that maximum value will be different, but that is not our concern.
So let’s make it convenient for ourselves and drop constants out in front, especially bulky combinatorics terms!
While we’re at it, we usually take the logarithm of the likelihood function since its derivative is easier to calculate, and log doesn’t change the point at which the maximum occurs.
Edit
This is in my comment but ought to be in the main post. We typically care about the argmax of a likelihood function, not the max itself.
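A quick numerical illustration (Python) of this invariance: maximizing the binomial log-likelihood over a grid of $p$ gives the same argmax $\hat p = k/n$ with or without the $\binom{n}{k}$ term, even though the maximum values differ.

```python
from math import comb, log

n, k = 10, 3
grid = [i / 1000 for i in range(1, 1000)]

def loglik(p, with_const=True):
    ll = k * log(p) + (n - k) * log(1 - p)
    return ll + log(comb(n, k)) if with_const else ll

argmax_with = max(grid, key=lambda p: loglik(p, True))
argmax_without = max(grid, key=lambda p: loglik(p, False))
print(argmax_with, argmax_without)  # 0.3 0.3, i.e. the MLE k/n
```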
|
39,620
|
Obtaining p-values in a robustlmm mixed model via Satterthwaite-approximated DFs of the equivalent lme4 model?
|
I'm going to answer that myself in case that others may be interested at some point. In the end I did follow the approach mentioned above. The code would look like this:
library(lme4)
library(lmerTest)  # adds Satterthwaite-approximated DFs to the lmer summary
library(robustlmm)
data(Dyestuff, package = "lme4")
# fit a mixed model (lmerTest's lmer provides the df column in the summary)
model <- lmer(Yield ~ 1 + (1 | Batch), data = Dyestuff)
# fit the robust equivalent
robust.model <- rlmer(Yield ~ 1 + (1 | Batch), data = Dyestuff)
# get coefficients from non-robust model to extract Satterthwaite approximated DFs
coefs <- data.frame(coef(summary(model)))
# get coefficients from robust model to extract t-values
coefs.robust <- coef(summary(robust.model))
# calculate p-values based on robust t-values and non-robust approx. DFs
p.values <- 2 * pt(abs(coefs.robust[, 3]), coefs$df, lower = FALSE)
p.values
|
39,621
|
Obtaining p-values in a robustlmm mixed model via Satterthwaite-approximated DFs of the equivalent lme4 model?
|
I would like to add, for future reference, that the tab_model function of the sjPlot package can also calculate p-values based on Satterthwaite-approximated DFs.
|
39,622
|
If $X$ follows standard normal distribution, find the correlation coefficient between $X$ and $\Phi(X)$
|
You're almost there. As pointed out by @whuber, the trick is to recognise that $\phi(x)^2$ is another Gaussian that integrates to one after normalisation:
\begin{align}
\int \phi(x)^2 dx
&= \int_{-\infty}^\infty \left(\frac1{\sqrt{2\pi}} e^{-\frac12x^2}\right)^2dx
\\&= \frac1{\sqrt{2\pi}}\int_{-\infty}^\infty \frac1{\sqrt{2\pi}} e^{-x^2}dx
\\&= \frac1{\sqrt{4\pi}}\int_{-\infty}^\infty \frac1{\sqrt{2\pi}/\sqrt{2}} e^{ -\frac12(\frac x{1/\sqrt{2}})^2}dx
\\&= \frac1{2\sqrt{\pi}}.
\end{align}
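A quick numerical check of this value, $1/(2\sqrt{\pi}) \approx 0.28209$, with a plain Riemann sum in Python:

```python
from math import exp, pi, sqrt

def phi(x):
    """Standard normal pdf."""
    return exp(-x * x / 2) / sqrt(2 * pi)

h = 0.001
# Riemann sum of phi(x)^2 over [-10, 10]; the tails beyond are negligible
integral = sum(phi(-10 + i * h) ** 2 for i in range(20000)) * h
print(round(integral, 5), round(1 / (2 * sqrt(pi)), 5))  # 0.28209 0.28209
```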
|
39,623
|
If $X$ follows standard normal distribution, find the correlation coefficient between $X$ and $\Phi(X)$
|
If you want to check your work, it only takes a few seconds with a computer algebra system. In your case, $X \sim N(0,1)$ with pdf $f(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2}$.
The cdf is $\Phi(x) = \frac{1}{2}\left(1+\operatorname{Erf}\left(\frac{x}{\sqrt{2}}\right)\right)$,
where Erf denotes the error function.
Then the desired correlation can be found immediately: $\operatorname{Corr}(X, \Phi(X)) = \sqrt{3/\pi} \approx 0.9772$
... where I am using the Corr function from the mathStatica package for Mathematica.
|
39,624
|
If $X$ follows standard normal distribution, find the correlation coefficient between $X$ and $\Phi(X)$
|
For a general case with $a,b$ we can derive the result of the following integral (with $z_1, z_2$ independent standard normals, independent of $X$),
\begin{eqnarray}
\int_{-\infty}^{\infty}\Phi(a+bx)^{2}\phi(x)dx&=&P\left(z_{1}\leq a+bX,z_{2}\leq a+bX\right)\\&=&P\left(z_{1}-bX\leq a,z_{2}-bX\leq a\right)\\&=&
\mathcal{MVN}\left(x=\{a,a\},\mu=\{0,0\},\Sigma=\begin{bmatrix}b^{2}+1 & b^{2}\\b^{2}& b^{2}+1 \end{bmatrix} \right)
\end{eqnarray}
In this case with $a=0, b=1$ we have,
\begin{equation}
\mathbb{E}(\Phi(X)^2)=\mathcal{MVN}\left(x=\{0,0\},\mu=\{0,0\},\Sigma=\begin{bmatrix}2 & 1\\1& 2 \end{bmatrix} \right)
\end{equation}
Taking Jarle Tufto's result $\mathrm{Cov}(X,\Phi(X))=\frac{1}{2\sqrt{\pi}}$ as given, and using the facts that $\mathbb{V}(X)=1$, $\mathbb{V}(\Phi(X))=\mathbb{E}(\Phi(X)^2)-\mathbb{E}(\Phi(X))^2$ and $\mathbb{E}(\Phi(X))^2=\frac{1}{4}$, we then obtain the final correlation formula
\begin{eqnarray}
\rho=\frac{\frac{1}{2\sqrt{\pi}}}{\sqrt{1}\sqrt{\mathcal{MVN}\left(x=\{0,0\},\mu=\{0,0\},\Sigma=\begin{bmatrix}2 & 1\\1& 2 \end{bmatrix} \right)-\frac{1}{4}}}
\end{eqnarray}
A quick R implementation shows,
library(mnormt)  # provides pmnorm, the multivariate normal cdf
sqrt(3/pi)
[1] 0.977205
(1/(2*sqrt(pi)))/sqrt(pmnorm(x = c(0,0), mean = rep(0.,2),
                             varcov = matrix(c(2,1,1,2), ncol=2, byrow=T))-0.25)
[1] 0.977205
which coincides with wolfies's result.
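The pmnorm call can also be cross-checked analytically: for a standardized bivariate normal with correlation $\rho$, the orthant probability is $P(Z_1\le 0, Z_2\le 0) = \tfrac14 + \arcsin(\rho)/(2\pi)$. Here $\Sigma$ gives $\rho = 1/2$, so $\mathbb{E}(\Phi(X)^2)=1/3$ and $\mathbb{V}(\Phi(X))=1/12$. A small Python sketch of that check (an illustration, not the original R code):

```python
import math

# Orthant probability for a standardized bivariate normal with correlation rho:
# P(Z1 <= 0, Z2 <= 0) = 1/4 + arcsin(rho)/(2*pi)
rho = 1 / 2                                         # from Sigma = [[2,1],[1,2]]
e_phi_sq = 0.25 + math.asin(rho) / (2 * math.pi)    # E[Phi(X)^2] = 1/3

# Correlation = Cov(X, Phi(X)) / sqrt(Var(X) * Var(Phi(X)))
corr = (1 / (2 * math.sqrt(math.pi))) / math.sqrt(e_phi_sq - 0.25)
print(e_phi_sq, corr)   # 1/3 and sqrt(3/pi) ~ 0.977205
```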
|
39,625
|
Two sample t-test to show equality of the two means
|
You cannot use the first test in the way you describe, because failure to reject in the first test only says that you were unable to reject $H_0$, nothing more than that. It is like only being given the information that "the prosecutor was unable to provide the jury with enough evidence to secure a conviction"; that does not tell you that the suspect is innocent.
The second test is not usable in practice, because no matter how much data you have, you cannot exclude the possibility of very small differences.
What you can do is to look at
$$H_{0}: |\mu_{1} - \mu_{2}|>\delta \text{ vs }H_{1}: |\mu_{1} - \mu_{2}| \leq \delta,$$
i.e. try to reject the null hypothesis that the absolute size of the difference is greater than some difference $\delta>0$. $\delta$ would be chosen e.g. so that any difference smaller than that is for all (or your specific) practical purposes irrelevant.
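In practice this is done with two one-sided tests (TOST). A minimal sketch, using a large-sample z approximation rather than the exact t version (the function name and data here are hypothetical):

```python
import math
import random
import statistics

def z_tost(x, y, delta):
    """Large-sample (z-based) two one-sided tests for equivalence:
    H0: |mu_x - mu_y| >= delta  vs  H1: |mu_x - mu_y| < delta."""
    diff = statistics.fmean(x) - statistics.fmean(y)
    se = math.sqrt(statistics.variance(x) / len(x) + statistics.variance(y) / len(y))
    def norm_cdf(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    p_lower = 1.0 - norm_cdf((diff + delta) / se)   # tests diff <= -delta
    p_upper = norm_cdf((diff - delta) / se)         # tests diff >= +delta
    return max(p_lower, p_upper)                    # reject H0 only if both reject

random.seed(1)
x = [random.gauss(0.0, 1.0) for _ in range(2000)]
y = [random.gauss(0.0, 1.0) for _ in range(2000)]
p = z_tost(x, y, delta=0.3)
print(p)   # a small p lets you conclude equivalence to within 0.3
```

Note that with `delta = 0` the p-value can never drop below 0.5, which is exactly why the second test in the question is unusable.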
|
39,626
|
Two sample t-test to show equality of the two means
|
I learned in school that the null hypothesis should always represent the "common" belief and the alternative hypothesis should represent the change that I would like to show.
That is not an accurate explanation of the null hypothesis. The null hypothesis is simply a hypothesis that consists of a specific distribution from which probabilities can be calculated. The reason we use $\mu_1=\mu_2$ as the null hypothesis has nothing to do with whether this is the "common" belief. It's used as the null hypothesis because if we hypothesize that the mean is a specific value, then given a particular set of data we can calculate the probability of seeing that data. We can't use $\mu_1 \neq \mu_2$ as our null hypothesis because there's no way to calculate p-values based simply on the hypothesis that the means aren't equal. Consider the following problem:
The weights of apples have a standard deviation of 5 grams. The mean is not equal to 100. What is the probability of seeing an apple with a weight of 110 grams?
There's no way to answer that, because simply being told what the mean isn't is not enough to calculate probabilities.
Björn suggests testing the hypothesis that the difference in means is greater than some $\delta_0$. The way that works is to take as the null hypothesis that the difference is equal to exactly $\delta_0$. Then once you have the data, you can calculate the p-value given that $\delta_0$; call it $p_{\delta_0}$. If the observed difference in sample means is less than $\delta_0$, then the p-value would have been even smaller had we chosen a $\delta$ larger than $\delta_0$. We reject the null if the p-value is less than $\alpha$, so if we're rejecting under that null, that means that $p_{\delta_0} < \alpha$. And since $p_{\delta}<p_{\delta_0}$ for any $\delta>\delta_0$, we can conclude that $p_{\delta}<\alpha$ for any $\delta>\delta_0$. Thus, we can not only reject the null of $\delta_0$, but also any null with a larger $\delta$. It is only because of this ability to get an upper bound on the p-value that we don't need a specific value for $\delta$. If we just take "$\delta$ is larger than zero" as our null hypothesis, without any lower bound for $\delta$, then there is no upper bound on the p-value, and so we cannot conclude that it is lower than $\alpha$.
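The monotonicity argument can be made concrete with a normal approximation (the numbers below are purely illustrative, not from any real study): the one-sided p-value for an observed difference, computed under a null difference of exactly $\delta$, only shrinks as $\delta$ grows, so rejecting at $\delta_0$ rejects every larger $\delta$ too.

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

d_obs, se = 0.10, 0.05          # observed difference and its standard error
# p-value for H0: true difference = delta, against smaller differences:
# p_delta = P(observe a difference this small or smaller | true diff = delta)
ps = [norm_cdf((d_obs - delta) / se) for delta in (0.20, 0.30, 0.40)]
print(ps)                        # strictly decreasing in delta
```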
|
39,627
|
Statistical significance when comparing two models for classification
|
Simply speaking, the performance metrics used are statistics derived from our test set. We can go ahead and compute confidence intervals around these statistics as we would do in a classical setting.
For example, let's say we use Accuracy (which is not a good metric for classification), i.e. the proportion of correctly classified items in our test set. We can treat this statistic as coming from a binomial distribution and ask about its corresponding binomial proportion confidence interval. Let's say that we have $N=100$ test points and classifier $C_1$ classified $80$ items correctly while classifier $C_2$ classified $83$ items correctly. The Wilson confidence interval for a type I error probability $\alpha =0.05$ would be
$[0.711, 0.866]$ for classifier $C_1$ and $[0.744, 0.891]$ for $C_2$. Usual hypothesis testing would suggest that $C_1$ and $C_2$ do not have substantially different performance in terms of accuracy. What if we had $N = 10000$ and classifier $C_1$ classified $8000$ items correctly while classifier $C_2$ classified $8300$ items correctly? The confidence intervals would be $[0.792, 0.807]$ and $[0.822, 0.837]$ for classifiers $C_1$ and $C_2$ respectively. This would suggest that $C_1$ and $C_2$ have different performance on this test set.
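The intervals quoted above can be reproduced in a few lines; here is a sketch of the Wilson score interval (small rounding differences aside):

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion of k successes out of n."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

print(wilson_ci(80, 100))      # roughly (0.711, 0.867) for C1
print(wilson_ci(83, 100))      # roughly (0.745, 0.891) for C2
print(wilson_ci(8000, 10000))  # narrows to roughly (0.792, 0.808)
```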
Notice, that I simply used a parametric approximation to get the CIs for Accuracy.
I would strongly suggest using bootstrapping to get a non-parametric estimate of the distribution of metric of interest. You can then use a paired sample hypothesis test.
I would suggest looking at some classic references like "Approximate statistical tests for comparing supervised classification learning algorithms" by Dietterich or "Statistical comparisons of classifiers over multiple data sets" by Demšar for more details; they explicitly look into paired $t$-tests and ANOVA approaches. I also found Derrac et al.'s "A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms" quite nice to follow (and more generally applicable than its title would suggest).
|
39,628
|
Does confidence interval for odds ratio assume log-normal distribution?
|
Given the comments, I have included the proof of the floating equation at the bottom of the response.
Given a two-by-two contingency table where the OR is $\frac{a/b}{c/d}$, taking logs gives the log odds ratio $\log(a) - \log(b) - (\log(c) - \log(d))$. Hence the formula for the log odds ratio is additive, and because it is additive we can expect the log odds ratio to converge to normality much faster than the odds ratio itself, which has a multiplicative structure.
We can demonstrate the above using bootstrapping:
fake_dat <- data.frame(
y = c(rep(1, 125), rep(0, 50), rep(1, 100), rep(0, 75)),
g = c(rep("A", 175), rep("B", 175))
)
(mat <- table(fake_dat))
g
y A B
0 50 75
1 125 100
Take A to be the treatment group, and B to be the control group.
# And the odds ratio is?
mat[2, 1] / mat[1, 1] / (mat[2, 2] / mat[1, 2])
[1] 1.875
We can resample the odds ratio and the log odds ratio from the data repeatedly and check their distributions:
or.s <- lor.s <- rep(NA, 3000)
for (i in 1:3000) {
rand.samp <- sample(1:nrow(fake_dat), nrow(fake_dat), replace = TRUE)
new_dat <- fake_dat[rand.samp, ]
mat <- table(new_dat)
or <- mat[2, 1] / mat[1, 1] / (mat[2, 2] / mat[1, 2])
lor <- log(or)
or.s[i] <- or
lor.s[i] <- lor
}
par(mfrow = c(1, 2))
hist(or.s, main = "OR")
hist(lor.s, main = "LOR")
par(mfrow = c(1, 1))
We can see that at our sample size of 350, the estimate of the sampling distribution of the log odds ratio appears normally distributed. If you drop the sample size enough, you will arrive at a non-normally distributed estimate of the sampling distribution for the log odds ratio.
If we then apply the delta method to calculate the variance of the log odds ratio, we arrive at the equation you mentioned. And the delta method relies on the central limit theorem. So the lone requirement for normality is a large enough sample size, as the method is a large sample approximation.
One way to know if your sample size is large enough is to calculate a small sample odds ratio. One common small-sample formula for the odds ratio requires adding 0.5 to the cell counts before calculating the odds ratio and standard error in the same way. If the small sample odds ratio and confidence interval markedly differ from the standard odds ratio and CI, then the large sample approximation is probably inadequate. However, note that the small sample method is best used as a diagnostic rather than seen as a solution.
You can find coverage of these topics in chapter 2 of Agresti's Categorical Data Analysis and chapter 7 of Jewell's Estimation and Inference for Measures of Association. For the delta method, a good guide is Powell's Approximating variance of demographic parameters using the DELTA METHOD.
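Putting the pieces together, here is a sketch of the resulting Wald interval for the example table above, assuming the usual variance estimate $\widehat{\mathrm{var}}(\log \mathrm{OR}) = 1/a + 1/b + 1/c + 1/d$:

```python
import math

# 2x2 cells laid out as OR = (a/b) / (c/d), taken from the fake_dat table above
a, b, c, d = 125, 50, 100, 75
log_or = math.log((a / b) / (c / d))
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo, hi = math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se)
print(math.exp(log_or), (lo, hi))   # OR = 1.875, 95% CI roughly (1.20, 2.92)
```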
Origin of equation in question:
If we suppose that the cell counts $\{n_i,\ i = 1,2,3,4\}$ have a multinomial $(n, \pi_i)$ distribution, then each estimated proportion $\hat\pi_i = n_i/n$ has mean and variance:
\begin{equation}
\mathrm{E}(\hat\pi_i)=\pi_i \quad\mathrm{and}\quad \mathrm{var}(\hat\pi_i)=\pi_i(1-\pi_i)/n=(\pi_i-\pi_i^2)/n
\end{equation}
and for $i \neq j$:
\begin{equation}
\mathrm{cov(\hat\pi_i,\hat\pi_j)}=-\pi_i\pi_j/n
\end{equation}
We know that:
\begin{equation}
OR = \frac{a / b}{c/d}=\frac{n\pi_a\times n\pi_d}{n\pi_b\times n\pi_c}=\frac{\pi_a\times \pi_d}{\pi_b\times \pi_c}
\end{equation}
Then, $\log(OR) = \log(\pi_a) + \log(\pi_d) - \log(\pi_b) - \log(\pi_c)$.
For the delta method, we have the equation,
\begin{equation}
\mathrm{var}(G)=\mathrm{var}[f(X_1,X_2,...,X_n)]\\
=\sum_{i=1}^n\mathrm{var}(X_i)\big[f'(X_i)\big]^2 + 2\sum_{i<j}\mathrm{cov}(X_i,X_j)\big[f'(X_i)f'(X_j)\big]
\end{equation}
Given this equation, the variance of $\big(\log{OR} = \log(\pi_a) + \log(\pi_d) - \log(\pi_b) - \log(\pi_c)\big)$ is:
\begin{equation}
\frac{1}{\pi_a^2}\frac{\pi_a-\pi_a^2}{n} +
\frac{1}{\pi_d^2}\frac{\pi_d-\pi_d^2}{n} +
\frac{1}{\pi_b^2}\frac{\pi_b-\pi_b^2}{n} +
\frac{1}{\pi_c^2}\frac{\pi_c-\pi_c^2}{n}\\
- \frac{2}{n}\frac{\pi_a\pi_d}{\pi_a\pi_d}
+ \frac{2}{n}\frac{\pi_a\pi_b}{\pi_a\pi_b}
+ \frac{2}{n}\frac{\pi_a\pi_c}{\pi_a\pi_c}
+ \frac{2}{n}\frac{\pi_d\pi_b}{\pi_d\pi_b}
+ \frac{2}{n}\frac{\pi_d\pi_c}{\pi_d\pi_c}
- \frac{2}{n}\frac{\pi_b\pi_c}{\pi_b\pi_c}\\
=
\frac{1}{n\pi_a}-\frac{1}{n}
+\frac{1}{n\pi_b}-\frac{1}{n}
+\frac{1}{n\pi_c}-\frac{1}{n}
+\frac{1}{n\pi_d}-\frac{1}{n}
+\frac{4}{n}\\
=
\frac{1}{n\pi_a}
+\frac{1}{n\pi_b}
+\frac{1}{n\pi_c}
+\frac{1}{n\pi_d}-\frac{4}{n}+\frac{4}{n}\\
=\frac{1}{a}+\frac{1}{b}+\frac{1}{c}+\frac{1}{d}
\end{equation}
|
39,629
|
Identifying outliers in the data
|
The method is correct. The problem with outlier detection is that there is no general solution and no universally accepted thresholds. It should be used as a supplementary check, not as the primary way to assess your data. Different sources suggest different thresholds. The inventor of this method suggests the threshold D > 1 [1]. Your method (4/sample size) is suggested by other authors [2]. Another solution is 4*mean(cooksd), analogous to boxplot rules (but I did not find a source for this one). With that cutoff, your code would not detect any outliers.
Also, the method is for regression models, and it is not very accurate with few values; it would be better to use a larger data sample.
So there is no really perfect solution for detecting outliers. And if you think they are not outliers, it should be fine not to remove them.
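To make the thresholds concrete, here is a self-contained sketch for simple linear regression (the data are hypothetical, not the poster's): compute Cook's distance by hand and flag points exceeding the 4/n rule.

```python
import math

def cooks_distance(x, y):
    """Cook's distance for simple linear regression y ~ x (pure-Python sketch)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    beta = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    alpha = ybar - beta * xbar
    resid = [yi - (alpha + beta * xi) for xi, yi in zip(x, y)]
    p = 2                                                # number of coefficients
    mse = sum(e * e for e in resid) / (n - p)
    lev = [1 / n + (xi - xbar) ** 2 / sxx for xi in x]   # leverages h_i
    return [e * e * h / (p * mse * (1 - h) ** 2) for e, h in zip(resid, lev)]

x = [float(i) for i in range(20)]
y = [2.0 * xi + 1.0 for xi in x]
y[10] += 8.0                                 # plant one gross outlier
d = cooks_distance(x, y)
flagged = [i for i, di in enumerate(d) if di > 4 / len(x)]
print(flagged)                               # the planted point at index 10 stands out
```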
PS: I found a similar post while doing research: How to read Cook's distance plots? It is already discussed there better than I could explain it myself.
Cook, R. D. and Weisberg, S. (1984) Residuals and Influence in Regression.
Bollen, Kenneth A.; Jackman, Robert W. (1990). Fox, John; Long, J. Scott, eds. Regression Diagnostics: An Expository Treatment of Outliers and Influential Cases. Modern Methods of Data Analysis.
|
39,630
|
Identifying outliers in the data
|
The method is correct IN THIS CASE but not in general. You need a sufficient model to detect an anomaly (read: underlying signal and latent deterministic structure).
Your approach assumes a particular kind of model, i.e. a model with time and powers of time as exogenous predictors, premising no pulses/level shifts/seasonal pulses, and then attempts to find what had been assumed to be absent. The thread "Why is my high degree polynomial regression model suddenly unfit for the data?" should be reviewed, and especially @whuber's comment on "Does the p-value in the incremental F-test determine how many trials I expect to get correct?", reflecting that "fitting polynomials to data can be a deceptively very poor approach".
The problem, or opportunity, is to simultaneously identify both, i.e. a possible hybrid model using both memory and the waiting-to-be-identified latent deterministic structure.
I used my favorite time series package (specifically authored to do both, with my help) and obtained results for this trivial case (the package's output plots are not reproduced here). It concluded as you did regarding the pulse and the level shift.
In summary, the noise model can be other than white noise, i.e. not (0,0,0) in ARIMA notation, and the baseline can have break points in trend, not simply the curvature implied by time, time squared, et al.
A solid reference on detecting anomalies is here: http://docplayer.net/12080848-Outliers-level-shifts-and-variance-changes-in-time-series.html covering the impact of error variance changes while incorporating memory as it searches for anomalies.
Standard regression diagnostics assume uncorrelated errors while attempting to provide clarity about anomalies. If your error process is white noise then simpler procedures often work, BUT when you have time series data this is often not the case, thus more powerful/correct/robust approaches are needed.
|
Identifying outliers in the data
|
The method is correct IN THIS CASE but not in general . You need a sufficient model to detect an anomaly ( read underlying signal and latent deterministic structure ).
You approach assumes the kind of
|
Identifying outliers in the data
The method is correct IN THIS CASE but not in general . You need a sufficient model to detect an anomaly ( read underlying signal and latent deterministic structure ).
You approach assumes the kind of model i.e. a model with time and versions of time as exogenous predictors premising no pulses/level shifts/seasonal pulses and attempts to find what had been assumed to be not present. Why is my high degree polynomial regression model suddenly unfit for the data? should ne reviewd and especially @whuber comment Does the p-value in the incremental F-test determine how many trials I expect to get correct? reflecting on "fitting polynomials to data can be a deceptively very poor approach "
The problem or opportunity is to simultaneously identify both i.e. a possible hybrid model using both memory and the waiting to be identified latent deterministic structure.
I used my favorite time series package ( specifically authored to do both with my help) and obtained in this trivial case and . It concluded as you did regarding the pulse and the level shift.
In summary the noise model can be other than white noise i.e. not (0,0,0) in arima notation. The baseline can have break points in trend not simply curvature which is implied by time squares et al.
A solid reference to detecting amomalies is here http://docplayer.net/12080848-Outliers-level-shifts-and-variance-changes-in-time-series.html covering the impact of error variance changes while incorporating memory as it searches for anomalies.
Standard Regression Diagnostics assume uncorrelated errors while attempting to provide clarity to anomalies. If you have an error process that is white noise then simpler procedures often work BUT when you have time series data this is often not the case thus more powerful/correct/robust approaches are needed.
|
39,631
|
Is the sum of p-value and specificity 1
|
There's some confusion between $\alpha$, the type I error rate, and the $p$-value. I'll try to explain these two concepts before turning to specificity.
You consider a test statistic $X$, with left deviations of $X$ going against the null hypothesis $H_0$ (this is a bit unusual, in general right deviations go against $H_0$, but it doesn't matter).
Given some $\alpha \in (0,1)$, usually $\alpha = 0.05$, there's some threshold $a$ such that the test procedure
$$ \text{reject } H_0 \text{ if } X < a $$
has type I error $\alpha$. The relation between $a$ and $\alpha$ is
$P_{H_0}(X < a) = \alpha$.
An equivalent test procedure can be obtained through a $p$-value: given an observed value $x_{obs}$ of the test statistic, the $p$-value is
$$ p = P_{H_0}(X < x_{obs}), $$
and the rejection rule is now "reject $H_0$ if $p < \alpha$".
So $\alpha$ is some known value, fixed when you design your test procedure (usually to $\alpha = 0.05$). The $p$-value $p$ depends on the observed data. It is thus a random variable (with $P_{H_0}(p < \alpha) = \alpha$).
Now what is specificity? Briefly put, it is the probability of not rejecting $H_0$ when $H_0$ is true (cf. Wikipedia), that is
$$ Sp = P_{H_0}(X > a) = 1 - \alpha.$$
So you have
$$ \alpha + Sp = 1.$$
But you don't have $p + Sp = 1$ -- this doesn't make sense: $p$ is a random variable, while $Sp$ is a constant.
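These relations can be made concrete in a minimal numerical sketch, assuming (purely for illustration) that the test statistic $X$ is standard normal under $H_0$:

```python
from scipy.stats import norm

alpha = 0.05
a = norm.ppf(alpha)      # threshold with P_{H0}(X < a) = alpha (left-tailed test)

x_obs = -1.3             # an observed value of the test statistic
p = norm.cdf(x_obs)      # p-value: P_{H0}(X < x_obs)

Sp = 1 - norm.cdf(a)     # specificity: P_{H0}(X > a) = 1 - alpha

# alpha + Sp equals 1, and the rejection rules agree: X < a  <=>  p < alpha;
# but p + Sp is not 1 in general, since p depends on the data
```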
|
39,632
|
Is the sum of p-value and specificity 1
|
You seem to be referring to type I and type II errors, which are among the fundamental concepts of null hypothesis testing. If $H_0$ is true, we expect to see an $\alpha$ proportion of false positives (incorrect rejections); this is the type I error rate. On the other hand, if $H_0$ is false, we expect a $\beta$ proportion of false negatives (failures to reject); this is the type II error rate, and $1-\beta$ is the statistical power of the test.
$$
\begin{array}{cc}
& \text{Not reject} & \text{Reject} \\
H_0 \,\text{is true} & \text{True negatives} & \text{Type I error} \\
H_0 \,\text{is false} & \text{Type II error} & \text{True positives}
\end{array}
$$
So yes, $\alpha$ controls the rate of false positives when $H_0$ is true. However, as nicely stated in a recent blog entry by Steve Luck,
. . . this is a statement about what happens when the null
hypothesis is actually true. In real research, we don't know whether
the null hypothesis is actually true. If we knew that, we wouldn't
need any statistics! In real research, we have a p value, and we want
to know whether we should accept or reject the null hypothesis. The
probability of a false positive in that situation is not the same as
the probability of a false positive when the null hypothesis is true.
It can be way higher.
Below, I post one of his self-explanatory figures showing the point.
A similar discussion can be found in a recent Twitter thread by F. Perry Wilson (it seems everyone on the Internet is discussing $p$-values lately). Basically, if you don't know whether $H_0$ is true, or at least don't know the probability that $H_0$ is true, then the $p$-value becomes pretty meaningless when interpreted as a probability.
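The first row of the table can be checked by simulation: when $H_0$ really is true, the long-run proportion of rejections at $\alpha = 0.05$ is about 5%. A quick sketch (the one-sample t-test here is my own arbitrary choice of test):

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(1)
n_sims, n, alpha = 5000, 30, 0.05

false_positives = 0
for _ in range(n_sims):
    sample = rng.standard_normal(n)            # H0 is true: the mean really is 0
    if ttest_1samp(sample, 0.0).pvalue < alpha:
        false_positives += 1

fp_rate = false_positives / n_sims             # close to alpha = 0.05
```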
|
39,633
|
Is this a typo/error in Bishop's book
|
Indeed, there is a typo in (9.16); it should read
$$0=\sum_{n=1}^N \gamma(z_{nk}) {\mathbf \Sigma}_k^{-1} ({\mathbf x}_n-{\mathbf \mu}_k)$$Fortunately, this does not impact the next equation (9.17).
As for deriving the conditional MLE of the covariance matrix ${\mathbf \Sigma}_k$, the result and the method are correct. The determinant is accounted for when taking the derivative in ${\mathbf \Sigma}_k$. (If it was ignored, the lhs of (9.19) would be zero.) The steps are the same as in the Normal case, except for the weights $\gamma(z_{nk})$.
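A quick numerical sanity check of the corrected (9.16): with $\mu_k$ set to the responsibility-weighted mean from (9.17), the gradient vanishes. (The data, responsibilities, and $\Sigma_k$ below are arbitrary illustrative values.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 2
x = rng.standard_normal((n, d))                    # data points x_n
gamma = rng.uniform(0.1, 1.0, size=n)              # responsibilities gamma(z_nk)
Sigma_inv = np.linalg.inv(np.array([[2.0, 0.3],
                                    [0.3, 1.0]]))  # Sigma_k^{-1} (symmetric)

# (9.17): mu_k is the responsibility-weighted mean
mu = (gamma[:, None] * x).sum(axis=0) / gamma.sum()

# corrected (9.16): sum_n gamma(z_nk) Sigma_k^{-1} (x_n - mu_k) -- should be 0
grad = (gamma[:, None] * ((x - mu) @ Sigma_inv)).sum(axis=0)
```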
|
39,634
|
What is wrong with one tailed z-tests for a proportion?
|
My comment was specifically about your articulation of the appropriate one-sample one-sided (aka one-tailed) null hypothesis (not about one-sided tests per se) which, for proportions, should either be $H_{0}: p \ge p_{0}$ with $H_{1}: p < p_{0}$, or $H_{0}: p \le p_{0}$ with $H_{1}: p > p_{0}$. Bear in mind that null hypotheses are articulated before you evaluate your data for directionality of a rejection decision.
The null hypothesis you posed in your answer was of the form $H_{0}: p = p_{0}$, which has as its proper alternative $H_{1}: p \ne p_{0}$, since by definition an alternative hypothesis corresponds to the complementary event of the null hypothesis. However, you proposed the alternative $H_{1}: p < p_{0}$, which is not the complement of your null. Indeed, it does not even correspond to the alternative hypothesis expressed on the site you linked to in your answer. The crux of the issue is that it may truly be the case that $p>p_{0}$, but this state of nature fits within neither your $H_{0}$ nor your $H_{1}$; since the sample space of $p$ is not fully covered by your null and alternative together, they cannot be well formed.
To be super explicit: Nothing is wrong with one-sided one-sample inequality tests, but you articulated the null hypothesis incorrectly for such a test.
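For concreteness, here is a sketch of a correctly specified one-sided test of $H_{0}: p \ge p_{0}$ against $H_{1}: p < p_{0}$; the helper name and the use of the normal approximation are my own choices, not from the answer:

```python
import numpy as np
from scipy.stats import norm

def one_sided_prop_ztest(successes, n, p0, alternative="less"):
    """z-test of H0: p >= p0 vs H1: p < p0 ('less'), or the mirror image ('greater')."""
    p_hat = successes / n
    se = np.sqrt(p0 * (1 - p0) / n)     # standard error at the null boundary p = p0
    z = (p_hat - p0) / se
    p_value = norm.cdf(z) if alternative == "less" else norm.sf(z)
    return z, p_value

z, p = one_sided_prop_ztest(40, 100, 0.5)   # observed 40% vs H0: p >= 0.5
```

Note the rejection region sits entirely in the left tail, matching the direction stated in $H_1$ before seeing the data.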
|
39,635
|
What is wrong with one tailed z-tests for a proportion?
|
One problem with your suggestion is that the normal approximation for proportions is only reasonable in certain circumstances: a large sample, and an observed proportion well away from the boundaries of 0 and 1. Neither of those circumstances is specified in the original question. See this question: Testing equality of two binomial proportions (one near 100%)
I have previously sparked some discussion by suggesting that the normal approximation method for confidence intervals of proportions be omitted from textbooks. You can read it here: What statistical methods are archaic and should be omitted from textbooks?
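To see how badly the approximation can fail near the boundary, compare it with an exact binomial test. The numbers below are illustrative choices of mine (and `binomtest` requires SciPy >= 1.7):

```python
import numpy as np
from scipy.stats import binomtest, norm

n, k, p0 = 30, 29, 0.999        # testing H0: p >= 0.999 with 29/30 successes
p_hat = k / n

# one-sided normal approximation
z = (p_hat - p0) / np.sqrt(p0 * (1 - p0) / n)
p_normal = norm.cdf(z)

# exact one-sided binomial test
p_exact = binomtest(k, n, p0, alternative="less").pvalue

# p_normal is astronomically small, while p_exact is about 0.03:
# near the boundary the normal approximation is wildly misleading
```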
|
39,636
|
Confused about Dropout implementations in Tensorflow
|
This is because you're using the Adam optimizer. The Adam optimizer is a kind of momentum optimizer (specifically, it tracks the first and second moments of the updates), so an update will still occur for all model parameters even though dropout is present.
|
39,637
|
Confused about Dropout implementations in Tensorflow
|
Dropout:
Dropout in Tensorflow is implemented slightly differently from the original paper: instead of scaling the weights by 1/(1-p) after updating them (where p is the dropout rate), the neuron outputs (e.g., the outputs from ReLUs) are scaled by 1/(1-p) during the forward and backward passes. In this manner, the weights do not have to be scaled after updating.
Stochastic Gradient Descent (SGD) optimizer:
SGD operates by shuffling the training data for each epoch and updating the weights using the negative gradient multiplied by the learning rate. Note that it does not take into account any past weight updates.
Optimization methods with adaptive learning rate:
Adagrad, Adadelta and RMSprop are three optimizers which use previous values of the gradients to adjust the learning rate. Note that, as with SGD, the weight update is still performed using only the current gradient multiplied by this variable learning rate; hence, for a mini-batch of size 1, these methods would not update some of the weights when using dropout.
Optimization methods using momentum:
Methods such as Nesterov accelerated gradient (NAG) and Adam include a momentum term, which takes previous gradients into account in the current update. Therefore, for a mini-batch of size 1, these methods would fail to update some of the weights only during the first few data samples; after that, most likely, all the neurons will have been activated and every weight will have a history of non-zero gradients.
I would suggest verifying the weight updates for the first few training data samples with a mini-batch size of 1. This would clarify the impact of these different optimization methods on the current weight update when using dropout. Note that for large mini-batch sizes the gradient is averaged over the mini-batch; therefore, even after the first mini-batch, most likely all the weight updates will be non-zero.
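The momentum point can be seen in a minimal NumPy sketch of Adam's bias-corrected update rule (a sketch of the rule from the Adam paper, not TensorFlow's actual code): after one non-zero gradient, a subsequent zero gradient (e.g. from a dropped unit) still produces a non-zero weight update.

```python
import numpy as np

def adam_step(m, v, grad, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; returns (weight_delta, m, v)."""
    m = b1 * m + (1 - b1) * grad           # first-moment (momentum) estimate
    v = b2 * v + (1 - b2) * grad ** 2      # second-moment estimate
    m_hat = m / (1 - b1 ** t)              # bias corrections
    v_hat = v / (1 - b2 ** t)
    return -lr * m_hat / (np.sqrt(v_hat) + eps), m, v

m = v = 0.0
d1, m, v = adam_step(m, v, grad=1.0, t=1)  # step 1: ordinary non-zero gradient
d2, m, v = adam_step(m, v, grad=0.0, t=2)  # step 2: zero gradient (unit dropped)
# d2 is still non-zero: the moment estimates carry the gradient history forward
```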
|
39,638
|
Berry Esseen Theorem for Infinite Absolute Third Moment
|
The Berry-Esseen theorem requires the finiteness of the third moment. A generalization of the Berry-Esseen inequality that does not can be found as Theorem 5 on page 112 of
Petrov, V. V. (1975). Sums of independent random variables (Vol. 82). Springer Science & Business Media., doi:10.1007/978-3-642-65809-9.
For the case of i.i.d. random variables with mean zero and variance 1, the theorem simplifies to the following:
Let $F_n(x) = P\left(n^{-1/2}\sum_{i=1}^n X_i<x\right)$. Let $g$ be a function that is non-negative, even, and non-decreasing on the interval $x>0$, and such that $x/g(x)$ is non-decreasing on $x>0$. If $E[X_1^2g(X_1)]<\infty$, then
$$\sup_x \left|F_n(x) - \Phi(x)\right| \leq \frac {A}{g(\sqrt{n})}E[X_1^2g(X_1)]$$ for some universal $A>0$.
The result can be extended to variables with non-zero and different means, and different variances (see chapter 11 of A. DasGupta's 2008 book Asymptotic theory of statistics and probability (Google Books))
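As a check on the statement, taking $g(x)=|x|$ (which satisfies all the stated conditions) recovers the classical Berry-Esseen bound:

```latex
% With g(x) = |x|:  E[X_1^2 g(X_1)] = E|X_1|^3  and  g(\sqrt{n}) = \sqrt{n},  so
\sup_x \left|F_n(x) - \Phi(x)\right| \leq \frac{A\, E|X_1|^3}{\sqrt{n}},
% which is the standard Berry-Esseen inequality for the finite-third-moment case.
```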
|
39,639
|
What is "unit" standard deviation?
|
It means you are converting your data features from its original units (miles, dollars, elapsed time,...) to units of standard deviation. As you requested follows a very simple example:
Suppose you want to predict house prices from two features: number of bedrooms (integer unit) and size (in square meters), like the fictitious data below:
import numpy as np
X = np.array([[1, 65], [3, 130], [2, 80], [2, 70], [1, 50], [2, 75]])
Notice that the features have very different means and standard deviations:
print("mean={}, std={}".format(X.mean(axis=0), X.std(axis=0)))
Outputs: mean=[ 1.83333333 78.33333333], std=[ 0.68718427 24.94438258]
Notice that the feature size has mean and std more than 30x bigger than number of bedrooms; this produces distortions in some algorithms' calculations (like neural nets, SVM, kNN, etc.) where features with larger values completely dominate those with smaller values. To solve that, a common and very effective practice is to transform the data to units of standard deviation with zero mean, that is, you subtract the mean and divide by the standard deviation, like below:
X_t = (X - X.mean(axis=0))/X.std(axis=0)
The variable X_t (X transformed) contains your features in unit standard deviations with zero mean, printing X_t you get:
array([[-1.21267813, -0.53452248],
[ 1.69774938, 2.07127462],
[ 0.24253563, 0.06681531],
[ 0.24253563, -0.33407655],
[-1.21267813, -1.13586028],
[ 0.24253563, -0.13363062]])
Look how the numbers in both features have all the same magnitude. If you print X_t mean and std now you get
mean=[ 1.11022302e-16 2.08166817e-16], std=[ 1. 1.]
as expected.
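One practical point worth adding (my own illustration, not part of the answer): new or test data must be standardized with the mean and std computed on the training data, not its own:

```python
import numpy as np

X_train = np.array([[1, 65], [3, 130], [2, 80],
                    [2, 70], [1, 50], [2, 75]], dtype=float)
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)

def standardize(X, mu, sigma):
    """Convert features to units of the training-set standard deviation."""
    return (X - mu) / sigma

X_new = np.array([[2, 90.0]])              # a house not seen during training
X_new_t = standardize(X_new, mu, sigma)    # roughly [[0.24, 0.47]]
```

Reusing the training-set statistics keeps new observations on the same scale the model was fitted on.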
|
39,640
|
Warnings during WAIC computation: how to proceed?
|
I'm happy that you care about these diagnostic messages. I'm responsible for that specific warning message. WAIC doesn't have a good way to diagnose its reliability, and that 0.4 threshold is empirically chosen. The Pareto k diagnostic in PSIS-LOO is much better. You did not mention the specific k values, but if they are larger than 1, then theory and experiments show that the error can be arbitrarily large (see Aki Vehtari, Andrew Gelman and Jonah Gabry (2017). Pareto smoothed importance sampling. https://arxiv.org/abs/1507.02646). WAIC and PSIS-LOO are connected so that if PSIS-LOO fails then WAIC fails even more (Aki Vehtari, Andrew Gelman and Jonah Gabry (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. In Statistics and Computing, 27(5):1413–1432, https://arxiv.org/abs/1507.04544). Thus, if even one p_waic term can have arbitrarily large error, then the total WAIC can have arbitrarily large error and should not be trusted for final results.
Since you have a large difference in WAIC, it's likely that the difference would be similar with a more accurate/robust approach (e.g. k-fold-CV), but you cannot be sure without computing that more accurate/robust result. In your case it is likely that WAIC and PSIS-LOO fail because you have a flexible model and some of the observations are highly influential, that is, the full-data posterior and the leave-one-out posterior are so different that using the full posterior as the proposal distribution in importance-sampling LOO fails. WAIC instead uses a Taylor series approximation, which also fails in the case of highly influential observations (which explains why p_waic is used as the ad hoc diagnostic).
|
39,641
|
Prove that $\mathrm{Cov}(x^TAx,x^TBx) = 2 \mathrm{Tr}(A \Sigma B \Sigma) + 4 \mu^TA \Sigma B \mu$
|
We have
\begin{align}
\operatorname{Var}(x^T(A+B)x)&=\operatorname{Var}(x^TAx+x^TBx)
\\&=\operatorname{Var}(x^TAx)+\operatorname{Var}(x^TBx)+2\operatorname{Cov}(x^TAx,x^TBx)
\end{align}
So, $$\operatorname{Cov}(x^TAx,x^TBx)=\frac12\left[\operatorname{Var}(x^T(A+B)x)-\operatorname{Var}(x^TAx)-\operatorname{Var}(x^TBx)\right]\tag{1}$$
Note that for the calculation of variance and covariance when $x$ is multivariate normal, it is assumed that $A$ and $B$ are symmetric, which also makes $A+B$ symmetric.
For $y\sim N(0,I)$, it is shown here that $$\operatorname{Var}(y^TAy)=2\operatorname{tr}(A^2)\tag{2}$$
Now suppose $x\sim N(\mu,\Sigma)$ where $\Sigma$ is positive definite.
Then there exists a nonsingular matrix $C$ such that $\Sigma=CC^T$.
And $$x\sim N(\mu,\Sigma)\implies y=C^{-1}(x-\mu)\sim N(0,I)$$
So, $$x^TAx=(\mu+Cy)^TA(\mu+Cy)=\mu^TA\mu+2\mu^TACy+y^T(C^TAC)y$$
Therefore, noting that the linear form $\mu^TACy$ and the quadratic form $y^T(C^TAC)y$ are uncorrelated (odd-order central moments of a standard normal vector vanish),
\begin{align}
\operatorname{Var}(x^TAx)&=4\operatorname{Var}(\mu^TACy)+\operatorname{Var}\left(y^T(C^TAC)y\right)
\\&=4\mu^T(AC)(AC)^T\mu + 2\operatorname{tr}((C^TAC)(C^TAC))&\small\left[\text{ using }(2)\right]
\\&=4\mu^T A\Sigma A\mu + 2\operatorname{tr}(C^TA\Sigma AC)
\\&=4\mu^T A\Sigma A\mu + 2\operatorname{tr}(A\Sigma A\Sigma)
&\small\left[\because \,\operatorname{tr}(AB)=\operatorname{tr}(BA)\right] \tag{3}
\end{align}
Now it follows from $(1)$ and $(3)$ that
$$\operatorname{Cov}(x^TAx,x^TBx)=4\mu^TA\Sigma B\mu + 2\operatorname{tr}(A\Sigma B\Sigma)$$
For a direct calculation of the covariance and hence the variance, one may refer to Graybill's Matrices with Applications in Statistics, 2nd edition (1983).
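The identity can also be checked numerically by Monte Carlo; the parameter values below are arbitrary illustrative choices (with $\Sigma$ positive definite and $A$, $B$ symmetric, as the derivation requires):

```python
import numpy as np

mu = np.array([1.0, -0.5, 2.0])
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])          # positive definite
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])              # symmetric
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 2.0, 0.0],
              [1.0, 0.0, 2.0]])              # symmetric

# closed form: 2 tr(A Sigma B Sigma) + 4 mu^T A Sigma B mu
theory = 2 * np.trace(A @ Sigma @ B @ Sigma) + 4 * mu @ A @ Sigma @ B @ mu

rng = np.random.default_rng(0)
x = rng.multivariate_normal(mu, Sigma, size=400_000)
qa = np.einsum('ni,ij,nj->n', x, A, x)       # x^T A x for each draw
qb = np.einsum('ni,ij,nj->n', x, B, x)       # x^T B x for each draw
mc = np.cov(qa, qb)[0, 1]                    # sample covariance, close to theory
```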
|
39,642
|
Prove that $\mathrm{Cov}(x^TAx,x^TBx) = 2 \mathrm{Tr}(A \Sigma B \Sigma) + 4 \mu^TA \Sigma B \mu$
|
Just trying to provide a sketch for the proof, the key here is to show the following equations: $$\begin{align}\operatorname{E}[x'Ax]&=\operatorname{tr}(A\Sigma)+\mu'A\mu, \label{1}\tag{1}\\\operatorname{E}[x'Axx'Bx]&=2\operatorname{tr}(A\Sigma B\Sigma)+4\mu'A\Sigma B\mu+(\operatorname{tr}(A\Sigma)+\mu'A\mu)(\operatorname{tr}(B\Sigma)+\mu'B\mu). \label{2}\tag{2}\end{align}$$ The desired result follows immediately since by definition $$\begin{align}\operatorname{Cov}(x'Ax, x'Bx)&=\operatorname{E}[(x'Ax-E[x'Ax])(x'Bx-\operatorname{E}[x'Bx])']\\&=\operatorname{E}[x'Axx'Bx]-\operatorname{E}[x'Ax]\operatorname{E}[x'Bx].\end{align}$$
The proof of equation ($\ref{1}$) is simple and can be found in many introductory texts. Equation ($\ref{2}$) is the real deal here, but fortunately a proof can be found in Proofs Section 5 of the Matrix Reference Manual. Check out 5.18 and 5.19 for Isserlis' theorem, and finally 5.28, where they derive an expression for a much more general form: $$\operatorname{E}[(Ax-a)'(Bx-b)(Cx-c)'(Dx-d)].$$
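Both equations can be sanity-checked by Monte Carlo simulation. A Python/NumPy sketch follows; the matrices and mean are arbitrary toy choices, and the comparisons must be read with loose tolerances since the estimates are stochastic.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 400_000

# Arbitrary symmetric A, B, positive-definite Sigma, and mean mu (toy values)
A = np.array([[2.0, 0.5, 0.0], [0.5, 1.0, 0.3], [0.0, 0.3, 1.5]])
B = np.array([[1.0, -0.2, 0.4], [-0.2, 2.0, 0.1], [0.4, 0.1, 0.5]])
L = np.array([[1.0, 0.0, 0.0], [0.3, 1.0, 0.0], [-0.2, 0.5, 1.0]])
Sigma = L @ L.T
mu = np.array([0.5, -1.0, 0.25])

x = rng.multivariate_normal(mu, Sigma, size=n)
qA = np.einsum('ni,ij,nj->n', x, A, x)   # x'Ax for each draw
qB = np.einsum('ni,ij,nj->n', x, B, x)   # x'Bx for each draw

# Equation (1): E[x'Ax] = tr(A Sigma) + mu'A mu
e1 = np.trace(A @ Sigma) + mu @ A @ mu
# Equation (2) minus E[x'Ax]E[x'Bx] gives the covariance
cov_theory = 2 * np.trace(A @ Sigma @ B @ Sigma) + 4 * mu @ A @ Sigma @ B @ mu

print(qA.mean(), e1)                      # should be close
print(np.cov(qA, qB)[0, 1], cov_theory)   # should be close
```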
|
39,643
|
Kernels in Gaussian Processes
|
Notation / setting
We are considering a GP regression model:
\begin{equation}
y_i = f(x_i) + \epsilon_i
\end{equation}
where $y_i\in \mathbb{R}$, $x_i \in \mathbb{R}^d$, and $f$ is a Gaussian process (whose realizations are functions $f:\mathbb{R}^d\rightarrow \mathbb{R}$),
\begin{equation}
f \sim \mathrm{GP}(m(x_i), \kappa(x_i,x_j)).
\end{equation}
$n$ datapoints $(y_1,x_1), (y_2,x_2), (y_3,x_3),\ldots, (y_n,x_n)$ are given. (I use $\kappa$ to distinguish the function from the matrices $K(\cdot,\cdot)$ that contain values of $\kappa$ evaluated at certain points; the question denotes both by $K$.)
How to handle $d$-dimensional inputs
The question covers computing the posterior predictive distribution for a test point (or $p$ test points) in the case $d=1$ and asks how to extend to the general $d=2,3,\ldots$.
Answer: nothing changes and the formulas from the one-dimensional case work as well in this case. Note that $m$ is then a function from $\mathbb{R}^d$ to $\mathbb{R}$ and $\kappa$ a function from $\mathbb{R}^d \times \mathbb{R}^d$ to $\mathbb{R}$.
So, for example, the matrix denoted by $K(x,x)$ in the question is an $n\times n$ matrix for which $K(x,x)_{i,j} = \kappa(x_i, x_j)$ ($x_i$ and $x_j$ are $d$-dimensional, but since $\kappa$ maps two $d$-dimensional vectors to a scalar, $\kappa(x_i, x_j)$ is a scalar). Similarly for $K(x,x')$ and $K(x',x')$, where $x'$ are the test points.
Thus, the dimensions of the matrices in the predictive covariance equation are $(p\times p) - (p \times n)\,(n \times n)\,(n \times p)$, independent of whether the elements of the matrices are obtained by evaluating a function $\kappa(\cdot,\cdot)$ whose arguments are $1$-dimensional or a function $\kappa(\cdot, \cdot)$ whose arguments are $d$-dimensional. In fact, the inputs could even be in some space other than $\mathbb{R}^d$ (such as when we have a categorical predictor) as long as a positive-definite covariance function can be defined.
An extra remark about the SE kernel appearing in the question
The question mentions the SE kernel
\begin{equation}
k_{f}(x_{i},x_{j}) = \sigma^{2}\exp\!\Big(-\frac{1}{2\ell^{2}}\sum_{j=1}^{q}(x_{i,j}-x_{k,j})^{2}\Big)
\end{equation}
Note that this is already a function from $\mathbb{R}^q \times \mathbb{R}^q$ to $\mathbb{R}$ (with scalar inputs there would be no "$x_{i,j}$ and $x_{k,j}$" for different values of $j$). And $q$ should be $d$ if $d$ is the dimension of the inputs.
Optionally, the length scale $\ell$ could be made different for each input dimension as $\ell_{j}$, such that the term $\frac{1}{2\ell_{j}^{2}}$ is instead placed inside the summation.
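As an illustration, here is a minimal Python/NumPy sketch of the SE kernel with $d$-dimensional inputs, including the optional per-dimension length scales; the function name and the toy inputs are made up for the example.

```python
import numpy as np

def se_kernel(X1, X2, sigma2=1.0, lengthscales=1.0):
    """Squared-exponential kernel for d-dimensional inputs.

    X1: (n1, d), X2: (n2, d); lengthscales may be a scalar or a
    length-d vector (one ell_j per input dimension).
    Returns the (n1, n2) matrix of kappa(x_i, x_j) values.
    """
    Z1 = X1 / lengthscales            # broadcasting handles scalar or vector ell
    Z2 = X2 / lengthscales
    sq = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(axis=-1)
    return sigma2 * np.exp(-0.5 * sq)

# Toy 2-dimensional inputs: the formulas are unchanged from d = 1
X = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, -1.0]])
K = se_kernel(X, X, sigma2=2.0, lengthscales=np.array([1.0, 0.3]))

print(K.shape)                        # (3, 3): n x n regardless of d
print(np.allclose(K, K.T))            # symmetric
print(np.allclose(np.diag(K), 2.0))   # kappa(x, x) = sigma^2
```

The resulting matrix is exactly the $K(x,x)$ of the predictive equations; nothing about those equations changes with $d$.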
|
39,644
|
Joint distribution in layman's terms
|
As a concrete example, suppose I toss a coin and roll a die one after the other. As you know, there is a probability distribution associated with the outcomes of both (discrete uniform distributions, i.e. every possible outcome is equally likely). I can use these to find e.g. P(coin lands heads) or P(I roll a six).
The joint probability distribution is just a distribution over combinations of these events. So it tells me about P(I roll a six and the coin lands heads), or P(the coin lands tails and I roll a two).
As a continuous example, IQs are distributed normally with mean 100 and adult male heights are distributed normally with mean about $5' 10''$. The joint distribution is then a distribution over (height, IQ) pairs, so it tells you about the probability of having both a given height and a given IQ, rather than just one or the other. So armed with the joint distribution, I can now ask questions like 'what is the probability that a man off the street has an IQ between 90 and 110 and is between 5'9'' and 5'11'' tall?'
In short, you make a new event, which is the combination of two events you already know about. The joint distribution tells you about the probability of the new event.
Edit:
As noted in the comments, when you are computing joint distributions, it's very important to think about whether the individual components are independent. Say we want to find the joint distribution over house prices and number of bedrooms in your neighbourhood. The result will still be a distribution over (house price, number of bedroom) pairs, just as above. However, you will need to explicitly account for the fact that these are not independent by using a conditional probability. This is what makes joint distributions really useful -- they account for the influence of one component on the other.
Note also that in this example, the joint density is mixed, i.e. one component is continuous (house price) and one is discrete (number of bedrooms). We can still use it to ask, 'What is the probability that a house has 4 bedrooms and is worth between £400,000 and £500,000?'
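The coin-and-die example can be written out directly. A short Python sketch that enumerates the joint distribution (the two experiments are independent here, so each joint probability is just the product of the marginals):

```python
from fractions import Fraction
from itertools import product

coin = {'H': Fraction(1, 2), 'T': Fraction(1, 2)}
die = {k: Fraction(1, 6) for k in range(1, 7)}

# Joint distribution over (coin, die) outcomes: product of the marginals
joint = {(c, d): pc * pd for (c, pc), (d, pd) in product(coin.items(), die.items())}

print(joint[('H', 6)])                                    # P(heads and a six) = 1/12
print(sum(joint.values()))                                # probabilities sum to 1
print(sum(p for (c, d), p in joint.items() if c == 'H'))  # marginal recovered: 1/2
```

For dependent components (like house price and bedrooms), the last step would instead use conditional probabilities, `P(price, bedrooms) = P(bedrooms) * P(price | bedrooms)`.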
|
39,645
|
What parameters to optimize in KNN?
|
As you correctly recognise, when it comes to $k$-NN applications people almost always focus on optimising $k$. A standard paper on the matter is Complete Cross-Validation for Nearest Neighbor Classifiers by Mullin and Sukthankar.
Having said that, this is half of the story (or one third, actually, if you take $k$-NN literally). The nearest-neighbour part means that we employ some notion of near, i.e. we use some distance metric to quantify similarity and thus define neighbours and the notion of near in general. This choice is most probably of equal if not greater importance than $k$. To give an example of an obviously wrong metric: assume we want to cluster cities geographically and we base their proximity on lexicographic order; Athens, Greece and Athens, Georgia are extremely close to each other, while Athens, Greece and Tirana, Albania (*) are far apart; obviously this metric is useless for our intended purposes. There are many possible metrics; to mention some commonly used ones: Euclidean distance, Chebyshev distance, Mahalanobis distance, Hamming distance and cosine similarity. We therefore need to either derive/select a distance metric based on our prior knowledge of the data or learn a good metric from our data if possible. Distance metric learning is a task in its own right. Some nice first papers on the matter are An Efficient Algorithm for Local Distance Metric Learning by Yang et al. and Distance Metric Learning for Large Margin Nearest Neighbor Classification by Weinberger et al. The vast majority of applications employ Euclidean distance (or cosine similarity if they are NLP applications), but this might not be the most appropriate choice for the data at hand.
So, first think what is a reasonable metric of similarity between the data to be clustered and then focus on the $k$.
(*) For the less-aware of European geography: Albania and Greece are adjacent to each other.
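A tiny illustration of the point (Python sketch; the query point and candidates are made up so that the two metrics disagree):

```python
import math

query = (0.0, 0.0)
points = {'a': (3.0, 3.0), 'b': (4.0, 0.0)}

def euclidean(p, q):
    return math.dist(p, q)

def chebyshev(p, q):
    return max(abs(pi - qi) for pi, qi in zip(p, q))

nn_euclid = min(points, key=lambda k: euclidean(points[k], query))
nn_cheby = min(points, key=lambda k: chebyshev(points[k], query))

print(nn_euclid)  # 'b': sqrt(18) ≈ 4.24 to a, but only 4.0 to b
print(nn_cheby)   # 'a': max-coordinate distance is 3 to a, 4 to b
```

Same data, same $k=1$, different neighbour; this is why the metric deserves at least as much attention as $k$.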
|
39,646
|
What parameters to optimize in KNN?
|
For better results, normalizing the data to the same scale is highly recommended; generally, the normalization range considered is between 0 and 1. KNN is not suitable for high-dimensional data; in such cases, the dimensionality needs to be reduced to improve performance. Also, handling missing values will help improve results.
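A minimal sketch of the 0-to-1 (min-max) normalization mentioned above, applied per feature (Python; the function name and toy data are made up for the example):

```python
def min_max_scale(rows):
    """Scale each feature (column) to [0, 1]; assumes max > min per column."""
    scaled = []
    for col in zip(*rows):  # iterate over features
        lo, hi = min(col), max(col)
        scaled.append([(v - lo) / (hi - lo) for v in col])
    return [list(row) for row in zip(*scaled)]  # back to row-major

# Features on wildly different scales (e.g. income vs. age):
data = [[30_000, 25], [60_000, 40], [90_000, 55]]
print(min_max_scale(data))  # [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]
```

Without this step, the large-scale feature would dominate any distance computation in KNN.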
|
39,647
|
MAR vs. MNAR: how can I decide?
|
MAR means that the probability of a value being missing on variable $x$ is independent of the (possibly unobserved) value of $x$ itself, given the observed data. So to empirically differentiate between MAR and NMAR you would need to know the values of $x$ when $x$ is missing, which we obviously do not have. Sometimes there is something about the design of the study or the reason why the values are missing that makes it reasonable to assume MAR. In other cases we know a bit about the reasons for missingness, and it strongly suggests NMAR. Most of the time, we just have to make an assumption and hope for the best...
|
39,648
|
MAR vs. MNAR: how can I decide?
|
To Maarten's answer, I would add that MAR and MNAR are not really hard distinctions. Rather, it may be better to think of a continuum from MAR to MNAR. There will almost always be some degree of dependence between the probability of missingness on $x$ and the values of $x$ itself: the question is really how problematic is this dependence for your purposes? One thing that may help is to try to predict the probability of missingness using auxiliary variables. Including such variables in your multiple imputation can effectively transform MNAR into MAR. Also, to answer your other question, you can indeed impute categorical variables, although you may need to explicitly declare them as such.
|
39,649
|
How to determine significant associations in a mosaic plot?
|
The formula for the standardized residuals is:
$$\begin{align}\text{Pearson's residuals}\,&=\,\frac{\text{Observed - Expected}}{ \sqrt{\text{Expected}}}\\
d_{ij}&=\frac{n_{ij}-m_{ij}}{\sqrt{m_{ij}}}
\end{align}$$
where $m_{ij} = E( f_{ij})$ is the expected frequency of the $i$-th row and the $j$-th column.
The sum of squared standardized residuals is the chi square value.
From Extending Mosaic Displays: Marginal, Partial, and Conditional Views of Categorical Data by Michael Friendly:
Under the assumption of independence, these values roughly correspond to two-tailed probabilities $p < .05$ and $p < .0001$ that a given value of $| d_{ij} |$ exceeds $2$ or $4$.
Notice the following footnote:
For exploratory purposes, we do not usually make adjustments (e.g., Bonferroni) for multiple tests because the goal is to display the pattern of residuals in the table as a whole. However, the number and values of these cutoffs can be easily set by the user.
We are dealing with a multi-way table, in reference to which the R documentation for the mosaicplot package states:
Extended mosaic displays show the standardized residuals of a loglinear model of the counts by the color and outline of the mosaic's tiles. (Standardized residuals are often referred to a standard normal distribution.) Negative residuals are drawn in shades of red and with broken outlines; positive ones are drawn in blue with solid outlines.
The fact that this is a three-way contingency table complicates the interpretation, which is very nicely explained in @rolando2's answer.
Here is a simulation with a made-up table that resembles the OP to clarify the calculations:
tab_df = data.frame(expand.grid(
age = c("15-24", "25-39", ">40"),
attitude = c("no","moderate"),
memory = c("yes", "no")),
count = c(1,4,3,1,8,39,32,36,25,35,32,38) )
(tab = xtabs(count ~ ., data = tab_df))
, , memory = yes
attitude
age no moderate
15-24 1 1
25-39 4 8
>40 3 39
, , memory = no
attitude
age no moderate
15-24 32 35
25-39 36 32
>40 25 38
summary(tab)
Call: xtabs(formula = count ~ ., data = tab)
Number of cases in table: 254
Number of factors: 3
Test for independence of all factors:
Chisq = 78.33, df = 7, p-value = 3.011e-14
require(vcd)
mosaic(~ memory + age + attitude, data = tab, shade = T)
expected = mosaic(~ memory + age + attitude, data = tab, type = "expected")
expected
# Finding, as an example, the expected counts in >40 with memory and moderate att.:
over_forty = sum(3,39,25,38)
mem_yes = sum(1,4,3,1,8,39)
att_mod = sum(1,8,39,35,32,38)
exp_older_mem_mod = over_forty * mem_yes * att_mod / sum(tab)^2
# Corresponding standardized Pearson's residual:
(39 - exp_older_mem_mod) / sqrt(exp_older_mem_mod) # [1] 6.709703
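For readers without R at hand, the same expected count and residual can be reproduced with a few lines of Python, mirroring the calculation in the snippet above (all counts come from the made-up table):

```python
import math

# Marginal totals from the table above
over_forty = 3 + 39 + 25 + 38        # age > 40
mem_yes = 1 + 4 + 3 + 1 + 8 + 39     # memory = yes
att_mod = 1 + 8 + 39 + 35 + 32 + 38  # attitude = moderate
total = 254

# Expected count under mutual independence: N * p_age * p_mem * p_att
expected = over_forty * mem_yes * att_mod / total ** 2
residual = (39 - expected) / math.sqrt(expected)

print(round(residual, 6))  # ≈ 6.709703, matching the R output
```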
It is interesting to compare the graphical representation to the results of the Poisson regression, which illustrates perfectly the English interpretation in @rolando2 's answer:
fit <- glm(count ~ age + attitude + memory, data=tab_df, family=poisson())
summary(fit)
Call:
glm(formula = count ~ age + attitude + memory, family = poisson(),
data = tab_df)
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 1.7999 0.1854 9.708 < 2e-16 ***
age25-39 0.1479 0.1643 0.900 0.36794
age>40 0.4199 0.1550 2.709 0.00674 **
attitudemoderate 0.4153 0.1282 3.239 0.00120 **
memoryno 1.2629 0.1514 8.344 < 2e-16 ***
|
How to determine significant associations in a mosaic plot?
|
The formula for the standardized residuals is:
$$\begin{align}\text{Pearson's residuals}\,&=\,\frac{\text{Observed - Expected}}{ \sqrt{\text{Expected}}}\\
d_{ij}&=\frac{n_{ij}-m_{ij}}{\sqrt{m_{ij}}}
\
|
How to determine significant associations in a mosaic plot?
The formula for the standardized residuals is:
$$\begin{align}\text{Pearson's residuals}\,&=\,\frac{\text{Observed - Expected}}{ \sqrt{\text{Expected}}}\\
d_{ij}&=\frac{n_{ij}-m_{ij}}{\sqrt{m_{ij}}}
\end{align}$$
where $m_{ij} = E( f_{ij})$ is the expected frequency of the $i$-th row and the $j$-th column.
The sum of squared standardized residuals is the chi square value.
From Extending Mosaic Displays: Marginal, Partial, and Conditional Views of Categorical Data by Michael Friendly
Under the assumption of independence, these values roughly correspond to two-tailed probabilities $p < .05$ and $p < .0001$ that a given value of $| d_{ij} |$ exceeds $2$ or $4$.
Notice the following footnote:
For exploratory purposes, we do not usually make adjustments (e.g., Bonferroni) for multiple tests because the goal is to display the pattern of residuals in the table as a whole. However, the number and values of these cutoffs can be easily set by the user.
We are dealing with a multi-way table, in reference to which the R documentation for the mosaicplot package states:
Extended mosaic displays show the standardized residuals of a loglinear model of the counts from by the color and outline of the mosaic's tiles. (Standardized residuals are often referred to a standard normal distribution.) Negative residuals are drawn in shaded of red and with broken outlines; positive ones are drawn in blue with solid outlines.
The fact that this is a three-way contingency table complicates the interpretation, which is very nicely explained in @roando2's answer.
Here is a simulation with a made-up table that resembles the OP to clarify the calculations:
tab_df = data.frame(expand.grid(
age = c("15-24", "25-39", ">40"),
attitude = c("no","moderate"),
memory = c("yes", "no")),
count = c(1,4,3,1,8,39,32,36,25,35,32,38) )
(tab = xtabs(count ~ ., data = tab_df))
, , memory = yes
attitude
age no moderate
15-24 1 1
25-39 4 8
>40 3 39
, , memory = no
attitude
age no moderate
15-24 32 35
25-39 36 32
>40 25 38
summary(tab)
Call: xtabs(formula = count ~ ., data = tab)
Number of cases in table: 254
Number of factors: 3
Test for independence of all factors:
Chisq = 78.33, df = 7, p-value = 3.011e-14
require(vcd)
mosaic(~ memory + age + attitude, data = tab, shade = T)
expected = mosaic(~ memory + age + attitude, data = tab, type = "expected")
expected
# Finding, as an example, the expected counts in >40 with memory and moderate att.:
over_forty = sum(3,39,25,38)
mem_yes = sum(1,4,3,1,8,39)
att_mod = sum(1,8,39,35,32,38)
exp_older_mem_mod = over_forty * mem_yes * att_mod / sum(tab)^2
# Corresponding standardized Pearson's residual:
(39 - exp_older_mem_mod) / sqrt(exp_older_mem_mod) # [1] 6.709703
It is interesting to compare the graphical representation to the results of the Poisson regression, which illustrates perfectly the English interpretation in @rolando2 's answer:
fit <- glm(count ~ age + attitude + memory, data=tab_df, family=poisson())
summary(fit)
Call:
glm(formula = count ~ age + attitude + memory, family = poisson(),
data = tab_df)
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 1.7999 0.1854 9.708 < 2e-16 ***
age25-39 0.1479 0.1643 0.900 0.36794
age>40 0.4199 0.1550 2.709 0.00674 **
attitudemoderate 0.4153 0.1282 3.239 0.00120 **
memoryno 1.2629 0.1514 8.344 < 2e-16 ***
|
How to determine significant associations in a mosaic plot?
The formula for the standardized residuals is:
$$\begin{align}\text{Pearson's residuals}\,&=\,\frac{\text{Observed} - \text{Expected}}{\sqrt{\text{Expected}}}\\
d_{ij}&=\frac{n_{ij}-m_{ij}}{\sqrt{m_{ij}}}
\end{align}$$
|
39,650
|
How to determine significant associations in a mosaic plot?
|
This is best interpreted using some specific language. Within the 40+ age group (in the plot, labeled "40-") there is a significant association between the variables memory and attitude. We cite associations between variables, not between values or categories within them (such as "moderate" or "no").
A more specific statement one could make is that, for those 40+ but not for other age groups, "yes" on memory is disproportionately paired with "moderate" on attitude.
We could also say there is an interaction between age and memory as they relate to attitude, or between age and attitude as they relate to memory. Only rarely would one put a variable like age at the end of such a sentence, since age is ordinarily a candidate to be a predictor or cause, not an effect.
All of the above is based on the plot's characterization of each cell using, via a color, a range of Pearson residuals. The plot does not give us sufficient information to further specify the values of each residual. Nor does any individual residual value determine significance in this context. The mosaic plot, being based on a Chi-square test, does not address significance except by yielding a single, overall, "omnibus" p-value.
|
39,651
|
Theoretical: Minimum Number of Support Vectors
|
Yes. The minimum number of support vectors is two for your scenario. You don't need more than two here.
All of the support vectors lie exactly on the margin. Regardless of the number of dimensions or size of data set, the number of support vectors could be as little as 2.
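A quick geometric check (a pure-Python sketch, not from the original answer): with one point per class, the maximum-margin boundary is the perpendicular bisector of the segment joining them, so both points sit exactly on the margin and both are support vectors — two in total.

```python
import math

# Two opposite-class points; the max-margin hyperplane is their
# perpendicular bisector, with normal direction q - p.
p, q = (0.0, 0.0), (2.0, 2.0)
w = (q[0] - p[0], q[1] - p[1])                  # normal direction
mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)    # point on the boundary
half_gap = math.dist(p, q) / 2                  # geometric margin

def dist_to_boundary(pt):
    # distance from pt to the hyperplane through mid with normal w
    num = abs(sum(wi * (xi - mi) for wi, xi, mi in zip(w, pt, mid)))
    return num / math.hypot(*w)

# Both points lie exactly at distance half_gap from the boundary,
# i.e. both are (the only) support vectors.
```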
Reference: https://stackoverflow.com/questions/9480605/what-is-the-relation-between-the-number-of-support-vectors-and-training-data-and
|
39,652
|
Theoretical: Minimum Number of Support Vectors
|
Unfortunately the figure provided in the answer by @SmallChess still has 3 support vectors. The answer is correct - the minimum number of support vectors is 2. The best way to understand this issue is the convex hull model of SVM (for instance http://www.robots.ox.ac.uk/~cvrg/bennett00duality.pdf). In the minimal case there are two convex hulls whose minimum distance is attained between two vertices, one from each hull.
|
39,653
|
Theoretical: Minimum Number of Support Vectors
|
If the training data involves only one class (which is a trivial problem) then there is no support vector at all.
Reference: Learning From Data - A Short Course, by Yaser S. Abu-Mostafa, Malik Magdon-Ismail and Hsuan-Tien Lin. e-Chapter 8, Exercise 8.12 and Page 31.
|
39,654
|
Does variance only work on normally distributed data (as a measure of dispersion)?
|
Your question is a little vague, but no, variance isn't used because of its association with the normal distribution. Most distributions have both a mean and a variance, but not all: some have a mean but no finite variance (Student's t with two degrees of freedom, for example), and some (such as the Cauchy) have no mean and therefore no variance either.
Just for mental clarification on your side, if a distribution has a mean then $\bar{x}\approx\mu$, but if it does not then $\bar{x}\approx\text{nothing}$. That is, it gravitates nowhere, and any calculation just floats around the real number line; it doesn't mean anything. The same is true if you calculate a standard deviation for a distribution that does not have one. It has no meaning.
The variance is a property of a distribution. You are correct in that it can be used to scale the problem, but it is deeper than that. In some theoretical frameworks, it is a measure of our ignorance, or more precisely, uncertainty. In others, it measures how large of an effect chance can have on outcomes.
Although variance is a conceptualization of dispersion, it is an incomplete conceptualization. Both skew and kurtosis further explain how the dispersion operates on a problem.
For many problems in a null hypothesis framework of thinking, the Central Limit Theorem makes the discussion of problems simpler and so it doesn't hurt that there is a linkage between the normal distribution, with its very well defined distributional properties, and the use of the standard deviation. However, this is more true for simple problems than complex ones. This is also less true for Bayesian methods which do not use a null hypothesis and which do not depend on the sampling distribution of the estimator.
The average absolute deviation is a valuable tool in parameter free and distribution free methods, but less valuable for the uniform distribution. If you actually had a bounded uniform distribution, then the mean and the variance are known.
Let me give you a uniform distribution problem that may not be as simple as you think. Consider that a new enemy battle tank has appeared on the battlefield. You do not know how many they have, let alone that they existed. You want to estimate the total number of tanks.
Tanks have serial numbers on their engines, or used to before someone figured this out. The probability of capturing any one specific serial number is $1/N$, where $N$ is the total number of tanks. Of course you do not know $N$, so this is an interesting problem: you can only see the distribution of captured serial numbers, and you cannot tell whether the largest number captured also belongs to the last tank built. It probably does not.
In that case, the mean and standard deviation provide the most powerful tools to solve the problem, despite the intuition that the standard deviation is a bad estimator.
It will be true that it is a bad estimator for certain problems, but you need to learn them on a case by case basis.
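That case-by-case point can be made concrete with a quick simulation (a sketch of two textbook serial-number estimators for this problem, not code from the answer): the moment estimator $2\bar{x}-1$ built from the sample mean versus the maximum-based estimator $m(1+1/k)-1$.

```python
import random

def tank_rmse(n_true=500, k=10, trials=20000, seed=1):
    """Root-mean-square error of two estimators of the number of tanks."""
    rng = random.Random(seed)
    sq_mean = sq_max = 0.0
    for _ in range(trials):
        serials = rng.sample(range(1, n_true + 1), k)  # captured serial numbers
        est_mean = 2 * sum(serials) / k - 1            # moment estimator
        m = max(serials)
        est_max = m + m / k - 1                        # maximum-based estimator
        sq_mean += (est_mean - n_true) ** 2
        sq_max += (est_max - n_true) ** 2
    return (sq_mean / trials) ** 0.5, (sq_max / trials) ** 0.5

rmse_mean, rmse_max = tank_rmse()
# the maximum-based estimator turns out to be markedly more precise here
```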
Statistical tools are chosen based on needs, rules of math and trade-offs between real world costs and limitations and the demands of the problem. Sometimes that is the variance, but sometimes it is not. The best thing to do is to learn why the rules are designed the way they are and that is too long for a posting here.
I would recommend a good practitioners book on non-parametric statistics and if you have had calculus a good introductory practitioners book on Bayesian methods.
|
39,655
|
Does variance only work on normally distributed data (as a measure of dispersion)?
|
First we need to be clear about the distinction between a measure of the variability of a distribution (such as its standard deviation or its mean deviation or its range) and the best way to estimate that measure from a sample. For example, if your distribution is uniform, the best sample estimate of the population mean deviation from the mean is not the sample mean deviation -- actually a fraction of the range is generally much better.
(Of course if you really don't know what distribution you may be dealing with, such considerations may not be much help.)
So why measure population variability by variance?
Variance (and through it, standard deviation) has a very particular property that isn't shared by other measures of variability, which is a very simple form for the variance of sums (and more generally linear combinations) of variables.
When you have independence, the simple form becomes much simpler still.
Specifically, under independence, $\text{Var}(X+Y) = \text{Var}(X) + \text{Var}(Y)$ and because of that the standard deviation is also quite simple in form. The non-independence case is not much more complicated.
Other measures of variability don't have such a simple property.
This makes variance (and hence standard deviation) very attractive ways to measure variability of distributions.
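That additivity is easy to verify numerically (a standard-library sketch with made-up variances):

```python
import random

rng = random.Random(0)
n = 200_000
x = [rng.gauss(0, 2) for _ in range(n)]   # Var(X) = 4
y = [rng.gauss(0, 3) for _ in range(n)]   # Var(Y) = 9, independent of X

def var(v):
    m = sum(v) / len(v)
    return sum((t - m) ** 2 for t in v) / len(v)

s = [a + b for a, b in zip(x, y)]
# Under independence Var(X + Y) = Var(X) + Var(Y) = 13; no comparably
# simple identity holds for, say, the mean absolute deviation.
```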
A second reason is that the mean (which is often seen as a natural location measure) is the location that minimizes a square error loss function -- and when you minimize it, you obtain the variance. Many people see a square-error loss function as natural or useful, and in that case the variance in turn becomes a natural measure of variation.
|
39,656
|
Is there a way to calculate easily $\mathrm{Cov}(X,XY)$?
|
This is a nice problem for testing the development code in the next version of mathStatica.
Note that $\text{Cov}(X, XY) = \mu_{1,1}(X, XY)$ (i.e. the covariance operator is the {1,1} central moment), which is why I am requesting the {1,1} CentralMoment of {X, X Y} ... when the variables are {X,Y}:
where $\mu_{2,1} = E\left[(X-E[X])^2 \;(Y-E[Y])^1\right]$
If $X$ and $Y$ are independent (information not stated in the problem), then the answer simplifies further:
|
39,657
|
Is there a way to calculate easily $\mathrm{Cov}(X,XY)$?
|
My first reaction is that you won't be able to find this value without knowing the dependence structure between $X$ and $Y$. This is further verified by using the law of total covariance, as shown below:
\begin{align*}
Cov(X,XY)& = E[Cov(X,XY\mid X)] + Cov(E[X\mid X],E[XY \mid X])\\
& = E[X^2Cov(1,Y \mid X)] + Cov(X, X\, E[Y \mid X])\\
& = 0 + Cov(X, X\, E[Y \mid X])\\
& = Cov(X, X\, E[Y \mid X]).
\end{align*}
$Y$ goes away due to the expectation, so your claim cannot hold. If you know the expectation of $Y \mid X$ then you can find this. If not, then you won't be able to solve this.
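The identity is easy to check by simulation when the conditional mean is known. Below is a sketch (all names hypothetical) with $E[Y \mid X] = X$ by construction, so $\mathrm{Cov}(X, X\,E[Y\mid X]) = \mathrm{Cov}(X, X^2)$:

```python
import random

rng = random.Random(42)
n = 100_000
x = [rng.gauss(1.0, 1.0) for _ in range(n)]   # nonzero mean so Cov(X, X^2) != 0
y = [xi + rng.gauss(0.0, 0.5) for xi in x]    # E[Y | X] = X by construction

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / len(a)

lhs = cov(x, [xi * yi for xi, yi in zip(x, y)])   # Cov(X, XY)
rhs = cov(x, [xi * xi for xi in x])               # Cov(X, X*E[Y|X]) = Cov(X, X^2)
# lhs and rhs agree up to Monte Carlo error; for X ~ N(1, 1) both are near
# E[X^3] - E[X]E[X^2] = 4 - 2 = 2.
```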
|
39,658
|
Association rules - support, confidence and lift
|
It depends on your task. But usually you want all three to be high.
high support: should apply to a large amount of cases
high confidence: should be correct often
high lift: indicates it is not just a coincidence
Consider e.g. "rain" and "day". Assume we live in a very unfortunate place at the Equator, where it is raining 50% of the time, it is day 50% of the time, and these are independent of each other. Then 25% of the time it is both raining and day.
We then have a support of 25% - that is pretty high for most data sets. We also have a confidence of 50% - that is also pretty good. If 50% of my visitors buy a product I recommend I would be a billionaire. But the lift is just 1, i.e. no improvement.
Beware that on other data sets you won't get anywhere near 25% support. Consider a supermarket with diverse products: what percentage of customers do you think buy toilet paper?
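The three quantities for a rain/day-style example can be computed directly from raw counts (a minimal sketch; the function name is made up):

```python
def rule_metrics(n_total, n_a, n_b, n_ab):
    """Support, confidence and lift of the rule A -> B from raw counts."""
    support = n_ab / n_total             # P(A and B)
    confidence = n_ab / n_a              # P(B | A)
    lift = confidence / (n_b / n_total)  # P(B | A) / P(B)
    return support, confidence, lift

# Out of 1000 hours: 500 rainy, 500 daytime, 250 both (the independent case)
s, c, l = rule_metrics(1000, 500, 500, 250)   # -> (0.25, 0.5, 1.0)
```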
|
39,659
|
Interpretation of the entropy with a coding length?
|
Code:
AAA as 0
AAB,AAC,ABA,ACA,BAA,CAA as 1000...1101
12 triplets with 1 letter "A" as 1110000...1111011
8 triplets without "A" as 11111000...11111111
Each triplet takes on average 0.512*1+0.384*4+0.096*7+0.008*8=2.784 bits, or 0.928 bits per character. With coding 4-character, 5-character etc. groups, you can further decrease the number of bits per character. With long groups of characters and optimal coding, you can make the number of bits per character as close to the entropy as you want, but not less than the entropy.
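The averages above are consistent with symbol probabilities $P(A)=0.8$, $P(B)=P(C)=0.1$ (an inference from the numbers, since the question's distribution is not repeated here). A short sketch recomputes the average code length under that assumption and compares it with the entropy:

```python
from itertools import product
from math import log2

p = {"A": 0.8, "B": 0.1, "C": 0.1}  # assumed symbol probabilities
bits = {3: 1, 2: 4, 1: 7, 0: 8}     # code length per triplet, by count of "A"s

avg_triplet = sum(
    p[a] * p[b] * p[c] * bits[(a, b, c).count("A")]
    for a, b, c in product("ABC", repeat=3)
)
per_char = avg_triplet / 3          # 2.784 / 3 = 0.928 bits per character
entropy = -sum(q * log2(q) for q in p.values())  # ~0.922 bits per character
# per_char exceeds the entropy, as it must
```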
|
39,660
|
GLMs must be 'linear in the parameters'
|
Your example model can be reëxpressed to be linear in the parameters $\alpha=\beta_1\beta_2$ & $\zeta=\exp\beta_3$:
$$g(\operatorname{E} Y) = \beta_0 + \alpha x_1 + \zeta x_2^2$$
(Clearly $\beta_1$ & $\beta_2$ aren't separately estimable; a non-linear model wouldn't help there. And note that $\hat\zeta$ must be constrained to be positive.) Some models can't be so reëxpressed:
$$g(\operatorname{E} Y) = \beta_0 + \beta_1 x_1 + x_2^{\beta_2}$$
Some can be, though it's not obvious at first: https://stats.stackexchange.com/a/60504/17230.
There's a very thorough discussion of different meanings of "linear" at How to tell the difference between linear and non-linear regression models?.
|
39,661
|
GLMs must be 'linear in the parameters'
|
Linear in the parameters means that you can write your prediction as
$$\beta_0+\sum_{j=1}^px_{ij}\beta_j $$
For some definition of $x_{ij}$. But these x's need not be linear functions of your data. For example, polynomial fitting of a time series has $x_{ij}=t_i^j$, where $t_i$ is the time associated with data point $i$. The prediction is a non-linear function of time, but it is linear in the betas.
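For instance (a sketch with made-up data): a quadratic trend is non-linear in $t$, but building the design matrix with columns $1, t, t^2$ turns it into an ordinary linear least-squares problem in the betas.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * t - 3.0 * t**2 + rng.normal(0.0, 0.05, t.size)

# x_ij = t_i**j: non-linear in t, but the model is linear in beta
X = np.vander(t, 3, increasing=True)          # columns: 1, t, t^2
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # recovers roughly (1, 2, -3)
```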
UPDATE
In response to the comment, the answer is "sort of". If $\beta_2$ were constant, then the predictor would be linear in $\beta_0,\beta_1,\exp(\beta_3)$. It is not linear in $\beta_3$, but in a transformation of $\beta_3$. In terms of least squares estimates it doesn't make much difference here.
|
39,662
|
GLMs must be 'linear in the parameters'
|
I think it's better for you to understand the three components of the GLM. In particular, you need to understand how the link function is defined.
You can refer to page 7 in the slides below: 'linear in the parameters' holds after the mean is transformed by the link function.
enter link description here
|
39,663
|
Sampling from Gaussian Process Posterior
|
You want to sample from the posterior using the given data and model.
In this case you can:
sample from the posterior normal distribution with the given mean and covariance matrix - use model.predict with full_cov=True in this case;
use built-in function model.posterior_samples_f that does the job for you.
A sample code is below:
import GPy
import numpy as np
import matplotlib.pyplot as plt
sample_size = 5
X = np.random.uniform(0, 1., (sample_size, 1))
Y = np.sin(X) + np.random.randn(sample_size, 1)*0.05
kernel = GPy.kern.RBF(input_dim=1, variance=1., lengthscale=1.)
model = GPy.models.GPRegression(X,Y,kernel, noise_var=1e-10)
testX = np.linspace(0, 1, 100).reshape(-1, 1)
posteriorTestY = model.posterior_samples_f(testX, full_cov=True, size=3)
simY, simMse = model.predict(testX)
plt.plot(testX, posteriorTestY)
plt.plot(X, Y, 'ok', markersize=10)
plt.plot(testX, simY - 3 * simMse ** 0.5, '--g')
plt.plot(testX, simY + 3 * simMse ** 0.5, '--g')
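If you prefer not to rely on GPy, the first option above can also be sketched directly with numpy. This is an illustrative reimplementation under an assumed RBF kernel and a small jitter term, not part of GPy's API:

```python
import numpy as np

# RBF (squared-exponential) kernel between two 1-D input arrays.
def rbf(a, b, var=1.0, ls=1.0):
    d2 = (a.reshape(-1, 1) - b.reshape(1, -1)) ** 2
    return var * np.exp(-0.5 * d2 / ls**2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 5)
Y = np.sin(X) + rng.normal(0, 0.05, 5)
Xs = np.linspace(0, 1, 100)      # test inputs

K = rbf(X, X) + 1e-8 * np.eye(5)   # training covariance plus jitter
Ks = rbf(X, Xs)                    # train/test cross-covariance
Kss = rbf(Xs, Xs)                  # test covariance

# Standard GP regression posterior: mean and full covariance.
alpha = np.linalg.solve(K, Y)
mu = Ks.T @ alpha
cov = Kss - Ks.T @ np.linalg.solve(K, Ks)

# Draw three joint samples from the posterior over the test inputs.
samples = rng.multivariate_normal(mu, cov + 1e-8 * np.eye(100), size=3)
print(samples.shape)  # (3, 100)
```

Each row of `samples` is one function drawn from the posterior, which is what `posterior_samples_f` returns for you in GPy.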
|
39,664
|
Do bookmakers have an "underhead" for non-favourites? [closed]
|
Usually the opposite, in fact. It tends to be the case that when gambling, people are attracted to the small probability of a large win rather than a higher probability of a moderate win. This effect can be neatly modelled by Prospect Theory.
This results in a favourite-longshot bias that has been studied many times (Google Scholar). Roughly summarised, the implied probability on a long shot is where the odds are most wrong. Of course it doesn't often look like this, because an implied probability of 2% for a real probability of 0.2% only looks like an over-round of 1.8%, but in log odds ratio that is a massive difference.
You will often observe that 100-1 becomes a sort of catch-all odds for anything very rare (unlikely teams winning leagues etc.).
Repeated betting on the favourite will lose you money much more slowly in the long run than betting on the outsider.
A related issue is "accumulators": in these bets people stack together multiple bets one after the other to create outlandish odds from multiple fairly low-odds events. Here the over-round is even worse, and bookmakers can often afford to offer bonuses to attract people to these bets.
To see the issue, imagine betting on 5 events, each with a real probability of 20% and an implied probability of 25%. On a single bet your expected return is 80% of your stake (you win £4 when you "should" have won £5). By the time you stack them together, the real probability of winning is $0.2^5 = 0.00032$ but your implied probability is $0.25^5 \approx 0.00098$ - in other words you expect to win back only a third of your stake! This is why a bookmaker can happily offer to "double" your win on an accumulator.
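The accumulator arithmetic can be checked in a few lines (Python, numbers taken from the paragraph above):

```python
# Expected return per unit staked: real win probability times the payout,
# where decimal odds are 1 / implied probability.
real_p, implied_p, n_legs = 0.20, 0.25, 5

single_return = real_p * (1 / implied_p)                       # one bet
acca_return = real_p**n_legs * (1 / implied_p) ** n_legs       # 5-leg accumulator

print(round(single_return, 2), round(acca_return, 3))  # → 0.8 0.328
```

So each single bet returns 80p per £1 staked in expectation, while the accumulator returns only about 33p — roughly a third of the stake, as stated.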
|
39,665
|
"Inverse" Q-Q plot?
|
You've re-discovered the P-P plot. For an introduction, see here.
I'll add a slightly droll comment from one text, to the effect that if you want to be, or to appear, optimistic about fit, you use a P-P plot, whereas if you want to be (appear) pessimistic, you use a Q-Q plot.
Your example is a case in point. The P-P plot is necessarily anchored in principle at [0, 0] and [1, 1], but come even slightly waggly tails, the Q-Q plot shows them quite explicitly. Come a lousy fit, whether through outliers, curvature or grouping, and the Q-Q plot tells the bad news without restraint.
Despite that, the lesser use of P-P plots I guess arises because you have to do more work to relate them to the original data.
EDIT The quotation I had in mind:
Exaggerating a bit, one may say that one should apply the sample df
$F_n$ (or, likewise, the survivor function $1 - F_n$) and the P-P plot
if one wants to justify a hypothesis visually. The other tools are
preferable whenever a critical attitude towards the modeling is
adopted.
Reiss, R.-D. and Thomas, M. 2007. Statistical Analysis of Extreme Values: With Applications to Insurance, Finance, Hydrology and Other Fields. Basel: Birkhäuser, p.63. (nearly identical wording in 2nd edition 2001 p.67 and 1st edition 1997 p.57)
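To make the contrast concrete, here is a small Python sketch (my own illustration, assuming a standard normal reference distribution) of the coordinates the two plots use; the P-P coordinates are squeezed into the unit square, while the Q-Q coordinates leave the tails free to wander:

```python
import numpy as np
from statistics import NormalDist

# For sorted data x_(1) <= ... <= x_(n) and reference CDF F:
#   P-P plot:  (i - 0.5)/n          vs  F(x_(i))
#   Q-Q plot:  F^{-1}((i - 0.5)/n)  vs  x_(i)
nd = NormalDist()
rng = np.random.default_rng(0)
x = np.sort(rng.standard_normal(200))
p = (np.arange(1, 201) - 0.5) / 200

pp_x, pp_y = p, np.array([nd.cdf(v) for v in x])      # bounded in (0, 1)
qq_x, qq_y = np.array([nd.inv_cdf(q) for q in p]), x  # unbounded in the tails

print(pp_y.min() > 0.0, pp_y.max() < 1.0)
```

Plotting `pp_x` against `pp_y` gives the P-P plot, anchored near [0, 0] and [1, 1] as described above; plotting `qq_x` against `qq_y` gives the Q-Q plot.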
|
"Inverse" Q-Q plot?
|
You've re-discovered the P-P plot. For an introduction, see here.
I'll add a slightly droll comment from one text, to the effect that if you want to be, or to appear, optimistic about fit, you use a
|
"Inverse" Q-Q plot?
You've re-discovered the P-P plot. For an introduction, see here.
I'll add a slightly droll comment from one text, to the effect that if you want to be, or to appear, optimistic about fit, you use a P-P plot, whereas if you want to be (appear) pessimistic, you use a Q-Q plot.
Your example is a case in point. The P-P plot is necessarily anchored in principle at [0, 0] and [1, 1], but come even slightly waggly tails, the Q-Q plot shows them quite explicitly. Come a lousy fit, whether through outliers, curvature or grouping, and the Q-Q plot tells the bad news without restraint.
Despite that, the lesser use of P-P plots I guess arises because you have to do more work to relate them to the original data.
EDIT The quotation I had in mind:
Exaggerating a bit, one may say that one should apply the sample df
$F_n$ (or, likewise, the survivor function $1 - F_n$) and the P-P plot
if one wants to justify a hypothesis visually. The other tools are
preferable whenever a critical attitude towards the modeling is
adopted.
Reiss, R.-D. and Thomas, M. 2007. Statistical Analysis of Extreme Values: With Applications to Insurance, Finance, Hydrology and Other Fields. Basel: Birkhäuser, p.63. (nearly identical wording in 2nd edition 2001 p.67 and 1st edition 1997 p.57)
|
"Inverse" Q-Q plot?
You've re-discovered the P-P plot. For an introduction, see here.
I'll add a slightly droll comment from one text, to the effect that if you want to be, or to appear, optimistic about fit, you use a
|
39,666
|
Length of Time-Series for Forecasting Modeling
|
No matter what the model is, generally the more data you have the better. If you want to make a forecast, you want your sample to be representative enough of the population as it changes. In most cases you have only partial knowledge about the changes in the past and no knowledge about the future. Gathering more data helps you to gain more confidence in the predictability of the changes (this is related to forecastability). You want to find a repeating pattern, a trend, or at least describe the random behavior of your process with some model, so you need to be confident that what you observed is somehow similar to what can possibly happen in the future.
It is possible to make time-series forecasts even with short time series (see also Rob Hyndman's blog), but generally more data means more information and a more representative sample. Think of your sample in terms of time-unit observations. If you have two years of weekly data, this means that you have only $2\times 52 = 104$ weekly observations. If you want to make a forecast half a year ahead, then you should consider the fact that you have only four half-years in your data.
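To make the "four half-years" point concrete, here is a minimal Python sketch (simulated weekly data, not from the answer) of a seasonal-naive half-year-ahead forecast; with only two observed yearly cycles, every forecast leans entirely on a single previous seasonal cycle:

```python
import numpy as np

# Hypothetical example: two years of weekly data (104 observations)
# with a yearly seasonal pattern plus noise.
rng = np.random.default_rng(0)
weeks = np.arange(104)
y = 10 + 5 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 0.5, size=104)

# Seasonal-naive forecast: predict each of the next 26 weeks with the
# value observed 52 weeks earlier. With two years of data, each seasonal
# position has been observed at most twice.
horizon = 26
forecast = y[len(y) - 52 : len(y) - 52 + horizon]
print(len(forecast))  # 26 forecasts, one per week of the half-year horizon
```

A longer history would let each seasonal position be averaged over several cycles instead of copied from one.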
Imagine that you work with weather data and want to make a half-year-ahead forecast about temperature. There is noticeable seasonality in temperature, e.g. in central England temperature seems to rise in the first half of the year and drop in the second half (Parker et al., 1992). If you had two years of temperature data and wanted to make a half-year-ahead forecast about temperature between January and June, then only two of the four year-halves of your data would be relevant because of the seasonality (data about the drop of the temperature in the second half of the year does not provide you much information about the rise in the first half).
(source http://www.metoffice.gov.uk/hadobs/hadcet/cetml1659on.dat)
If there are cycles or seasonality (as with temperature data) that you may assume will repeat in the future, or a trend that will continue in the future, then the data that "catches" this pattern may be enough. However, the pattern can change; consider for example the copper dataset from the R fma library. Looking only at the data until the year 1920 would lead you to totally different conclusions than looking at the data after this year (even the average price differs).
In case of multivariate data you are looking at the changes of multiple variables across time, so you should consider whether you have enough information about each of the variables. As an example let me use the bank data from the fma library, which describes deposits in a mutual savings bank in a large metropolitan area with three variables available: end of month balance (EOM), composite AAA bond rates (AAA), and US Government 3-4 year bonds (threefour). As you can see from the plot posted below, both the individual variables and their mutual relations change over time.
Before building your model you should consider whether you have enough information about changes over time in your variables and about their mutual relations. The answer to the question of whether you have enough data for your forecast horizon unfortunately depends heavily on your data (see Optimal forecast window for timeseries). You should also remember that sometimes a short part of a time series may suggest a pattern (e.g. the clear upward trend of AAA in the bank data before time 30) that is not so obvious or is nonexistent in the longer term. Gathering more data, in most cases, helps you to build greater confidence about the behavior of the pattern you observe over time.
Parker, D.E., Legg, T.P., and Folland, C.K. (1992). A new daily central England temperature series, 1772–1991. International Journal of Climatology, 12(4), 317-342.
|
39,667
|
Length of Time-Series for Forecasting Modeling
|
If you are trying to model weekly data in order to make a 26-week-out forecast, I would recommend that you consider using more than 2 years of data (104 points). A four-year history would give you the opportunity to extract seasonal structure, be it auto-regressive or fixed effects, and any level shifts/time trends that may be in play. You are right in saying that sometimes we have too much data and lose focus on the most recent. There is no hard and fast rule as to the "optimum" ... just enough to capture the signal but not so much as to lead to spurious tests of significance (i.e. daily data for 20 years). Hope this helps.
|
39,668
|
How to use PCA in regression?
|
Say your predictor matrix is $X$ and your response vector is $y$. PCA is concerned only with the (co)variance within the predictor matrix $X$ itself, while a regression model is (also) concerned with the covariance between $X$ and the response $y$. If there is no relationship between these concepts, dimension reduction by PCA can be harmful to your regression by screening out those predictors in $X$ that are correlated with the response.
Here is a simple example, in R, as I don't have access to matlab. Suppose I create some random gaussian data
x_1 <- rnorm(10000, mean = 0, sd = 1)
x_2 <- rnorm(10000, mean = 0, sd = .1)
X <- cbind(x_1, x_2)
And set up a situation where the response is correlated with only the smaller-variance component
y <- x_2 + 1
The principal components of $X$ are just $x_1$ and $x_2$, given the way I set it up
> cor(X)
x_1 x_2
x_1 1.0000000000 0.0004543833
x_2 0.0004543833 1.0000000000
If PCA is used to select one component, we get $x_1$, as this has the highest variance. Regressing $y$ on $x_1$ is useless
> lm(y ~ x_1)
Call:
lm(formula = y ~ x_1)
Coefficients:
(Intercept) x_1
1.001e+00 4.544e-05
The coefficient of $x_1$ here is essentially zero; the model has no more predictive power than an intercept-only model. On the other hand, if I select the lower variance component
> lm(y ~ x_2)
Call:
lm(formula = y ~ x_2)
Coefficients:
(Intercept) x_2
1 1
I get back a much more predictive model.
Because PCA ignores the relationship of $y$ to $X$, there is no reason to believe that its dimension reduction is reasonable in the context of your regression problem.
On the other hand, using the relationship between $y$ and $X$ to do pre-regression dimension reduction and variable selection is a very good way to overfit your model to your training data. In some ways, PCA's ignorance of the relationship between $X$ and $y$ is a blessing: while it can be harmful in the way outlined above, it cannot overfit the relationship in your training data in the same way that peeking at $y$ can.
As for your more practical question, the matlab documentation says
coeff = pca(X) returns the principal component coefficients, also known as loadings, for the n-by-p data matrix X. Rows of X correspond to observations and columns correspond to variables. The coefficient matrix is p-by-p. Each column of coeff contains coefficients for one principal component, and the columns are in descending order of component variance. By default, pca centers the data and uses the singular value decomposition (SVD) algorithm.
This says to me, that to do PCA dimension reduction in matlab, you need to:
Center the columns of your $X$ matrix.
Select the first $N$ columns of the coeff matrix, where $N$ is the number of non-intercept regressors you want in your model.
Create a new data matrix as the centered $X$ times coeff(:, 1:N).
Use the columns in the new matrix as regressors in your dimension-reduced regression.
Again, I am far from fluent in matlab, so the syntactic details of how to perform these steps are unknown to me.
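For what it's worth, here is a Python/numpy sketch of those four steps on simulated data (my own illustration, not the matlab syntax):

```python
import numpy as np

# Simulated regression problem (assumed setup, just to walk the steps).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, 0.5, 0.0, 0.0, 0.0]) + rng.normal(0, 0.1, 200)

# 1. Center the columns of X.
Xc = X - X.mean(axis=0)

# 2. SVD of the centered data; columns of Vt.T are the loadings,
#    the analogue of matlab's coeff matrix. Keep the first N.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
N = 2
coeff = Vt.T[:, :N]

# 3. New data matrix: centered X projected onto the first N components.
Z = Xc @ coeff

# 4. Regress y on the component scores (with an intercept column).
Z1 = np.column_stack([np.ones(len(Z)), Z])
beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)
print(beta.shape)  # (3,): intercept plus N component coefficients
```

Note the caveat from earlier in the answer still applies: nothing guarantees that the first $N$ components are the ones correlated with $y$.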
|
39,669
|
Probability of failure
|
Let $X$ denote the resistance and $Y$ the load. Then,
\begin{align}
P\{Y > X\} &= \int_{y=-\infty}^\infty \int_{x=-\infty}^y f_{X,Y}(x,y)
\,\mathrm dx \,\mathrm dy\\
&= \int_{y=-\infty}^\infty \int_{x=-\infty}^y f_{X}(x)f_{Y}(y)
\,\mathrm dx \,\mathrm dy & \scriptstyle{\text{because}~X~\text{and}
~Y~\text{are independent}}\\
&= \int_{y=-\infty}^\infty f_{Y}(y)\left[ \int_{x=-\infty}^y f_{X}(x)
\,\mathrm dx\right] \,\mathrm dy\\
&= \int_{y=-\infty}^\infty f_{Y}(y)F_{X}(y) \,\mathrm dy\\
\text{that is}, \qquad p_{\text{failure}} &= \int_{-\infty}^\infty \text{PDF}_{\text{load}}(y)\times \text{CDF}_{\text{resistance}}(y)\,\mathrm dy
\end{align}
which is the formula that you are asking about,
without needing to worry about convolutions, cross-correlations,
complex numbers, and the like as in Sean Easter's answer.
As a practical matter, $X$ and $Y$ are likely to take on nonnegative
values only, in which case the above integral need only be on the positive real line.
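As a numerical check of this formula, here is a Python sketch with assumed normal distributions for load and resistance; for independent normals the result can be compared against the closed form $\Phi\big((\mu_Y-\mu_X)/\sqrt{\sigma_X^2+\sigma_Y^2}\big)$:

```python
from math import erf, sqrt, pi, exp

def phi(z):  # standard normal CDF
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def npdf(y, mu, sd):  # normal density
    return exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * sqrt(2.0 * pi))

mu_x, sd_x = 10.0, 1.0   # resistance X (assumed parameters)
mu_y, sd_y = 7.0, 2.0    # load Y (assumed parameters)

# p_failure = ∫ PDF_load(y) * CDF_resistance(y) dy, by the trapezoid rule
# on a grid wide enough to cover both distributions.
ys = [-5.0 + 30.0 * i / 20000 for i in range(20001)]
vals = [npdf(y, mu_y, sd_y) * phi((y - mu_x) / sd_x) for y in ys]
h = 30.0 / 20000
p_fail = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# Closed form for the difference of independent normals.
p_exact = phi((mu_y - mu_x) / sqrt(sd_x**2 + sd_y**2))
print(abs(p_fail - p_exact) < 1e-6)  # the two agree closely
```

With these parameters the failure probability is about 9%.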
|
39,670
|
Probability of failure
|
Rephrased, the probability of failure is equivalent to the probability that resistance - load is less than zero. What you're looking for is the distribution of the difference of random variables.
Since these are independent, you can use convolution to solve for their difference. But it's applied to the densities, not a cumulative density. Also, the convolution is itself an infinite integral. Let $X$ represent load, $Y$ resistance. You'd want to convolve $p_{X}(-t)$ and $p_{Y}(t)$, called the cross-correlation in signal processing:
$$p_{Y-X}(\tau) = p_X(-\tau) \ast p_Y(\tau)= \int_{-\infty}^{\infty}p_{X}(t)p_{Y}(\tau + t)\,dt$$
Strictly, cross-correlation is equivalent to the convolution of $p_X^*(-\tau)$ and $p_Y(\tau)$, where the asterisk is the complex conjugate. Since densities are real-valued, $p_X^*(-\tau) = p_X(-\tau)$ and there's no need to worry.
The probability of failure is the probability that the difference is less than zero, which you can find by integrating the density of the differences up to zero: $\int_{-\infty}^0p_{Y-X}(\tau)d\tau$. (I.e., the CDF of the difference.) You can do all of this numerically, but the more you can do analytically, the more efficient it will be.
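Here is a numerical sketch of this recipe in Python (assumed normal load and resistance, chosen only so the answer can be cross-checked against the closed form for the difference of independent normals):

```python
import numpy as np
from math import erf, sqrt, pi

# Discretize both densities on a common symmetric grid.
dt = 0.01
t = np.arange(-20.0, 20.0 + dt / 2, dt)
pdf = lambda x, mu, sd: np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2 * pi))
pX = pdf(t, 7.0, 2.0)    # load X (assumed parameters)
pY = pdf(t, 10.0, 1.0)   # resistance Y (assumed parameters)

# p_{Y-X}(tau) = ∫ p_X(t) p_Y(tau + t) dt — convolving p_X(-t) with p_Y(t).
# On a symmetric grid, reversing pX gives the density of -X.
pDiff = np.convolve(pX[::-1], pY) * dt      # density of Y - X
tau = np.arange(-40.0, 40.0 + dt / 2, dt)   # support of the convolution grid

# Failure probability: integrate the difference density up to zero.
p_fail = np.sum(pDiff[tau < 0]) * dt

# Closed form for independent normals: Phi((mu_Y - mu_X) / sqrt(sd_X^2 + sd_Y^2)).
p_exact = 0.5 * (1 + erf((7.0 - 10.0) / sqrt(5.0) / sqrt(2.0)))
print(abs(p_fail - p_exact) < 1e-3)
```

As the answer says, the more you can do analytically the better; the grid here just makes the convolution and the final CDF step explicit.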
|
39,671
|
Why is asymptotic normality important for an estimator?
|
Why is asymptotic normality important for an estimator?
I wouldn't say it's important, really, but when it happens, it can be convenient, and the plain fact is, it happens a lot -- for many popular estimators in commonly used models, it is the case that the distribution of an appropriately standardized estimator will be asymptotically normal.
So whether I wish it or not, it happens. [Indeed, in these notes, Charles Geyer says "almost all estimators of practical interest are [...] asymptotically normal", and I think that's probably a fair assessment.]
Is it because it allows for easy construction for confidence intervals?
Well, it does allow easy construction of confidence intervals if the sample sizes are large enough that you could reasonably approximate the sampling distribution as normal. ... as long as you have a computer, or tables, or happen to remember the critical values you want. [Without any of those, it would be mildly inconvenient ... but I can manage okay even if I decide to compute an 85% interval or a 96.5% interval or whatever even if I don't have a computer or tables, since I can take a nearby value I know, or a pair of nearby values either side of the value I want, and do a little bit of playing with a calculator ... or at the worst, a pen and paper, and get an interval that'll be accurate enough; after all, it's already an approximation in at least a couple of different ways, so how accurate do I really need it?]
But I really wouldn't say that "I want asymptotic normality because of that".
I construct finite-sample CIs all the time without bothering with normality. I'm perfectly happy to use a binomial(40,0.5) interval or a $t_{80}$ interval or a $\chi^2_{100}$ interval or an $F_{60,120}$ interval instead of trying to invoke asymptotic normality in any of those cases, so asymptotic-something-else wouldn't have been a big deal. Indeed, I use permutation tests at least sometimes, and generate CIs from permutation or randomization distributions, and I don't give a damn about the asymptotic distribution when I do (since one conditions on the sample, asymptotics are irrelevant).
Isn't it still possible to construct confidence intervals without this property i.e. if it converged to another distribution?
Yes, absolutely. Imagine some scaled estimator was say asymptotically chi-squared with 2df (which is not normal). Would I be bothered? Would it even be mildly inconvenient? Not a bit. (If anything, in some ways that would be easier)
But even if the asymptotic distribution weren't especially convenient, that wouldn't necessarily bother me. For example, I can happily use a Kolmogorov-Smirnov test without difficulty, and the statistic is an estimator of something. It's not convenient in the sense that I could only write down the asymptotic distribution as an infinite sum (but it is convenient in that I just go ahead and use either tables or a computer program to do things with it ... just as I do with the normal).
On the other hand, we needn't (and shouldn't) ignore the fact that the most common kinds of estimator will often be asymptotically normal -- MLEs are usually asymptotically normal, as are method-of-moments estimators and estimators based on (non-extreme) quantiles (and more besides). I'm not going to ignore it when it happens.
Please tell me some reasons why you want an estimator to be asymptotically normal?
I don't, especially. But if it happens, I'm happy to use that fact whenever it's convenient and reasonable to do that instead of something else.
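If it helps to see the point in code, here is a toy sketch (simulated data, assumed normal population) of a finite-sample CI built from a non-normal pivot -- the $\chi^2$ interval for a variance -- with no appeal to asymptotic normality of the estimator anywhere:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=101)  # toy sample, true variance 4
n = x.size
s2 = x.var(ddof=1)                            # sample variance

# Exact finite-sample pivot: (n-1) s^2 / sigma^2 ~ chi^2_{n-1},
# so invert the chi-square quantiles -- no normal approximation anywhere.
alpha = 0.05
lo = (n - 1) * s2 / stats.chi2.ppf(1 - alpha / 2, df=n - 1)
hi = (n - 1) * s2 / stats.chi2.ppf(alpha / 2, df=n - 1)
print(lo, s2, hi)
```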
|
39,672
|
How to train a Gaussian mixture hidden Markov model?
|
In the reference at the bottom $^*$, I see the training involves the following:
Initialize the HMM & GMM parameters (randomly or using prior assumptions).
Then repeat the following until convergence criteria are satisfied:
Do a forward pass and a backward pass to find the probabilities associated with the training sequences under the current GMM-HMM parameters.
Recalculate the HMM & GMM parameters - the means, covariances, and mixture coefficients of each mixture component at each state, and the transition probabilities between states - all computed using the probabilities found in the previous step.
$*$ University of Edinburgh GMM-HMM slides (Google: Hidden Markov Models and Gaussian Mixture Models, or try this link). This reference gives a lot of details and suggests doing these calculations in the log domain.
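On the log-domain point, here is a generic log-space forward pass sketch (my own illustration, not the slides' code; `log_B[t, s]` is assumed to hold the log GMM likelihood of observation $t$ under state $s$):

```python
import numpy as np
from scipy.special import logsumexp

def log_forward(log_pi, log_A, log_B):
    """Forward pass in the log domain to avoid underflow on long sequences.
    log_pi: (S,) initial state log-probs; log_A: (S, S) transition log-probs;
    log_B[t, s]: log-likelihood of observation t under state s's GMM."""
    T, S = log_B.shape
    log_alpha = np.empty((T, S))
    log_alpha[0] = log_pi + log_B[0]
    for t in range(1, T):
        # logsumexp over the previous state index i of alpha[i] * A[i, j]
        log_alpha[t] = logsumexp(log_alpha[t - 1][:, None] + log_A, axis=0) + log_B[t]
    return logsumexp(log_alpha[-1])  # total sequence log-likelihood
```

The backward pass is symmetric, and the E-step quantities are then differences of log terms rather than ratios of tiny probabilities.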
|
39,673
|
How to train a Gaussian mixture hidden Markov model?
|
This paper [1] is an absolute classic and lays out the whole HMM machinery for Gaussian mixtures. I think it's fair to say Rabiner made the first important step in speech recognition with GMMs in the 1980s.
[1] Rabiner, L. R. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2), 257-286.
|
39,674
|
How to train a Gaussian mixture hidden Markov model?
|
pomegranate is another Python library that provides GMMs and HMMs, with even better documentation than hmmlearn. I am currently preparing to migrate from hmmlearn to it.
http://pomegranate.readthedocs.io/en/latest/GeneralMixtureModel.html
|
39,675
|
How to train a Gaussian mixture hidden Markov model?
|
Assuming your HMM uses Gaussian mixture emissions, parameter estimation still works by a forward pass and a backward pass followed by parameter updates, as in Baum-Welch. The difference is that the probability of an observation given a state is now a mixture of normal pdfs. So transition probabilities are re-estimated just as in a discrete-observation HMM, but to re-estimate the means, variances (or covariance matrices in the multivariate case), and mixture weights, you introduce the probability of being in state $i$ at time $t$ with the $m$-th mixture component accounting for the observation at $t$. This is simply the normalized $\alpha\beta$ term times the normalized $c\,N(o, u, \mathrm{var})$ term:
$$\gamma_t(i,m) = \frac{\alpha_t(i)\,\beta_t(i)}{\sum_j \alpha_t(j)\,\beta_t(j)} \cdot \frac{c_{im}\,\mathcal N(o_t;\, u_{im}, \mathrm{var}_{im})}{\sum_k c_{ik}\,\mathcal N(o_t;\, u_{ik}, \mathrm{var}_{ik})},$$
where $\alpha$ and $\beta$ are the forward and backward variables from Baum-Welch, $c_{im}$ is the $m$-th mixture weight while being in state $i$, $o_t$ is the observation at $t$, $u$ is the mean (or mean vector), and $\mathrm{var}$ is the variance (or covariance matrix).
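As a toy numeric sketch of that quantity (all values below are invented; the $\alpha$, $\beta$, weights, means, and variances would come from the current model during training):

```python
import numpy as np
from scipy.stats import norm

# Invented numbers: forward/backward values at one time step for a
# 2-state HMM whose emissions are 2-component Gaussian mixtures.
alpha = np.array([0.03, 0.01])   # forward values at time t
beta  = np.array([0.20, 0.50])   # backward values at time t
c  = np.array([[0.6, 0.4],       # c[i, m]: weight of mixture m in state i
               [0.5, 0.5]])
mu = np.array([[0.0, 2.0],       # component means
               [1.0, 3.0]])
sd = np.array([[1.0, 0.5],       # component standard deviations
               [1.0, 1.0]])
o_t = 1.2                        # the observation at time t

state_post = alpha * beta / np.sum(alpha * beta)   # normalized alpha*beta
lik = c * norm.pdf(o_t, mu, sd)                    # c_{im} * N(o_t; u, var)
mix_post = lik / lik.sum(axis=1, keepdims=True)    # normalized within each state
gamma = state_post[:, None] * mix_post             # P(state i, mixture m | obs)
print(gamma)
```

Summing `gamma` over all state/mixture pairs gives 1, as a posterior should.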
|
39,676
|
Test for randomness - randtests - fails
|
At least for the "Mann-Kendall Rank Test", the problem seems to be in the testing package you're using, and not in the data.
Specifically, the Mann-Kendall test is supposed to detect monotone trends in the data by calculating the Kendall rank correlation coefficient between the data points and their position in the input sequence. However, looking at the source code of the randtests R package you're using, I see two problems with it:
It's using a naïve $\mathrm O(n^2)$ algorithm to calculate the Kendall correlation coefficient, which means that it gets very slow for large data sets. Your data set, with 20 times 400,000 points, is just about at the limit of what it can handle.
Also, it seems to be assuming that no two data points have identical values. For your data, this is patently false, leading to bogus results.
I retested your data using a better implementation of the Kendall test, and got $\tau_B = -0.0012$, $p = 0.25$ for the whole data, and $\tau_B = -0.010$, $p = 0.03$ for the most strongly trended (i.e. lowest $p$ value) column (the last one, as it happens). For the lowest $p$ value out of 20, this is well within the bounds of reasonable random variation. It also took me only a few minutes to run this test on my laptop.
FWIW, here's the Python code I used to run this test:
import numpy as np
import scipy.stats as stats

data = np.loadtxt('data.csv', delimiter=',', dtype=int, skiprows=1)
for col in range(1, len(data[0])):
    tau, p = stats.kendalltau(data[:, 0], data[:, col])
    print("column %2d: tau = %+g, p = %g" % (col, tau, p))
for order in ('C', 'F'):
    flat = data[:, 1:].flatten(order)
    tau, p = stats.kendalltau(flat, np.arange(len(flat)))
    print("full data (%s): tau = %+g, p = %g" % (order, tau, p))
(For the full data tests, C means row-major order and F means column-major order; I tested them both for the sake of completeness.)
|
39,677
|
Test for randomness - randtests - fails
|
One cannot substantiate (much less assert) the correctness of a Cryptographically Secure Pseudo Random Number Generator using tests of its output (because any useful test fits the definition of a break of the CSPRNG); that seems to be what's attempted in the question (since it is about "check the correctness of the PRNG" and was initially asked in the crypto group). Such tests can only give a proof/argument of non-correctness.
Correctness of design of a CSPRNG can be substantiated only by examining that design. Correctness of implementation of a CSPRNG can be substantiated by comparing the output for known seed to known good output.
Correctness of a True Random Number Generator (as necessary for seeding a CSPRNG) can be substantiated by statistical tests similar to those in the question if the TRNG is simple and/or known enough. In particular, such standard statistical tests
can not meaningfully check a TRNG which incorporates a CSPRNG at the output (again, a valid test of the TRNG would be a break of the CSPRNG)
can meaningfully check a TRNG which output is periodical sampling of a noise source, including if such TRNG has on its output a Von Neumann de-biaser (or other de-biaser with extremely small state).
[I quit interpreting the p-values; still, be careful that having "combined all data in a single vector" you obtained a data set (of positive integers at most 80) that is not uniformly random, since on each row of the original data set, the integers are unique].
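To illustrate the bracketed caveat with invented parameters (rows of 20 integers drawn from 1..80 without replacement, mirroring the description above):

```python
import numpy as np

rng = np.random.default_rng(42)
# Each row samples 1..80 *without* replacement, as described above.
rows = np.array([rng.choice(80, size=20, replace=False) + 1 for _ in range(1000)])

# Within-row repeats are impossible by construction:
assert all(len(set(r)) == 20 for r in rows)

# Under i.i.d. uniform sampling, a row of 20 draws from 1..80 would contain
# at least one repeat with probability 1 - prod((80 - i)/80), roughly 0.92,
# so the flattened vector cannot be treated as an i.i.d. uniform sample.
p_repeat = 1 - np.prod((80 - np.arange(20)) / 80.0)
print(p_repeat)
```

A thousand rows with zero repeats is astronomically unlikely under genuine i.i.d. uniform sampling, which is exactly why the combined vector fails uniformity-based tests.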
|
39,678
|
Number of samples needed in Monte Carlo simulation: how good is this approximation?
|
The approximation could be poor when $p$ is close to zero or one, but when $p = 1/2$ it holds exactly.
The idea here is that we want to estimate the probability of an event by using a sample proportion across many Monte Carlo trials, and we want to know how accurate of an estimate that proportion is of the true probability. The standard deviation $\sigma$ of a sample proportion is as the authors note $\sqrt{p (1 - p) / s}$ (where $s$ is the number of Monte Carlo simulations), but the problem is we don't know $p$. However, we can maximize $\sigma$ with respect to $p$ and get a conservative "estimate" of this standard error that will always hold no matter what $p$ happens to be. This may end up causing us to run more simulations than we need to, but this won't matter as long as the iterations themselves are computationally cheap.
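As a quick illustration (my own sketch; the $z \approx 1.96$ factor for roughly 95% confidence is an assumption, not from the text), the conservative bound $\sigma \le 1/(2\sqrt{s})$, attained at $p = 1/2$, lets you size the simulation without knowing $p$:

```python
import math

# Conservative Monte Carlo sizing: the sd of a sample proportion is
# sqrt(p(1-p)/s) <= 1/(2 sqrt(s)), with equality at p = 1/2. For a target
# half-width E at roughly 95% confidence (z ~ 1.96), solve z/(2 sqrt(s)) = E.
def n_sims(half_width, z=1.96):
    return math.ceil((z / (2.0 * half_width)) ** 2)

# e.g. estimating any probability to within about +/-0.01:
print(n_sims(0.01))
```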
|
39,679
|
Number of samples needed in Monte Carlo simulation: how good is this approximation?
|
This approximation is called the Wald confidence interval. It's based on the normal approximation to the binomial distribution. How good is this approximation? There are two answers: "when the sample size is at least 30" and "it depends".
The "30" answer is very popular and has been propagated from book to book until it became, pretty much, an axiom. Once I was able to track down the first paper that mentioned it.
The "it depends" answer is explored in this paper: Central Limit Theorem and Sample Size by Zachary R. Smith and Craig S. Wells
The other thing is variance reduction. It is especially critical to use for low or high $p$. Obviously, Wald's formula will not work directly in this case.
|
39,680
|
Credible set for beta distribution
|
A level $\alpha$ "Highest Posterior Density" (HPD) interval for a (posterior) distribution $F$ with (continuous) density $f$ and mode $\mu$ is an interval $I=[x,y]$ containing $\mu$ for which
$1-\alpha$ of the probability is in the interval: $F(I) = F(y) - F(x) = 1-\alpha$.
The densities are the same at either end: $f(x) = f(y)$.
Among the various strategies to find $I$, one that stands out as generally effective is the following.
Choose a reasonable starting value for $x$, such as $F^{-1}(\alpha/2)$.
Define the "$\alpha$ offset" of $x$ to be the point $y$ for which $[x,y]$ has probability $1-\alpha$. Thus
$$y = F^{-1}(F(x) + 1 - \alpha)$$
provided $F(x) \le \alpha$.
Search for $x$ in the interval $(-\infty, F^{-1}(\alpha))$ at which $f(x) = f(y)$. The unimodality of the density $f$ and its continuity guarantee such an $x$ exists and is unique.
The search (3) can be carried out in practice by minimizing $(f(y)-f(x))^2$ plus a penalty term in case the probability of $[x,y]$ is not exactly $1-\alpha$. (The penalty term is useful in case the search procedure provides a candidate value of $x$ for which $F(x)$ exceeds $\alpha$, in which case a valid offset $y$ cannot be found.)
Applying such a general procedure is a better idea for a Beta distribution than using a Normal approximation, because Betas tend to be skewed (unless their parameters $a$ and $b$ are relatively similar).
For example, the orange region in the figure covers a $95\%$ HPD interval for a Beta$(11,4)$ distribution whose density is graphed. The dashed gray line shows the common value of the density at the endpoints.
Here is the R code that performed the calculation. It is written to be very general: if you can supply functions to compute $F$, $f$, and $F^{-1}$, it will work. (An example for Normal distributions has been commented out.)
offset <- function(x, alpha=0.05, F=pbeta, F.Inv=qbeta, ...) {
  q <- F(x, ...)
  F.Inv(min(q + 1 - alpha, 1), ...)
}
f <- function(x, alpha=0.05, F=pbeta, F.Inv=qbeta, f.dist=dbeta, ...) {
  y <- offset(x, alpha, F=F, F.Inv=F.Inv, ...)
  mapply(function(u, v) diff(f.dist(c(u, v), ...))^2, x, y) +
    (diff(F(c(x, y), ...)) - (1 - alpha))^2
}
#
# Specify the problem.
#
alpha <- 0.05 # Level of the CI (between 0 and 1)
a <- 11; b <- 4 # Parameters
F <- pbeta # Distribution function
F.Inv <- qbeta # Inverse distribution function
f.dist <- dbeta # Density function
x.min <- 0 # Minimum to consider
x.max <- 1 # Maximum to consider
# F <- pnorm
# F.Inv <- qnorm
# f.dist <- dnorm
# x.min <- -2
# x.max <- 5
#
# Find the solution.
#
x.0 <- qbeta(alpha/2, a, b)
x.lim <- qbeta(alpha, a, b)
sol <- optimize(function(x) f(x, alpha, F=F, F.Inv=F.Inv, f.dist=f.dist, a, b),
interval=c(x.min, x.lim), tol=1e-10)
x <- sol$minimum
y <- offset(x, alpha, F=F, F.Inv=F.Inv, a, b)
#
# Plot the solution.
#
u <- seq(x, y, length.out=1001)
v <- f.dist(u, a, b)
u <- c(u[1], u, u[length(u)])
v <- c(0, v, 0)
curve(f.dist(x, a, b), xlim=c(x.min, x.max), ylim=c(0,max(v)*1.2), n=1001, xlab="X", ylab="Density")
polygon(u, v, col="#f8e0a0", border=NA)
abline(h = f.dist(x, a, b), col="Gray", lty=2, lwd=2)
curve(f.dist(x, a, b), add=TRUE, lwd=2)
|
Credible set for beta distribution
|
A level $\alpha$ "Highest Posterior Density" (HPD) interval for a (posterior) distribution $F$ with (continuous) density $f$ and mode $\mu$ is an interval $I=[x,y]$ containing $\mu$ for which
$1-\alp
|
Credible set for beta distribution
A level $\alpha$ "Highest Posterior Density" (HPD) interval for a (posterior) distribution $F$ with (continuous) density $f$ and mode $\mu$ is an interval $I=[x,y]$ containing $\mu$ for which
$1-\alpha$ of the probability is in the interval: $F(I) = F(y) - F(x) = 1-\alpha$.
The densities are the same at either end: $f(x) = f(y)$.
Among the various strategies to find $I$, one that stands out as generally effective is the following.
Choose a reasonable starting value for $x$, such as $F^{-1}(\alpha/2)$.
Define the "$\alpha$ offset" of $x$ to be the point $y$ for which $[x,y]$ has probability $1-\alpha$. Thus
$$y = F^{-1}(F(x) + 1 - \alpha)$$
provided $F(x) \le \alpha$.
Search for $x$ in the interval $(-\infty, F^{-1}(\alpha))$ at which $f(x) = f(y)$. The unimodality of $F$ and the continuity of $f$ guarantee such an $x$ exists and is unique.
The search (3) can be carried out in practice by minimizing $(f(y)-f(x))^2$ plus a penalty term in case the probability of $[x,y]$ is not exactly $1-\alpha$. (The penalty term is useful in case the search procedure provides a candidate value of $x$ for which $F(x)$ exceeds $\alpha$, in which case a valid offset $y$ cannot be found.)
Applying such a general procedure would be a better idea for a Beta distribution compared to using a Normal approximation, because Betas tend to be skewed (unless their parameters $a$ and $b$ are relatively similar).
For example, the orange region in the figure covers a $1-0.05$ HPD interval for a Beta$(11,4)$ distribution whose density is graphed. The dashed gray line shows the common value of the density at the endpoints.
Here is the R code that performed the calculation. It is written to be very general: if you can supply functions to compute $F$, $f$, and $F^{-1}$, it will work. (An example for Normal distributions has been commented out.)
offset <- function(x, alpha=0.05, F=pbeta, F.Inv=qbeta, ...) {
q <- F(x, ...)
y <- F.Inv(min(q + 1-alpha, 1), ...)
}
f <- function(x, alpha=0.05, F=pbeta, F.Inv=qbeta, f.dist=dbeta, ...) {
y <- offset(x, alpha, F=F, F.Inv=F.Inv, ...)
mapply(function(u,v) diff(f.dist(c(u,v), ...))^2, x, y) +
(diff(F(c(x,y), ...)) - (1-alpha))^2
}
#
# Specify the problem.
#
alpha <- 0.05 # Level of the CI (between 0 and 1)
a <- 11; b <- 4 # Parameters
F <- pbeta # Distribution function
F.Inv <- qbeta # Inverse distribution function
f.dist <- dbeta # Density function
x.min <- 0 # Minimum to consider
x.max <- 1 # Maximum to consider
# F <- pnorm
# F.Inv <- qnorm
# f.dist <- dnorm
# x.min <- -2
# x.max <- 5
#
# Find the solution.
#
x.0 <- qbeta(alpha/2, a, b)
x.lim <- qbeta(alpha, a, b)
sol <- optimize(function(x) f(x, alpha, F=F, F.Inv=F.Inv, f.dist=f.dist, a, b),
interval=c(x.min, x.lim), tol=1e-10)
x <- sol$minimum
y <- offset(x, alpha, F=F, F.Inv=F.Inv, a, b)
#
# Plot the solution.
#
u <- seq(x, y, length.out=1001)
v <- f.dist(u, a, b)
u <- c(u[1], u, u[length(u)])
v <- c(0, v, 0)
curve(f.dist(x, a, b), xlim=c(x.min, x.max), ylim=c(0,max(v)*1.2), n=1001, xlab="X", ylab="Density")
polygon(u, v, col="#f8e0a0", border=NA)
abline(h = f.dist(x, a, b), col="Gray", lty=2, lwd=2)
curve(f.dist(x, a, b), add=TRUE, lwd=2)
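As a language-neutral cross-check, here is the same HPD search sketched in Python (assuming SciPy is available; function and variable names are my own, not from the R code above). It minimizes the squared density difference between the endpoints, with the same penalty on the interval's probability content:

```python
# HPD interval for a continuous unimodal distribution: find x such that the
# density at x equals the density at y = F^{-1}(F(x) + 1 - alpha).
import numpy as np
from scipy import stats, optimize

def hpd_interval(dist, alpha=0.05):
    """Equal-density interval [x, y] with P(x <= X <= y) = 1 - alpha."""
    def objective(x):
        q = dist.cdf(x)
        y = dist.ppf(min(q + 1 - alpha, 1.0))
        # Squared density gap plus a penalty if the coverage is off.
        return (dist.pdf(y) - dist.pdf(x))**2 + (dist.cdf(y) - q - (1 - alpha))**2
    res = optimize.minimize_scalar(objective,
                                   bounds=(dist.ppf(1e-9), dist.ppf(alpha)),
                                   method="bounded",
                                   options={"xatol": 1e-10})
    x = res.x
    y = dist.ppf(dist.cdf(x) + 1 - alpha)
    return x, y

x, y = hpd_interval(stats.beta(11, 4))
```

For the Beta$(11,4)$ example, the resulting interval contains the mode $10/13$ and has (numerically) equal density at both endpoints.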
|
39,681
|
GLMnet - "Unstandardizing" Linear Regression Coefficients
|
This is mostly a case of carefully working out the math. I'll handle the two-predictor + intercept case; it should be clear how to generalize it.
The standardized elastic net model results in the following relationship:
$$\frac{y - \mu(y)}{\sigma(y)} = \beta_1 \frac{x_1 - \mu(x_1)}{\sigma(x_1)} + \beta_2 \frac{x_2 - \mu(x_2)}{\sigma(x_2)}$$
If you very carefully move terms around until only $y$ is on the left hand side, you'll get
$$ y = \frac{\beta_1 \sigma(y)}{\sigma(x_1)} x_1 + \frac{\beta_2 \sigma(y)}{\sigma(x_2)} x_2- \left( \frac{\beta_1 \mu(x_1)}{\sigma(x_1)} + \frac{\beta_2 \mu(x_2)}{\sigma(x_2)} \right) \sigma(y) + \mu(y) $$
which gives the relationship between the standardized and unstandardized coefficients.
Here's a quick demonstration you can test this with:
library(glmnet)
X <- matrix(runif(100, 0, 1), ncol=2)
y <- 1 -2*X[,1] + X[,2]
Xst <- scale(X)
yst <- scale(y)
enet <- glmnet(X, y, lambda=0)
enetst <- glmnet(Xst, yst, lambda=0)
coef <- coefficients(enetst)
# Un-standardized betas
coef[2]*sd(y)/sd(X[,1]) # = -2
coef[3]*sd(y)/sd(X[,2]) # = 1
# Unstandardized intercept (= 1)
-(coef[2]*mean(X[,1])/sd(X[,1]) + coef[3]*mean(X[,2])/sd(X[,2]))*sd(y) + mean(y)
|
GLMnet - "Unstandardizing" Linear Regression Coefficients
|
This is mostly a case of carefully working out the math. I'll handle the two predictor + intercept case, it should be clear how to generalize it.
The standardized elastic net model results in the fol
|
GLMnet - "Unstandardizing" Linear Regression Coefficients
This is mostly a case of carefully working out the math. I'll handle the two predictor + intercept case, it should be clear how to generalize it.
The standardized elastic net model results in the following relationship:
$$\frac{y - \mu(y)}{\sigma(y)} = \beta_1 \frac{x_1 - \mu(x_1)}{\sigma(x_1)} + \beta_2 \frac{x_1 - \mu(x_1)}{\sigma(x_1)}$$
If you very carefully move terms around until only $y$ is on the left hand side, you'll get
$$ y = \frac{\beta_1 \sigma(y)}{\sigma(x_1)} x_1 + \frac{\beta_2 \sigma(y)}{\sigma(x_2)} x_2- \left( \frac{\beta_1 \mu(x_1)}{\sigma(x_1)} + \frac{\beta_2 \mu(x_2)}{\sigma(x_2)} \right) \sigma(y) + \mu(y) $$
which gives the relationship between the standardized and unstandardized coefficients.
Here's a quick demonstration you can test this with
X <- matrix(runif(100, 0, 1), ncol=2)
y <- 1 -2*X[,1] + X[,2]
Xst <- scale(X)
yst <- scale(y)
enet <- glmnet(X, y, lambda=0)
enetst <- glmnet(Xst, yst, lambda=0)
coef <- coefficients(enetst)
# Un-standardized betas
coef[2]*sd(y)/sd(X[,1]) # = -2
coef[3]*sd(y)/sd(X[,2]) # = 1
# Unstandardized intercept (= 1)
-(coef[2]*mean(X[,1])/sd(X[,1]) + coef[3]*mean(X[,2])/sd(X[,2]))*sd(y) + mean(y)
|
GLMnet - "Unstandardizing" Linear Regression Coefficients
This is mostly a case of carefully working out the math. I'll handle the two predictor + intercept case, it should be clear how to generalize it.
The standardized elastic net model results in the fol
|
39,682
|
GLMnet - "Unstandardizing" Linear Regression Coefficients
|
The process GLMnet follows when it calculates coefficients for linear regression seems to be as follows:
Standardise each $x$ by subtracting the mean and dividing by the standard deviation (in calculating the standard deviation, divide by $N$, not $N-1$).
Standardise $y$ by dividing by its standard deviation (again with the 'divide by $N$' formula).
Alter the target lambda by dividing it by the standard deviation calculated for $y$.
Calculate the $\beta $s using the formula in http://www.jstatsoft.org/v33/i01/paper.
Unstandardise the $\beta$s using a variant of the formula in Matthew's answer. Note that since you did not subtract the mean of $y$ you do not need to add it at the end.
To calculate the intercept, use the unstandardised $\beta$s and the unstandardised $x$s and $y$s. Take the average of $y$ and each $x$. The formula is:
$$intercept = \bar y - \beta \bar x$$
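A sketch of that arithmetic in Python (numpy only, with plain least squares standing in for glmnet at lambda = 0, where the two should coincide; the data and names are my own):

```python
# Standardize X (centered, population sd), scale y by its sd without
# centering, fit, then unstandardize the slopes and rebuild the intercept
# from the sample means.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(size=(100, 2))
y = 1.0 - 2.0 * X[:, 0] + X[:, 1]

# "Divide by N" standard deviations (numpy's ddof=0 default).
sx = X.std(axis=0)
sy = y.std()

Xst = (X - X.mean(axis=0)) / sx
yst = y / sy                      # scaled but not mean-centered

# Fit yst ~ Xst with an intercept.
A = np.column_stack([np.ones(len(y)), Xst])
coef = np.linalg.lstsq(A, yst, rcond=None)[0]
beta_st = coef[1:]

# Unstandardize slopes; intercept = ybar - beta . xbar.
beta = beta_st * sy / sx
intercept = y.mean() - X.mean(axis=0) @ beta
```

With the noise-free data above, `beta` recovers $(-2, 1)$ and `intercept` recovers $1$, matching the formula $intercept = \bar y - \beta \bar x$.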
|
GLMnet - "Unstandardizing" Linear Regression Coefficients
|
The process GLMnet is following when it calculates coefficients for linear regression seem to be as follows:
Standardise each $x$ by subtracting the mean and dividing by the standard deviation (in cal
|
GLMnet - "Unstandardizing" Linear Regression Coefficients
The process GLMnet is following when it calculates coefficients for linear regression seem to be as follows:
Standardise each $x$ by subtracting the mean and dividing by the standard deviation (in calculating the standard deviation, divide by $N$, not $N-1$).
Standardise $y$ by dividing by its standard deviation (again with the 'divide by $N$' formula).
Alter the target lambda by dividing it by the standard deviation calculated for $y$.
Calculate the $\beta $s using the formula in http://www.jstatsoft.org/v33/i01/paper.
Unstandardise the $\beta$s using a variant of the formula in Matthew's answer. Note that since you did not subtract the mean of $y$ you do not need to add it at the end.
To calculate the intercept, use the unstandardised $\beta$s and the unstandardised $x$s and $y$s. Take the average of $y$ and each $x$. The formula is:
$$intercept = \bar y - \beta \bar x$$
|
GLMnet - "Unstandardizing" Linear Regression Coefficients
The process GLMnet is following when it calculates coefficients for linear regression seem to be as follows:
Standardise each $x$ by subtracting the mean and dividing by the standard deviation (in cal
|
39,683
|
Adding Interaction Terms to Multiple Linear Regression, how to standardize?
|
The approach in the question seems to be correct as long as the variables of concern are continuous or binary. Categorical variables with three or more levels cannot be multiplied as stated.
The standardized interaction term should be the standardized version of the product of the two original variables, not the product of the two standardized variables. Here is an example using the sample data set auto in Stata:
Let's say we are interested in using mile per gallon (mpg), weight of the car (weight) and their interaction to predict the price (price). The original model is:
. reg price mpg weight c.mpg#c.weight
Source | SS df MS Number of obs = 74
-------------+------------------------------ F( 3, 70) = 13.11
Model | 228430463 3 76143487.7 Prob > F = 0.0000
Residual | 406634933 70 5809070.47 R-squared = 0.3597
-------------+------------------------------ Adj R-squared = 0.3323
Total | 635065396 73 8699525.97 Root MSE = 2410.2
--------------------------------------------------------------------------------
price | Coef. Std. Err. t P>|t| [95% Conf. Interval]
---------------+----------------------------------------------------------------
mpg | 396.7844 185.2023 2.14 0.036 27.41003 766.1587
weight | 5.067008 1.378057 3.68 0.000 2.31856 7.815455
|
c.mpg#c.weight | -.1916795 .0711936 -2.69 0.009 -.3336706 -.0496885
|
_cons | -5944.881 4525.706 -1.31 0.193 -14971.12 3081.356
--------------------------------------------------------------------------------
If we standardized the product, the results will agree with the original:
. reg price zmpg zwt zmpgWeight
Source | SS df MS Number of obs = 74
-------------+------------------------------ F( 3, 70) = 13.11
Model | 228430457 3 76143485.6 Prob > F = 0.0000
Residual | 406634939 70 5809070.56 R-squared = 0.3597
-------------+------------------------------ Adj R-squared = 0.3323
Total | 635065396 73 8699525.97 Root MSE = 2410.2
------------------------------------------------------------------------------
price | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
zmpg | 2295.597 1071.489 2.14 0.036 158.5807 4432.614
zwt | 3938.046 1071.017 3.68 0.000 1801.97 6074.121
zmpgWeight | -1773.852 658.8436 -2.69 0.009 -3087.874 -459.8299
_cons | 6165.257 280.1802 22.00 0.000 5606.455 6724.059
------------------------------------------------------------------------------
However, if we use the product of the standardized variables, the results will differ from the original. The ANOVA results are the same, but you can see the p-values of the standardized mpg and weight are different:
. reg price zmpg zwt c.zmpg#c.zwt
Source | SS df MS Number of obs = 74
-------------+------------------------------ F( 3, 70) = 13.11
Model | 228430459 3 76143486.3 Prob > F = 0.0000
Residual | 406634937 70 5809070.53 R-squared = 0.3597
-------------+------------------------------ Adj R-squared = 0.3323
Total | 635065396 73 8699525.97 Root MSE = 2410.2
------------------------------------------------------------------------------
price | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
zmpg | -1052.87 556.2308 -1.89 0.063 -2162.238 56.49692
zwt | 765.3424 526.041 1.45 0.150 -283.8133 1814.498
|
c.zmpg#c.zwt | -861.8786 320.1187 -2.69 0.009 -1500.335 -223.422
|
_cons | 5478.971 378.7809 14.46 0.000 4723.517 6234.426
------------------------------------------------------------------------------
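The same phenomenon can be reproduced outside Stata. Below is a Python sketch (numpy, synthetic data standing in for the auto dataset): both parameterizations span the same column space, so the overall fit and the raw-scale interaction slope are identical, but the main-effect coefficients differ:

```python
# Compare: standardize(x1*x2) as the interaction column vs.
# standardize(x1)*standardize(x2). Same fit, same raw interaction slope,
# different main-effect coefficients.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(20, 5, n)        # stand-in for mpg
x2 = rng.normal(3000, 700, n)    # stand-in for weight
y = 400 * x1 + 5 * x2 - 0.2 * x1 * x2 + rng.normal(0, 2000, n)

z = lambda v: (v - v.mean()) / v.std()

def fit(*cols):
    A = np.column_stack([np.ones(n), *cols])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return coef, resid @ resid

coef_a, rss_a = fit(z(x1), z(x2), z(x1 * x2))     # standardized product
coef_b, rss_b = fit(z(x1), z(x2), z(x1) * z(x2))  # product of standardized
```

Dividing each interaction coefficient by the scale of its interaction column (`(x1*x2).std()` versus `x1.std()*x2.std()`) recovers the same raw-scale slope in both fits, mirroring the identical interaction t statistics in the Stata output.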
|
39,684
|
Adding Interaction Terms to Multiple Linear Regression, how to standardize?
|
Standardization is not a requirement, but is an option. Mean-centering (a part of standardization) makes the lower-order terms more interpretable. Penguin_Knight showed that standardizing after forming the interaction term rather than before gives you the same results as the unstandardized model. Note that this is a consequence of the change in interpretation of lower-order terms when you mean-center variables before forming the interaction term. Both of his outputs are valid (note the interaction t value is identical); you just need to know how to interpret the lower-order coefficients (the main effects in ANOVA terms). In short, when you mean-center/standardize before forming your interaction terms, the mpg effect is the effect of mpg for an average-weight car (because it is the effect when all the other variables it interacts with are 0, and for the weight variable we set 0 to equal the mean). Without mean-centering/standardizing, the mpg effect is the effect of mpg for a car that weighs 0 pounds (hence mean-centering usually improves interpretability, since cars can't weigh 0 pounds).
That approach is correct but missing some details. For continuous variables, you only need to multiply two variables to form an interaction (again after mean-centering or standardizing if you wish). When categorical variables are involved, you can create an interaction term by first creating separate numerical variables that correspond to contrasts of interest. You can create as many contrasts as you have levels of your categorical variable minus 1. You do not need to use a full set of contrast codes, however. Once you have your columns of contrast codes, you create your interactions the same as before, as you now are merely multiplying two numerical variables. Note: this works for interactions of categorical:categorical and categorical:continuous and any permutation at higher orders of interactions.
Run your regression. I have been assuming that you also have the lower order variables in the model as well (i.e. $y=a+b+ab$ rather than $y=ab$ which would adjust how you interpret the results).
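The contrast-coding recipe can be sketched as follows (Python with numpy; the data, coding scheme, and coefficient values are illustrative, not from the answer). A 3-level categorical becomes two contrast columns, and interacting it with a continuous variable is then ordinary column-wise multiplication:

```python
# Dummy-code a 3-level factor into 2 contrast columns (level 0 as the
# reference), build interaction columns by multiplication, and verify that
# least squares recovers group-specific intercepts and slopes.
import numpy as np

rng = np.random.default_rng(2)
group = rng.integers(0, 3, 60)    # 3-level categorical
x = rng.normal(size=60)           # continuous predictor

c1 = (group == 1).astype(float)   # contrast: level 1 vs reference
c2 = (group == 2).astype(float)   # contrast: level 2 vs reference

# Model matrix: intercept, main effects, and categorical:continuous
# interactions (lower-order terms included, as recommended above).
design = np.column_stack([np.ones(60), x, c1, c2, c1 * x, c2 * x])

# Noise-free response with known effects, so the fit should be exact.
y = 1.0 + 0.5 * x + 2.0 * c1 - 1.0 * c2 + 1.5 * c1 * x
coef = np.linalg.lstsq(design, y, rcond=None)[0]
```

Here `coef` recovers the generating values $(1, 0.5, 2, -1, 1.5, 0)$, confirming that the multiplied contrast columns carry the group-specific slope differences.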
|
39,685
|
Adding Interaction Terms to Multiple Linear Regression, how to standardize?
|
You don't need to standardize anything unless the scales are vastly different. Even in this case you don't necessarily need to standardize; a simple change of unit of measure (scale) will work fine. Once you change the scale, the interactions will be on the scaled variables too.
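The unit-change point in two lines of simulation (Python, numpy; illustrative data): rescaling a predictor only rescales its coefficient by the same factor, leaving the fit itself unchanged:

```python
# Fit the same simple regression with the predictor in "pounds" and in
# "thousands of pounds"; the slope scales by exactly 1000.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(1000, 5000, 50)            # e.g. weight in pounds
y = 2.0 + 0.004 * x + rng.normal(0, 0.5, 50)

def slope(v):
    A = np.column_stack([np.ones(50), v])
    return np.linalg.lstsq(A, y, rcond=None)[0][1]

b_pounds = slope(x)
b_kilopounds = slope(x / 1000.0)           # same data, new unit
```

The fitted values are identical in both parameterizations; only the numerical size of the coefficient changes.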
|
39,686
|
Questions on multiple imputation with MICE for a multigroup-SEM-analysis? (including survey weights)
|
(I'm the creator of lavaan.survey)
As Stas already indicated, the combination (multiple imputation * complex sampling) can be tricky business. The main papers are Kott (1995) and Kim, Brick & Fuller (2006).
Here are some considerations:
As mentioned by Stas, all the usual best practices of MI apply. Considering the below, I would probably not use quickpred() initially. There is a risk it will discard things that you actually need. It might help to make some reasonable subselection though.
If you have weights, these need to be included in the imputation model as a covariate (Kim et al. 2006, p. 518). Since you are doing multiple group analysis ("domain estimation"), you also need to include the interaction between the group dummies and the weights in the imputation model (p. 519).
If you have strata and clusters, things become more complicated. The imputation model needs to account for the resulting correlation between the observations; if not, you will get the wrong standard errors (Kim et al. 2006: p. 514). A model-based way of doing this might be to include strata as fixed effects and clusters as random effects in a Bayesian imputation model. A more survey-like approach would be to follow Stas' suggestion and use a resampling procedure that respects the strata and clusters. For example, in bootstrapping with just the clusters, you would sample a random cluster (PSU) with replacement and then individuals (SSUs) with replacement within the sampled clusters.
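A minimal sketch of that two-stage cluster bootstrap (Python with numpy; the data layout and function name are hypothetical, not from mice or lavaan.survey):

```python
# Two-stage bootstrap: sample clusters (PSUs) with replacement, then sample
# individuals with replacement within each sampled cluster.
import numpy as np

rng = np.random.default_rng(4)

def cluster_bootstrap(data, cluster_ids):
    """One bootstrap replicate respecting the cluster structure.

    data: (n, p) array; cluster_ids: length-n array of cluster labels.
    """
    clusters = np.unique(cluster_ids)
    sampled = rng.choice(clusters, size=len(clusters), replace=True)
    rows = []
    for c in sampled:
        members = np.flatnonzero(cluster_ids == c)
        rows.append(rng.choice(members, size=len(members), replace=True))
    return data[np.concatenate(rows)]

# Toy example: 5 clusters of 10 observations each.
ids = np.repeat(np.arange(5), 10)
data = rng.normal(size=(50, 2))
replicate = cluster_bootstrap(data, ids)
```

In the full procedure, each such replicate would then be imputed once and analyzed, and the replicate-to-replicate variation gives the design- and imputation-adjusted standard errors.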
Another advantage of Stas' resampling suggestion, even without strata and clusters, is that you will account for the uncertainty about the parameters of the imputation model including that caused by the weights. I am not sure if mice does this accurately by default. This is usually a relatively small additional term in the variance but it might make a difference.
Once you have the multiply imputed datasets, you can just pass these as an imputationList to lavaan.survey (see the JSS lavaan.survey paper). lavaan.survey will then do all the usual MI pooling calculations for you. So you don't need to manually fit a model separately for each imputation!
Hope this helps,
All the best,
Daniel
P.S. Thanks to Stas and @Gaming_dude who brought this post to my attention. I would be happy to continue the conversation (here, lavaan Google discussion group, twitter, email..)!
|
39,687
|
Questions on multiple imputation with MICE for a multigroup-SEM-analysis? (including survey weights)
|
If I were dealing with this in my project, and I am grateful that I don't have to, this is what I would have done.
Take a survey bootstrap sample that respects my survey design -- see Rao and Wu 1988.
For each bootstrap replicate, impute the missing data once, see Shao and Sitter 1996.
Within each imputation, follow the best practices for SEM imputation, which would probably mean: do the imputation separately for men and women, so that the unique features within the group are preserved for the subsequent multiple group analysis; include all variables that are in the SEM model as predictors in the imputation model; include the survey design variables (strata, clusters, weights, possibly non-linear functions of weights) into the imputation model.
Run your analysis in lavaan.survey using the weight corresponding to the current bootstrap replicate.
Repeat 1-4 to obtain design-consistent, imputation-adjusted standard errors.
I don't know what is going to happen to the tests like the goodness of fit that SEM people are so crazy about (and that always rejects anyway). Judging from the technical description of lavaan.survey in JSS (Oberski 2014), there's a way to pass the variance estimation step 5 to lavaan.survey so that it could estimate the variance of the estimating equations $\Gamma$ and then form all of these traditional tests. Whether, and how, that is doable is beyond me though. I don't quite see the mechanism of aligning the replicate weights with imputations, but maybe it is in place somewhere.
Refs:
Oberski 2014: http://www.citeulike.org/user/ctacmo/article/13599829
Rao and Wu 1988: http://www.citeulike.org/user/ctacmo/article/582039
Shao and Sitter 1996: http://www.citeulike.org/user/ctacmo/article/1269394
|
39,688
|
Maximum of Independent Gamma random variables? [closed]
|
The gamma distribution is in the Gumbel domain of attraction. You can
refer to the book by L. de Haan and A. Ferreira, Extreme Value Theory:
An Introduction.
See therein Theorem 1.1.8 and Exercise 1.7 for its application to the
gamma distribution. Another very useful book even provides explicit
values for the two sequences required in the normalisation:
P. Embrechts, C. Klüppelberg and T. Mikosch, Modelling Extremal Events
for Insurance and Finance; this is in Section 3.4,
p. 156 in my edition.
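The Gumbel limit is easy to see by simulation. For the shape-1 (exponential) special case of the gamma, the normalizing sequences are explicit: $M_n - \log n$ converges to a standard Gumbel, whose mean is the Euler–Mascheroni constant ($\approx 0.5772$) and variance is $\pi^2/6$. A quick Python check (numpy; sample sizes are arbitrary choices):

```python
# Simulate maxima of n iid Gamma(shape=1) = Exp(1) variables, normalize by
# b_n = log(n), and compare the moments to the standard Gumbel's.
import numpy as np

rng = np.random.default_rng(5)
n, reps = 500, 5000
maxima = rng.standard_gamma(1.0, size=(reps, n)).max(axis=1)
normalized = maxima - np.log(n)   # should be approximately standard Gumbel
```

For general shape $a$, the attraction still holds but the sequences involve $\log n$, $\log\log n$, and $\Gamma(a)$; Embrechts et al. give them explicitly.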
|
39,689
|
Maximum of Independent Gamma random variables? [closed]
|
Taken from
David, H. A., & Nagaraja, H. N. (2003). Order statistics 3d ed., ch 10. p 296
The "two references that give the most complete and rigorous discussion of the problem" are
|
39,690
|
Why do one-versus-all multi class SVMs need to be calibrated?
|
Setup
Recall that an SVM can be viewed as a weight vector $w$ and an intercept $b$, and that the output function for a test input $x$ is $\langle w, x \rangle + b$. To get a binary prediction, we take $f(x) = \mathrm{sign}(\langle w, x \rangle + b)$.
(I'm going to use some primal notations here, but use $\langle \cdot, \cdot \rangle$ to denote that inner products are happening in some Hilbert space rather than necessarily in $\mathbb R^n$. This won't be important to the answer; feel free to think of everything as a traditional vector if you like.)
Importance of $\lVert w \rVert$
If we want to compare output functions from different models, say $(w, b)$ and $(w', b')$, we would need some kind of assurance that the values of $\langle w, x \rangle + b$ and $\langle w', x \rangle + b'$ are of similar size.
Otherwise, for example, suppose that $\lVert w \rVert \gg \lVert w' \rVert$, so that for most values of $x$, $\lvert \langle w, x \rangle \rvert \gg \lvert \langle w', x \rangle \rvert$. $\lvert b \rvert$ will probably also be larger, but this will just make the mean value 0 (in a balanced classification problem); we'll still typically have $\lvert \langle w, x \rangle + b \rvert \gg \lvert \langle w', x \rangle + b' \rvert$.
Then, when we pick the model with the highest-valued output function, we'll usually just pick $(w, b)$.
This is not great because, for any $\alpha > 0$, the model defined by $(w, b)$ gives the same predictions as that defined by $(\alpha w, \alpha b)$: $\langle w, x \rangle + b > 0$ iff $\langle \alpha w, x \rangle + \alpha b = \alpha \left( \langle w, x \rangle + b \right) > 0$.
What determines $\lVert w \rVert$?
The question, then, is how do we end up with large $w$s as a result of the SVM optimization problem?
For a hard-margin SVM, the margin is $2 / \lVert w \rVert$. So a high value of $\lVert w \rVert$ actually corresponds to a small-margin model, which makes it (in the underlying assumption of SVMs) a worse model. So if we don't scale the output scores, we actually trust the worst models the most!
For soft-margin SVMs, the margin is still $2 / \lVert w \rVert$, but how "hard" the margin is depends on the total slack. This tradeoff is done by the $C$ hyperparameter in the objective $\tfrac12 \lVert w \rVert^2 + C \sum_i \xi_i$, where $\xi_i$ is the amount you'd need to move the $i$th training example to put it on the right side of the margin. A higher $C$ corresponds to a harder margin, thus a smaller margin, and a larger $\lVert w \rVert$. If you're tuning the hyperparameters of your multiclass ensemble individually for each problem, the unscaled version of the ensemble will additionally be biased towards those with higher values of $C$, without any really good reason for doing so.
Moral of the story
"It is important that the output functions be calibrated to produce comparable scores." Otherwise, you're doing almost exactly the wrong thing.
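A tiny plain-Python sketch (toy weights of my own invention) makes the scale problem concrete: $(w, b)$ and $(\alpha w, \alpha b)$ classify every point identically, yet the second model's raw score is $\alpha$ times larger, so an uncalibrated arg-max over one-vs-all raw scores would always prefer it:

```python
def raw_score(w, b, x):
    """Linear SVM output function <w, x> + b (no calibration)."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

w, b = [0.5, -0.2], 0.1
alpha = 10.0
w2, b2 = [alpha * wi for wi in w], alpha * b   # same decision boundary

x = [1.0, 2.0]
s1, s2 = raw_score(w, b, x), raw_score(w2, b2, x)
assert (s1 > 0) == (s2 > 0)          # identical prediction...
assert abs(s2 - alpha * s1) < 1e-9   # ...but a 10x larger raw score
```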
|
39,691
|
Why do one-versus-all multi class SVMs need to be calibrated?
|
This is because each individual one-vs-all classifier corresponds to different support vectors and their respective alphas in the decomposition. The score output for a test data point is in no way bounded and should be normalized (e.g. using Platt's scheme from scores to probabilities) in order to be comparable.
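Platt's scheme maps raw scores to probabilities via a fitted sigmoid. A minimal plain-Python sketch; note the A and B values here are hypothetical placeholders, since in practice they are fitted by maximum likelihood on held-out scores:

```python
import math

def platt_probability(score, A=-1.0, B=0.0):
    """Map a raw SVM score to P(y = +1 | score) = 1 / (1 + exp(A*score + B)).
    A < 0 makes the probability increase with the score; A and B are
    placeholder values standing in for the fitted sigmoid parameters."""
    return 1.0 / (1.0 + math.exp(A * score + B))
```

Once each one-vs-all classifier has its own fitted (A, B), the outputs live on a common [0, 1] scale and become comparable.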
|
39,692
|
How to Assess the Fit of Thousands of Distributions?
|
I'll suggest using representative plots. Pull 16 or 20 subjects, and show their QQ-plots in a 4x4 or 4x5 chart. Sometimes, you can plot several subjects in the same plot. This doesn't substitute for other ways of representing the fits, but on the other hand I don't think you can avoid this step either. It's used a lot in panel (longitudinal) data analysis. You really need to see the representative plots.
See, Fig.12-1.3 in this book. It's not the distributions, but the same idea: show the sample plots for subjects.
You can get fancy and draw 3d plots, of course, or contour plots, where x-axis is subject, but these are sometimes hard to analyze visually. They may reveal important patterns though.
UPDATE
You can also show the histogram of Kolmogorov-Smirnov statistics. It's true that the critical values are expensive to compute, but the statistic itself is easy to compute. So, you can obtain the KS statistic for each subject, and show the histogram of obtained values. This will give you a great visual cue as to how the gamma distribution fits in general. It's almost like bootstrapping.
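The per-subject KS recipe fits in a few lines of plain Python. For self-containment the fitted family in this sketch is exponential rather than gamma (the stdlib has no gamma CDF), but the steps are the same: fit each subject, compute the statistic, collect the values, histogram them:

```python
import math
import random

def ks_stat(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic: the largest gap between
    the empirical CDF and a candidate CDF (statistic only, no p-value)."""
    xs = sorted(sample)
    n = len(xs)
    return max(max(abs((i + 1) / n - cdf(x)), abs(i / n - cdf(x)))
               for i, x in enumerate(xs))

random.seed(0)
stats = []
for _ in range(100):                          # 100 "subjects"
    sample = [random.expovariate(1.0) for _ in range(50)]
    rate = len(sample) / sum(sample)          # fit the rate from the data
    stats.append(ks_stat(sample, lambda x, r=rate: 1 - math.exp(-r * x)))
# `stats` is what you would histogram: its shape shows at a glance how
# well the fitted family does across all subjects at once.
```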
|
39,693
|
How to Assess the Fit of Thousands of Distributions?
|
I hope that I understood your situation and the question correctly. Considering your data set's number of distributions, visual exploratory approaches (such as QQ plots, which you mentioned) are not feasible in this case. Therefore, you have to resort to analytical approaches, such as goodness-of-fit (GoF) tests, as some have already mentioned in the comments above.
Since you have informed that distribution parameters are estimated from data, I assume that you have used or plan to use one of distribution fitting approaches. One of the most popular fitting approaches (along with least squares, to a lesser degree) is maximum likelihood estimation (MLE), which is generally easy to perform, for example, using function fitdistr() from R package MASS. However, depending on your particular data, fitting via fitdistr() might not be so trivial. Some people prefer R package fitdistrplus, as they consider it more advanced or useful.
After this straightforward step, you need to validate the estimation results, using one or more of the following GoF tests for continuous data (considering their pros and cons): chi-square (via binning), Kolmogorov-Smirnov (via corrected tables for critical values or Monte Carlo simulation, which I'm listing here just for completeness, as you are trying to avoid this), Anderson-Darling, Lilliefors, Cramér–von Mises and Watson. In terms of performance, the problem gets reduced to performing a relatively large number of non-parametric GoF tests, which IMHO is achievable either by running it on more powerful hardware (e.g., renting an Amazon EC2 instance) or by parallelizing the code.
Returning to the essence of your question, my idea of possible approaches is to aggregate results either via bootstrapping (similarly to the one presented in this excellent answer), or some kind of averaging approach, similar to ensemble methods (for example, take a look at this research paper).
|
39,694
|
(Feed-Forward) Neural Networks keep converging to mean
|
The problem was identified in a chat discussion: the logistic function in the hidden layer is saturated by large input values.
It is recommended to normalize the input values to the [-1, 1] range, as is standard NN practice.
The question owner acknowledged that this resolved his problem.
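The recommended fix is a one-liner per feature; a minimal min-max scaler as a plain-Python sketch:

```python
def scale_to_range(xs, lo=-1.0, hi=1.0):
    """Min-max scale a feature column to [lo, hi] (here [-1, 1]) so that a
    logistic hidden layer is not driven into its flat, saturated regions."""
    mn, mx = min(xs), max(xs)
    return [lo + (hi - lo) * (x - mn) / (mx - mn) for x in xs]
```

At prediction time, apply the min/max learned from the training data to the test inputs rather than rescaling them independently.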
|
39,695
|
(Feed-Forward) Neural Networks keep converging to mean
|
(1) Some of your predictors are on wildly different scales. Most texts (e.g., Ripley, 96; Hastie, Tibshirani, Friedman 08) recommend preprocessing the predictors by scaling the range to [0,1].
(2) In my experience, fit is very sensitive to tuning parameter values. Again, most authors recommend searching over an extensive grid of values for (a) the size of the hidden layer and (b) the value of weight decay, the smoothing parameter.
Here is some code that uses package nnet for fitting the network and package caret for training the tuning parameters. I also use doMC to parallelize the training over 24 cores on my machine but you should adjust the ''cores = '' value to however many you have on yours. Also, I think doMC doesn't work on Windows so you might have to delete those two lines if you are running windows.
library(caret)
library(nnet)
library(doMC)
registerDoMC(cores = 24)
ctrl <- trainControl(method = "cv",
                     number = 10)
nnet_grid <- expand.grid(.decay = 10^seq(-5, 1, .25),
                         .size = c(1, 2, 4, 8, 16))
nnfit <- train(form = target ~ majorholiday + mon + sat + sun + thu + tue + wed + backtickets + l1_target + l7_target,
               data = dat,
               method = 'nnet',
               MaxNWts = 4000,
               maxit = 4000,
               preProcess = "range",
               trControl = ctrl,
               tuneGrid = nnet_grid,
               linout = TRUE)
ps <- predict(nnfit, dat)
The best combination of size and decay I get is size = 1 and decay = 10^(.5). What this tells me is that, according to cross-validated test error, the winning combination is the one with the smallest size (i.e., the fewest additional parameters) and a very large weight decay value (i.e., the most smoothing). This points to a simple, rather than a complex, solution for this data, where "simple" here means closer to linear.
Remember that the feed forward neural net with one hidden layer is a nonlinear generalization of linear regression. With zero hidden units, it is equivalent. Thus, it makes sense that a multiple regression, which essentially models the response surface with a hyperplane in your covariate space, also fit well.
In any case, preprocessing the predictors by scaling them to [0,1] seems to solve your mean-only predicted values problem.
|
39,696
|
(Feed-Forward) Neural Networks keep converging to mean
|
I had this problem before.
I think that, in general, the predict-the-average problem can happen when:
Your final hidden layer blows up and all units are saturated, so all values are 1 or 0 all the time. Then the bias unit is basically doing all the lifting, and the bias unit can at best predict the average. This can be caused by bad weight initialization or a step size that is too big. Is your step size = 50000?
You are using cross-entropy or squared error loss when you should be using softmax.
|
39,697
|
How to prevent collinearity?
|
As far as I understand, collinearity or multicollinearity (hereafter referred to simply as collinearity) cannot be prevented/avoided during data analysis, because collinearity is a built-in "feature" of data. Therefore, a particular data set has certain levels of collinearity (or the lack of). However, collinearity can be prevented/avoided to some degree prior to data analysis, that is, during research design planning or, possibly, exploratory data analysis (EDA) phases. This is likely what Ieno and Zuur (2015) mean by their phrase, which you've cited in your question above.
Potential solutions for preventing / avoiding / dealing with collinearity include using appropriate research designs that reduce collinearity. However, while I ran across mentions of this approach several times, it was unclear to me which designs exactly are helpful in that regard and why (StatsStudent mentions one such method, stratified sampling, but relevant sources are not provided). Before mentioning other solutions, it is worth saying that the sometimes-recommended option of dropping predictors is considered a rather bad one - see this blog post or the blog author's book (Baguley, 2012). He also mentions that doing nothing should be considered one of the valid approaches to dealing with collinearity as well.
Other approaches to dealing with (mainly reducing) collinearity include: increasing sample size and transforming predictors (Baguley, 2012); using principal component analysis (PCA), using simple regression between highly correlated variables (sequential regression) and calculating ratio of correlated variables (Balling, n.d.); a priori modeling and ridge regression (Graham, 2003). While much of literature is focused on dealing with collinearity in multiple regression settings, it should be noted that researchers, who use structural equation modeling (SEM) in their studies, face similar issues of collinearity (Grewal, Cote & Baumgartner, 2004). This is despite the fact that latent variable modeling (LVM) is also considered as an approach to reducing collinearity (see below).
Finally, I highly recommend a very comprehensive paper on the topic by Dormann, Elith, Bacher, Buchmann, Carl, Carré et al. (2013), which contains an excellent overview of methods for dealing with collinearity as well as their comparison via simulation. Those methods include: PCA and other variable-clustering methods; the already mentioned sequential regression; principal component regression (PCR), partial least squares (PLS) and some other LVM methods; and tolerant/penalized regression techniques, including the above-mentioned ridge regression.
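Whichever approach you pick, it helps to quantify the collinearity first. For the two-predictor case, the variance inflation factor reduces to 1/(1 − r²); a plain-Python sketch (in the general case you would instead regress each predictor on all the others):

```python
import math

def vif_two_predictors(x1, x2):
    """VIF for either of two predictors: 1 / (1 - r^2), where r is their
    Pearson correlation. VIF near 1 means little collinearity; large VIF
    (common rules of thumb: > 5 or > 10) signals a problem."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    s1 = math.sqrt(sum((a - m1) ** 2 for a in x1))
    s2 = math.sqrt(sum((b - m2) ** 2 for b in x2))
    r = sum((a - m1) * (b - m2) for a, b in zip(x1, x2)) / (s1 * s2)
    return 1.0 / (1.0 - r * r)
```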
References
Baguley, T. (2012). Serious stats: A guide to advanced statistics for the behavioral sciences. New York, NY: Palgrave Macmillan.
Balling L. W. (n.d.). A brief introduction to regression designs and mixed-effects modelling by a recent convert. Retrieved from http://pure.au.dk/portal/files/14325917/balling_csl.pdf
Dormann, C. F., Elith, J., Bacher, S., Buchmann, C., Carl, G., Carré, G., ..., & Lautenbach, S. (2013). Collinearity: A review of methods to deal with it and a simulation study evaluating their performance. Ecography, 36(1), 27-46. doi:10.1111/j.1600-0587.2012.07348.x Retrieved from http://onlinelibrary.wiley.com/doi/10.1111/j.1600-0587.2012.07348.x/pdf
Graham, M. H. (2003). Confronting multicollinearity in ecological multiple regression, Ecology, 84(11), 2809-2815. Retrieved from http://www.auburn.edu/~tds0009/Articles/Graham%202003.pdf
Grewal, R., Cote J. A., & Baumgartner, H. (2004). Multicollinearity and measurement error in structural equation models: Implications for theory testing. Marketing Science, 23(4), 519-529. doi:10.1287/mksc.1040.0070 Retrieved from http://www.personal.psu.edu/rug2/Grewal,%20Cote,%20%26%20Baumgartner%20MKS%202004.pdf
|
39,698
|
How to prevent collinearity?
|
There are several sampling techniques that can be used to reduce collinearity, so I'll mention just one of them: a stratified random sampling plan can eliminate some of the problem. Here's an example using the scenario you described above: if you find that certain trees are collinear with altitude or slope of the track, you could stratify the population-area on categories (areas) of altitude or steepness (e.g., form strata based on the following bins: 0-5 slope, 5-10 slope, 10-15 slope, . . . ). By creating slope strata, you can reduce some of the issue of multicollinearity with slope.
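A hypothetical sketch of that stratified draw in plain Python (the bin edges and the (slope, record) layout are made up purely for illustration):

```python
import random

def stratified_sample(points, per_stratum, edges=(0, 5, 10, 15), seed=0):
    """Assign each (slope, record) pair to a slope stratum (0-5, 5-10,
    10-15) and draw the same number from each, so that slope is roughly
    balanced -- and hence less collinear with other predictors -- in the
    resulting sample."""
    random.seed(seed)
    strata = [[] for _ in range(len(edges) - 1)]
    for slope, record in points:
        for i in range(len(edges) - 1):
            if edges[i] <= slope < edges[i + 1]:
                strata[i].append((slope, record))
                break
    return [p for group in strata
              for p in random.sample(group, min(per_stratum, len(group)))]
```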
|
39,699
|
How to prevent collinearity?
|
Obviously, I can't speak for Ieno or Zuur. I am also not an ecologist. However, from the description, I think they simply mean: walk more than one path. In essence, multicollinearity just means that your variables are correlated with each other.
If you walk only one path up the mountain, every variable you measure at each point will be at least somewhat related, by virtue of the fact that they are measured at the same altitude. On the other hand, if you walk up the mountain on the south side, and also on the north side, you may find that the variables differ even at the same altitude. For example (depending on where you are on the planet), one side will get more sunlight than the other; one side may get more rain; and so on. By walking more, and more dissimilar, paths, you can minimize the collinearity that would otherwise have arisen.
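A toy simulation makes the point concrete. All the numbers below (the altitude effect on sunlight and rainfall, the sunlight penalty on the north side, the noise levels) are invented for illustration, not taken from any real survey:

```python
import math
import random

random.seed(0)

def corr(xs, ys):
    # Pearson correlation, computed by hand to stay stdlib-only.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# One path: 50 stops up the south side. Sunlight and rainfall both track
# altitude, so they come out correlated with each other.
alt_one = [i * 10 for i in range(50)]
sun_one = [0.5 * a + random.gauss(0, 30) for a in alt_one]
rain_one = [0.3 * a + random.gauss(0, 30) for a in alt_one]

# Two paths: the same altitudes on the south AND north sides. The north
# side gets much less sun at any given altitude, breaking the shared trend.
alt_two = alt_one * 2
aspect = [0] * 50 + [1] * 50          # 0 = south, 1 = north
sun_two = [0.5 * a - 300 * s + random.gauss(0, 30)
           for a, s in zip(alt_two, aspect)]
rain_two = [0.3 * a + random.gauss(0, 30) for a in alt_two]

r_one = corr(sun_one, rain_one)
r_two = corr(sun_two, rain_two)
print(f"one path:  r = {r_one:.2f}")   # high: sunlight and rain collinear
print(f"two paths: r = {r_two:.2f}")   # lower: collinearity reduced
```

The second design measures the same altitudes twice under different conditions, which is exactly what "more, and more dissimilar, paths" buys you.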
|
39,700
|
How to prevent collinearity?
|
Similar to @StatsStudent,
you need replication of the other variables within values of the same slope, e.g., by also doing transects around the mountain (sideways) starting at different elevations/slopes.
If there is truly no variation in tree type with slope, then you will never be able to separate the two using natural variation, as it is non-existent, unless you bin data within different ranges of slope (but this is not necessarily a sampling thing, as you can do it afterwards during data analysis). In that case you would need an EXPERIMENT :)
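The "bin afterwards during data analysis" idea can be sketched as follows. The records and bin width are hypothetical; the point is only that, within a bin, slope is held roughly constant, so other variables can vary free of it:

```python
import random
from collections import defaultdict

# Hypothetical field records: slope in degrees plus another predictor
# (altitude). Both are invented for illustration.
random.seed(1)
records = [{"slope": random.uniform(0, 15),
            "altitude": random.uniform(0, 1000)} for _ in range(200)]

def slope_bin(s, width=5):
    # Map a slope value to a labelled range, e.g. 7.3 -> "5-10".
    lo = int(s // width) * width
    return f"{lo}-{lo + width}"

binned = defaultdict(list)
for r in records:
    binned[slope_bin(r["slope"])].append(r)

# Within each bin, slope varies by at most `width` degrees, so variation
# in altitude across these records is no longer tied to slope.
for label in sorted(binned):
    print(label, len(binned[label]))
```

Analysing each bin separately (or adding the bin as a factor) is the post-hoc analogue of the stratified sampling plan discussed above.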
|