If I know the density I'm estimating is symmetric about 0, how to impose this restriction in my kernel density estimator?

One way to impose the restriction is just to reflect the data about zero, so that
$$\hat f(x) = \frac{1}{2nh}\sum_i \left[ k\left(\frac{X_i-x}{h} \right)+k\left(\frac{-X_i-x}{h} \right) \right]$$
If you used the same bandwidth as for an ordinary kernel estimator, you would expect that the variance component of error would be halved, and the bias component not changed. Presumably you could (in principle) get a smaller $h$ and smaller bias, but less variance reduction. You won't get an improved rate of convergence, just a constant factor.
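A minimal sketch of this reflection estimator, assuming a Gaussian kernel; the data, bandwidth, and function name are illustrative, not from the original answer:

```python
import numpy as np

def reflected_kde(x, data, h):
    """Symmetrized kernel density estimate: average the kernel evaluated
    at the data and at its reflection about zero."""
    x = np.asarray(x, dtype=float)[:, None]     # evaluation points, shape (m, 1)
    d = np.asarray(data, dtype=float)[None, :]  # sample, shape (1, n)
    k = lambda u: np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)  # Gaussian kernel
    n = d.shape[1]
    # (1/(2nh)) * sum_i [ k((X_i - x)/h) + k((-X_i - x)/h) ]
    return (k((d - x) / h) + k((-d - x) / h)).sum(axis=1) / (2 * n * h)

rng = np.random.default_rng(0)
sample = rng.normal(size=500)
grid = np.linspace(-3, 3, 7)
f_hat = reflected_kde(grid, sample, h=0.4)
```

By construction $\hat f(x) = \hat f(-x)$ exactly, whatever the sample looks like.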
This paper actually has the details, both for when the centre of symmetry is known (your case) and when it's unknown. If it's unknown you need to estimate it, and you have to be careful that your estimator isn't too bad. The paper shows that (for large enough $n$ and under weak assumptions about smoothness) you can always get an improvement even if the centre of symmetry has to be estimated.
Choosing the right forecast model for exponential data (COVID19) forecast package R

You can force ets() to use a model with multiplicative trend (and multiplicative error) by using the parameter model="MMN". Of course, you need to start the series later, since multiplicative trends and errors don't make sense for zero values.
# drop the first nine observations (the leading zero counts);
# daily series starting on day 32 of 2020
temp3 <- ts(temp[-(1:9)], start = c(2020, 32),
            frequency = 365.25)
# multiplicative error, multiplicative trend, no seasonality
test <- ets(temp3, model = "MMN")
# forecast 14 days ahead and plot
test %>% forecast(h = 14) %>% autoplot()
I certainly hope this graphic is what you wanted.
It also illustrates why ets() is very careful about fitting multiplicative trends on its own. They can and will explode. Also:
I don't like fitting an exponential regression model as this will not catch up when the exponential part of the epidemic stops.
Of course, ets() will not know when to stop extrapolating the exponential growth, so this (extremely correct) rationale applies equally to ets(). You may want to consider models that are explicitly tailored towards epidemiology or (market) penetration, like the Bass diffusion model or similar.
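To make the Bass diffusion suggestion concrete, here is a minimal sketch of the Bass cumulative-adoption curve; the parameter values are purely illustrative:

```python
import numpy as np

def bass_cumulative(t, p, q, m):
    """Bass diffusion model: cumulative adopters at time t.
    p = coefficient of innovation, q = coefficient of imitation,
    m = ultimate market (or epidemic) size."""
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

t = np.arange(0, 100)
cases = bass_cumulative(t, p=0.01, q=0.2, m=100_000)
# growth is near-exponential early on, but the curve saturates at m,
# which is exactly the behavior ets() with a multiplicative trend lacks
```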
EDIT: Rob Hyndman explains in more depth why smoothing and similar models do not make a lot of sense to forecast COVID-19, and gives pointers to more appropriate models. And here is Ivan Svetunkov.
Choosing the right forecast model for exponential data (COVID19) forecast package R

I suggest using a binary logistic regression model. Calculate p as the proportion of the population infected, p = c / N, and l as the link function, for example, l = ln(p/(1-p)). Then use ordinary least squares, again for example, to find l_hat = f(t). Next, apply the inverse link function p_hat = exp(l_hat)/(1+exp(l_hat)). Then convert the estimated proportion, p_hat, into a case count, c_hat = p_hat * N.
At each step, there are other choices you could make. A different link function or a different regression method come to mind.
You could evaluate the quality of your estimate graphically by comparing the number of cases and the estimate, the proportion and the estimate, the logit and its estimate (or other link function).
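A minimal sketch of the recipe above; the case counts, population size, and the linear-in-time form of the logit are all illustrative assumptions:

```python
import numpy as np

# hypothetical daily cumulative case counts and population size
N = 1_000_000
c = np.array([120, 180, 260, 400, 590, 880, 1300, 1950])
t = np.arange(len(c), dtype=float)

p = c / N                      # proportion of the population infected
l = np.log(p / (1 - p))        # logit link
# ordinary least squares fit of the logit on time: l_hat = a + b*t
b, a = np.polyfit(t, l, 1)
l_hat = a + b * t
p_hat = np.exp(l_hat) / (1 + np.exp(l_hat))  # inverse link
c_hat = p_hat * N              # back to a case count
```

Comparing c against c_hat (and l against l_hat) graphically is the diagnostic suggested in the answer.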
Good luck and stay safe.
John
How to interpret sum of two random variables that cross domains?

It's a good question, interpreted in the following way: the random variable $X$ determined by the coin experiment has a sample space of $\Omega_1=\{\text{Heads},\ \text{Tails}\}$ while the random variable $Y$ determined by the roll of a die has a sample space $\Omega_2$ consisting of the six possible stable orientations of the die on the table. These obviously are not the same sample space, so what sense does it make to add them?
I will explain one approach to this problem in two ways: first using mathematical terminology and then again using a standard physical model (or metaphor) for probability spaces based on random sampling.
The mathematical account
Implicitly, we form the product space $\Omega=\Omega_1\times\Omega_2.$ Its elements consist of all the ordered pairs $(\omega_1,\omega_2)$ where $\omega_1\in\Omega_1$ and $\omega_2\in\Omega_2.$ The original random variables define new random variables on $\Omega:$ $X$ defines the random variable
$$(\omega_1, \omega_2) \to X(\omega_1)$$
while $Y$ defines the random variable
$$(\omega_1,\omega_2) \to Y(\omega_2).$$
It is a traditional abuse of notation to reuse the symbols $X$ and $Y$ for these new functions on this new space $\Omega.$ It is also understood, when $X$ or $Y$ have continuous distributions, that $\Omega$ is given a sufficiently rich set of events (its sigma algebra) to make these new functions into bona fide random variables: that is, they will be measurable.
Now it makes sense to write "$X+Y$" because it can be defined by pointwise addition (as usual) via
$$(X+Y)(\omega_1,\omega_2) = X(\omega_1,\omega_2) + Y(\omega_1,\omega_2) = X(\omega_1) + Y(\omega_2)$$
where the last equality comes from the definitions of $X$ and $Y$ on $\Omega.$
If we continue to insist that the two original sample spaces were different, there is no way to model any dependency among them. Each has its probability function $\mathbb{P}_1$ and $\mathbb{P}_2.$ One thing we can always do, though, is to give the events in $\Omega$ the probabilities $\mathbb P$ they must have according to the definition of independence. In particular, when $E_1\subset \Omega_1$ and $E_2\subset\Omega_2$ are events, then $E_1\times E_2 = \{(\omega_1,\omega_2)\in\Omega\mid \omega_1\in E_1\text{ and } \omega_2\in E_2\}$ is an event and
$$\mathbb{P}(E_1\times E_2) = \mathbb{P}_1(E_1)\,\mathbb{P}_2(E_2).$$
This enables us to do probability calculations with random variables defined on $\Omega$ including, for example, $X+Y.$
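The product construction can be made concrete in code. A sketch under the independence assumption, with a fair coin ($X\in\{0,1\}$) and a fair die ($Y\in\{1,\ldots,6\}$); names are my own:

```python
from itertools import product
from fractions import Fraction
from collections import defaultdict

# the two original spaces, with X and Y defined on each
X = {"Heads": 1, "Tails": 0}              # X(omega_1)
Y = {face: face for face in range(1, 7)}  # Y(omega_2)

p1 = Fraction(1, 2)  # P_1 of each coin outcome
p2 = Fraction(1, 6)  # P_2 of each die face

# product measure on Omega = Omega_1 x Omega_2: P(w1, w2) = P_1(w1) * P_2(w2)
dist = defaultdict(Fraction)
for w1, w2 in product(X, Y):
    dist[X[w1] + Y[w2]] += p1 * p2   # (X+Y)(w1, w2) = X(w1) + Y(w2)

# dist[1] = P(Tails) * P(die shows 1) = 1/12
```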
In general, though, people usually assume this product construction has already been carried out, so that in effect $X$ and $Y$ were defined on a common sample space all along. This permits us to model arbitrary dependencies among these variables, simply by defining any valid sigma algebra and probability function we like on $\Omega.$
A metaphorical (intuitive) account
In terms of the tickets in a box metaphor, $\Omega_1$ is a box with two tickets in it (one for each side of the coin) and $\Omega_2$ is a box with six tickets in it (one for each side of the die). $X$ consists of writing $0$ on one of the coin tickets and $1$ on the other coin ticket. $Y$ consists of writing the numbers $1,\ldots,6$ on the six die tickets, one number per ticket. The "product box" $\Omega$ is created by tabulating all combinations of a ticket from one box with a ticket from the other box, as in this figure, where elements of $\Omega_1$ index the columns and elements of $\Omega_2$ index the rows.
I have identified the six sides of the die by giving it a coordinate system in which the die is the cube bounded by the eight points with coordinates $\pm 1,$ so that unit vectors pointing out from its six faces determine those faces, and for brevity I write the vectors without parentheses or commas:
$$\begin{array}{r|cc}
 & \text{Tails} & \text{Heads} \\
\hline 100: & (\text{Tails}, 100) & (\text{Heads}, 100)\\
010: & (\text{Tails}, 010) & (\text{Heads}, 010) \\
001: & (\text{Tails}, 001) & (\text{Heads}, 001) \\
-001: & (\text{Tails}, -001) & (\text{Heads}, -001) \\
-010: & (\text{Tails}, -010) & (\text{Heads}, -010) \\
-100: & (\text{Tails}, -100) & (\text{Heads}, -100)
\end{array}$$
We cut out the $2\times 6 = 12$ cells of this table and put them into a new box: that's $\Omega.$ These tickets represent all possible combinations of a flip of the coin and a roll of the die.
(Clearly it doesn't matter whether you work with $\Omega_1\times \Omega_2$ or $\Omega_2\times\Omega_1;$ the difference--although a real one at a basic mathematical level as well as in the typographical formatting of the table--is just a matter of notation.)
One way to model the probability on the product space is to go through this process with the actual tickets in the boxes rather than the unique types of tickets as shown above. For instance, if $\Omega_1$ has two tickets for "Tails" and one ticket for "Heads" (modeling a coin that favors Tails 2:1), then the table would have three columns: one for each ticket. When $\Omega_1$ has $m$ tickets and $\Omega_2$ has $n$ tickets, this will create $mn$ new "product" tickets to put into $\Omega.$
If instead we put arbitrary numbers of each product ticket into $\Omega,$ we can change the probabilities of events and create dependencies among the random variables. Thus, in a setting where we wish to add two random variables representing different kinds of outcomes, we usually assume this table has been created and the tickets consist of various cells from the table in various proportions.
How to interpret sum of two random variables that cross domains?

Surely they can have different sample spaces (or supports), but you can add them and generate another RV.
If you assume independence between the two, the resulting mass function can be calculated by simply convolving the two. But, a more primitive solution is to list all possible scenarios:
$$\begin{align}&P(Z=1)=P(Y=0)P(X=1)=1/12\\&P(Z=2)=P(Y=0)P(X=2)+P(Y=1)P(X=1)=2/12\\&...\end{align}$$
which will make up the mass function of $Z$.
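The enumerated terms above are exactly a discrete convolution; a sketch, assuming a fair die $X\in\{1,\ldots,6\}$ and a fair coin $Y\in\{0,1\}$ to match the $1/12$ and $2/12$ values:

```python
import numpy as np

# pmfs on the integers: array index = value of the variable
px = np.zeros(7); px[1:7] = 1/6          # X: fair die, values 1..6
py = np.zeros(2); py[0] = py[1] = 1/2    # Y: fair coin, values 0..1

# under independence, the pmf of Z = X + Y is the convolution of the pmfs
pz = np.convolve(px, py)

# pz[1] = P(Y=0)P(X=1) = 1/12
# pz[2] = P(Y=0)P(X=2) + P(Y=1)P(X=1) = 2/12
```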
How to interpret sum of two random variables that cross domains?

Remember that X and Y will be the outcomes of the roll of a die / flip of a coin. So if I understand your question correctly, you will add the value you get from your first die roll (say 2) to the value of the coin toss (say 0), resulting in a Z of 2, for example.
How to interpret sum of two random variables that cross domains?

Any two sample spaces can be combined by the operation of direct sum. That's generally denoted by the symbol $\oplus$, although the simple addition symbol is sometimes used when it's considered obvious that the direct sum is intended.
However, in this case, it appears that regular addition is meant. The coin flips are being represented by the integers {0, 1}, so those numbers can be added to the die roll. While the sample space ranges, in the sense of the possible values, are different, in both cases the overall space is the set of integers, so addition is possible. | How to interpret sum of two random variables that cross domains? | Any two sample space can be combined by the operation of direct sum. That's generally denoted by the symbol $\oplus$, although the simple addition symbol is sometimes used when it's considered obvious | How to interpret sum of two random variables that cross domains?
When Can Integration and Expectation be Exchanged?

Generally speaking, the expected value of an integral is an iterated integral, and so the normal mathematical rules for interchange of integrals apply. To see this more clearly, we first note that the expectation operator is an integration operation. Formally, a random variable $X$ in the probability space $(\Omega, \mathscr{G}, P)$ has expected value defined by the Lebesgue integral:
$$\mathbb{E}(X) \equiv \int_\Omega X(\omega) dP(\omega).$$
Now, suppose we have a random variable that is an integral over some other function:
$$X(\omega) = \int_\mathbb{R} H(r,\omega) dr.$$
In this case, the expected value can be written as an iterated integral as follows:
$$\mathbb{E}(X) = \int_\Omega \Bigg( \int_\mathbb{R} H(r,\omega) dr \Bigg) dP(\omega).$$
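As a quick sanity check of this iterated-integral identity, here is a toy numeric example with a finite $\Omega$ and a choice of $H$ that is entirely my own:

```python
import numpy as np

# Toy setting: Omega = {0, 1} with P(0)=0.3, P(1)=0.7, and
# H(r, omega) = exp(-(r - omega)^2), integrated over a fine r-grid.
probs = {0: 0.3, 1: 0.7}
r = np.linspace(-10.0, 10.0, 20001)
dr = r[1] - r[0]

H = lambda r, w: np.exp(-(r - w) ** 2)

# E[ integral H dr ]: integrate over r first, then average over omega
inner = {w: H(r, w).sum() * dr for w in probs}
lhs = sum(probs[w] * inner[w] for w in probs)

# integral E[H] dr: average over omega first, then integrate over r
mean_H = sum(probs[w] * H(r, w) for w in probs)
rhs = mean_H.sum() * dr

# both orders agree, as Fubini/Tonelli guarantees here (H >= 0, integrable)
```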
Now, there are a number of theorems relating to when you are allowed to interchange the order of integration in this kind of iterated integral, but the most important is Fubini's theorem. (If $H$ is a measurable function in the above expression and $\mathbb{E}|H|$ is finite then you will be able to interchange the order of integration under this theorem.) This is a large subject, so I will not attempt to set it out fully here. Instead, I will refer you to books on integration and measure theory, which deal with the basis of Lebesgue integration and the application of iterated integrals. Good luck with your explorations!
How is $P(D;\theta) = P(D|\theta)$?

There are a couple of issues here:
In classical statistics all the distributions used are implicitly conditional on $\theta$, which is considered to be an "unknown constant". In Bayesian analysis there is no such thing as an unknown constant (anything unknown is treated as a random variable) and we instead use explicit conditioning statements for all probability statements.
This means that, in Bayesian analysis, the sampling density $P(X|\theta)$ is the object $P_\theta(X)$ that you referred to in the classical case. (The likelihood function is just the sampling density treated as a function of the parameter $\theta$ with $X=x$ taken to be fixed.) It also means that the density $P(X)$ in the Bayesian analysis is not conditional on $\theta$. It is the marginal density of the data, which is given by: $$P(X) = \int \limits_{\Theta} P(X|\theta) P(\theta) \ d \theta.$$ There are a few places in your question where you get a bit sloppy with conditioning statements, and you end up equivocating the conditional and marginal distributions of the data. That is not a big problem in classical statistics (since all probability statements are implicitly conditional on the parameter), but it will cause trouble for you in Bayesian analysis.
The notation $P(X ; \theta)$ is usually used only in classical statistics, and it is used to denote the same thing as $P_\theta(X)$ ---i.e., it is implicitly the conditional density of the data given the parameter. It would be unusual (and confusing) to use this notation for the joint density.
The Bayesian method whereby you maximise the posterior distribution with respect to the parameter is a point-estimation method called maximum a-posteriori (MAP) estimation. This is a point-estimation method that gives you a single point-estimate. You should bear in mind that Bayesians are usually concerned with also retaining the whole posterior density, since this contains more information than the MAP estimator.
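MAP estimation is easy to illustrate with a conjugate Beta-Bernoulli model; the data and prior below are illustrative numbers of my own:

```python
import numpy as np

# Bernoulli data: k successes in n trials
n, k = 10, 7
# Beta(a, b) prior on theta
a, b = 2.0, 2.0

# maximise the (log) posterior over a fine grid of theta values
theta = np.linspace(1e-6, 1 - 1e-6, 100001)
log_posterior = (k + a - 1) * np.log(theta) + (n - k + b - 1) * np.log(1 - theta)
theta_map = theta[np.argmax(log_posterior)]

# closed form for this conjugate model: MAP = (k + a - 1) / (n + a + b - 2)
# here that is 8/12 = 2/3, pulled toward the prior mean 1/2 from the MLE 0.7
```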
In classical statistics all the distributions used are implicitly conditional on $\theta$, which is considered to be an "unknown constant". In Bayesian analysis th | How is $P(D;\theta) = P(D|\theta)$?
There are a couple of issues here:
In classical statistics all the distributions used are implicitly conditional on $\theta$, which is considered to be an "unknown constant". In Bayesian analysis there is no such thing as an unknown constant (anything unknown is treated as a random variable) and we instead use explicit conditioning statements for all probability statements.
This means that, in Bayesian analysis, the sampling density $P(X|\theta)$ is the object $P_\theta(X)$ that you referred to in the classical case. (The likelihood function is just the sampling density treated as a function of the parameter $\theta$ with $X=x$ taken to be fixed.) It also means that the density $P(X)$ in the Bayesian analysis is not conditional on $\theta$. It is the marginal density of the data, which is given by: $$P(X) = \int \limits_{\Theta} P(X|\theta) P(\theta) \ d \theta.$$ There are a few places in your question where you get a bit sloppy with conditioning statements, and you end up equivocating the conditional and marginal distributions of the data. That is not a big problem in classical statistics (since all probability statements are implicitly conditional on the parameter), but it will cause trouble for you in Bayesian analysis.
The notation $P(X ; \theta)$ is usually used only in classical statistics, and it is used to denote the same thing as $P_\theta(X)$ ---i.e., it is implicitly the conditional density of the data given the parameter. It would be unusual (and confusing) to use this notation for the joint density.
The Bayesian method whereby you maximise the posterior distribution with respect to the parameter is a point-estimation method called maximum a-posteriori (MAP) estimation, and it gives you a single point estimate. You should bear in mind that Bayesians are usually concerned with also retaining the whole posterior density, since this contains more information than the MAP estimator.
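The MAP-versus-full-posterior point can be made concrete with a hypothetical Beta-Bernoulli example (the prior, data, and numbers here are my own illustration, not part of the original answer):

```python
# Hypothetical Beta-Bernoulli illustration (numbers are my own):
# prior Beta(a, b), data: k successes in n Bernoulli trials.
a, b = 2.0, 2.0
n, k = 10, 7

# Conjugacy: the posterior is Beta(a + k, b + n - k).
post_a, post_b = a + k, b + n - k

# MAP estimate = posterior mode = (alpha - 1) / (alpha + beta - 2).
theta_map = (post_a - 1) / (post_a + post_b - 2)

# The posterior mean is a different summary of the same posterior --
# a reminder that the full density carries more than any single point.
theta_mean = post_a / (post_a + post_b)

print(round(theta_map, 4))   # 0.6667
print(round(theta_mean, 4))  # 0.6429
```

The mode and mean disagree here precisely because the posterior has spread; reporting only the MAP point discards that information.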
31,110 | How is $P(D;\theta) = P(D|\theta)$?
I'll use a simplified notation in this answer. If you're doing classical statistics, $\theta$ is not a random variable. Hence, the notation $p(x;\theta)$ is describing a member of a family of probability functions or densities $\{p_\theta(x)\}_{\theta\in\Theta}$, in which $\Theta$ is the parameter space. In a Bayesian analysis, $\theta$ is a random variable, and $p(x\mid\theta)$ is a conditional probability function or density, which models your uncertainty about $x$ for each possible value of $\theta$. After you're done with your experiment, there is no longer uncertainty about $x$ (it becomes data/information you know about), and you look at $p(x\mid \theta)=L_x(\theta)$ as a function of $\theta$, for this "fixed" data $x$. This likelihood function $L_x(\theta)$ lives in the intersection between the classical and Bayesian styles of inference. In my opinion, the Bayesian way is better understood in terms of conditional independence. I suggest that you write down and explore the likelihood function for the Bernoulli model; graph it; think about its meaning before and after the experiment is conducted. You mentioned that a Bayesian maximizes the posterior $\pi(\theta\mid x)$. That's not necessarily the case. There are other ways to summarize the posterior distribution; essentially, the chosen summary depends on the introduction of a loss function. Check Robert's Bayesian Choice to learn all the gory details.
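Following the suggestion about the Bernoulli model, here is a small sketch (my own illustration, not part of the original answer) of the likelihood for fixed data:

```python
# The Bernoulli likelihood for fixed data: k successes in n trials
# (the numbers are my own illustration).
n, k = 10, 7

def likelihood(theta):
    # L_x(theta) = theta^k * (1 - theta)^(n - k), with the data x held fixed
    return theta**k * (1.0 - theta)**(n - k)

# Explore it on a grid: the maximum sits at the sample proportion k/n.
grid = [i / 100 for i in range(1, 100)]
best = max(grid, key=likelihood)
print(best)  # 0.7
```

Before the experiment, $\theta^k(1-\theta)^{n-k}$ is a sampling probability in $x$; after it, the same expression is read as a function of $\theta$.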
31,111 | Why is the cross-entropy always more than the entropy? | Let's say you have two distributions $p$ and $q$. Cross entropy is: $H(p,q)=-\sum_x{p(x)\log{q(x)}}$. First, you'll manipulate it to obtain the very well known form: $H(p,q)=H(p)+D_{KL}(p||q)$, where $D_{KL}(p||q)$ is the Kullback-Leibler (KL) divergence.
$$H(p,q)=-\sum_x{p(x)\log{\left(\frac{q(x)p(x)}{p(x)}\right)}}=-\sum_xp(x)\log{\left(\frac{q(x)}{p(x)}\right)}-\sum_x{p(x)\log{p(x)}}=D_{KL}(p||q)+H(p)$$
Then, it only remains to prove that $D_{KL}(p||q)\geq 0$, which can be done in various ways. The page I shared uses $\log(x)\leq x-1$:
$$D_{KL}(p||q)\geq \sum_x{p(x)\left(1-\frac{q(x)}{p(x)} \right)}=\sum_x{p(x)}-\sum_x{q(x)}=1-\sum_x{q(x)}\geq 0$$
From the beginning, we assume that $x$ ranges over the support set of $p(x)$, i.e. where $p(x)$ is non-zero. The wikipedia entry says $\sum_x{q(x)}=1$ at this step, but I disagree with it: since the sum runs only over the support of $p$, which can differ from the support of $q$, we can only say $\sum_x{q(x)}\leq 1$ — which is still enough for the final inequality.
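The inequality above can be checked numerically (an illustrative sketch I am adding, with distributions of my own choosing):

```python
from math import log

def entropy(p):
    return -sum(pi * log(pi) for pi in p if pi > 0)

def cross_entropy(p, q):
    # The sum runs over the support of p, as in the derivation above.
    return -sum(pi * log(qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.2, 0.2, 0.6]

H_p, H_pq = entropy(p), cross_entropy(p, q)

assert H_pq >= H_p                             # cross-entropy dominates entropy
assert abs(cross_entropy(p, p) - H_p) < 1e-12  # equality when q = p
```

The gap `H_pq - H_p` is exactly $D_{KL}(p||q)$, which vanishes only when $q$ matches $p$ on its support.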
31,112 | Why is the cross-entropy always more than the entropy? | The claim $H(p,q) \geq H(p)$ unpacks as
$$\sum_{x}-p_{x}\log(q_{x}) \geq \sum_{x}-p_{x}\log(p_{x}),$$
which, using $-\log(x)=\log(\frac{1}{x})$ and $\log(x)-\log(y)=\log(\frac{x}{y})$, is equivalent to
$$\sum_{x}p_{x}\log\left(\frac{q_{x}}{p_{x}}\right) \leq 0.$$
Because $\log$ is a concave function and $\sum_{x}p_{x}=1$ (so the $p_{x}$ play the role of the weights $\alpha$ in the definition of concavity), Jensen's inequality gives
$$\sum_{x}p_{x}\log\left(\frac{q_{x}}{p_{x}}\right) \leq \log\left(\sum_{x:\,p_{x}\neq 0}p_{x}\,\frac{q_{x}}{p_{x}}\right) = \log\left(\sum_{x:\,p_{x}\neq 0}q_{x}\right) \leq \log\left(\sum_{x}q_{x}\right) = \log(1) = 0,$$
which proves the claim.
31,113 | Why is the cross-entropy always more than the entropy? | More intuitively with logical deduction:
i) P(x) is the "real scenario", how things really happen, and Q(x) is the estimation
ii) P(x) * (-Log(Q(x))) is the "loss / punishment" function: if, for example, the true probability P(x) is high but Q(x) is very low, then, since P and Q are always between 0 and 1, this loss / punishment term will be VERY HIGH (due to the log of a very small number)
So
a) By definition: H(P) = -Sum_over_x (P(x)*Log(P(x))) is the Entropy, and P is the "ground truth" (or known to have minimal errors if the ground truth cannot be properly measured)
b) it then follows that, for ALL other estimated distributions / probabilities Qi, the cross-entropy H(P,Qi) = -Sum_over_x (P(x)*Log(Qi(x))) is on average penalized more heavily, and hence greater than or equal to H(P)
Hence H(P,Q) >= H(P): the entropy H(P) is the smallest value the cross-entropy can attain.
Note: Compare
error A: P(x1) is 0.1, and Q(x1) is 0.5, log(0.5) = -0.69314718056,
P(x1)* -Log (Q(x1)) = 0.0693.....
error B: P(x2) is 0.5 vs Q(x2) is 0.1, log(0.1) = -2.302585093
P(x2)* -Log (Q(x2)) = 1.151.....
In the machine learning use case, P(xi) * -Log(Q(xi)) penalizes most heavily those predictions that assign a very low probability to what is in fact a high-probability event (error B above), whereas for a low true probability P(xi) such as 0.1 in error A above, the penalty stays small.
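The two penalty values above can be reproduced directly (an illustrative sketch):

```python
from math import log

# Error A: true probability 0.1, predicted 0.5 -> small penalty.
penalty_a = 0.1 * -log(0.5)
# Error B: true probability 0.5, predicted 0.1 -> much larger penalty.
penalty_b = 0.5 * -log(0.1)

print(round(penalty_a, 4))  # 0.0693
print(round(penalty_b, 4))  # 1.1513
```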
You can look at the following for more information:
https://machinelearningmastery.com/cross-entropy-for-machine-learning/
31,114 | Confidence interval for the mean - Normal distribution or Student's t-distribution? | 1. Normal data, variance known: If you have observations $X_1, X_2, \dots, X_n$ sampled at random from
a normal population with unknown mean $\mu$ and known standard deviation $\sigma,$ then a 95% confidence interval (CI) for $\mu$ is $\bar X \pm 1.96 \sigma/\sqrt{n}.$ This is the only situation in which the z interval is exactly correct.
2. Nonnormal data, variance known: If the population distribution is not normal and the sample is 'large enough', then $\bar X$ is approximately normal and the same formula provides an approximate 95% CI. The rule that $n \ge 30$ is 'large enough' is unreliable here. If the population distribution is heavy-tailed, then $\bar X$ may not have a distribution that is close to normal (even if $n \ge 30).$ The 'Central Limit Theorem' often provides
reasonable approximations for moderate values of $n,$ but it is a limit theorem,
with guaranteed results only as $n \rightarrow \infty.$
3. Normal data, variance unknown: Suppose you have observations $X_1, X_2, \dots, X_n$ sampled at random from
a normal population with unknown mean $\mu$ and unknown standard deviation $\sigma,$ with $\mu$ estimated by the sample mean $\bar X$ and $\sigma$ estimated by the sample standard deviation $S.$ Then a 95% confidence interval (CI) for $\mu$ is $\bar X \pm t^* S/\sqrt{n},$
where $t^*$ cuts probability $0.025$ from the upper tail of Student's t distribution with $n - 1$ degrees of freedom. This is the only situation in which the t interval is exactly correct.
Examples: If $n=10$, then $t^* = 2.262$
and if $n = 30,$ then $t^* = 2.045.$ (Computations from R below; you could also use a printed 't table'.)
qt(.975, 9); qt(.975, 29)
[1] 2.262157 # for n = 10
[1] 2.04523 # for n = 30
Notice that 2.045 and 1.96 (from Part 1 above) both round to 2.0. If $n \ge 30$ then $t^*$ rounds to 2.0. That is the basis for
the 'rule of 30', often mindlessly parroted in other contexts where it is not relevant.
There is no similar coincidental rounding for CIs with confidence levels other than 95%. For example, in Part 1 above
a 99% CI for $\mu$ is obtained as $\bar X \pm 2.58 \sigma/\sqrt{n}.$ However,
$t^*=2.76$ for $n = 30$ and $t^* = 2.65$ for $n = 70.$
qnorm(.995)
[1] 2.575829
qt(.995, 29)
[1] 2.756386
qt(.995, 69)
[1] 2.648977
4. Nonnormal data, variance unknown: Confidence intervals based on the t distribution (as in Part 3 above) are known to be 'robust' against moderate departures from normality.
(If $n$ is very small, there should be no far outliers or evidence of severe skewness.) Then, to a degree that is difficult to predict, a t CI may provide a useful CI for $\mu.$
By contrast, if the type of distribution is known, it may be possible
to find an exact form of CI.
For example, if $n = 30$ observations from a (distinctly nonnormal)
exponential distribution with unknown mean $\mu$ have $\bar X = 17.24,\,
S = 15.33,$ then the (approximate) 95% t CI is $(11.33, 23.15).$
t.test(x)
One Sample t-test
data: x
t = 5.9654, df = 29, p-value = 1.752e-06
alternative hypothesis: true mean is not equal to 0
95 percent confidence interval:
11.32947 23.15118
sample estimates:
mean of x
17.24033
However,
$$\frac{\bar X}{\mu} \sim \mathsf{Gamma}(\text{shape}=n,\text{rate}=n),$$
so that $$P(L \le \bar X/\mu \le U) = P(\bar X/U \le \mu \le \bar X/L)=0.95$$
and an exact 95% CI for $\mu$ is $(\bar X/U,\, \bar X/L) = (12.42, 25.55).$
qgamma(c(.025,.975), 30, 30)
[1] 0.6746958 1.3882946
mean(x)/qgamma(c(.975,.025), 30, 30)
[1] 12.41835 25.55274
Addendum on bootstrap CI: If data seem non-normal, but the actual population
distribution is unknown, then a 95% nonparametric bootstrap CI may be the best
choice. Suppose we have $n=20$ observations from an unknown distribution, with $\bar X = 13.54.$
The observations seem distinctly right-skewed and fail a Shapiro-Wilk normality test with P-value 0.001. If we assume the data are exponential and use the method in Part 4, the 95% CI is $(9.13, 22.17),$ but we have no way to know whether the data are exponential.
Accordingly, we find a 95% nonparametric bootstrap
in order to approximate $L^*$ and $U^*$ such that
$P(L^* < D = \bar X/\mu < U^*) \approx 0.95.$ In the R code below
the suffixes .re indicate random 're-sampled' quantities based on
$B$ samples of size $n$ randomly chosen with replacement from among the
$n = 20$ observations. The resulting 95% CI is $(9.17, 22.71).$ [There are
many styles of bootstrap CIs. This one treats $\mu$ as if it is a scale
parameter. Other choices are possible.]
B = 10^5; a.obs = 13.54
d.re = replicate(B, mean(sample(x, 20, rep=T))/a.obs)
UL.re = quantile(d.re, c(.975,.025))
a.obs/UL.re
97.5% 2.5%
9.172171 22.714980
31,115 | Confidence interval for the mean - Normal distribution or Student's t-distribution?
First, $\sigma \over \sqrt{n}$ is not the standard deviation, it is the standard error. Second, k depends on the confidence level you are interested in and, as you said, can be taken either from a normal distribution or a t-distribution. Third, the t-distribution is meant mainly for small sample sizes ($n < 30$); for large sample sizes it doesn't matter whether you apply the normal distribution or the t-distribution, as the two approximately coincide. Finally, if $\sigma$ is not known, it is better to go with the t-distribution, using the sample standard deviation instead of $\sigma$.
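To make the z-versus-t comparison concrete, here is a small sketch (my own illustration, reusing the critical values quoted in the preceding answer: $z^*=1.96$ and, from `qt(.975, 9)`, $t^*=2.262$ for $n=10$; the sample standard deviation is a made-up number):

```python
from math import sqrt

# Critical values quoted earlier in this thread.
z_star, t_star = 1.96, 2.262
s, n = 4.0, 10               # hypothetical sample sd and sample size

se = s / sqrt(n)             # standard error of the mean, s / sqrt(n)
half_z = z_star * se         # half-width of the z interval
half_t = t_star * se         # half-width of the t interval (wider)

assert half_t > half_z
print(round(half_t / half_z, 3))  # 1.154
```

For small $n$ the t interval is noticeably wider; as $n$ grows, $t^*$ approaches $z^*$ and the two intervals approximately coincide.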
31,116 | Constrain a Neural Network to be monotonic? | Here's an example of an early publication in this vein.
Joseph Sill. "Monotonic Networks". California Institute of Technology. 1998.
Monotonicity is a constraint which arises in many application domains. We present a machine learning model, the monotonic network, for which monotonicity can be enforced exactly, i.e., by virtue of functional form. A straightforward method for implementing and training a monotonic network is described. Monotonic networks are proven to be universal approximators of continuous, differentiable monotonic functions. We apply monotonic networks to a real-world task in corporate bond rating prediction and compare them to other approaches.
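A minimal sketch of the max-min construction from that paper (my own toy illustration: exponentiating raw weights is one way to enforce the positivity constraint, and the sizes and training details are not from the paper):

```python
import random
from math import exp

random.seed(0)

# Toy max-min monotonic network for a scalar input, after Sill (1998):
# groups of linear units whose weights are forced positive via exp(),
# a max over the units within each group, and a min across the groups.
# Positive weights make every unit nondecreasing in x, and max/min of
# nondecreasing functions are nondecreasing, so the net is monotonic.
GROUPS, UNITS = 3, 4
params = [[(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(UNITS)]
          for _ in range(GROUPS)]

def monotonic_net(x):
    group_maxes = [max(exp(w) * x + b for w, b in group) for group in params]
    return min(group_maxes)

# Sanity check: outputs are nondecreasing along an increasing grid.
xs = [i / 10 for i in range(-30, 31)]
ys = [monotonic_net(x) for x in xs]
assert all(a <= b for a, b in zip(ys, ys[1:]))
```

Monotonicity here holds by functional form, exactly as the abstract says: no penalty term or post-hoc check is needed.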
31,117 | Constrain a Neural Network to be monotonic? | You may want to have a look at "Unconstrained Monotonic Neural Networks".
The basic idea is to construct a neural network whose output is forced to be positive; that positive output plays the role of a derivative. The integral of that output is the final output, which is therefore a monotonic function by construction.
The paper describes how to train such a neural network. That is to say how to get the derivatives for the parameters with the added integral.
Here is the link to the paper: https://arxiv.org/pdf/1908.05164.pdf
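The idea can be sketched numerically (a toy illustration of the principle only: a fixed positive function stands in for the trained network, and a simple trapezoid rule stands in for the paper's integration scheme):

```python
from math import exp

# A fixed positive function stands in for a trained "derivative network";
# any network whose output is forced positive would play the same role.
def derivative_net(t):
    return exp(-t * t) + 0.1   # strictly positive everywhere

def monotone_f(x, steps=1000):
    # f(x) = integral from 0 to x of derivative_net, trapezoid rule.
    h = x / steps
    total = 0.5 * (derivative_net(0.0) + derivative_net(x))
    for i in range(1, steps):
        total += derivative_net(i * h)
    return total * h

# Because the integrand is strictly positive, f is strictly increasing.
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [monotone_f(x) for x in xs]
assert all(a < b for a, b in zip(ys, ys[1:]))
```

Training then amounts to backpropagating through the integral into the derivative network's parameters, which is what the paper works out.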
31,118 | Why use matrix transpose in gradient descent? | Consider what matrix multiplication is, and observe the pattern of indices carefully:
$$D_{ij} = \sum_{k}W_{ik} X_{kj}$$
$$\frac{\partial D_{ij}}{\partial W_{ik}} = X_{kj}$$
For a previously described loss function $L$, by the chain rule,
$$\frac{\partial L}{\partial W_{ik}} =
\sum_j \frac{\partial L}{\partial D_{ij}} \frac{\partial D_{ij}}{\partial W_{ik}} =
\sum_j \frac{\partial L}{\partial D_{ij}} X_{kj} =
\sum_j \frac{\partial L}{\partial D_{ij}} X_{jk}^T
$$
Note $\partial D_{i'j}/\partial W_{ik} = 0$ for $i'\ne i$, so for the given $i$ the chain-rule sum runs over $j$ only.
Since we used $X^T$, the inner index $j$ matches up for convenient matrix multiplication notation,
$$
\frac{\partial L}{\partial W} = \frac{\partial L}{\partial D} X^T
$$
This matrix of partial derivatives $\partial L / \partial W$ can also be implemented as the outer product of vectors: $(\partial L / \partial D) \otimes X$.
If you really understand the chain rule and are careful with your indexing, then you should be able to reason through every step of the gradient calculation.
We need to be careful which matrix calculus layout convention we use: here "denominator layout" is used, where $\partial L / \partial W$ has the same shape as $W$ and $\partial L / \partial D$ has the same shape as $D$ (reducing to a column vector in the single-example case).
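The result $\frac{\partial L}{\partial W} = \frac{\partial L}{\partial D} X^T$ can be verified numerically on a small example (a plain-Python sketch I am adding; the loss $L=\sum_{ij} D_{ij}^2$ is my own choice for illustration):

```python
# Check dL/dW = (dL/dD) X^T numerically, with L = sum of D_ij^2.
# (Toy sizes; plain Python so it runs anywhere.)
W = [[1.0, 2.0], [3.0, -1.0]]             # 2x2 weights
X = [[0.5, -2.0, 1.0], [1.5, 0.0, -1.0]]  # 2x3 inputs

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def loss(Wm):
    return sum(d * d for row in matmul(Wm, X) for d in row)

# Analytic gradient: dL/dD = 2D, so dL/dW = (2D) X^T.
D = matmul(W, X)
dLdD = [[2.0 * d for d in row] for row in D]
XT = [list(col) for col in zip(*X)]
grad = matmul(dLdD, XT)

# Central finite differences agree entry by entry.
eps = 1e-6
for i in range(2):
    for k in range(2):
        W[i][k] += eps; up = loss(W)
        W[i][k] -= 2 * eps; dn = loss(W)
        W[i][k] += eps
        assert abs((up - dn) / (2 * eps) - grad[i][k]) < 1e-4
```

Note how the inner index of `dLdD` (size 3) lines up with the rows of `XT`: that index matching is exactly why the transpose appears.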
31,119 | Why use matrix transpose in gradient descent? | This thread is a bit old but I think the question is important for DL practitioners, so let me give a more intuitive answer.
If you compare the computational graph for the forward pass with that of the backward pass you'll notice that there are a couple of key differences.
In the backward pass, you have to sum across the different contributions from the different inputs to each node. That is, if a node takes several variables as input, you need the sum of the local gradients across those variables.
So, what in the forward pass is a set of branches feeding into a given node becomes a sum of local gradients in the backward pass.
Branching in forward becomes sum in backward
On the other hand, the opposite is also true. That is, if you have a sum of inputs in the forward pass, you want to get all the partial derivatives with respect to each of those variables in the backward pass, that is
Sum in forward becomes branching in backward
If you just multiply by the inverse, you are not performing those operations.
On the other hand, multiplying by the transpose does the trick! You can think of the transpose as a kind of "inverse" (in the sense that it transforms outputs back to inputs) which at the same time turns sums into branchings and branchings into sums. This happens because in multiplying you sum across rows of the matrix but get different results (branch) across columns. Transposing swaps both operations.
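This can be checked numerically; here is a minimal NumPy sketch (shapes and values are arbitrary, chosen only for illustration): in the forward pass each output sums over the inputs, and in the backward pass the transpose gathers (sums) each input's branched contributions.

```python
# Forward: y_i = sum_k W[i, k] * x[k]  (sum over inputs).
# Backward: each x_k branched into every y_i, so its gradient is the
# SUM of contributions over i -- exactly what W.T computes.
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((3, 2))
x = rng.standard_normal(2)
y = W @ x

dL_dy = rng.standard_normal(3)  # upstream gradient
dL_dx = W.T @ dL_dy             # transpose, not inverse

manual = np.array([sum(dL_dy[i] * W[i, k] for i in range(3)) for k in range(2)])
assert np.allclose(dL_dx, manual)
```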
This is a wonderful video that goes into a detailed explanation about this point and many others, both formally and in an example graph:
https://www.youtube.com/watch?v=kvnBw_D0gfs
31,120 | Why does "stack more layers" work? [duplicate] | The universal approximation theorem is mainly a proof that for every continuous mapping there exists a neural network of the described structure with a weight configuration that approximates that mapping to an arbitrary accuracy.
It does not give any proof that this weight configuration can be learned via traditional learning methods, and it relies on there being enough units in each layer, but you don't really know what "enough" is. For these reasons, the UAT has very little practical use.
Deep networks have a multitude of benefits over shallow ones:
Hierarchical features:
Deep learning methods aim at learning of feature hierarchies with features from higher-levels of the hierarchy formed by the composition of lower level features. Automatically learning features at multiple levels of abstraction allows a system to learn complex functions mapping the input to the output directly from data, without depending completely on human crafted features. [1]
Distributed representations:
In addition to depth of architecture, we have found that another ingredient is crucial: distributed representations. (...) most non-parametric learning algorithms suffer from the so-called curse of dimensionality. (...) That curse occurs when the only way a learning algorithm generalizes to a new case x is by exploiting only a raw notion of similarity (...) between the cases. This is typically done by the learner looking in its training examples for cases that are close to x (...). Imagine trying to approximate a function by many small linear or constant pieces. We need at least one example for each piece. We can figure out what each piece should look like by looking mostly at the examples in the neighborhood of each piece. If the target function has a lot of variations, we'll need correspondingly many training examples. In dimension d (...), the number of variations may grow exponentially with d, hence the number of required examples. However, (...) we may still obtain good results when we are trying to discriminate between two highly complicated regions (manifolds), e.g. associated with two classes of objects. Even though each manifold may have many variations, they might be separable by a smooth (maybe even linear) decision surface. That is the situation where local non-parametric algorithms work well. (...)
Distributed representations are transformations of the data that compactly capture many different factors of variations present in the data. Because many examples can inform us about each of these factors, and because each factor may tell us something about examples that are very far from the training examples, it is possible to generalize non-locally, and escape the curse of dimensionality. [1]
This can be translated into pictures:
A non-distributed representation (learned by a shallow network) has to assign an output to every piece of the input space (represented by colored hypercubes). However, the number of pieces (and thus number of training points needed to learn this representation) grows exponentially with the dimensionality:
On the other hand, distributed representations do not try to describe every piece of the input space completely. Instead, they partition the space by isolating simple concepts which can later be merged to provide complex information. See below how $K$ hyperplanes split the space into $2^K$ regions:
(Images from [1])
For more insight about distributed representations, I also recommend this thread at Quora: Deep Learning: What is meant by a distributed representation?
In theory, deep networks can emulate shallow networks:
Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error
than its shallower counterpart. [2]
Note that this is also rather a theoretical result; as the cited paper states, empirically deep networks (w/o residual connections) experience "performance degradation".
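The construction quoted from [2] is easy to verify numerically. Below is a minimal NumPy sketch (the two-layer ReLU network and all weights are illustrative assumptions): inserting an identity-weight layer after a ReLU leaves the output unchanged, because ReLU activations are already non-negative.

```python
# Deeper-by-construction: added layer = identity mapping, other layers copied.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

rng = np.random.default_rng(2)
W1 = rng.standard_normal((5, 3))
W2 = rng.standard_normal((2, 5))
x = rng.standard_normal(3)

shallow = W2 @ relu(W1 @ x)
# relu(W1 @ x) is already non-negative, so relu(I @ h) == h and the
# deeper network reproduces the shallow one exactly.
deep = W2 @ relu(np.eye(5) @ relu(W1 @ x))

assert np.allclose(shallow, deep)
```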
[1]: http://www.iro.umontreal.ca/~bengioy/yoshua_en/research.html
[2]: Deep Residual Learning for Image Recognition (He et al., 2015)
31,121 | Why does "stack more layers" work? [duplicate] | Your observation is correct, as the Universal AT doesn't account for layer sizes. In real-life scenarios, however, weight initializations, learning rates and similar parameters can significantly impact the learning. Interestingly enough, for graph-learning-based tasks, two hidden layers appear to be the optimal number of layers, yet this is normally not the case for images.
Furthermore, it is not possible to have infinitely many units in a single layer, and thus more layers are used to construct higher-order generalizations, as this simply works. There remains an open gap in understanding exactly how neural networks, especially deeper ones, learn.
31,122 | Why does "stack more layers" work? [duplicate] | In theory you could achieve the same result with just a single hidden layer, as the theorem suggests.
In practice, as you note: "this paper doesn't say how many units each layer has". This is really important because the number of required units in a single hidden layer network could be exponentially high, thus any learning will require an unfeasible amount of time.
Adding layers helps to keep the total number of units low, and as a consequence the training time will be shorter too.
In practice, as you note: "this paper doesn't say how many units each layer has". This is really i | Why does "stack more layers" work? [duplicate]
In theory you could achieve the same result with just a single hidden layer, as the theorem suggests.
In practice, as you note: "this paper doesn't say how many units each layer has". This is really important because the number of required units in a single hidden layer network could be exponentially high, thus any learning will require an unfeasible amount of time.
Adding layers helps to maintain the total number of units low, and by consequence the training time will be quicker too. | Why does "stack more layers" work? [duplicate]
In theory you could achieve the same result with just a single hidden layer, as the theorem suggests.
In practice, as you note: "this paper doesn't say how many units each layer has". This is really i |
31,123 | GLM standard errors | How can I scale the Fisher information matrix so that I get the same standard errors from the GLM function?
Multiply your unscaled covariance matrix by the dispersion parameter, as is done in summary.glm. The relevant code from summary.glm is
if (is.null(dispersion))
dispersion <- if (object$family$family %in% c("poisson",
"binomial"))
1
else if (df.r > 0) {
est.disp <- TRUE
if (any(object$weights == 0))
warning("observations with zero weight not used for calculating dispersion")
sum((object$weights * object$residuals^2)[object$weights >
0])/df.r
}
else {
est.disp <- TRUE
NaN
}
# [other code...]
if (p > 0) {
p1 <- 1L:p
Qr <- qr.lm(object)
coef.p <- object$coefficients[Qr$pivot[p1]]
covmat.unscaled <- chol2inv(Qr$qr[p1, p1, drop = FALSE])
dimnames(covmat.unscaled) <- list(names(coef.p), names(coef.p))
covmat <- dispersion * covmat.unscaled
# [more code ...]
The chol2inv(Qr$qr[p1, p1, drop = FALSE]) computes $(R^\top R)^{-1}=(X^\top WX)^{-1}$ which you make a comment about. Here, $R$ is the upper triangular matrix from the QR decomposition $QR=\sqrt{W}X$.
atiretoo's answer only holds when the dispersion parameter is one, as with the Poisson and binomial distributions.
31,124 | GLM standard errors | You're very close! The standard errors of the coefficients are the square roots of the diagonal of your matrix, which is the inverse of the Fisher information matrix. Here is an example.
data <- caret::twoClassSim()
model <- glm(Class~TwoFactor1*TwoFactor2, data = data, family="binomial")
# here are the standard errors we want
SE <- broom::tidy(model)$std.error
X <- model.matrix(model)
p <- fitted(model)
W <- diag(p*(1-p))
# this is the covariance matrix (inverse of Fisher information)
V <- solve(t(X)%*%W%*%X)
all.equal(vcov(model), V)
#> [1] "Mean relative difference: 1.066523e-05"
# close enough
# these are the standard errors: take square root of diagonal
all.equal(SE, sqrt(diag(V)))
#> [1] "names for current but not for target"
#> [2] "Mean relative difference: 4.359204e-06" | GLM standard errors | You're very close! The standard errors of the coefficients are the square roots of the diagonal of your matrix, which is the inverse of the Fisher information matrix. Here is an example.
31,125 | If and how to use one-tailed testing in multiple regression | It only requires minimal manual computations to perform one-sided hypothesis tests concerning the regression coefficients $\beta_{i}$.
Two possible one-sided hypotheses are:
\begin{align}
\mathrm{H}_{0}&:\beta_i \geq 0 \\ \tag{1}
\mathrm{H}_{1}&: \beta_i < 0
\end{align}
or
\begin{align}
\mathrm{H}_{0}&:\beta_i \leq 0 \\ \tag{2}
\mathrm{H}_{1}&: \beta_i > 0
\end{align}
The $p$-values provided by R are for the two-sided hypotheses and are calculated as $2P(T_{d}\leq -|t|)$ where $T$ is the test statistic (i.e. the regression coefficient divided by its standard error) and $d$ are the residual degrees of freedom.
The corresponding one-sided $p$-values are $P(T_d\leq t)$ and $P(T_d\geq t)$, for the first $(1)$ and second $(2)$ one-sided hypotheses, respectively.
Here is how to calculate the one-sided $p$-values in R:
mod <- lm(Infant.Mortality~., data = swiss)
res <- summary(mod)
# For the two-sided hypotheses
2*pt(-abs(coef(res)[, 3]), mod$df)
(Intercept) Fertility Agriculture Examination Education Catholic
0.118504431 0.007335715 0.678267621 0.702498865 0.476305225 0.996338704
# For H1: beta < 0
pt(coef(res)[, 3], mod$df, lower.tail = TRUE)
(Intercept) Fertility Agriculture Examination Education Catholic
0.9407478 0.9963321 0.3391338 0.6487506 0.7618474 0.5018306
# For H1: beta > 0
pt(coef(res)[, 3], mod$df, lower.tail = FALSE)
(Intercept) Fertility Agriculture Examination Education Catholic
0.059252216 0.003667858 0.660866190 0.351249433 0.238152613 0.498169352
mod$df extracts the residual degrees of freedom and coef(res)[, 3] extracts the test statistics.
31,126 | If and how to use one-tailed testing in multiple regression | I'm not a statistician, so take my answer with a grain of salt, but here goes. The p-values reported in linear regression will be obtained using an F-test (commonly used for evaluating least-squares fitting problems like linear regression). The F-test doesn't have a two-tailed version because the distribution of the F-value is one-tailed (because $F = t^2$) to begin with. Read a more detailed answer here
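The claimed $F = t^2$ relationship can be illustrated with SciPy (the statistic value and degrees of freedom below are arbitrary): for a single coefficient, the one-tailed F p-value equals the two-tailed t p-value.

```python
# If T ~ t(df), then T^2 ~ F(1, df), so P(F >= t^2) = 2 * P(T >= |t|).
import numpy as np
from scipy import stats

t_stat, df = 2.3, 40
p_two_tailed_t = 2 * stats.t.sf(abs(t_stat), df)  # 2 * P(T >= |t|)
p_one_tailed_f = stats.f.sf(t_stat**2, 1, df)     # P(F >= t^2)
assert np.isclose(p_two_tailed_t, p_one_tailed_f)
```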
31,127 | Loss not decreasing but performance is improving | This is not unusual for reinforcement learning and does not indicate anything is wrong. As the agent gets better at playing, estimating the reward does get more difficult (because it's no longer always 0). In addition, as the reward gets higher and the average episode length gets longer, the amount of variance in the reward can also get larger, so it's challenging even to prevent the loss from increasing. A third factor, which you mentioned, is that the constantly changing target values pose a "moving-target" problem for the Q-network.
My guess for why this doesn't happen in Pong is that Pong is a much simpler game and easier to predict than Space Invaders.
31,128 | Why is tree correlation a problem when working with bagging? | I would like to answer this question by first overviewing bagging.
In bagged trees, we resample observations from a dataset with replacement and fit a tree, considering all the features; this process is repeated $n$ times. If you have ever fit a simple decision tree while holding out a test set, you will see that your results vary dramatically every time you perform a training/testing split. This high variance is undesirable, and therefore we consider a new dataset which is a resample of the original (a bootstrap sample). We aggregate all $n$ trees by averaging for a regressor or by majority vote for a classifier to obtain a final result.
One issue we have not considered in this bagging process is how similar the trees tend to be. While there are mathematical definitions of the correlation between these trees, consider this example.
Consider one strong predictor in our data set which reduces a measure of error (e.g. RSS) the most. All our bagged trees tend to make the same cuts because they all share the same features. This makes all these trees look very similar, hence increasing correlation.
To solve tree correlation we allow random forest to randomly choose only $m$ predictors in performing the split. Now the bagged trees all have different randomly selected features to perform cuts on. Therefore, the feature space is split on different predictors, decorrelating all the trees.
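A small scikit-learn sketch of this idea (the synthetic dataset and all hyperparameters are illustrative assumptions, not from the original answer): fit the same ensemble with and without feature subsampling and measure the average pairwise correlation of the individual trees' predictions.

```python
# max_features=None considers every feature at each split -> bagged trees;
# max_features="sqrt" considers a random subset of m < p features -> random forest.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

bagged = RandomForestClassifier(n_estimators=50, max_features=None,
                                random_state=0).fit(X, y)
forest = RandomForestClassifier(n_estimators=50, max_features="sqrt",
                                random_state=0).fit(X, y)

def mean_tree_correlation(model, X):
    # Average pairwise correlation of the individual trees' predictions.
    preds = np.array([tree.predict(X) for tree in model.estimators_])
    corr = np.corrcoef(preds)
    return corr[np.triu_indices_from(corr, k=1)].mean()

# Feature subsampling typically lowers the tree-to-tree correlation.
print(mean_tree_correlation(bagged, X), mean_tree_correlation(forest, X))
```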
When performing random forest, if you set max_features = the number of features in the dataset (in scikit-learn; mtry in R), you will be constructing a bagged decision tree model. If max_features < the number of features, we will be performing random forest. It is always a good idea to tune this parameter when constructing any model.
In bagged trees, we resample observations from a dataset with replacement and fit a tree. We consider all the features in our resamp | Why is tree correlation a problem when working with bagging?
I would like to answer this question by first overviewing bagging.
In bagged trees, we resample observations from a dataset with replacement and fit a tree. We consider all the features in our resampling and this process is repeated $n$ times. If you have ever fit a simple decision tree holding out a test set you will see that your results vary dramatically, every time you perform a training, testing split. This high variance is undesirable and therefore we consider a new dataset which is a subset of the original (bootstrap sample). We aggregate all the $n$ trees by averaging in a regressor or by majority vote in a classifier to obtain a final result.
One issue we have not considered in this bagging process is how similar the trees tend to be. While there are mathematical definitions to the correlation between these trees, consider this example.
Consider one strong predictor in our data set which reduces a measure of error (ex: RSS) the most. All our bagged trees tend to to make the same cuts because they all share the same features. This makes all these trees look very similar hence increasing correlation.
To solve tree correlation we allow random forest to randomly choose only $m$ predictors in performing the split. Now the bagged trees all have different randomly selected features to perform cuts on. Therefore, the feature space is split on different predictors, decorrelating all the trees.
When performing random forest if you set max_features=# features in the dataset (in scikit learn, mtry in R) you will be constructing a bagged decision tree model. If max_features<# of features we will be performing Random forest. It is always a good idea to tune this parameter in constructing any model. | Why is tree correlation a problem when working with bagging?
I would like to answer this question by first overviewing bagging.
In bagged trees, we resample observations from a dataset with replacement and fit a tree. We consider all the features in our resamp |
31,129 | Implementing a discrete analogue to Gaussian function [closed] | The most basic way to discretize a continuous probability distribution is to assume its "rounded" form, i.e. if $X \sim \mathcal{N}(\mu, \sigma)$, then $\lfloor X \rfloor$ follows the discrete analogue. For the discrete normal distribution, the probability mass function following from this procedure is
$$
f(x) = \Phi\left(\frac{x-\mu+1}{\sigma}\right) - \Phi\left(\frac{x-\mu}{\sigma}\right)
$$
where $\Phi$ is standard normal cumulative distribution function, as described in
Roy, D. (2003). The discrete normal distribution. Communications in
Statistics-Theory and Methods, 32, 1871-1883.
Alternatively, you may consider rounding within the $\pm0.5$ interval (i.e. distribution of $\lfloor X +0.5\rfloor$). | Implementing a discrete analogue to Gaussian function [closed] | The most basic way to discretize the continuous probability distribution is to assumed its "rounded" form, i.e. if $X \sim \mathcal{N}(\mu, \sigma)$, then $\lfloor X \rfloor$ follows the discrete anal | Implementing a discrete analogue to Gaussian function [closed]
The most basic way to discretize the continuous probability distribution is to assume its "rounded" form, i.e. if $X \sim \mathcal{N}(\mu, \sigma)$, then $\lfloor X \rfloor$ follows the discrete analogue. For the discrete normal distribution, the probability mass function that follows from this procedure is
$$
f(x) = \Phi\left(\frac{x-\mu+1}{\sigma}\right) - \Phi\left(\frac{x-\mu}{\sigma}\right)
$$
where $\Phi$ is standard normal cumulative distribution function, as described in
Roy, D. (2003). The discrete normal distribution. Communications in
Statistics-Theory and Methods, 32, 1871-1883.
Alternatively, you may consider rounding within the $\pm0.5$ interval (i.e. distribution of $\lfloor X +0.5\rfloor$). | Implementing a discrete analogue to Gaussian function [closed]
The most basic way to discretize the continuous probability distribution is to assume its "rounded" form, i.e. if $X \sim \mathcal{N}(\mu, \sigma)$, then $\lfloor X \rfloor$ follows the discrete anal |
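As a numerical check of the rounded-normal pmf above (SciPy assumed; the values of $\mu$ and $\sigma$ here are arbitrary): the CDF differences telescope, so the probabilities sum to one over a wide enough integer range.

```python
import numpy as np
from scipy.stats import norm

mu, sigma = 0.0, 2.0

def discrete_normal_pmf(x):
    # P(floor(X) = x) = Phi((x - mu + 1)/sigma) - Phi((x - mu)/sigma)
    return norm.cdf((x - mu + 1) / sigma) - norm.cdf((x - mu) / sigma)

xs = np.arange(-40, 40)          # tail mass beyond this range is negligible
pmf = discrete_normal_pmf(xs)
print(pmf.sum())                 # ~1.0 (the terms telescope)
```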
31,130 | Implementing a discrete analogue to Gaussian function [closed] | Luc Devroye defines a discrete normal in his book (Non-Uniform
Random Variate Generation) (pg 117). Here is a link to chapter 3:
http://www.nrbook.com/devroye/Devroye_files/chapter_three.pdf
last page of chapter 3 is page 117.
$$\Pr(X=i)= c\exp\left[-\frac{(|i| +0.5)^2}{2\sigma^2}\right],$$
where $i$ is an integer. The book also gives an algorithm for generating a random variable from this distribution. This would apply for your question if $N\to\infty$. | Implementing a discrete analogue to Gaussian function [closed] | Luc Devroye defines a discrete normal in his book (Non-Uniform
Random Variate Generation) (pg 117). Here is a link to chapter 3:
http://www.nrbook.com/devroye/Devroye_files/chapter_three.pdf
last page | Implementing a discrete analogue to Gaussian function [closed]
Luc Devroye defines a discrete normal in his book (Non-Uniform
Random Variate Generation) (pg 117). Here is a link to chapter 3:
http://www.nrbook.com/devroye/Devroye_files/chapter_three.pdf
last page of chapter 3 is page 117.
$$\Pr(X=i)= c\exp\left[-\frac{(|i| +0.5)^2}{2\sigma^2}\right],$$
where $i$ is an integer. The book also gives an algorithm for generating a random variable from this distribution. This would apply for your question if $N\to\infty$. | Implementing a discrete analogue to Gaussian function [closed]
Luc Devroye defines a discrete normal in his book (Non-Uniform
Random Variate Generation) (pg 117). Here is a link to chapter 3:
http://www.nrbook.com/devroye/Devroye_files/chapter_three.pdf
last page |
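A minimal sketch of that pmf (note the exponent needs a minus sign for the weights to be summable; the normalising constant $c$ is approximated here by summing over a truncated support):

```python
import numpy as np

sigma = 3.0
i = np.arange(-60, 61)                     # truncated support; tails are negligible
w = np.exp(-(np.abs(i) + 0.5) ** 2 / (2 * sigma ** 2))
pmf = w / w.sum()                          # c = 1 / (sum of weights)

print(pmf.sum())                           # 1.0 after normalisation
print(pmf[np.abs(i) <= 1])                 # symmetric and peaked around i = 0
```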
31,131 | Implementing a discrete analogue to Gaussian function [closed] | I don't know of any such distribution. The normal distribution runs from negative infinity to positive infinity, so bounds (i.e., $-N$ to $N$) will weaken the analogy to the normal. That said, if I wanted a distribution that seemed this way, for a simulation say, I would sample from a Binomial distribution with $n = 2N$ and $\pi=.5$. Then I would subtract $N$ from all realized values. If you wanted the mean to be anything else, you could use a different value of $\pi$, but you need to recognize that that would weaken the analogy to the normal even further. The population SD of the generated distribution would necessarily be $.5\sqrt{2N}$. You could make that wider if you wanted by sampling the $\pi$ parameter from a narrow, symmetrical distribution around .5, with the wider that distribution of $\pi$s, the higher the resulting SD could be. The distribution then might be a beta binomial. | Implementing a discrete analogue to Gaussian function [closed] | I don't know of any such distribution. The normal distribution runs from negative infinity to positive infinity, so bounds (i.e., $-N$ to $N$) will weaken the analogy to the normal. That said, if I | Implementing a discrete analogue to Gaussian function [closed]
I don't know of any such distribution. The normal distribution runs from negative infinity to positive infinity, so bounds (i.e., $-N$ to $N$) will weaken the analogy to the normal. That said, if I wanted a distribution that seemed this way, for a simulation say, I would sample from a Binomial distribution with $n = 2N$ and $\pi=.5$. Then I would subtract $N$ from all realized values. If you wanted the mean to be anything else, you could use a different value of $\pi$, but you need to recognize that that would weaken the analogy to the normal even further. The population SD of the generated distribution would necessarily be $.5\sqrt{2N}$. You could make that wider if you wanted by sampling the $\pi$ parameter from a narrow, symmetrical distribution around .5, with the wider that distribution of $\pi$s, the higher the resulting SD could be. The distribution then might be a beta binomial. | Implementing a discrete analogue to Gaussian function [closed]
I don't know of any such distribution. The normal distribution runs from negative infinity to positive infinity, so bounds (i.e., $-N$ to $N$) will weaken the analogy to the normal. That said, if I |
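The recipe above is easy to simulate (NumPy assumed; the value of $N$ is arbitrary): sample Binomial($2N$, 0.5) and subtract $N$, giving a distribution on $-N,\dots,N$ centred at 0 with SD $0.5\sqrt{2N}$.

```python
import numpy as np

N = 50
rng = np.random.default_rng(0)
x = rng.binomial(2 * N, 0.5, size=200_000) - N    # support -N..N, mean 0

print(x.mean())                                   # ~0
print(x.std(), 0.5 * np.sqrt(2 * N))              # both ~5 for N = 50
```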
31,132 | Algorithms for weighted maximum likelihood parameter estimation | There are a number of ways to handle importance weights. Note that "weights" as a general term can be ambiguous. R's glm method, for instance, takes a weight parameter that is interpreted differently.
This paper has a good discussion of a few approaches to handling importance weights.
By far the most common approach when using stochastic optimisation methods is to just multiply each stochastic step by the importance weight for the sampled data point. This can work poorly if you have a mixture of very large and small weights. If there is less than a factor of 20 between your various weights it should work fine, although it might be slow to converge.
Another approach when using SGD optimisation is rejection sampling, with probability proportional to $w_i/w_{\max}$. This is almost never used in practice though.
Pre-sampling your dataset before applying a standard optimization algorithm is more common. Sample with replacement a new dataset with $w_i/w_{\max}$ proportional sampling. Typically you would take $2n$ to $10n$ samples, where n is the size of your original dataset.
The linked paper suggests another approach which I believe is implemented in the Vowpal Wabbit package.
The popular liblinear package supports importance weights as well. If you're using LBFGS you can specify the loss and derivative manually, including the importance weights as you have in your post. | Algorithms for weighted maximum likelihood parameter estimation | There are a number of ways to handle importance weights. Note that "weights" as a general term can be ambiguous. R's glm method, for instance, takes a weight parameter that is interpreted differently. | Algorithms for weighted maximum likelihood parameter estimation
There are a number of ways to handle importance weights. Note that "weights" as a general term can be ambiguous. R's glm method, for instance, takes a weight parameter that is interpreted differently.
This paper has a good discussion of a few approaches to handling importance weights.
By far the most common approach when using stochastic optimisation methods is to just multiply each stochastic step by the importance weight for the sampled data point. This can work poorly if you have a mixture of very large and small weights. If there is less than a factor of 20 between your various weights it should work fine, although it might be slow to converge.
Another approach when using SGD optimisation is rejection sampling, with probability proportional to $w_i/w_{\max}$. This is almost never used in practice though.
Pre-sampling your dataset before applying a standard optimization algorithm is more common. Sample with replacement a new dataset with $w_i/w_{\max}$ proportional sampling. Typically you would take $2n$ to $10n$ samples, where n is the size of your original dataset.
The linked paper suggests another approach which I believe is implemented in the Vowpal Wabbit package.
The popular liblinear package supports importance weights as well. If you're using LBFGS you can specify the loss and derivative manually, including the importance weights as you have in your post. | Algorithms for weighted maximum likelihood parameter estimation
There are a number of ways to handle importance weights. Note that "weights" as a general term can be ambiguous. R's glm method, for instance, takes a weight parameter that is interpreted differently. |
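The pre-sampling approach described above can be sketched as follows (hypothetical data and weights; NumPy assumed): after resampling with probability proportional to the weights, an unweighted estimate on the resample approximates the weighted estimate on the original data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
x = rng.normal(size=n)
w = rng.uniform(0.5, 5.0, size=n)          # importance weights

# Resample ~5n points with replacement, proportional to the weights
idx = rng.choice(n, size=5 * n, replace=True, p=w / w.sum())
x_res = x[idx]

print(np.average(x, weights=w))            # weighted mean of the original data
print(x_res.mean())                        # plain mean of the resample (close)
```

Any standard (unweighted) optimisation routine can then be run on the resampled dataset.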
31,133 | Normally distributed errors and the central limit theorem | This may be better appreciated by expressing the result of CLT in terms of sums of iid random variables. We have
$$\sqrt{n} \frac{ \bar{X} -\mu}{\sigma} \sim N(0, 1) \quad \text{asymptotically}$$
Multiply the quotient by $\frac{\sigma}{\sqrt{n}}$ and use the fact that $Var(cX) = c^2 Var(X)$ to get
$$\bar{X}-\mu \sim N\left(0, \frac{\sigma^2}{n} \right)$$
Now add $\mu$ to the LHS and use the fact that $\mathbb{E} \left[a X+\mu\right] = a \mathbb{E}[X] + \mu$ to obtain
$$\bar{X} = \frac{1}{n} \sum_{i=1}^n X_i \sim N\left(\mu, \frac{\sigma^2}{n} \right)$$
Lastly, multiply by $n$ and use the above two results to see that
$$\sum_{i=1}^n X_i \sim N \left(n \mu, n\sigma^2 \right) $$
And what does this have to do with Wooldridge's statement? Well, if the error is the sum of many iid random variables then it will be approximately normally distributed, as just seen. But there is an issue here, namely that the unobserved factors will not necessarily be identically distributed and they might not even be independent!
Nevertheless, the CLT has been successfully extended to independent non-identically distributed random variables and even cases of mild dependence, under some additional regularity conditions. These are essentially conditions that guarantee that no term in the sum exerts disproportional influence on the asymptotic distribution, see also the wikipedia page on the CLT. You do not need to know these results of course; Wooldridge's aim is merely to provide intuition.
Hope this helps. | Normally distributed errors and the central limit theorem | This may be better appreciated by expressing the result of CLT in terms of sums of iid random variables. We have
$$\sqrt{n} \frac{ \bar{X} -\mu}{\sigma} \sim N(0, 1) \quad \text{asymptotically}$$
Mult | Normally distributed errors and the central limit theorem
This may be better appreciated by expressing the result of CLT in terms of sums of iid random variables. We have
$$\sqrt{n} \frac{ \bar{X} -\mu}{\sigma} \sim N(0, 1) \quad \text{asymptotically}$$
Multiply the quotient by $\frac{\sigma}{\sqrt{n}}$ and use the fact that $Var(cX) = c^2 Var(X)$ to get
$$\bar{X}-\mu \sim N\left(0, \frac{\sigma^2}{n} \right)$$
Now add $\mu$ to the LHS and use the fact that $\mathbb{E} \left[a X+\mu\right] = a \mathbb{E}[X] + \mu$ to obtain
$$\bar{X} = \frac{1}{n} \sum_{i=1}^n X_i \sim N\left(\mu, \frac{\sigma^2}{n} \right)$$
Lastly, multiply by $n$ and use the above two results to see that
$$\sum_{i=1}^n X_i \sim N \left(n \mu, n\sigma^2 \right) $$
And what does this have to do with Wooldridge's statement? Well, if the error is the sum of many iid random variables then it will be approximately normally distributed, as just seen. But there is an issue here, namely that the unobserved factors will not necessarily be identically distributed and they might not even be independent!
Nevertheless, the CLT has been successfully extended to independent non-identically distributed random variables and even cases of mild dependence, under some additional regularity conditions. These are essentially conditions that guarantee that no term in the sum exerts disproportional influence on the asymptotic distribution, see also the wikipedia page on the CLT. You do not need to know these results of course; Wooldridge's aim is merely to provide intuition.
Hope this helps. | Normally distributed errors and the central limit theorem
This may be better appreciated by expressing the result of CLT in terms of sums of iid random variables. We have
$$\sqrt{n} \frac{ \bar{X} -\mu}{\sigma} \sim N(0, 1) \quad \text{asymptotically}$$
Mult |
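The last display, $\sum_i X_i \sim N(n\mu, n\sigma^2)$ approximately, can be checked by simulation (NumPy assumed); Exp(1) draws are far from normal but have $\mu = \sigma^2 = 1$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 500, 20_000
sums = rng.exponential(scale=1.0, size=(reps, n)).sum(axis=1)

print(sums.mean())   # ~ n * mu      = 500
print(sums.var())    # ~ n * sigma^2 = 500
```

The residual skewness of the sums shrinks like $1/\sqrt{n}$, which is the "approximately" in the CLT statement.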
31,134 | box plot in R: Do the outliers count when the quantiles are being determined? | R -- like many, but not all programs -- mostly uses Tukey's definition* of how to draw a boxplot.
The entire original sample is used to calculate the hinges (where the box-ends are drawn).
Hinges are very similar to the quartiles (you could say they're a particular way to calculate the upper and lower quartiles that differs slightly from the more usual definitions of quartiles -- though there are a number of different definitions of sample quartiles as well; indeed R offers nine distinct quartile calculations, not counting hinges themselves).
The upper hinge is at the median of the upper half of the data (the upper half includes the median of the original sample if it was a data point) and the lower hinge is at the median of the lower half (which also includes the median of the original sample if it was at a data point):
$\qquad$
So for example with 6 observations the hinges are the second largest and the 5th largest observation (3 points in each half). With 9 observations the hinges are the 3rd and the 7th largest (5 points in each half, the median coming in both halves). With 11 observations the lower hinge is halfway between the 3rd and 4th largest observation and the upper hinge is halfway between the 8th and 9th largest observation (6 points in each half). The illustration shows the case with 13 observations.
Note that quartiles (/hinges) are not at all sensitive to the values of the outliers, only to the fact that they are outside the quartiles. You can move them all close to the box ends (so that there are no outliers) without changing the quartiles/hinges, or as far away as you like (so they're all far away), again without changing the values of the quartiles. So there'd really be no need to do anything when there's an "outlier".
* Or rather, one of them; Tukey gave several definitions, though for present purposes we need only worry about how the calculation of the hinges works; I say mostly because the version with "outliers" would be what Tukey called a schematic plot but they don't do the one with two distinct kinds of "outlier" marks. | box plot in R: Do the outliers count when the quantiles are being determined? | R -- like many, but not all programs -- mostly uses Tukey's definition* of how to draw a boxplot.
The entire original sample is used to calculate the hinges (where the box-ends are drawn).
Hinges are | box plot in R: Do the outliers count when the quantiles are being determined?
R -- like many, but not all programs -- mostly uses Tukey's definition* of how to draw a boxplot.
The entire original sample is used to calculate the hinges (where the box-ends are drawn).
Hinges are very similar to the quartiles (you could say they're a particular way to calculate the upper and lower quartiles that differs slightly from the more usual definitions of quartiles -- though there are a number of different definitions of sample quartiles as well; indeed R offers nine distinct quartile calculations, not counting hinges themselves).
The upper hinge is at the median of the upper half of the data (the upper half includes the median of the original sample if it was a data point) and the lower hinge is at the median of the lower half (which also includes the median of the original sample if it was at a data point):
$\qquad$
So for example with 6 observations the hinges are the second largest and the 5th largest observation (3 points in each half). With 9 observations the hinges are the 3rd and the 7th largest (5 points in each half, the median coming in both halves). With 11 observations the lower hinge is halfway between the 3rd and 4th largest observation and the upper hinge is halfway between the 8th and 9th largest observation (6 points in each half). The illustration shows the case with 13 observations.
Note that quartiles (/hinges) are not at all sensitive to the values of the outliers, only to the fact that they are outside the quartiles. You can move them all close to the box ends (so that there are no outliers) without changing the quartiles/hinges, or as far away as you like (so they're all far away), again without changing the values of the quartiles. So there'd really be no need to do anything when there's an "outlier".
* Or rather, one of them; Tukey gave several definitions, though for present purposes we need only worry about how the calculation of the hinges works; I say mostly because the version with "outliers" would be what Tukey called a schematic plot but they don't do the one with two distinct kinds of "outlier" marks. | box plot in R: Do the outliers count when the quantiles are being determined?
R -- like many, but not all programs -- mostly uses Tukey's definition* of how to draw a boxplot.
The entire original sample is used to calculate the hinges (where the box-ends are drawn).
Hinges are |
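The hinge rule is easy to state in code (a Python sketch rather than R, matching the worked cases above): sort, form halves that share the median when $n$ is odd, and take each half's median. Moving an outlying point further out leaves the hinges unchanged.

```python
import numpy as np

def tukey_hinges(data):
    # Median of each half; the halves share the sample median when n is odd
    x = np.sort(np.asarray(data, dtype=float))
    half = (len(x) + 1) // 2
    return float(np.median(x[:half])), float(np.median(x[-half:]))

print(tukey_hinges(range(1, 7)))                     # n = 6  -> (2.0, 5.0)
print(tukey_hinges(range(1, 10)))                    # n = 9  -> (3.0, 7.0)
print(tukey_hinges(range(1, 12)))                    # n = 11 -> (3.5, 8.5)
print(tukey_hinges([1, 2, 3, 4, 5, 6, 7, 8, 1000]))  # far outlier: still (3.0, 7.0)
```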
31,135 | Can a perceptron with sigmoid activation function perform nonlinear classification? | The network in the diagram has an input layer and an output layer, but no hidden layers. This type of network can't perform nonlinear classification or implement arbitrary nonlinear functions, regardless of the choice of activation function.
The input is projected onto the weight vector and scaled/shifted along this direction. This is a linear operation that reduces the input to a single value, which is then passed through the (possibly nonlinear) activation function. This linear reduction to a single value is the reason the network can't implement arbitrary functions. Consider a hyperplane in input space that's orthogonal to the weight vector. All inputs falling within this hyperplane are mapped to the same output value (the decision boundary plotted below is an example of such a hyperplane).
Here's an example function corresponding to a network with two inputs and a logistic sigmoid output:
The function is a surface bent into a sigmoidal shape along the direction of the weight vector. Changing the network parameters can rotate the direction of the sigmoidal surface, and stretch or shift it. But, the fundamental sigmoidal shape will always remain.
If the network is used to implement a classifier, the decision boundary will always be linear. For example, say we take the output of the example network to represent the probability that the class is '1', given the input (i.e. the network implements a logistic regression model). We impose a threshold such that inputs are classified as '1' if they produce above-threshold output, otherwise '0'. Here's top view of the same function, where color represents the output:
The plotted decision boundary corresponds to a threshold of 0.5 (i.e. we assign the most likely class). All points to the right of this boundary would be classified as '1', and those to the left as '0'. The decision boundary is orthogonal to the weight vector (plotted in red). Changing the weights could rotate or shift the decision boundary, but never make it nonlinear.
However, things change radically once the network contains at least one hidden layer with sigmoidal (or other nonlinear) activation function. Such a network can indeed perform nonlinear classification and approximate arbitrary functions (but doing so may require adding vastly more units to the network). This is a consequence of the universal approximation theorem, as @broncoAbierto mentioned. | Can a perceptron with sigmoid activation function perform nonlinear classification? | The network in the diagram has an input layer and an output layer, but no hidden layers. This type of network can't perform nonlinear classification or implement arbitrary nonlinear functions, regardl | Can a perceptron with sigmoid activation function perform nonlinear classification?
The network in the diagram has an input layer and an output layer, but no hidden layers. This type of network can't perform nonlinear classification or implement arbitrary nonlinear functions, regardless of the choice of activation function.
The input is projected onto the weight vector and scaled/shifted along this direction. This is a linear operation that reduces the input to a single value, which is then passed through the (possibly nonlinear) activation function. This linear reduction to a single value is the reason the network can't implement arbitrary functions. Consider a hyperplane in input space that's orthogonal to the weight vector. All inputs falling within this hyperplane are mapped to the same output value (the decision boundary plotted below is an example of such a hyperplane).
Here's an example function corresponding to a network with two inputs and a logistic sigmoid output:
The function is a surface bent into a sigmoidal shape along the direction of the weight vector. Changing the network parameters can rotate the direction of the sigmoidal surface, and stretch or shift it. But, the fundamental sigmoidal shape will always remain.
If the network is used to implement a classifier, the decision boundary will always be linear. For example, say we take the output of the example network to represent the probability that the class is '1', given the input (i.e. the network implements a logistic regression model). We impose a threshold such that inputs are classified as '1' if they produce above-threshold output, otherwise '0'. Here's top view of the same function, where color represents the output:
The plotted decision boundary corresponds to a threshold of 0.5 (i.e. we assign the most likely class). All points to the right of this boundary would be classified as '1', and those to the left as '0'. The decision boundary is orthogonal to the weight vector (plotted in red). Changing the weights could rotate or shift the decision boundary, but never make it nonlinear.
However, things change radically once the network contains at least one hidden layer with sigmoidal (or other nonlinear) activation function. Such a network can indeed perform nonlinear classification and approximate arbitrary functions (but doing so may require adding vastly more units to the network). This is a consequence of the universal approximation theorem, as @broncoAbierto mentioned. | Can a perceptron with sigmoid activation function perform nonlinear classification?
The network in the diagram has an input layer and an output layer, but no hidden layers. This type of network can't perform nonlinear classification or implement arbitrary nonlinear functions, regardl |
31,136 | Can a perceptron with sigmoid activation function perform nonlinear classification? | I agree with the answer given by @user20160.
However, you may be interested in what happens if the activation function is not monotonic.
For example: the bell-shaped Gaussian function $g(v) = e^{-v^2/2}$.
In this case, two hyperplanes are generated, which can be used to solve the XOR problem.
See "Gaussian perceptron: experimental results" | Can a perceptron with sigmoid activation function perform nonlinear classification? | I agree with the answer given by @user20160.
However, you may be interested in what happens if the activation function is not monotonic.
For example: bell-shaped gaussian function $g(z) = e^{-v^2/2}$. | Can a perceptron with sigmoid activation function perform nonlinear classification?
I agree with the answer given by @user20160.
However, you may be interested in what happens if the activation function is not monotonic.
For example: the bell-shaped Gaussian function $g(v) = e^{-v^2/2}$.
In this case, two hyperplanes are generated, which can be used to solve the XOR problem.
See "Gaussian perceptron: experimental results" | Can a perceptron with sigmoid activation function perform nonlinear classification?
I agree with the answer given by @user20160.
However, you may be interested in what happens if the activation function is not monotonic.
For example: bell-shaped gaussian function $g(z) = e^{-v^2/2}$. |
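A minimal sketch of the idea (hand-picked weights, not taken from the cited paper): thresholding a bell-shaped activation selects a slab between two parallel hyperplanes, which suffices for XOR.

```python
import numpy as np

def g(v):
    return np.exp(-v ** 2 / 2)                 # bell-shaped activation

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
xor = np.array([0, 1, 1, 0])

w, b = np.array([1.0, 1.0]), -1.0              # net input v = x1 + x2 - 1
v = X @ w + b                                  # [-1, 0, 0, 1]
pred = (g(v) > 0.8).astype(int)                # g(0) = 1, g(+/-1) ~ 0.61

print(pred)                                    # [0 1 1 0] = XOR
```

The two hyperplanes here are $v = \pm\sqrt{-2\ln 0.8}$; only the XOR-positive inputs fall between them.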
31,137 | Can a perceptron with sigmoid activation function perform nonlinear classification? | To find out more about classification in general, take a look at https://en.wikipedia.org/wiki/Perceptron and
https://en.wikipedia.org/wiki/Linear_classifier, for example.
Answering the question, firstly, a classifier does not need to be able to represent any complex function to be considered non-linear. An example is a quadratic classifier (wiki/Quadratic_classifier), which is a polynomial function of degree 2. We can have infinitely many further examples: polynomial classifiers of degree $n$, with $n$ running from 3 to infinity.
Secondly, no, changing activation function to sigmoid doesn't help if we consider a classical setup. In fact, sigmoid activation function wouldn't even make a sensible classifier. In classical setup the output of perceptron is either -1 or +1, +1 representing Class 1, and -1 representing Class 2. If you changed activation function to sigmoid, you would no longer have an interpretable output. (Now, of course, you can apply a step function after sigmoid, but if you think about it, it is the same as using only the step function)
Clarifying the connection to the broncoAbierto answer, a composition of arbitrarily many perceptrons with sigmoid activation (i.e., a neural network) indeed is a non-linear classifier. Moreover, it can approximate any complex function. A single perceptron, however, doesn't have any of these properties. | Can a perceptron with sigmoid activation function perform nonlinear classification? | To find out more about classification in general, take a look at https://en.wikipedia.org/wiki/Perceptron and
https://en.wikipedia.org/wiki/Linear_classifier, for example.
Answering the question, fir | Can a perceptron with sigmoid activation function perform nonlinear classification?
To find out more about classification in general, take a look at https://en.wikipedia.org/wiki/Perceptron and
https://en.wikipedia.org/wiki/Linear_classifier, for example.
Answering the question, firstly, a classifier does not need to be able to represent any complex function to be considered non-linear. An example is a quadratic classifier (wiki/Quadratic_classifier), which is a polynomial function of degree 2. We can have infinitely many further examples: polynomial classifiers of degree $n$, with $n$ running from 3 to infinity.
Secondly, no, changing activation function to sigmoid doesn't help if we consider a classical setup. In fact, sigmoid activation function wouldn't even make a sensible classifier. In classical setup the output of perceptron is either -1 or +1, +1 representing Class 1, and -1 representing Class 2. If you changed activation function to sigmoid, you would no longer have an interpretable output. (Now, of course, you can apply a step function after sigmoid, but if you think about it, it is the same as using only the step function)
Clarifying the connection to the broncoAbierto answer, a composition of arbitrarily many perceptrons with sigmoid activation (i.e., a neural network) indeed is a non-linear classifier. Moreover, it can approximate any complex function. A single perceptron, however, doesn't have any of these properties. | Can a perceptron with sigmoid activation function perform nonlinear classification?
To find out more about classification in general, take a look at https://en.wikipedia.org/wiki/Perceptron and
https://en.wikipedia.org/wiki/Linear_classifier, for example.
Answering the question, fir |
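The quadratic-classifier point can be illustrated in a few lines (hypothetical data; the decision rule is hand-picked): a linear threshold on degree-2 features gives a circular, hence nonlinear, boundary in the original inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1).astype(int)   # label: inside the unit circle

# Degree-2 feature map; a linear rule here is quadratic in the original x
phi = np.column_stack([X[:, 0] ** 2, X[:, 1] ** 2])
w, b = np.array([-1.0, -1.0]), 1.0                  # decides 1 - x1^2 - x2^2 > 0
pred = (phi @ w + b > 0).astype(int)

print((pred == y).mean())                           # 1.0 by construction
```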
31,138 | Can a perceptron with sigmoid activation function perform nonlinear classification? | You might have heard of the universal approximation theorem for neural networks. Networks with sigmoid activation are a particular case of the networks for which this holds, so yes, they can perform non-linear classification, and they can approximate any function arbitrarily well provided that the number of units is sufficiently large. | Can a perceptron with sigmoid activation function perform nonlinear classification? | You might have heard of the universal approximation theorem for neural networks. Networks with sigmoid activation are a particular case of the networks for which this holds, so yes, they can perform n | Can a perceptron with sigmoid activation function perform nonlinear classification?
You might have heard of the universal approximation theorem for neural networks. Networks with sigmoid activation are a particular case of the networks for which this holds, so yes, they can perform non-linear classification, and they can approximate any function arbitrarily well provided that the number of units is sufficiently large. | Can a perceptron with sigmoid activation function perform nonlinear classification?
You might have heard of the universal approximation theorem for neural networks. Networks with sigmoid activation are a particular case of the networks for which this holds, so yes, they can perform n |
31,139 | Slight inconsistency between the Kruskal-Wallis built-in R function and manual calculation | kruskal.test applies a correction for ties as described in this Wikipedia article (point 4):
A correction for ties if using the short-cut formula described in the
previous point can be made by dividing H by $1 -
\frac{\sum_{i=1}^G (t_i^3 - t_i)}{N^3-N}$, ...
Continuing from your code:
TIES <- table(activity)
H / (1 - sum(TIES^3 - TIES)/(length(activity)^3 - length(activity)))
#[1] 8.9056
You can find out what the R function does by carefully studying the code, which you can see using getAnywhere(kruskal.test.default). | Slight inconsistency between the Kruskal-Wallis built-in R function and manual calculation | kruskal.test applies a correction for ties as described in this Wikipedia article (point 4):
A correction for ties if using the short-cut formula described in the
previous point can be made by divi | Slight inconsistency between the Kruskal-Wallis built-in R function and manual calculation
kruskal.test applies a correction for ties as described in this Wikipedia article (point 4):
A correction for ties if using the short-cut formula described in the
previous point can be made by dividing H by $1 -
\frac{\sum_{i=1}^G (t_i^3 - t_i)}{N^3-N}$, ...
Continuing from your code:
TIES <- table(activity)
H / (1 - sum(TIES^3 - TIES)/(length(activity)^3 - length(activity)))
#[1] 8.9056
You can find out what the R function does by carefully studying the code, which you can see using getAnywhere(kruskal.test.default). | Slight inconsistency between the Kruskal-Wallis built-in R function and manual calculation
kruskal.test applies a correction for ties as described in this Wikipedia article (point 4):
A correction for ties if using the short-cut formula described in the
previous point can be made by divi |
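The correction can be checked end to end in Python (the original post's `activity` data isn't shown, so hypothetical toy groups with ties stand in; SciPy's `kruskal` applies the same tie correction internally):

```python
import numpy as np
from scipy.stats import kruskal, rankdata

g1, g2, g3 = [1, 2, 2, 3], [2, 3, 3, 4], [4, 4, 5, 5]   # toy groups with ties
pooled = np.array(g1 + g2 + g3)
r = rankdata(pooled)                     # mid-ranks for tied values
N = len(pooled)

# Short-cut formula: H = 12/(N(N+1)) * sum_i n_i * rbar_i^2 - 3(N+1)
groups = np.split(r, [len(g1), len(g1) + len(g2)])
H = 12 / (N * (N + 1)) * sum(len(s) * s.mean() ** 2 for s in groups) - 3 * (N + 1)

# Tie correction: divide H by 1 - sum(t^3 - t) / (N^3 - N), t = tie-group sizes
_, t = np.unique(pooled, return_counts=True)
H_corrected = H / (1 - (t ** 3 - t).sum() / (N ** 3 - N))

print(H_corrected, kruskal(g1, g2, g3).statistic)       # the two agree
```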
31,140 | How is it possible to get a high $R^2$ & still have 'poor predictions'? [duplicate] | As you state, $R^2 = 1-SSE/SST$, where SSE is the sum of squared residuals of the model and SST is the sum of squared residuals of a simple model that just predicts the average response variable for each observation.
Consider two regression models. The first has data in a relatively small range:
dat <- data.frame(x=1:10, y=c(2, 1, 4, 3, 6, 5, 8, 7, 10, 9))
plot(dat)
We get an $R^2$ of 0.88, with SSE = 9.7 and SST = 82.5.
Now add the point with x=100, y=100 and re-run the experiment:
dat2 <- data.frame(x=c(1:10, 100), y=c(2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 100))
plot(dat2)
The second model has $R^2$ of 0.999, with SSE = 10.0 and SST = 8201. The model fit hasn't really improved (both models have an RMSE of roughly 1). However, the $R^2$ has improved dramatically because the simple baseline driving SST is much worse in the second case. Instead of predicting the sensible value of 5.5 for all observations as it did in the first case, it is now predicting 14.1.
We now see that "wide data" (aka a large range of response variables) can cause SST to become much worse than the "narrow data" case, causing the $R^2$ to improve without a closer model fit.
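The quantities quoted above are easy to reproduce (a small sketch wrapping lm()):

```r
# R^2, SSE and SST for a simple linear regression of y on x
r2_parts <- function(x, y) {
  fit <- lm(y ~ x)
  sse <- sum(residuals(fit)^2)   # model's sum of squared residuals
  sst <- sum((y - mean(y))^2)    # squared residuals of the mean-only model
  round(c(R2 = 1 - sse / sst, SSE = sse, SST = sst), 3)
}
y <- c(2, 1, 4, 3, 6, 5, 8, 7, 10, 9)
r2_parts(1:10, y)                   # R2 ~0.88, SSE ~9.7, SST 82.5
r2_parts(c(1:10, 100), c(y, 100))   # R2 ~0.999, SSE ~10, SST ~8201
```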
31,141 | How is it possible to get a high $R^2$ & still have 'poor predictions'? [duplicate] | If variability between subjects increases then variability within subjects decreases, i.e. the intraindividual measurement error is greater. For instance, with insulin resistance we often use HOMA-IR, which is a great marker because it only requires one measure of fasting glucose/insulin over time... but if you measured the same individual tomorrow, you would have a different prediction, as with blood pressure, proteinuria, etc. Clinically, we rely on multiple measurements of imprecise markers to make decisions. If you predict a highly variable marker within an individual well, it still won't tell you much about that individual in general.
31,142 | General guidelines on how to derive a hypothesis statistical test? | How did authors of statistical hypothesis tests come up with their statistics?
There are numerous ways to identify test statistics, depending on circumstances. It's important to try to identify the alternatives you see as important to pick up and try to get some power against those, under some plausible set of assumptions.
If you have a hypothesis relating to population means (in fact, let's make it simple and consider a one-sample test), for example, a statistic based on the sample mean would seem like an obvious choice for a statistic, since it will tend to behave differently under the null and the alternative. However (for example), if you're looking at shift-alternatives for a Laplace / double-exponential family ($\text{DExp}(\mu,\tau)$), something based on the sample median would be a better choice for a test of a shift in mean than something based on the sample mean.
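A quick simulation illustrates the point for the Laplace case (a sketch, with the centre of symmetry at zero):

```r
# Under a standard Laplace distribution, the sample median is a more
# efficient location estimator than the sample mean (asymptotic
# variances roughly 1/n versus 2/n)
set.seed(1)
rlaplace <- function(n) sample(c(-1, 1), n, replace = TRUE) * rexp(n)
sims <- replicate(5000, { x <- rlaplace(25); c(mean = mean(x), median = median(x)) })
apply(sims, 1, var)   # the median's variance is noticeably smaller
```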
If you have a specific parametric model (based on some particular distribution-family), it's common to at least consider a likelihood ratio test, since they have a number of attractive properties for large samples.
In many situations where you're trying to design a test from scratch, a test statistic will be based on a pivotal quantity. The test statistic in a one-sample t-test (as well as with many other tests you may have seen before) is a pivotal quantity.
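That pivotal property is easy to see by simulation (a sketch):

```r
# The one-sample t statistic sqrt(n) * (xbar - mu) / s has the same null
# distribution whatever the true mu and sigma are -- that is what makes
# it pivotal
set.seed(1)
tstat <- function(mu, sigma, n = 10)
  replicate(5000, { x <- rnorm(n, mu, sigma); sqrt(n) * (mean(x) - mu) / sd(x) })
qs <- c(0.05, 0.5, 0.95)
quantile(tstat(0, 1), qs)    # roughly the t(9) quantiles (-1.83, 0, 1.83)...
quantile(tstat(50, 9), qs)   # ...and essentially the same here
```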
Given a specific problem, is it always obvious what the ideal (if this is definable on objective grounds at all) statistic ought to be?
Not at all. Consider a test of general normality against an omnibus alternative, for example. There are many ways to measure deviation from normality (dozens of such tests have been proposed), and at typical sample sizes, none of them is most powerful against every alternative.
In trying to design a test for a situation like that, a certain amount of creativity is called for in coming up with a choice that will have good power against the kinds of alternatives you're most interested in picking up.
It seems those two requirements listed in step 2 above are too broad and many different statistics could be devised to test the same hypotheses.
Indeed. If you make some parametric assumption (assume the data are drawn from some distribution family and then make your hypothesis relate to one or more parameters of it) then there might be a best-possible test for all such situations (specifically, a uniformly most powerful test), but even then if your parametric assumption is more like a rough guess, then a desire for some robustness to that assumption may change things quite a bit.
For example (again, taking a one sample test of location shift to be simple), if I am sampling from a normal population then a t-test will be best. But let's say I think that it may not be exactly normal and on top of that there might be a small amount of contamination by some other process with a moderately heavy-tail, then something more robust (perhaps even a rank based alternative like the signed rank test) may tend to perform better across a variety of such situations.
31,143 | General guidelines on how to derive a hypothesis statistical test? | A useful test statistic is one whose distribution depends on the parameter of interest and no other part of the statistical model. That way its distribution under the null hypothesis (i.e. when the parameter of interest has the value specified by the null hypothesis) can be fully specified. An ideal test statistic adds to that the property of having a distribution that is strongly dependent on the parameter of interest so that the resulting test has good power.
Consider Student's t-test. It was developed as a significance test (see What is the difference between "testing of hypothesis" and "test of significance"?) for small sample means. The difficulty that Gosset faced was that the distribution of the mean of a small sample from a normal population depends on the parameter of interest, $\mu$, but also a 'nuisance parameter', the standard deviation of the population, $\sigma$. The small sample condition meant that the standard deviation estimated from the sample, $s$, is not an adequate estimate of $\sigma$. To solve the problem Gosset devised the test statistic $t=\sqrt{n}\times \bar{x}/s$ which is dependent on only the data and that has a defined distribution for any given sample size, $n$. Importantly, that distribution is entirely unaffected by $\sigma$. (Actually, that form of the test statistic was a revision by Fisher, if I remember correctly.)
Nowadays it is not always easy to see the genius of Gosset's solution, particularly as his t-statistic looks almost identical to the z-statistic for a normal distribution with known variance (just substitute $\sigma$ for $s$). The hard part was determining the nature of the distribution of the test statistic. Proof that Gosset's distribution was correct didn't come until a later paper by Fisher.
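The practical consequence of Gosset's correction is easy to demonstrate (a simulation sketch):

```r
# At n = 5, referring sqrt(n) * xbar / s to the standard normal (the
# z-statistic's reference distribution) inflates the type I error rate;
# Student's t(4) reference keeps it near the nominal 5%
set.seed(1)
tt <- replicate(20000, { x <- rnorm(5); sqrt(5) * mean(x) / sd(x) })
mean(abs(tt) > qnorm(0.975))        # well above 0.05
mean(abs(tt) > qt(0.975, df = 4))   # close to 0.05
```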
In many cases statistical tests are devised by finding test statistics that take a distribution that can be proved to approximate known distributions under assumptions that are acceptable. Many tests are based on approximations to the chi squared distribution, for example.
31,144 | Test logistic regression model using residual deviance and degrees of freedom | They are using a deviance test shown below:
$$
D(y) = -2\ell(\hat\beta;y) + 2\ell(\hat\theta^{(s)};y)
$$
Here $\hat\beta$ represents the fitted model of interest and $\hat\theta^{(s)}$ the saturated model. The log-likelihood for the saturated model is (more often than not) $0$, hence you are left with the residual deviance of the model they fitted ($29.92$). This deviance test is approximately chi-squared with degrees of freedom $n-p$ ($n$ being the observations and $p$ being the number of variables fitted). You have $n=16$ and $p=6$ so the test will be approximately $\chi^2_{10}$. The null of the test is that your fitted model fits the data well and there is no misfit: you haven't missed any sources of variation. In the above test you reject the null and, as a result, you have missed something in the model you fitted. The reason for using this test is that the saturated model fits the data perfectly, so if you were not rejecting the null between your fitted model and the saturated model, it would indicate you haven't missed big sources of data variation in your model.
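In R, the tail probability behind that rejection can be computed directly:

```r
# Upper tail of chi-squared(10) at the residual deviance of 29.92
pchisq(29.92, df = 10, lower.tail = FALSE)  # ~0.0009, so the fit is rejected
```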
31,145 | Test logistic regression model using residual deviance and degrees of freedom | Your question, as stated, has been answered by @francium87d. Comparing the residual deviance against the appropriate chi-squared distribution constitutes testing the fitted model against the saturated model and shows, in this case, a significant lack of fit.
Still, it might help to look more thoroughly at the data and the model to understand better what it means that the model has a lack of fit:
d = read.table(text=" age education wantsMore notUsing using
<25 low yes 53 6
<25 low no 10 4
<25 high yes 212 52
<25 high no 50 10
25-29 low yes 60 14
25-29 low no 19 10
25-29 high yes 155 54
25-29 high no 65 27
30-39 low yes 112 33
30-39 low no 77 80
30-39 high yes 118 46
30-39 high no 68 78
40-49 low yes 35 6
40-49 low no 46 48
40-49 high yes 8 8
40-49 high no 12 31", header=TRUE, stringsAsFactors=FALSE)
d = d[order(d[,3],d[,2]), c(3,2,1,5,4)]
library(binom)
d$proportion = with(d, using/(using+notUsing))
d$sum = with(d, using+notUsing)
bCI = binom.confint(x=d$using, n=d$sum, methods="exact")
m = glm(cbind(using,notUsing)~age+education+wantsMore, d, family=binomial)
preds = predict(m, newdata=d[,1:3], type="response")  # note: the argument is 'newdata'
dev.new()  # open a new plotting device (portable alternative to windows())
par(mar=c(5, 8, 4, 2))
bp = barplot(d$proportion, horiz=T, xlim=c(0,1), xlab="proportion",
main="Birth control usage")
box()
axis(side=2, at=bp, labels=paste(d[,1], d[,2], d[,3]), las=1)
arrows(y0=bp, x0=bCI[,5], x1=bCI[,6], code=3, angle=90, length=.05)
points(x=preds, y=bp, pch=15, col="red")
The figure plots the observed proportion of women in each set of categories that are using birth control, along with the exact 95% confidence interval. The model's predicted proportions are overlaid in red. We can see that two predicted proportions are outside of the 95% CIs, and another five are at or very near the limits of the respective CIs. That's seven out of sixteen ($44\%$) that are off target. So the model's predictions don't match the observed data very well.
How could the model fit better? Perhaps there are interactions amongst the variables that are relevant. Let's add all the two-way interactions and assess the fit:
m2 = glm(cbind(using,notUsing)~(age+education+wantsMore)^2, d, family=binomial)
summary(m2)
# ...
# Null deviance: 165.7724 on 15 degrees of freedom
# Residual deviance: 2.4415 on 3 degrees of freedom
# AIC: 99.949
#
# Number of Fisher Scoring iterations: 4
1-pchisq(2.4415, df=3) # [1] 0.4859562
drop1(m2, test="LRT")
# Single term deletions
#
# Model:
# cbind(using, notUsing) ~ (age + education + wantsMore)^2
# Df Deviance AIC LRT Pr(>Chi)
# <none> 2.4415 99.949
# age:education 3 10.8240 102.332 8.3826 0.03873 *
# age:wantsMore 3 13.7639 105.272 11.3224 0.01010 *
# education:wantsMore 1 5.7983 101.306 3.3568 0.06693 .
The p-value for the lack of fit test for this model is now $0.486$. But do we really need all those extra interaction terms? The drop1() command shows the results of the nested model tests without them. The interaction between education and wantsMore is not quite significant, but I would be fine with it in the model anyway. So let's see how the predictions from this model compare to the data:
These aren't perfect, but we shouldn't assume that the observed proportions are a perfect reflection of the true data generating process. These look to me like they are bouncing around the appropriate amount (more correctly that the data are bouncing around the predictions, I suppose).
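As a cross-check on the single-term deletions above, the two fits can also be compared in one likelihood-ratio test (sketch; m and m2 as fitted earlier):

```r
# Additive model versus the model with all two-way interactions:
# the drop in deviance is referred to chi-squared on 7 df
anova(m, m2, test = "LRT")
```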
31,146 | Test logistic regression model using residual deviance and degrees of freedom | I do not believe that the residual deviance statistic has a $\chi^2$ distribution. I think it is a degenerate distribution because asymptotic theory does not apply when the degrees of freedom increase at the same speed as the sample size. At any rate I doubt that the test has sufficient power, and encourage directed tests such as tests of linearity using regression splines and tests of interaction.
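A quick simulation sketch of that point for ungrouped binary data:

```r
# With ungrouped 0/1 responses the residual deviance is NOT approximately
# chi-squared(n - p): even when the model is exactly right, the nominal
# goodness-of-fit p-values are far from uniform
set.seed(1)
pvals <- replicate(2000, {
  x <- rnorm(50)
  y <- rbinom(50, 1, plogis(0.5 * x))   # data generated from the fitted form
  f <- glm(y ~ x, family = binomial)
  pchisq(deviance(f), df.residual(f), lower.tail = FALSE)
})
hist(pvals)   # not uniform, despite the model being correct
```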
31,147 | Difference between "logistic regression" and "binomial GLM with logistic link" | This sounds like pseudostatistical gibberish to me. It may be that what he has in mind is the beta-binomial distribution, which is a way to account for greater variability in the response than 'ought' to occur with a binomial, but it's hard to say. The beta-binomial distribution would not be familiar to someone who has only taken a couple of applied statistics classes, but should not be exotic to a statistics professor.
The rest of his argument sounds like a Dunning-Kruger effect to me. That is where someone knows just a little bit about a topic, but is unaware of the breadth and depth of the issues or the potential caveats and complications, and therefore thinks that the topic is easy and obvious. The idea that the best way to forecast the election is to build one simple logistic regression model with the state polls is strikingly ignorant.
31,148 | Difference between "logistic regression" and "binomial GLM with logistic link" | Logistic regression is often taught to undergrads as a transformed response: take a number between 0 and 1, make log-odds out of that, and then fit OLS to it. That is also what is done for logistic regression in some social sciences. Given that Nate did his undergrad in economics, it would not be unusual if he had been taught this non-GLM approach.
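The two recipes can be contrasted on a small grouped-data sketch (made-up numbers, purely illustrative):

```r
# Binomial GLM versus OLS on the empirical log-odds of grouped proportions
x <- 1:6
n <- rep(50, 6)
r <- c(4, 8, 15, 25, 32, 40)                        # successes out of n
coef(glm(cbind(r, n - r) ~ x, family = binomial))   # proper logistic GLM
coef(lm(log((r / n) / (1 - r / n)) ~ x))            # 'transformed response' OLS
```

The two sets of coefficients are usually close but not identical, since the OLS version ignores the binomial variance structure.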
31,149 | Can a linear SVM only have 2 classes? | SVMs (linear or otherwise) inherently do binary classification. However, there are various procedures for extending them to multiclass problems. The most common methods involve transforming the problem into a set of binary classification problems, by one of two strategies:
One vs. the rest. For $k$ classes, $k$ binary classifiers are trained. Each determines whether an example belongs to its 'own' class versus any other class. The classifier with the largest output is taken to be the class of the example.
One vs. one. A binary classifier is trained for each pair of classes. A voting procedure is used to combine the outputs.
More sophisticated methods exist. Search for "multiclass SVM".
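The one-vs.-one voting scheme above is easy to sketch. In the toy illustration below the pairwise "classifiers" are simple thresholds standing in for trained binary SVMs; all names are mine, not from any SVM library:

```python
from collections import Counter

def one_vs_one_predict(classifiers, x):
    """classifiers maps a class pair (a, b) to a function returning a or b."""
    votes = Counter(clf(x) for clf in classifiers.values())
    return votes.most_common(1)[0][0]

def make_pairwise(a, b, threshold):
    # Stand-in for a trained binary SVM on classes a vs. b.
    return lambda x: a if x < threshold else b

# Toy 1-D problem: class 0 below 5, class 1 between 5 and 15, class 2 above 15.
clfs = {
    (0, 1): make_pairwise(0, 1, 5),
    (0, 2): make_pairwise(0, 2, 5),
    (1, 2): make_pairwise(1, 2, 15),
}
print(one_vs_one_predict(clfs, 3))   # 0
print(one_vs_one_predict(clfs, 8))   # 1
print(one_vs_one_predict(clfs, 20))  # 2
```

For $k$ classes this trains $k(k-1)/2$ binary problems; one-vs.-rest would instead train $k$ and pick the largest raw output.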
31,150 | Can a linear SVM only have 2 classes? | Yes, support vector machines were originally designed for two-class problems only. That is true not just for linear SVMs, but for support vector machines in general. There are ways to work around this, but they usually come as a kind of afterthought. There is an ongoing discussion about the merit of these approaches; if you would like to delve into it, this paper may serve as a starting point.
31,151 | Comparison between Bayes estimators | First, note that I corrected the original wording of the question wrt the indicator functions in your likelihood definitions as they have to be functions of $x$ not $\theta$. Hence the likelihood is $$f(x)=\theta x^{\theta-1}\mathbb{I}_{[0,1]}(x)$$ that clearly integrates to one: $$\int_0^1 \theta x^{\theta-1}\text{d}x = 1$$
Second, the posterior in $\theta$ is not a Beta function since as indicated by Greenparker
$$\pi(\theta|x)\propto\, \mathbb{I}_{[0,1/2]}(\theta)\theta x^{\theta-1}\propto
\mathbb{I}_{[0,1/2]}(\theta)\,\theta \exp\{\log(x)\theta\}$$
Due to the constraint on the values of $\theta$ it is not a Gamma distribution either, but a truncation of the Gamma distribution.
Hence the Bayes estimator is the posterior expectation
$$\begin{align*}\mathbb{E}[\theta|x] &= \int_0^{1/2} \theta\times\theta \exp\{\log(x)\theta\}\text{d}\theta \Big/ \int_0^{1/2} \theta \exp\{\log(x)\theta\}\text{d}\theta\\&= \int_0^{1/2} \theta^2 \exp\{\log(x)\theta\}\text{d}\theta \Big/ \int_0^{1/2} \theta \exp\{\log(x)\theta\}\text{d}\theta\\\end{align*}
$$
that may seem to require the use of the incomplete Gamma function, but it can be derived in closed form by integration by parts:
$$\int_0^{1/2} \theta^k \exp\{-\alpha\theta\}\text{d}\theta =
\frac{-1}{\alpha}\left[\theta^k \exp\{-\alpha\theta\}\right]_0^{1/2} +
\frac{k}{\alpha}\int_0^{1/2} \theta^{k-1} \exp\{-\alpha\theta\}\text{d}\theta$$ since
$$\int_0^{1/2} \exp\{-\alpha\theta\}\text{d}\theta = \frac{1-\exp\{-\alpha/2\}}{\alpha}$$
Last, as indicated in my book, indeed, minimising in $\delta$
$$\int w(\theta)(\theta-\delta)^2 \pi(\theta|x) \text{d}\theta$$
is equivalent to minimising in $\delta$
$$\int w(\theta)(\theta-\delta)^2 \pi(\theta)f(x|\theta) \text{d}\theta$$ which itself is equivalent to minimising in $\delta$
$$\int (\theta-\delta)^2 w(\theta)\pi(\theta)f(x|\theta) \text{d}\theta$$ which amounts to replacing the original prior $\pi$ with a new prior $w(\theta)\pi(\theta)$ that needs to be renormalised into a density, that is,
$$\pi_1(\theta)=w(\theta)\pi(\theta) \Big/\int w(\theta)\pi(\theta)\text{d}\theta$$
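As a numerical sanity check, the integration-by-parts recursion above can be coded directly and compared against simple quadrature. This is a sketch of mine (the helper names `trunc_moment` and `posterior_mean` are hypothetical, not from any reference):

```python
import math

def trunc_moment(k, alpha):
    """I_k(alpha) = integral of t^k * exp(-alpha*t) over (0, 1/2),
    computed by the integration-by-parts recursion."""
    if k == 0:
        return (1.0 - math.exp(-alpha / 2)) / alpha
    boundary = -(0.5 ** k) * math.exp(-alpha / 2) / alpha
    return boundary + (k / alpha) * trunc_moment(k - 1, alpha)

def posterior_mean(x):
    """E[theta | x] = I_2(alpha) / I_1(alpha) with alpha = -log(x)."""
    alpha = -math.log(x)
    return trunc_moment(2, alpha) / trunc_moment(1, alpha)

# Cross-check against midpoint-rule quadrature at x = 0.3.
x = 0.3
alpha = -math.log(x)
n = 20_000
h = 0.5 / n
grid = [(i + 0.5) * h for i in range(n)]
num = sum(t * t * math.exp(-alpha * t) for t in grid) * h
den = sum(t * math.exp(-alpha * t) for t in grid) * h
assert abs(posterior_mean(x) - num / den) < 1e-7
```

Since $x \in (0,1)$, $\alpha = -\log(x) > 0$, so the integrand matches $\theta^k\exp\{\log(x)\theta\}$ above; the estimator always lands in $(0, 1/2)$ as it must.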
31,152 | Comparison between Bayes estimators | Your answer for the squared error loss part is wrong.
$$\pi(\theta|x) \propto f(x|\theta) \pi(\theta) = 2\theta x^{\theta-1}I_{(0,1/2)}(\theta). $$
This is a $Beta(\theta,1)$ distribution in $x$, not in $\theta$, and the random variable in the posterior is $\theta$. So your answer is incorrect, and the correct answer would be the posterior mean of that distribution.
For the second part,
(The prior for the weighted loss function is $\pi_1$ but you refer to it as $\pi$. I am switching notation back to $\pi_1$.)
Let $\pi'(\theta) = cw(\theta) \pi_1(\theta)$, where $c$ is a normalizing constant. You need to calculate
\begin{align*}
\delta^{\pi_1}(x) & = \dfrac{E^{\pi_1}[w(\theta) \theta |x ]}{E^{\pi_1}[w(\theta)|x]}\\
& = \dfrac{\int w(\theta) \theta f(x|\theta) \pi_1(\theta)\, d\theta}{\int w(\theta) f(x|\theta)\pi_1(\theta)\, d\theta}\\
& = \dfrac{\int \theta f(x|\theta) \pi'(\theta) \,d\theta}{\int f(x|\theta) \pi'(\theta) \, d\theta}\\
& = E^{\pi'}[\theta|x]
\end{align*}
Thus, for weighted least squares loss function, the theorem says that the Bayes estimate is the posterior mean with respect to a different prior. The prior being
$$\pi'(\theta) \propto w(\theta) \pi_1(\theta). $$
The normalizing constant is $ \int_{\theta} w(\theta) \pi_1(\theta)\, d\theta = E_{\pi_1}[w(\theta)]$.
\begin{align*}
E_{\pi_1}[w(\theta)] & = \int_{0}^{1/2} I_{(0,1)}(\theta)\, d\theta = \frac{1}{2}.
\end{align*}
So the prior is $\pi'(\theta) = 2I_{(0,1/2)}(\theta)$. This is the same prior you had in the first question.
Thus the answer for the two scenarios (whatever it is) will be the same. You can find the integral here. Although, it might be sufficient to write the form of the answer, and not complete the integral.
31,153 | Why are all Lasso coefficients in model 0.0? | Here, the key fact about LASSO regression is that it minimizes sum of squared error, under the constraint that the sum of absolute values of coefficients is less than some constant $c$. (See here.) So, for all of the coefficients to be zero, there must be no vector of coefficients with summed absolute value less than $c$ that improves error.
For another view, consider the LASSO loss function:
$$\sum_{i = 1}^n (Y_i - X_i^T\beta)^2 + \lambda\sum_{j=1}^p|\beta_j|$$
As put in the tutorial referenced above, "If $\lambda$ is sufficiently large, some of the coefficients are driven to zero, leading to a sparse model." For it to be the case that zero coefficients minimize this function, $\lambda$ must be large enough that any improvement in error (the left term) is less than the added loss from the increased norm (the right term).
It's common to use cross validation to set this parameter such that the model minimizes CV error. This could be why LassoCV gave you different results: it may have set $\lambda$ for you.
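For a single centered predictor the LASSO solution is available in closed form via soft-thresholding, which makes the "large $\lambda$ kills the coefficient" effect easy to see. This is a toy sketch with made-up data, not scikit-learn's implementation:

```python
def soft_threshold(rho, lam):
    """S(rho, lam): shrink toward zero; anything inside [-lam, lam] becomes 0."""
    return max(rho - lam, 0.0) if rho > 0 else min(rho + lam, 0.0)

def lasso_1d(xs, ys, lam):
    """Minimize 0.5 * sum (y - b*x)^2 + lam * |b| for a single predictor."""
    rho = sum(x * y for x, y in zip(xs, ys))
    return soft_threshold(rho, lam) / sum(x * x for x in xs)

xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [-4.1, -2.0, 0.1, 2.0, 3.9]   # roughly y = 2x
print(lasso_1d(xs, ys, lam=0.0))   # about 2.0: the OLS slope
print(lasso_1d(xs, ys, lam=50.0))  # 0.0: the penalty dominates any error gain
```

With $\lambda$ larger than $|\sum_i x_i y_i|$, no nonzero coefficient can pay for its own penalty, so the estimate is exactly zero, which is what an overly large fixed `alpha` produces in `sklearn.linear_model.Lasso`.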
31,154 | Why does adding an L1 penalty to R's optim slow things down so much (relative to no penalty or L2)? | I would guess that the reason adding an L1 penalty slows things down significantly is that the L1 penalty is not differentiable (it involves an absolute value), while the L2 penalty is. This means that the surface of the function will not be smooth, so standard quasi-Newton methods will have a lot of trouble with such problems. Recall that one way to think of a quasi-Newton method is that it makes a quadratic approximation of the function, and the next proposal is the optimum of that approximation. If the quadratic approximation matches the target function fairly well, we should expect the proposal to be close to the optimum (maximum or minimum, depending on how you look at the world). But if your target function is non-differentiable, this quadratic approximation may be very bad, thus taking many more iterations to converge (assuming it converges at all).
If you've found an R package that implements BFGS for L1 penalties, by all means try it. BFGS, in general, is a very generic algorithm for optimization. As is the case with any generic algorithm, there will be plenty of special cases where it does not do well. Algorithms that are specially tailored to your problem clearly should do better (assuming the package is as good as its author claims: I hadn't heard of lbfgs, but there's a whole lot of great things I haven't heard of. Update: I have used R's lbfgs package, and the L-BFGS implementation it has is quite good! I still haven't used its OWL-QN algorithm, which is what the OP is referring to).
If it doesn't work out for you, you might want to try the "Nelder-Mead" method in R's optim. It does not use derivatives for optimization. As such, it will typically be slower on a smooth function but more stable on a non-smooth function such as yours.
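The kink in $|\beta|$ is exactly what trips up derivative-based updates. Here is a one-dimensional sketch of my own (function names and step sizes are mine, not anything inside optim): a proximal-gradient step, which handles $|\beta|$ exactly via soft-thresholding, lands on the minimizer at 0, while a naive fixed-step subgradient update keeps bouncing around it:

```python
def soft_threshold(z, t):
    return max(z - t, 0.0) if z > 0 else min(z + t, 0.0)

def prox_grad(b, lam, step):
    # Proximal gradient (ISTA): the |b| term is handled exactly.
    return soft_threshold(b - step * (b - 1.0), step * lam)

def subgrad(b, lam, step):
    # Naive fixed-step subgradient: treats |b| like a smooth term.
    g = (b - 1.0) + lam * ((b > 0) - (b < 0))
    return b - step * g

lam, step = 2.0, 0.1   # minimizer of 0.5*(b-1)^2 + lam*|b| is exactly 0
b_prox = b_sub = 1.0
for _ in range(200):
    b_prox = prox_grad(b_prox, lam, step)
    b_sub = subgrad(b_sub, lam, step)
print(b_prox)  # 0.0 -- lands on the kink exactly and stays there
print(b_sub)   # bounces around 0 instead of settling on it
```

A quadratic model built from finite differences near the kink suffers from the same pathology as the fixed-step subgradient here: the local slope keeps flipping sign, so the iterates oscillate instead of converging.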
31,155 | Why does adding an L1 penalty to R's optim slow things down so much (relative to no penalty or L2)? | I don't know why your problem slows down when you add an $L_1$ penalty. It probably depends on (1) what the problem is; (2) how you've coded it; and (3) the optimization method you're using.
I think there's an "unspoken answer" to your question: the most efficient solutions to numerical problems are often tailor-made. General-purpose algorithms are just that: general purpose. Specialized solutions to specific problems will tend to work better, because we can bring to bear observations about how that particular problem is presented and its specific properties which are known to the analyst. To your specific question about glmnet, it has a number of "tricks" which make it highly efficient - for the particular problem it's trying to solve! The Journal of Statistical Software paper on glmnet provides details:
Its optimization for all models (elastic net, ridge regression and not just LASSO) uses cyclical coordinate descent, which is a pretty good way to go about solving this problem.
The coefficients are computed along paths for a range of $\lambda$ values. So rather than wandering over the response surface for a single value of the regularization parameter $\lambda$, it moves from the largest to the smallest values, using coefficient estimates from previous solutions as starting points. This exploits the fact that coefficient estimates ascend from smaller to larger values as $\lambda$ decreases; it doesn't have to re-solve the same problem over and over again from randomly-initialized starts, as a naive implementation of a standard optimization routine would.
And it's coded in FORTRAN.
L-BFGS is a limited-memory BFGS algorithm. While it has tricks that can make it more efficient than standard BFGS for some problems, it's not clear whether those advantages have any bearing on the particular problem at hand. L-BFGS is one of the options in optim as well, so I'm not sure why you'd need an additional package.
Note that BFGS depends on derivatives, which are computed by finite differences when analytical forms are not provided. This could be where you run into problems, because the $L_1$ penalty is not differentiable everywhere. Not only does this mean that you're probably not going to estimate LASSO coefficients at precisely 0, it might also wreak havoc with the updates from iteration to iteration.
31,156 | Simulate from Kernel Density Estimate (empirical PDF) | Here's an algorithm to sample from an arbitrary mixture $f(x) = \frac1N \sum_{i=1}^N f_i(x)$:
Pick a mixture component $i$ uniformly at random.
Sample from $f_i$.
It should be clear that this produces an exact sample.
A Gaussian kernel density estimate is a mixture $\frac1N \sum_{i=1}^N \mathcal{N}(x; x_i, h^2)$. So you can take a sample of size $N$ by picking a bunch of $x_i$s and adding normal noise with zero mean and variance $h^2$ to it.
Your code snippet is selecting a bunch of $x_i$s, but then it's doing something slightly different:
changing $x_i$ to $\hat\mu + \frac{x_i - \hat\mu}{\sqrt{1 + h^2 / \hat\sigma^2}}$
adding zero-mean normal noise with variance $\frac{h^2}{1 + h^2/\hat\sigma^2} = \frac{1}{\frac{1}{h^2} + \frac{1}{\hat\sigma^2}}$, half the harmonic mean of $h^2$ and $\hat\sigma^2$.
We can see that the expected value of a sample according to this procedure is
$$
\frac1N \sum_{i=1}^N \frac{x_i}{\sqrt{1 + h^2/\hat\sigma^2}}
+ \hat\mu
- \frac{1}{\sqrt{1 + h^2 /\hat\sigma^2}} \hat\mu
= \hat\mu
$$
since $\hat\mu = \frac1N \sum_{i=1}^N x_i$.
I don't think the sampling distribution is the same, though.
31,157 | Simulate from Kernel Density Estimate (empirical PDF) | To eliminate any confusion about whether it is possible or not to draw values from the KDE using a bootstrap approach, it is possible. The bootstrap is not limited to estimating variability intervals.
Below is a smoothed bootstrap with variance correction: an algorithm that generates synthetic values $Y_i$ from a KDE with kernel $K$ and bandwidth $h$. It comes from this book by Silverman; see page 25 of this document, section 6.4.1 "Simulating from density estimates". As noted in the book, this algorithm draws independent realizations from a KDE $\hat{y}$ without requiring $\hat{y}$ explicitly:
To generate a synthetic value $Y$ (from a training set $\big\{X_{1},...X_{n}\big\}$):
Step 1: Choose $i$ uniformly with replacement from $\big\{1,...,n\big\}$,
Step 2: Sample $\epsilon$ from $K$ (i.e., from the Normal distribution if $K$ is Gaussian),
Step 3: Set $Y=\bar{X}+(X_{i}-\bar{X}+h\epsilon)/\sqrt{1+h^{2}{\sigma_{K}}^2/{\sigma_{X}}^2}$.
Where $\bar{X}$ and ${\sigma_{X}}^2$ are the sample mean and variance, and ${\sigma_{K}}^2$ is the variance of $K$ (i.e., 1 for a Gaussian $K$).
As explained by Dougal, the expected value of the realizations is $\bar{X}$. Thanks to the variance correction, the variance is ${\sigma_{X}}^2$ (on the other hand, the smoothed bootstrap without variance correction, where step 3 is simply $Y=X_{i}+h\epsilon$, inflates the variance).
The R code snippet in my question above is strictly following this algorithm.
The advantages of the smoothed bootstrap over the bootstrap are:
"spurious features" in the data are not reproduced as different values from the ones in the sample can be generated,
values beyond the max/min of the observations can be generated.
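The three steps transcribe directly into code. A sketch of my own, assuming a Gaussian kernel (so ${\sigma_K}^2 = 1$):

```python
import math
import random

def smoothed_bootstrap(data, h, n):
    """Silverman's smoothed bootstrap with variance correction."""
    m = len(data)
    xbar = sum(data) / m
    s2 = sum((x - xbar) ** 2 for x in data) / m
    c = math.sqrt(1.0 + h * h / s2)           # sigma_K^2 = 1 (Gaussian K)
    out = []
    for _ in range(n):
        xi = random.choice(data)              # step 1: resample an observation
        eps = random.gauss(0.0, 1.0)          # step 2: sample from K
        out.append(xbar + (xi - xbar + h * eps) / c)   # step 3
    return out

random.seed(1)
data = [random.gauss(5.0, 2.0) for _ in range(400)]
draws = smoothed_bootstrap(data, h=0.8, n=20_000)
# Both the mean and the variance of the draws match the sample's, unlike the
# uncorrected version, whose variance is inflated by h^2.
```

Dividing by $\sqrt{1 + h^2/\sigma_X^2}$ is what removes the extra $h^2$ of kernel noise while leaving the mean at $\bar X$.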
31,158 | Given two absorbing Markov chains, what is the probability that one will terminate before the other? | Run the chains in parallel. Define three absorbing states in the resulting product chain:
The first chain reaches an absorbing state but the second does not.
The second chain reaches an absorbing state but the first does not.
Both chains simultaneously reach an absorbing state.
The limiting probabilities of these three states in the product chain give the chances of interest.
This solution involves some (simple) constructions. As in the question, let $\mathbb{P} = P_{ij}, 1 \le i,j\le n$ be a transition matrix for a chain $\mathcal P$. When the chain is in state $i$, $P_{ij}$ gives the probability of a transition to state $j$. An absorbing state makes a transition to itself with probability $1$.
Any state $i$ can be made absorbing upon replacing the row $\mathbb{P}_{i} = (P_{ij}, j=1, 2, \ldots,n)$ by an indicator vector $(0,0,\ldots,0,1,0,\ldots,0)$ with a $1$ in position $i$.
Any set $A$ of absorbing states can be merged by creating a new chain $\mathcal{P}/A$ whose states are $\{i\,|\, i\notin A\}\cup \{A\}$. The transition matrix is given by
$$(\mathbb{P}/A)_{ij} = \begin{cases}
P_{ij} & i \notin A,\ j \notin A\\
\sum_{k\in A} P_{ik} & i\notin A,\ j=A \\
0 & i=A,\ j\notin A \\
1 & i = j = A.
\end{cases}$$
This amounts to summing the columns of $\mathbb{P}$ corresponding to $A$ and replacing the rows corresponding to $A$ by a single row that makes a transition to itself.
The product of two chains $\mathcal{P}$ on states $S_P$ and $\mathcal{Q}$ on states $S_Q$, with transition matrices $\mathbb{P}$ and $\mathbb{Q}$, respectively, is a Markov chain on the states $S_P\times S_Q = \{(p,q)\,|\, p\in S_P, q\in S_Q\}$ with transition matrix
$$(\mathbb{P} \otimes \mathbb{Q})_{(i,j),(k,l)} = P_{ik}Q_{jl}.$$
In effect, the product chain runs the two chains in parallel, separately tracking where each is, and making transitions independently.
A simple example may clarify these constructions. Suppose Polly is flipping a coin with a chance $p$ of landing heads. She plans to do so until observing a heads. The states for the coin flipping process are $S_P = \{\text{T},\text{H}\}$ representing the results of the most recent flip: $\text{T}$ for tails, $\text{H}$ for heads. By planning to stop at heads, Polly will apply the first construction by making $\text H$ an absorbing state. The resulting transition matrix is
$$\mathbb{P} = \pmatrix{1-p & p \\ 0 & 1}.$$
It begins in a random state $(1-p,p)$ given by the first toss.
In time with Polly, Quincy will toss a fair coin. He plans to stop once he sees two heads in a row. His Markov chain therefore has to keep track of the preceding outcome as well as the current outcome. There are four such combinations of two heads and two tails, which I will abbreviate as "$\text{TH}$", for instance, where the first letter is the previous outcome and the second letter is the current outcome. Quincy applies construction (1) to make $\text{HH}$ an absorbing state. After doing so, he realizes that he doesn't really need four states: he can simplify his chain to three states: $\text{T}$ means the current outcome is tails, $\text{H}$ means the current outcome is heads, and $\text{X}$ means the last two outcomes were both heads--this is the absorbing state. The transition matrix is
$$\mathbb{Q} = \pmatrix{\frac{1}{2} & \frac{1}{2} & 0 \\
\frac{1}{2} & 0 & \frac{1}{2} \\
0 & 0 & 1}.$$
The product chain runs on six states: $(T,T), (T,H), (T,X); (H,T), (H,H), (H,X)$. The transition matrix is a tensor product of $\mathbb{P}$ and $\mathbb{Q}$ and is just as easily computed. For instance, $(\mathbb{P}\otimes\mathbb{Q})_{(T,T),(T,H)}$ is the chance that Polly makes a transition from $\text T$ to $\text T$ and, at the same time (and independently), Quincy makes a transition from $\text T$ to $\text H$. The former has a chance of $1-p$ and the latter a chance of $1/2$. Because the chains are run independently, those chances multiply, giving $(1-p)/2$. The full transition matrix is
$$\mathbb{P}\otimes\mathbb{Q} = \pmatrix{
\frac{1-p}{2} & \frac{1-p}{2} & 0 & \frac{p}{2} & \frac{p}{2} & 0 \\
\frac{1-p}{2} & 0 & \frac{1-p}{2} & \frac{p}{2} & 0 & \frac{p}{2} \\
0 & 0 & 1-p & 0 & 0 & p \\
0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} & 0 \\
0 & 0 & 0 & \frac{1}{2} & 0 & \frac{1}{2} \\
0 & 0 & 0 & 0 & 0 & 1
}.$$
It is in block matrix form with blocks corresponding to the second matrix $\mathbb Q$:
$$\mathbb{P}\otimes\mathbb{Q} = \pmatrix{
P_{11}\mathbb Q & P_{12}\mathbb Q \\
P_{21}\mathbb Q & P_{22}\mathbb Q
} = \pmatrix{
(1-p)\mathbb Q & p\mathbb Q \\
\mathbb 0 & \mathbb Q
}.$$
Polly and Quincy compete to see who will achieve their goal first. The winner will be Polly whenever a transition is first made to $(\text{H},\text{*})$ where $\text{*}$ is not $\text X$; the winner will be Quincy whenever a transition is first made to $(\text{T},\text{X})$; and if before either of those can happen a transition is made to $(\text{H},\text{X})$, the result will be a draw. To keep track, we will make the states $(\text{H},\text{T})$ and $(\text{H},\text{H})$ both absorbing (via construction (1)) and then merge them (via construction (2)). The resulting transition matrix, ordered by the states $(T,T), (T,H), (T,X), \{(H,T), (H,H)\}, (H,X)$ is
$$\mathbb{R} = \pmatrix{
\frac{1-p}{2} & \frac{1-p}{2} & 0 & p & 0 \\
\frac{1-p}{2} & 0 & \frac{1-p}{2} & \frac{p}{2} & \frac{p}{2} \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1
}.$$
The results of the simultaneous first throw by Polly and Quincy will be the states $(T,T), (T,H), (T,X), \{(H,T), (H,H)\}, (H,X)$ with probabilities $\mu = ((1-p)/2, (1-p)/2, 0, p, 0)$, respectively: this is the initial distribution from which to start the chain.
In the limit as $n\to \infty$,
$$\mu \cdot \mathbb{R}^n \to \frac{1}{1+4p-p^2}(0, 0, (1-p)^2, p(5-p), p(1-p)).$$
Thus the relative chances of the three absorbing states $(T,X), \{(H,T), (H,H)\}, (H,X)$ (representing Quincy wins, Polly wins, they draw) are $(1-p)^2:p(5-p):p(1-p)$.
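This limit is easy to check numerically; a small Python sketch (mine, not part of the original answer) builds $\mathbb R$ and $\mu$ as above and compares a high matrix power with the closed form:

```python
import numpy as np

def outcome_probabilities(p, n=200):
    """Probabilities (Quincy wins, Polly wins, draw): the mass that
    mu R^n puts on the three absorbing states for large n."""
    q = (1.0 - p) / 2.0
    # state order: (T,T), (T,H), (T,X), {(H,T),(H,H)}, (H,X)
    R = np.array([[q,   q,   0.0, p,     0.0],
                  [q,   0.0, q,   p / 2, p / 2],
                  [0.0, 0.0, 1.0, 0.0,   0.0],
                  [0.0, 0.0, 0.0, 1.0,   0.0],
                  [0.0, 0.0, 0.0, 0.0,   1.0]])
    mu = np.array([q, q, 0.0, p, 0.0])  # distribution after the first toss
    limit = mu @ np.linalg.matrix_power(R, n)
    return limit[2], limit[3], limit[4]
```

For any $p$ this agrees, up to numerical error, with $\bigl((1-p)^2,\ p(5-p),\ p(1-p)\bigr)/(1+4p-p^2)$.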
As a function of $p$ (the chance that any one of Polly's throws will be heads), the red curve plots Polly's chance of winning, the blue curve plots Quincy's chance of winning, and the gold curve plots the chance of a draw.
31,159 | Is parametric equivalent to linear? | Linear is always parametric for all practical purposes. What does linear mean? It means that you're stating linear relationships between variables, such as $y=\beta_0+\beta_1 x$. Your parameters are $\beta_0,\beta_1$. So, by stating that the relationship is linear, you are declaring the parameters.
When you say non-parametric, you don't state any particular form of the relationship. You may say $y=f(x)$, where $f(.)$ is some function, which could be linear or non-linear.
The limitations of linear models exist only when the true relationship is non-linear. For instance, force is equal to the product of mass and acceleration, $F=ma$, i.e. a linear function of acceleration for a given mass. So, by modeling this as $F=f(a;m)$ you are not gaining anything relative to the linear form.
The trouble is that in the social sciences we don't know what the relationships are, whether they are linear or not. So, we may often gain by not restricting ourselves to linear models. The drawback is that non-linear models are usually more difficult to estimate.
In a non-linear model we usually assume some specific non-linear relationship, e.g. $v_t=v_0\sqrt{t}$. In this case it's a square-root function of time, and the parameter is $v_0$, i.e. this is a parametric model.
Non-parametric is even looser: it doesn't even specify the form of the relationship in this much detail. It just says $v_t=f(t)$, some function of time. An example is value-at-risk calculation, where the VaR is the $\alpha$ quantile of the losses of the portfolio. Here, we don't specify what the loss distribution is; we simply take the quantile of whatever the distribution of the losses happens to be.
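For instance, the nonparametric VaR mentioned here is just an empirical quantile of the observed losses; a minimal Python sketch (mine, not from the answer):

```python
import numpy as np

def historical_var(losses, alpha=0.95):
    """Nonparametric value-at-risk: the empirical alpha-quantile of the
    losses, with no assumption on their distribution."""
    return float(np.quantile(np.asarray(losses, dtype=float), alpha))
```

No distributional family (and hence no parameter vector) appears anywhere: the data alone determine the answer.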
31,160 | Is parametric equivalent to linear? | Parametric and linear: You know this one, it includes a bunch of things. (Ordinary least squares linear regression is an obvious example, but there are others)
Parametric and nonlinear: Nonlinear regression methods are an obvious example.
Nonparametric and nonlinear: again, you know this one; there are a bunch of things. Splines or local regression methods are examples, as are things like ACE and AVAS (though the ones I mention all approximate nonlinear relationships via linear methods).
Nonparametric and linear: Since "nonparametric" can also refer to the infinite-dimensionality of the distributional form rather than the functional relationship (see the huge area of nonparametric statistics -- the term 'nonparametric' originates here, by the way, in Wolfowitz, 1942 [1]), and it's possible to fit linear relationships without assuming any parametric model, 'nonparametric and linear' is a thing.
This answer has some explicit examples, of which I reproduce a plot here:
(Blue is least squares, red is the linear fit whose slope is based on the Spearman rank correlation, and green is the linear fit whose slope is based on Kendall's tau.)
[1]: Wolfowitz, J. (1942),
"Additive Partition Functions and a Class of Statistical Hypotheses,"
Ann. Math. Statist., Volume 13, Number 3, 247-279.
31,161 | ImageNet: what does top-five error means? [duplicate] | Top-5 error, also known as rank-5 error, is simply an instantiation of the rank-$N$ error metric with $N=5$.
Rank $N$ error is the fraction of test samples $x_i$ where the correct label $y_i$ does not appear in the top $N$ predicted results of the model when results are sorted in decreasing order of confidence, or $P(y_i|x_i)$.
In ILSVRC 2014, the error metric for classification was:
\[
e = \tfrac{1}{n} \cdot \sum_k \min_{i} d(c_i, C_k)
\]
where
\[
d(a, b) =
\begin{cases}
0 & \text{if } a = b \\
1 & \text{otherwise}
\end{cases}
\]
$c_i$, $i=1,\ldots,5$, are the predicted labels and $C_k$, $k=1,\ldots,n$, are the image's $n$ ground-truth labels; a ground-truth label contributes no error whenever it appears among the five predictions.
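In code, the rank-$k$ error reduces to checking whether the true label sits among the $k$ highest-scoring classes; a Python sketch (mine, assuming one ground-truth label per sample):

```python
import numpy as np

def top_k_error(scores, labels, k=5):
    """Rank-k error: fraction of rows of `scores` (one row of class
    scores per sample) whose true label is not in the top k."""
    scores = np.asarray(scores)
    labels = np.asarray(labels)
    topk = np.argpartition(scores, -k, axis=1)[:, -k:]  # k best class indices
    hit = (topk == labels[:, None]).any(axis=1)
    return 1.0 - hit.mean()
```

`argpartition` is enough because only membership in the top $k$ matters, not the ordering within it.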
31,162 | ImageNet: what does top-five error means? [duplicate] | the top-5 error rate (of a model) is the fraction of test images for which the correct label is not among the five labels considered most probable by the model [1]
31,163 | ImageNet: what does top-five error means? [duplicate] | I think the conditional evaluation of $d(A_i, B_k)$ should read:
$$d(A_i, B_k) = \begin{cases}
0 & \text{if } A_i \text{ occurs in the top } N \text{ probable classifications for the ground-truth class } B_k \\
1 & \text{if it does not.}
\end{cases}$$
Here $k$ takes values $0$ to $N-1$, where $N$ is the number of classes. For each $B_k$, there are probable classifications $A_i$, where $i$ takes values in $[0, N-1]$.
$d(Ai, Bk) = 0$, if $A_i$ occurs in the top N probable classifications for a
given ground truth class Bk.
= 1, | ImageNet: what does top-five error means? [duplicate]
I think the conditional evaluation of $d(A_i, B_k)$ should read:
$d(Ai, Bk) = 0$, if $A_i$ occurs in the top N probable classifications for a
given ground truth class Bk.
= 1, if it does not.
Here k takes values 0 to N-1. Where N is the number of classes.
For each $B_k$, there are $A_i$ probable classifications, where i takes values
[0, N-1] | ImageNet: what does top-five error means? [duplicate]
I think the conditional evaluation of $d(A_i, B_k)$ should read:
$d(Ai, Bk) = 0$, if $A_i$ occurs in the top N probable classifications for a
given ground truth class Bk.
= 1, |
31,164 | Inverse CDF sampling for a mixed distribution | The answer to the long version with background:
This answer to the long version somewhat addresses another issue and, since we seem to have difficulties formulating the model and the problem, I choose to rephrase it here, hopefully correctly.
For $1\le i\le I$, the goal is to simulate vectors $y^i=(y^i_1,\ldots,y^i_K)$ such that, conditional on a covariate $x^i$,
$$
y_k^i = \begin{cases} 0 &\text{ with probability }\operatorname{logit}^{-1}\left( \alpha_k x^i \right)\\
\log(\sigma_k z_k^i + \beta_k x^i) &\text{ with probability }1-\operatorname{logit}^{-1}\left( \alpha_k x^i \right)\end{cases}
$$
with $z^i=(z^i_1,\ldots,z^i_K)\sim\mathcal{N}_K(0,R)$. Hence, if one wants to simulate data from this model, one could proceed as follows:
For $1\le i\le I$,
Generate $z^i=(z^i_1,\ldots,z^i_K)\sim\mathcal{N}_K(0,R)$
Generate $u^i_1,\ldots,u^i_K \stackrel{\text{iid}}{\sim} \mathcal{U}(0,1)$
Derive $y^i_k=\mathbb{I}\left\{u^i_k>\operatorname{logit}^{-1}\left( \alpha_k x^i \right)\right\}\, \log\{ \sigma_k z_k^i + \beta_k x^i\}$ for $1\le k\le K$
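A direct Python transcription of steps 1-3 (my sketch; the names and the scalar covariate are illustrative, and as in the model above, $\sigma_k z^i_k + \beta_k x^i$ must stay positive for the log to be defined):

```python
import numpy as np

def simulate_y(x, alpha, beta, sigma, R, rng=None):
    """One draw of y^i = (y^i_1, ..., y^i_K) given covariate x = x^i."""
    rng = np.random.default_rng(rng)
    K = len(alpha)
    z = rng.multivariate_normal(np.zeros(K), R)  # step 1: z^i ~ N_K(0, R)
    u = rng.uniform(size=K)                      # step 2: independent uniforms
    p_zero = 1.0 / (1.0 + np.exp(-alpha * x))    # logit^{-1}(alpha_k x)
    # step 3: zero with probability p_zero, else the log-linear part
    return np.where(u > p_zero, np.log(sigma * z + beta * x), 0.0)
```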
If one is interested in the generation from the posterior of $(\alpha,\beta,\mu,\sigma,R)$ given the $y^i_k$, this is a harder problem, albeit feasible by Gibbs sampling or ABC.
31,165 | Inverse CDF sampling for a mixed distribution | The answer to the out-of-context short version:
"Inverting" a cdf that is not invertible in the mathematical sense (like your mixed distribution) is feasible, as described in most Monte Carlo textbooks. (Like ours, see Lemma 2.4.) If you define the generalised inverse
$$
F^{-}(u) = \inf\left\{ x\in\mathbb{R};\ F(x)\ge u \right\}
$$
then
$$
X \sim F \text{ is equivalent to } X=F^{-}(U)\text{ when } U\sim\mathcal{U}(0,1)\,.
$$
This means that, when $F(y)$ has a jump of $\theta$ at $y=0$, $F^{-}(u)=0$ for $u\le\theta$. In other words, if you draw a uniform $\mathcal{U}(0,1)$ and it ends up smaller than $\theta$, your generation of $X$ is $x=0$. Else, when $u>\theta$, you end up generating from the continuous part, namely the log-normal in your case. This means using a second uniform random generation, $v$, independent from the first uniform draw and setting $y=\exp(\mu+\sigma\Phi^{-1}(v))$ to obtain a log-normal generation.
This is almost what your R code
Y_hat <- rbinom(1, 1, theta[i, k])
if (Y_hat == 1)
  Y_hat <- rlnorm(1, mu[i, k], sigma[k])
is doing. You generate a Bernoulli with probability $\theta_k^i$ and if it is equal to $1$, you turn it into a log-normal. Since it is equal to $1$ with probability $\theta_k^i$, you should instead turn it into a log-normal simulation when it is equal to zero, ending up with the modified R code:
Y_hat <- rbinom(1, 1, theta[i, k])
if (Y_hat == 0)
  Y_hat <- rlnorm(1, mu[i, k], sigma[k])
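The same generalised-inverse scheme, written out in Python for a single $(\theta,\mu,\sigma)$ (my sketch, not the original code):

```python
import numpy as np
from statistics import NormalDist

def sample_mixed(theta, mu, sigma, size, rng=None):
    """Draw from F with an atom of mass theta at 0 and a
    log-normal(mu, sigma) part carrying the remaining mass."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=size)  # decides atom vs continuous part
    v = np.clip(rng.uniform(size=size), 1e-12, 1 - 1e-12)
    phi_inv = np.array([NormalDist().inv_cdf(t) for t in v])
    y = np.exp(mu + sigma * phi_inv)  # exp(mu + sigma * Phi^{-1}(v))
    return np.where(u <= theta, 0.0, y)
```

The first uniform implements $F^{-}(u)=0$ for $u\le\theta$; the second, independent uniform feeds the log-normal part, exactly as described above.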
"Inverting" a cdf that is not invertible in the mathematical sense (like your mixed distribution) is feasible, as described in most Monte Carlo textbook | Inverse CDF sampling for a mixed distribution
The answer to the out-of-context short version:
"Inverting" a cdf that is not invertible in the mathematical sense (like your mixed distribution) is feasible, as described in most Monte Carlo textbooks. (Like ours, see Lemma 2.4.) If you define the generalised inverse
$$
F^{-}(u) = \inf\left\{ x\in\mathbb{R};\ F(x)\ge u \right\}
$$
then
$$
X \sim F \text{ is equivalent to } X=F^{-}(U)\text{ when } U\sim\mathcal{U}(0,1)\,.
$$
This means that, when $F(y)$ has a jump of $\theta$ at $y=0$, $F^{-}(u)=0$ for $u\le\theta$. In other words, if you draw a uniform $\mathcal{U}(0,1)$ and it ends up smaller than $\theta$, your generation of $X$ is $x=0$. Else, when $u>\theta$, you end up generating from the continuous part, namely the log-normal in your case. This means using a second uniform random generation, $v$, independent from the first uniform draw and setting $y=\exp(\mu+\sigma\Phi^{-1}(v))$ to obtain a log-normal generation.
This is almost what your R code
Y_hat <- rbinom(1, 1, theta[i, k])
if (Y_hat == 1)
Y_hat <- rlnorm(1, mu[i, k], sigma[k])
is doing. You generate a Bernoulli with probability $\theta_k^i$ and if it is equal to $1$, you turn it into a log-normal. Since it is equal to 1 with probability $\theta_k^i$ you should instead turn it into a log-normal simulation when it is equal to zero instead, ending up with the modified R code:
Y_hat <- rbinom(1, 1, theta[i, k])
if (Y_hat == 0)
Y_hat <- rlnorm(1, mu[i, k], sigma[k]) | Inverse CDF sampling for a mixed distribution
The answer to the out-of-context short version:
"Inverting" a cdf that is not invertible in the mathematical sense (like your mixed distribution) is feasible, as described in most Monte Carlo textbook |
31,166 | Why is the mixtures of conjugate priors important? | Calculating posteriors with general/arbitrary priors directly may be a difficult task.
On the other hand, calculating posteriors with mixtures of conjugate priors is relatively simple, since a mixture of conjugate priors yields a posterior that is again a mixture of the corresponding component posteriors (with updated mixture weights).
[There are also many cases where some given prior may be quite well approximated by a finite mixture of conjugate priors -- this makes for a suitable approach in some situations, when that can give approximate posteriors that are sufficiently close to the exact one.]
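As a concrete sketch of this mixture-update point (my own example, not from the answer): for Bernoulli data with a two-component Beta mixture prior, each component updates conjugately, and the mixture weights are reweighted by each component's marginal likelihood of the data:

```python
import math

def beta_mixture_posterior(weights, params, heads, tails):
    """Posterior of a mixture-of-Beta prior for a Bernoulli probability:
    each Beta(a, b) becomes Beta(a + heads, b + tails), and each weight
    is multiplied by that component's marginal likelihood of the data."""
    def log_beta(a, b):
        return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    new_params = [(a + heads, b + tails) for a, b in params]
    log_ml = [log_beta(a + heads, b + tails) - log_beta(a, b) for a, b in params]
    unnorm = [w * math.exp(l) for w, l in zip(weights, log_ml)]
    total = sum(unnorm)
    return [u / total for u in unnorm], new_params
```

Everything stays in closed form: no numerical integration is needed at any step.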
31,167 | Why are mixtures of conjugate priors important? | To extend @Glen_b's answer just slightly, one implication is that we can get a closed-form approximation to the posterior when a non-conjugate prior is used, by first approximating the non-conjugate prior with a mixture of conjugate priors and then directly solving for the posterior of the approximation.
However, in general this method seems quite tricky to use. While it's true that you can make the mixture prior arbitrarily close to the non-conjugate prior, there will generally be some error in any finite approximation. Small errors in the prior can easily propagate to huge errors in the posterior. For example, if the prior is well approximated except on the extreme tails, but the data provides strong evidence that the parameter values are in the extreme tails, these errors on the extreme tails of the prior will lead to errors in high probability regions of the posterior.
31,168 | Time series forecasting accuracy measures: MAPE and MASE | MASE compares the forecasts to those obtained from a naive method. The naive method turns out to be very poor for white noise, but not so bad for an AR(1) with $\phi=0.7$. Consequently, the forecasts for the AR have a worse MASE than the forecasts for the white noise.
We can make this more precise as follows.
Let $y_1,y_2,\dots,y_{T}$ be a non-seasonal time series process observed to time $T$. Then MASE is defined as
$$
\text{MASE} = \frac{1}{K}\sum_{k=1}^K |y_{T+k} - \hat{y}_{T+k|T}| / Q
$$
where $Q$ is a scaling factor equal to the in-sample one-step naive forecast error,
$$
Q = \frac{1}{T-1} \sum_{t=2}^T |y_t-y_{t-1}|,
$$
and $\hat{y}_{T+k|T}$ is an estimate of $y_{T+k}$ given the observations $y_1,\dots,y_T$.
MASE provides a measure of how accurate forecasts are for a given series and the $Q$ scaling is intended to allow comparisons between series of different scales.
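The definition can be computed directly; here is a minimal sketch (Python for illustration; the helper and toy numbers are my own, not from the answer). `history` plays the role of $y_1,\dots,y_T$, and $Q$ is the in-sample mean absolute one-step naive error.

```python
def mase(history, actuals, forecasts):
    """Mean Absolute Scaled Error: mean |y_{T+k} - yhat_{T+k|T}|
    divided by Q, the in-sample one-step naive forecast error."""
    # Q = (1/(T-1)) * sum_t |y_t - y_{t-1}| over the training sample
    q = sum(abs(b - a) for a, b in zip(history, history[1:])) / (len(history) - 1)
    errors = [abs(y - f) for y, f in zip(actuals, forecasts)]
    return sum(errors) / len(errors) / q

# Toy series: Q = (2 + 1 + 3 + 1) / 4 = 1.75; forecast errors are 1 and 2
m = mase(history=[1, 3, 2, 5, 4], actuals=[6, 7], forecasts=[5, 5])
```

Here the out-of-sample mean absolute error is 1.5, so the MASE is $1.5/1.75 = 6/7$.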
Suppose $y_t$ is standard Gaussian white noise $N(0,1)$. Then the data has variance 1, and the optimal forecast is $\hat{y}_{T+k|T}=0$ with forecast variance $v_{T+k|T} = 1$. Therefore $\text{E}|y_{T+k} - \hat{y}_{T+k|T}| = \sqrt{2/\pi}$ and $y_t-y_{t-1}\sim N(0,2)$. Thus the scaling factor has mean $\text{E}(Q) = 2/\sqrt{\pi}$, so that MASE has asymptotic mean $1/\sqrt{2}\approx 0.707$ (as $T\rightarrow\infty$). Note also that the long-term forecast variance $v_{T+\infty|T}=1$ is less than the in-sample naive forecast variance of 2.
But suppose $y_t$ is an AR(1) process defined as $y_t = \phi y_{t-1} + e_t$ where $e_t$ is Gaussian white noise $N(0,\sigma^2)$. Then the data has variance $\sigma^2/(1-\phi^2)$, and the optimal forecast is $\hat{y}_{T+k|T} = \phi^k y_{T}$ with variance $v_{T+k|T} = \sigma^2(1-\phi^{2k})/(1-\phi^2)$. Therefore
$\text{E}|y_{T+k} - \hat{y}_{T+k|T}| = \sigma\sqrt{2(1-\phi^{2k})/[(1-\phi^2)\pi]}$ and $y_t-y_{t-1} \sim N(0, 2\sigma^2/(1+\phi))$.
Thus the scaling factor has mean $\text{E}(Q) = 2\sigma/\sqrt{\pi(1+\phi)}$.
For large $k$, if $\sigma^2 = 1-\phi^2$ then $v_{T+k|T} \approx 1$, $\text{E}(Q) \approx 2\sqrt{(1-\phi)/\pi}$ and $\text{E}|y_{T+k} - \hat{y}_{T+k|T}| \approx \sqrt{2/\pi}$. So the asymptotic MASE (as $K\rightarrow\infty$ and $T\rightarrow\infty$) has mean of
$$1 / \sqrt{2(1-\phi)}$$
which is approximately 1.29 for $\phi=0.7$.
31,169 | Using topic words generated by LDA to represent a document | Can LDA be used to detect the topic of A SINGLE document?
Yes, in its particular representation of 'topic,' and given a training corpus of (usually related) documents.
LDA represents topics as distributions over words, and documents as distributions over topics. That is, the very purpose of LDA is to arrive at a probabilistic representation of each document as a set of topics. For example, the LDA implementation in gensim can return this representation for any given document.
But this depends on the other documents in the corpus: Any given document will have a different representation if analyzed as part of a different corpus.
That's not typically considered a shortcoming: Most applications of LDA focus on related documents. The paper introducing LDA applies it to two corpora, one of Associated Press articles and one of scientific article abstracts. Edwin Chen's nicely approachable blog post applies LDA to a tranche of emails from Sarah Palin's time as Alaska governor.
If your application demands separating documents into known, mutually exclusive classes, then LDA-derived topics can be used as features for classification. Indeed, the initial paper does just that with the AP corpus, with good results.
Relatedly, Chen's demonstration doesn't sort documents into exclusive classes, but his documents mostly concentrate their probability on single LDA topics. As David Blei explains in this video lecture, the Dirichlet priors can be chosen to favor sparsity. More simply, "a document is penalized for using many topics," as his slides put it. This seems the closest LDA can get to a single, unsupervised topic, but certainly doesn't guarantee every document will be represented as such.
31,170 | Calculate inflation observed and expected p-values from uniform distribution in QQ plot | There are different ways we can test deviation from any distribution (uniform in your case):
(1) Non-parametric tests:
You can use the Kolmogorov-Smirnov test to see whether the distribution of the observed values fits the expected one.
R has the ks.test function that performs the Kolmogorov-Smirnov test.
pvalue <- runif(100, min=0, max=1)
ks.test(pvalue, "punif", 0, 1)
One-sample Kolmogorov-Smirnov test
data: pvalue
D = 0.0647, p-value = 0.7974
alternative hypothesis: two-sided
pvalue1 <- rnorm (100, 0.5, 0.1)
ks.test(pvalue1, "punif", 0, 1)
One-sample Kolmogorov-Smirnov test
data: pvalue1
D = 0.2861, p-value = 1.548e-07
alternative hypothesis: two-sided
(2) Chi-square Goodness-of-Fit Test
In this case we categorize the data. We note the observed and expected frequencies in each cell or category. For the continuous case, the data might be categorized by creating artificial intervals (bins).
# example 1
pvalue <- runif(100, min=0, max=1)
tb.pvalue <- table (cut(pvalue,breaks= seq(0,1,0.1)))
chisq.test(tb.pvalue, p=rep(0.1, 10))
Chi-squared test for given probabilities
data: tb.pvalue
X-squared = 6.4, df = 9, p-value = 0.6993
# example 2
pvalue1 <- rnorm (100, 0.5, 0.1)
tb.pvalue1 <- table (cut(pvalue1,breaks= seq(0,1,0.1)))
chisq.test(tb.pvalue1, p=rep(0.1, 10))
Chi-squared test for given probabilities
data: tb.pvalue1
X-squared = 162, df = 9, p-value < 2.2e-16
(3) Lambda
If you are doing a genome-wide association study (GWAS) you might want to calculate the genomic inflation factor, also known as lambda (λ). This statistic is popular in the statistical genetics community. λ is defined as the median of the resulting chi-squared test statistics divided by the expected median of the chi-squared distribution. The median of a chi-squared distribution with one degree of freedom is 0.4549364. A λ value can be calculated from z-scores, chi-square statistics, or p-values, depending on the output you have from the association analysis. Sometimes a proportion of p-values from the upper tail is discarded.
For p-values you can do this by:
set.seed(1234)
pvalue <- runif(1000, min=0, max=1)
chisq <- qchisq(1-pvalue,1)
# For z-scores as association, just square them
# chisq <- data$z^2
#For chi-squared values, keep as is
#chisq <- data$chisq
lambda = median(chisq)/qchisq(0.5,1)
lambda
[1] 0.9532617
set.seed(1121)
pvalue1 <- rnorm (1000, 0.4, 0.1)
chisq1 <- qchisq(1-pvalue1,1)
lambda1 = median(chisq1)/qchisq(0.5,1)
lambda1
[1] 1.567119
If your analysis results follow the expected chi-squared distribution (no inflation), the expected λ value is 1. If the λ value is greater than 1, this may be evidence of some systematic bias that needs to be corrected in your analysis.
Lambda can also be estimated using regression analysis.
set.seed(1234)
pvalue <- runif(1000, min=0, max=1)
data <- qchisq(pvalue, 1, lower.tail = FALSE)
data <- sort(data)
ppoi <- ppoints(data) #Generates the sequence of probability points
ppoi <- sort(qchisq(ppoi, df = 1, lower.tail = FALSE))
out <- list()
s <- summary(lm(data ~ 0 + ppoi))$coeff
out$estimate <- s[1, 1] # lambda
out$se <- s[1, 2]
# median method
out$estimate <- median(data, na.rm = TRUE)/qchisq(0.5, 1)
Another method to calculate lambda is 'KS' (optimizing the fit of the chi-squared 1-df distribution by use of the Kolmogorov-Smirnov test).
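The median-based λ above can also be computed outside R; here is a stdlib-only Python sketch (illustrative, my own translation), using the identity $\chi^2_1 = Z^2$ so that qchisq$(1-p, 1) = [\Phi^{-1}(1 - p/2)]^2$:

```python
import random
from statistics import NormalDist, median

def genomic_lambda(pvalues):
    """Median-based genomic inflation factor: map two-sided p-values
    to 1-df chi-squared statistics via chi2_1 = z^2, then divide the
    median by qchisq(0.5, 1) ~= 0.4549364."""
    z = NormalDist()
    chisq = [z.inv_cdf(1 - p / 2) ** 2 for p in pvalues]
    return median(chisq) / 0.4549364

random.seed(1234)
null_p = [random.random() for _ in range(10000)]
lam = genomic_lambda(null_p)   # close to 1 for uniform (null) p-values
```

A single p-value of 0.5 maps to the median chi-squared statistic itself, so `genomic_lambda([0.5])` is essentially 1.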
(1) Non-parametric tests:
You can use Kolmogorov-Smirnov Tests to see distribution of observed value fits t | Calculate inflation observed and expected p-values from uniform distribution in QQ plot
There are different ways we can test deviation from any distribution (uniform in your case):
(1) Non-parametric tests:
You can use Kolmogorov-Smirnov Tests to see distribution of observed value fits to expected.
R has ks.test function that can perform Kolmogorov-Smirnov test.
pvalue <- runif(100, min=0, max=1)
ks.test(pvalue, "punif", 0, 1)
One-sample Kolmogorov-Smirnov test
data: pvalue
D = 0.0647, p-value = 0.7974
alternative hypothesis: two-sided
pvalue1 <- rnorm (100, 0.5, 0.1)
ks.test(pvalue1, "punif", 0, 1)
One-sample Kolmogorov-Smirnov test
data: pvalue1
D = 0.2861, p-value = 1.548e-07
alternative hypothesis: two-sided
(2) Chi-square Goodness-of-Fit Test
In this case we categorize the data. We note the observed and expected frequencies in each
cell or category. For the continuous case, the data might be categorized by creating artificial intervals (bins).
# example 1
pvalue <- runif(100, min=0, max=1)
tb.pvalue <- table (cut(pvalue,breaks= seq(0,1,0.1)))
chisq.test(tb.pvalue, p=rep(0.1, 10))
Chi-squared test for given probabilities
data: tb.pvalue
X-squared = 6.4, df = 9, p-value = 0.6993
# example 2
pvalue1 <- rnorm (100, 0.5, 0.1)
tb.pvalue1 <- table (cut(pvalue1,breaks= seq(0,1,0.1)))
chisq.test(tb.pvalue1, p=rep(0.1, 10))
Chi-squared test for given probabilities
data: tb.pvalue1
X-squared = 162, df = 9, p-value < 2.2e-16
(3) Lambda
If you are doing genome-wide association study (GWAS) you might want to calculate the genomic inflation factor, also known as lambda(λ) (also see). This statistics is popular in the statistical genetics community. By definition, λ is defined as the median of the resulting chi-squared test statistics divided by the expected median of the chi-squared distribution. The median of a chi-squared distribution with one degree of freedom is 0.4549364. A λ value can be calculated from z-scores, chi-square statistics, or p-values, depending on the output you have from the association analysis. Sometime proportion of p-value from upper tail is discarded.
For p-values you can do this by:
set.seed(1234)
pvalue <- runif(1000, min=0, max=1)
chisq <- qchisq(1-pvalue,1)
# For z-scores as association, just square them
# chisq <- data$z^2
#For chi-squared values, keep as is
#chisq <- data$chisq
lambda = median(chisq)/qchisq(0.5,1)
lambda
[1] 0.9532617
set.seed(1121)
pvalue1 <- rnorm (1000, 0.4, 0.1)
chisq1 <- qchisq(1-pvalue1,1)
lambda1 = median(chisq1)/qchisq(0.5,1)
lambda1
[1] 1.567119
If analysis results your data follows the normal chi-squared distribution (no inflation), the expected λ value is 1. If the λ value is greater than 1, then this may be evidence for some systematic bias that needs to be corrected in your analysis.
Lambda can also be estimated using Regression analysis.
set.seed(1234)
pvalue <- runif(1000, min=0, max=1)
data <- qchisq(pvalue, 1, lower.tail = FALSE)
data <- sort(data)
ppoi <- ppoints(data) #Generates the sequence of probability points
ppoi <- sort(qchisq(ppoi, df = 1, lower.tail = FALSE))
out <- list()
s <- summary(lm(data ~ 0 + ppoi))$coeff
out$estimate <- s[1, 1] # lambda
out$se <- s[1, 2]
# median method
out$estimate <- median(data, na.rm = TRUE)/qchisq(0.5, 1)
Another method to calculate lambda is using 'KS' (optimizing the chi2.1df distribution fit by use of Kolmogorov-Smirnov test). | Calculate inflation observed and expected p-values from uniform distribution in QQ plot
There are different ways we can test deviation from any distribution (uniform in your case):
(1) Non-parametric tests:
You can use Kolmogorov-Smirnov Tests to see distribution of observed value fits t |
31,171 | chi squared test or Z test? [duplicate] | Yes, it's possible to do a chi-square test on this.
Specifically, this is the chi-square goodness of fit test. To do it correctly you set up two cells (one for dead, one for not dead), like so:
Dead NotDead Total
Obs 19 101 120
Exp 12 108 120
The chi-square is $\sum_i (O_i-E_i)^2/E_i$ and has $k-1$ df, where $k$ is the number of categories (k=2 in this case, meaning 1 df).
If you use the same information/approximations in both (including the same continuity corrections), the chi-square statistic will be the square of the two-tailed one-sample proportions Z statistic and will reject exactly the same cases. (Sometimes the p-values differ a little because different approximations/statistics are used.)
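A quick numerical check of this equivalence on the table above (a Python sketch of my own, with no continuity correction in either statistic):

```python
import math

# Observed: 19 dead out of n = 120; hypothesised death rate p0 = 0.10
n, dead, p0 = 120, 19, 0.10
obs = [dead, n - dead]                 # [19, 101]
expected = [n * p0, n * (1 - p0)]      # [12, 108]

# Chi-square goodness-of-fit statistic, sum (O - E)^2 / E
chi2 = sum((o - e) ** 2 / e for o, e in zip(obs, expected))

# One-sample proportions Z statistic
phat = dead / n
z = (phat - p0) / math.sqrt(p0 * (1 - p0) / n)
```

Both reduce to $\Delta^2/(n p_0 q_0) = 49/10.8 \approx 4.537$ with $\Delta = 7$, confirming that the chi-square statistic equals $Z^2$.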
31,172 | Unbalanced data or Balanced data | The differences are mostly of a historical nature, related to the matrix algebra involved. However, this was only a concern back when econometrics had to be done by pen and paper; today these technicalities barely matter. A discussion of this was provided in an earlier answer by StasK which you can find here.
The main concern with unbalanced panel data is the question why the data is unbalanced. If observations are missing at random then this is not a problem - for a good explanation of what "missing at random" means, have a look at this answer by Peter Flom. If the attrition of firms in your data over time is not random, i.e. it is related to the idiosyncratic errors $u_{it}$, then this sample selection may bias your estimates. For an example of such a case see here (the introductory textbook by Wooldridge, the example is also about panel data for firms as in your case).
A simple test for such sample selection was proposed by Nijman and Verbeek (1992) for fixed and random effects models. Generate a selection indicator $s_{it}$ which equals one if a firm is observed in a given year and zero otherwise. Add the lagged selection indicator $s_{i,t-1}$ to your model and estimate it via fixed effects using the whole data. Then you test whether $s_{i,t-1}$ is significant. The hypothesis is that the error $u_{it}$ is uncorrelated with the lagged selection indicator, so $s_{i,t-1}$ should be insignificant in order to conclude that attrition is random.
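To make the bookkeeping concrete, here is a small sketch (Python, with a hypothetical three-firm panel of my own invention) of constructing $s_{it}$ and its lag; the lagged indicator would then be added as a regressor to the fixed-effects model on the observed rows:

```python
# Hypothetical unbalanced panel: the set of observed (firm, year) pairs
observed = {("A", 1), ("A", 2), ("A", 3),
            ("B", 1), ("B", 3),        # firm B is missing in year 2
            ("C", 2), ("C", 3)}        # firm C enters in year 2

firms, years = ["A", "B", "C"], [1, 2, 3]

# Selection indicator s_it = 1 if firm i is observed in year t
s = {(i, t): int((i, t) in observed) for i in firms for t in years}

# Lagged indicator s_{i,t-1}, defined for t >= 2 on observed rows only;
# this is the extra regressor in the Nijman-Verbeek check
s_lag = {(i, t): s[(i, t - 1)] for i in firms for t in years[1:]
         if (i, t) in observed}
```

Here firm B's year-3 row and firm C's entry row carry $s_{i,t-1}=0$; if such rows systematically differ in their errors, the lagged indicator will show up as significant.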
If you want to learn more about this topic, Wooldridge (2010) "Econometric Analysis of Cross-Section and Panel Data" devotes an entire chapter (ch. 19) to sample selection and attrition.
31,173 | Unbalanced data or Balanced data | It's better to use all available data. If your data is unbalanced, then it's not cool to remove the data to make the panel balanced. Instead, you apply methods which handle unbalanced panels.
31,174 | How to properly handle Infs in a statistical function? | In this case the NaN (not a number) is returned because the calculation of the exponential overflows in double precision arithmetic.
An algebraically equivalent expression, expanded in a MacLaurin series around $0$, is
$$\frac{\exp(x)}{1+\exp(x)} = \frac{1}{1+\exp(-x)} = 1 - \exp(-x) + \exp(-2x) - \cdots.$$
Because this is an alternating series, the error made in dropping any term is no greater than the size of the next term. Thus when $x \gt 710$, the error is no greater than $\exp(-710) \approx 10^{-308} \approx 2^{-1024}$ relative to the true value. That is far more precise than any statistical calculation needs to be, so you're fine replacing the return value by $1$ in this situation.
Interestingly, R will not produce a NaN when the exponential underflows. Thus you could just choose the more reliable version of the calculation, depending on the sign of x, as in
f <- function(x) ifelse(x < 0, exp(x) / (1 + exp(x)), 1 / (1 + exp(-x)))
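The same sign-based guard carries over directly to other languages; a Python sketch (illustrative, mirroring the R ifelse() version above):

```python
import math

def inv_logit(x):
    """Branch on the sign of x so the exponential can only underflow
    (harmlessly), never overflow."""
    if x < 0:
        e = math.exp(x)                  # x < 0, so exp(x) <= 1
        return e / (1.0 + e)
    return 1.0 / (1.0 + math.exp(-x))    # -x <= 0, so exp(-x) <= 1

# The naive exp(710) overflows double precision (Python raises):
try:
    math.exp(710)
    overflowed = False
except OverflowError:
    overflowed = True
```

The guarded version returns exactly 1 for $x = 710$, where the naive $e^x/(1+e^x)$ would overflow, and a tiny positive value (not 0 or NaN) for $x = -710$.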
This issue shows up in almost all computing platforms (I have yet to see an exception) and they will vary in how they handle overflows and underflows. Exponentials are notorious for creating these kinds of problems, but they are not alone. Therefore it's not enough just to have a solution in R: a good statistician understands the principles of computer arithmetic and knows how to use these to detect and work around the idiosyncrasies of her computing environment.
31,175 | How to properly handle Infs in a statistical function? | Others have already discussed the computational issues, so I'll leave that to them. Since I assume you're working with R, I thought I'd point out the boot package comes with its own inverse logit function for you to use that is pretty computationally stable:
require(boot)
inv.logit(710)
seems to evaluate to 1 as desired.
31,176 | In general, would you always prefer feasible GLS to OLS? | The answer to the question in the title is "Not really".
We have a linear regression model (matrix notation) $y = X\beta + u$, where $\operatorname {Var}(u) = \sigma^2V$, with $V$ unknown. Then the Feasible Generalized Least Squares estimator (FGLS) is
$$\hat \beta_{FGLS} = \left(X'\hat V^{-1}X\right)^{-1}X'\hat V^{-1}y$$
What are the finite-sample properties of this estimator? To quote Hayashi (2000), p. 59:
"If $V$ is estimated from the sample, $\hat V$ becomes a random variable, which affects the distribution of the GLS estimator. Very little is known about the finite-sample properties of the FGLS estimator".
This has not changed much in the intervening years, although for example, Ullah, A., & Huang, X. (2006). Finite Sample Properties of FGLS Estimator for Random-Effects Model under Non-Normality. ch. 3 in Contributions to Economic Analysis, 274, 67-89., provide (approximate) results for the Bias and MSE of the FGLS estimator in the context of panel-data under normality and non-normality of the errors (under normality FGLS is approximately unbiased).
Asymptotically, with only heteroskedasticity present, a simple version of FGLS, the Weighted Least Squares Estimator (WLS) has been proven to be more efficient than OLS, even when the $V$ is estimated from the sample, but under the assumption that the functional form of heteroskedasticity is correctly specified -if it is not, then the finite-sample reality may favor the OLS estimator because it estimates fewer population parameters.
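This trade-off is easy to see in a small simulation (Python, illustrative; my own toy design, with the variance function deliberately correctly specified as a power of $x$), where feasible WLS estimated in three steps is visibly more efficient than OLS under strong heteroskedasticity:

```python
import math
import random
from statistics import mean, variance

def slope(x, y, w):
    # Weighted least squares through the origin: b = sum(w x y) / sum(w x^2)
    return (sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
            / sum(wi * xi * xi for wi, xi in zip(w, x)))

random.seed(42)
ols_b, fgls_b = [], []
for _ in range(300):
    x = [random.uniform(1, 4) for _ in range(100)]
    y = [2 * xi + xi ** 2 * random.gauss(0, 1) for xi in x]   # sd(u_i) = x_i^2

    b_ols = slope(x, y, [1.0] * len(x))                       # step 1: OLS
    # step 2: estimate gamma in Var(u_i) proportional to x_i^gamma
    lx = [math.log(xi) for xi in x]
    le = [math.log((yi - b_ols * xi) ** 2) for xi, yi in zip(x, y)]
    mx, me = mean(lx), mean(le)
    g = (sum((a - mx) * (b - me) for a, b in zip(lx, le))
         / sum((a - mx) ** 2 for a in lx))
    # step 3: WLS with the estimated weights 1 / x^gamma-hat
    fgls_b.append(slope(x, y, [xi ** -g for xi in x]))
    ols_b.append(b_ols)
```

Across replications, the sampling variance of the feasible-WLS slope is markedly smaller than that of OLS here; with the variance function misspecified, the gain can shrink or reverse, which is exactly the caveat above.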
So, as is usually the case, no clear-cut rule of thumb can be offered.
31,177 | Non-conjugate prior | Conjugacy is nice because it means that if you can deal with the pdf in the prior, you should be able to do the same with the posterior (since they're of the same form) -- but of course sometimes you want a prior that's not conjugate.
How does tractability of integrals come up in a practical Bayesian calculation?
Imagine we wish to make some inference about a parameter $\theta$:
$p(\theta|\mathbf x) \propto p(\mathbf x|\theta)\cdot p(\theta)$
where the first term on the right is the likelihood and the second term is the prior. The issue is basically to evaluate the constant of proportionality required to get a density on the right; and then you may want to be able to do various things with it (e.g. draw it; find summary statistics - its mean, or its mode, or some quantiles; perhaps even sample from it). Anyway, being able to find that integral in some way would be useful, and perhaps the most natural and obvious thing to do is attempt to find it 'algebraically' - that is, using the usual bag of tricks for evaluating integrals.
Usually, what we really mean by intractable is 'analytically intractable', but sometimes it's used a little more loosely. In some sense, "most" integrals are intractable, for various values of 'intractable' (scroll down to the discussion of integrals).
Example
As Zen points out for even that very simple example of a binomial model, there's no guarantee you can do the integration for the posterior on the parameter algebraically.
Here's a different example (a simplified version of something I've seen come up):
Consider a Bayesian posterior for the variance, $\sigma^2$ of a normal distribution with known mean $\mu$. The conjugate prior is inverse gamma, but what if we wanted a lognormal prior?
Then we'd effectively have an integral whose integrand is of the form
$$p(\sigma^2|\mu,\mathbf y)\propto p(\mathbf y|\mu,\sigma^2)\cdot p(\sigma^2)$$
where again the first term on the right of the $\propto$ is the likelihood and the second is the prior.
That likelihood is of the form:
$$f(\sigma^2; \alpha, \beta)= \frac{\beta^\alpha}{\Gamma(\alpha)}(\sigma^2)^{-\alpha - 1}\exp\left(-\frac{\beta}{\sigma^2}\right)$$
where $\alpha$ and $\beta$ are simple functions of the data, $y$, the sample size, $n$, and $\mu$, and the prior is of the form:
$$f(\sigma^2;\theta,\tau) = \frac{1}{\sigma^2 \tau \sqrt{2 \pi}}\, e^{-\frac{(\ln \sigma^2 - \theta)^2}{2\tau^2}}$$
... and the product of those is not at all algebraically "nice" to try to deal with. For example, Wolfram Alpha can't do the integral*, and it's more likely to get something like this out in a reasonable time than I am.
* (specifically, we can drop the constants and combine terms, and put $x$ for $\sigma^2$ to supply $x^{-\alpha - 2} \exp(-\frac{\beta}{x}-\frac{(\ln x - \theta)^2}{2\tau^2})$ for the integrand -- and the indefinite integral of that is what Wolfram Alpha can't do. Maybe there's a way to get it - or something else - to do the definite integral on $(0,\infty)$, though.)
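(As an aside, while the indefinite integral is analytically intractable, the definite integral on $(0,\infty)$ is straightforward to attack numerically; here's a Python sketch with arbitrary made-up values for $\alpha$, $\beta$, $\theta$, $\tau$:)

```python
import numpy as np
from scipy.integrate import quad

alpha, beta, theta, tau = 3.0, 2.0, 0.0, 1.0   # made-up constants for illustration

def integrand(x):
    x = np.float64(x)                           # avoid Python-float overflow exceptions near 0
    with np.errstate(all="ignore"):
        v = x**(-alpha - 2) * np.exp(-beta / x - (np.log(x) - theta)**2 / (2 * tau**2))
    return float(np.nan_to_num(v))              # the true limit at both endpoints is 0

Z, err = quad(integrand, 0.0, np.inf)           # the normalizing constant, up to the dropped factors
print(Z, err)
```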
Discussion of some approaches to analytical intractability
If it weren't for the fact that people so often tend to choose analytically 'nice' priors (especially when teaching the subject, but also frequently in real problems), it would be a problem that comes up almost every time. That's not to say that choosing analytically nice priors is wrong - usually we only have a vague sense of our prior information (I rarely have a specific prior distribution in mind, though I may well have some notion about possible or likely values - I may have a broad sense of where I want most of the probability on my prior to be, or very roughly where the mean might be, for example - if I don't know what specific functional-form I want for my prior and a conjugate prior can reflect the information I want to have in my prior, that may often be a quite reasonable choice).
However in a practical sense it is still quite possible to deal with this issue in a number of ways. We can, for example, approximate the posterior to varying degrees of accuracy. Here are a few examples (by no means exhaustive): (i) by approximating that desired prior in any number of ways - perhaps by a mixture of conjugate or otherwise tractable priors - yielding a corresponding mixture for the posterior, or (ii) by suitable numerical integration (which in the univariate case can work surprisingly well), or (iii) we can simulate from this distribution without knowing that integral - perhaps via rejection sampling, or via a Metropolis-Hastings type Markov chain Monte Carlo algorithm, as long as we have a suitable bounding function or approximant, respectively.
In the past, common approaches to this issue tended to include numerical integration (or Monte Carlo integration in higher dimensions) and Laplace approximation. In fact these are still used on many problems, but we have many other tools.
Given so much Bayesian work is done using various versions of MCMC and related sampling approaches these days, analytical tractability is much less of an issue than it might once have been, even with problems with large numbers of parameters - I've seen all three of the approaches I've mentioned just above used in that context; this means we're pretty much free to choose just the prior we want, on the basis of how well it reflects our prior knowledge, or for its ability to regularize the inference - for its suitability for our inference rather than ease of algebraic manipulation. So you see, for example, Andrew Gelman advocating the use of half-Cauchy and half-t priors on variance parameters in hierarchical models, and weakly-informative Cauchy priors in logistic regression (however, that paper is not using MCMC, but rather achieving approximate inference via E-M coupled with the usual iteratively reweighted least squares for logistic regression).
31,178 | When n increases the t-value increases in a hypothesis test, but the t-table is just the opposite. Why? | These are two different phenomena:
$t$-statistic
Holding all else constant, if $n$ increases the $t$-value must increase as a simple matter of arithmetic. Consider the fraction in the denominator, $\hat\sigma/\sqrt{n}$: if $n$ gets bigger, then $\sqrt n$ will get bigger as well (albeit more slowly), because the square root is a monotonic transformation. Since the square root of $n$ is the denominator of that fraction, as it gets bigger, the fraction will get smaller. However, this fraction is, in turn, a denominator. As a result, as that denominator gets smaller, the second fraction gets bigger. Thus, the $t$-value will get bigger as $n$ gets bigger. (Assuming, again, that $\hat\sigma$ and $(\bar x - \mu_{\rm null})$ remain the same.)
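A quick numeric check of that arithmetic (a Python sketch; the numbers are invented, holding $\bar x - \mu_{\rm null} = 0.2$ and $\hat\sigma = 1$ fixed while $n$ grows):

```python
import math

diff, s = 0.2, 1.0                 # fixed (x-bar minus mu_null) and sigma-hat
ts = [diff / (s / math.sqrt(n)) for n in (10, 40, 160)]
print([round(t, 3) for t in ts])   # [0.632, 1.265, 2.53]
```

Quadrupling $n$ doubles $\sqrt n$, so the $t$-value doubles each step here.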
What does this mean conceptually? Well, the more data we have / the closer the sample size gets to the population size, the less the sample mean will tend to vary from the population mean due to sampling error (cf., the law of large numbers). With a small, finite population, this is easy to see, and although it may not be as intuitive, the same holds true if the population is infinite. Since the sample mean ($\bar x$) shouldn't fluctuate very far from the reference (null) value, we can be more confident that the observed distance of the sample mean from the null is because the null value is not actually the mean of the population from which the sample was drawn. More accurately, it becomes less and less probable to have found a sample mean that far or further away from the null value, if the null value really were the mean of the population from which the sample was drawn.
$t$-distribution
When you look at a $t$-table (say, in the back of a statistics book), what you are actually looking at is a table of critical values. That is, the value that the observed $t$ statistic must be greater than in order for the test to be 'significant' at that alpha. (Typically, these are listed for a small number of possible alphas: $\alpha=\{.10,\ .05,\ .01,\ .001\}$.) I suspect if you look closely at such tables, they are actually thinking in terms of the degrees of freedom associated with the $t$ statistic in question. Note that the degrees of freedom for the $t$-statistic is a function of $n$, being $df = n-2$ for a two group $t$-test, or $df = n-1$ for a one group $t$-test (your example seems to be the latter). This has to do with the fact that the $t$-distribution will converge to a standard normal distribution as the degrees of freedom approaches infinity.
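You can see this convergence directly; as a sketch (Python/scipy standing in for a printed table), the upper two-sided 5% critical values shrink toward the normal quantile of about 1.96 as df grows:

```python
from scipy import stats

for df in (2, 5, 20, 100, 1000):
    print(df, round(stats.t.ppf(0.975, df), 3))   # critical value at alpha = .05, two-sided
print(round(stats.norm.ppf(0.975), 3))            # the limiting normal value, about 1.96
```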
The way to understand this conceptually is to think about why you need to use the $t$-distribution in the first place. You know what the reference mean value is that you are interested in and the sample mean that you observed. If the population from which the samples were drawn was normally distributed (which people are often implicitly assuming), then we know that the sampling distribution of the mean will be normally distributed as well. So why bother with the $t$-distribution? The answer is that we are not sure what the standard deviation of the population is. (If we were sure, we really would use the normal distribution, i.e., the $z$-test instead of the $t$-test.) So we use our sample standard deviation, $\hat\sigma$, as a proxy for the unknown population value. However, the more data we have, the more sure we can be that $\hat\sigma$ is in fact approximately the right value. As $n$ approaches the population size (and/or infinity), we can be sure that $\hat\sigma$ in fact is exactly the right value. Thus, the $t$-distribution becomes the normal distribution.
31,179 | When n increases the t-value increases in a hypothesis test, but the t-table is just the opposite. Why? | Well, the short answer is that's what falls out of the math. The long answer would be to do the math$^3$. Instead I'll try to rephrase gung's explanation that these are two different (though related) things.
You've collected a sample $X_1...X_n$ that is normally distributed with unknown variance$^4$ and want to know if its average is different from some specified value $\mu$. The way you do this is to compute a value that represents how "different" your observations are from the assumption that $\bar{x}=\mu$. Thus the formula for the $t$-statistic$^1$ you presented. Probably the most intuitive way of thinking about why this increases with $n$ is that you have more "confidence" that things are different when you have more samples.
Moving on, this value follows a $t$-distribution$^2$ with $n-1$ degrees of freedom. The way to think about this is that the $t$-distribution is slightly different depending on your sample size. You can see plots of this distribution with 2, 3, 5, and 20 df below.
You'll notice that higher df has more mass in the center and less in the tails of the distribution (I have no intuitive reasoning for why the distributions behave this way, sorry). The critical $t$-value is the x-location where the area under the curve equals a somewhat arbitrary value of your choosing (traditionally 0.05). These values are marked on the graph as points. So for the green curve (df=5), the area under the curve to the left of the left green dot = 0.025, and the area under the curve to the right of the right green dot = 0.025, for a total of 0.05.
This is why the critical $t$-values decrease with increasing degrees of freedom - as df increases, the critical values must get closer to zero to keep the same area under the curve. And as gung mentioned, as df goes to $\infty$, the curve and critical values will approach that of a standard normal distribution.
So now you have your critical value and your $t$-statistic, and can perform the $t$-test. If your $t$-statistic is greater than the critical value, you then can make the statement that if $\bar{x}=\mu$ really was true, then you would have observed your sample less than 5% (or whatever arbitrary percentage you chose to calculate the critical value for) of the time.
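That decision rule is easy to sketch in code (Python/scipy here, with an invented sample; the point is the equivalence $|t| > t_{\rm crit} \iff p < \alpha$ for a two-sided test):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(5.3, 1.0, 25)              # invented sample; H0: mu = 5.0
res = stats.ttest_1samp(x, popmean=5.0)   # two-sided by default
crit = stats.t.ppf(0.975, df=len(x) - 1)  # critical value at alpha = 0.05
reject = abs(res.statistic) > crit
print(res.statistic, res.pvalue, reject)
```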
$^1$ Why do we calculate this particular value out of the many arbitrary values we could calculate? Well, this is what falls out of a calculation of a likelihood ratio test$^3$.
If you knew the variance of the samples beforehand, the $z$-statistic (following a normal distribution) mentioned by gung would fall out of this calculation instead, and you would perform a $z$-test
$^2$ Again, this is what falls out of the math$^3$
$^3$ First good result from google: http://math.arizona.edu/~jwatkins/ttest.pdf
$^4$ It turns out the t-test works even if that assumption is not met, but that's a digression.
31,180 | How to best visualize one-sample test? | Something like this?
Or were you after some interval for the median, like you get with notched boxplots (but suited to a one sample comparison, naturally)?
Here's an example of that:
This uses the interval suggested in McGill et al (the one in the references of ?boxplot.stats). One could actually use notches, but that might increase the chance that it is interpreted instead as an ordinary notched boxplot.
Of course if you need something to more directly replicate the signed rank test, various things can be constructed that do that, which could even include the interval for the pseudo-median (i.e. the one-sample Hodges-Lehmann location estimate, the median of pairwise averages).
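That pseudo-median is simple to compute directly; here's a sketch in Python (my own illustration — the answer itself uses R's wilcox.test) taking the median of all Walsh (pairwise) averages with $i \le j$:

```python
import numpy as np

def pseudomedian(x):
    """One-sample Hodges-Lehmann estimate: median of the Walsh averages (x_i + x_j)/2, i <= j."""
    x = np.asarray(x, dtype=float)
    i, j = np.triu_indices(len(x))           # includes i == j, i.e. the points themselves
    return float(np.median((x[i] + x[j]) / 2.0))

print(pseudomedian([1.0, 2.0, 3.0, 10.0]))   # 2.75
```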
Indeed, wilcox.test can generate the necessary information for us, so this is straightforward:
> wilcox.test(pd,mu=1.1,conf.int=TRUE)
Wilcoxon signed rank test
data: pd
V = 72, p-value = 0.5245
alternative hypothesis: true location is not equal to 1.1
95 percent confidence interval:
0.94 1.42
sample estimates:
(pseudo)median
1.1775
and this can be plotted also:
[The reason the boxplot interval is wider is that the standard error of a median at the normal (which is the assumption underlying the calculation based off the IQR) tends to be larger than that for a pseudomedian when the data are reasonably normalish.]
And of course, one might want to add the actual data to the plot:
Z-value
R uses the sum of the positive ranks as its test statistic (this is not the same statistic as discussed on the Wikipedia page on the test).
Hollander and Wolfe give the mean of the statistic as $n(n+1)/4$ and the variance as $n(n+1)(2n+1)/24$.
So for your data, this is a mean of 60 and a standard deviation of 17.61 and a z-value of 0.682 (ignoring continuity correction)
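To spell that arithmetic out (a Python sketch; $n=15$ is inferred from the stated mean of 60, and $V=72$ comes from the wilcox.test output above):

```python
import math

n, V = 15, 72                                   # n(n+1)/4 = 60 implies n = 15
mean = n * (n + 1) / 4                          # 60.0
sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)  # about 17.61
z = (V - mean) / sd                             # about 0.682, ignoring continuity correction
print(mean, round(sd, 2), round(z, 3))
```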
The code I used to generate the fourth plot (from which the earlier ones can also be done by omitting unneeded parts) is a bit rough (it's mostly specific to the question, rather than being a general plotting function), but I figured someone might want it:
notch1len <- function(x) {
  stats <- stats::fivenum(x, na.rm = TRUE)
  iqr <- diff(stats[c(2, 4)])
  (1.96 * 1.253 / 1.35) * (iqr / sqrt(sum(!is.na(x))))
}
w <- notch1len(pd)
m <- median(pd)
boxplot(pd,horizontal=TRUE,boxwex=.4)
abline(v=1.1,col=8)
points(c(m-w,m+w),c(1,1),col=2,lwd=6,pch="|")
ci <- wilcox.test(pd, mu = 1.1, conf.int = TRUE)$conf.int
est <- wilcox.test(pd, mu = 1.1, conf.int = TRUE)$estimate
stripchart(pd,pch=16,add=TRUE,at=0.7,cex=.7,method="jitter",col=8)
points(c(ci,est),c(0.7,0.7,0.7),pch="|",col=4,cex=c(.9,.9,1.5))
lines(ci,c(0.7,0.7),col=4)
I may come back and post more functional code later.
31,181 | How to best visualize one-sample test? | If you like boxplots, you can as readily show a single boxplot with a line or other reference showing your hypothesised value. (@Glen_b posted an answer with an excellent simple example precisely as I was first writing this.)
It is arguable that boxplots, now very popular, are massively overused for one-sample and two-sample exploration. (Their real value, in my view, is when you are comparing many sets of values, with number of samples or groups or variables more like 10, 30 or 100, and there is a major need to see overall patterns amid a mass of possible detail.)
The key point is that with just one or two samples (groups, variables), you have space on a plot to show much more detail, detail that could be interesting or important for comparison. With a good design, such detail need not be distracting in visual comparison.
Evidently, in most usual versions the box plot suppresses all detail in its box, showing the middle half of the data, except in so far as the position of the median inside the box conveys some information. Depending on the exact rules used, such as the 1.5 IQR convention of showing data points individually if and only if they are 1.5 IQR or more from the nearer quartile, it is even possible that the box plot suppresses most of the detail about the other half of the data. Often, and perhaps even usually, such detail may be irrelevant to something like a Wilcoxon test, but being prepared to see something illuminating in the data display is always a good idea.
A display that remains drastically underused in many fields is the quantile plot, a display of the ordered values against an associated cumulative probability. (For other slightly technical reasons, this cumulative probability is typically not $1/n, \cdots, n/n$ for sample size $n$ but something like $(i - 0.5)/n$ for rank $i$, 1 being the rank of the smallest value.)
Here are your example data with a reference line added for 1.1.
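A minimal R sketch of such a quantile plot (the data vector is copied from the wilcox.test() call quoted elsewhere on this page; the variable names are mine):

```r
# Quantile plot: ordered values against plotting positions (i - 0.5)/n,
# with a reference line at the hypothesised value 1.1.
pd <- c(0.80, 0.83, 1.89, 1.04, 1.45, 1.38, 1.91, 1.64,
        0.73, 1.46, 1.15, 0.88, 0.90, 0.74, 1.21)
n <- length(pd)
p <- (seq_len(n) - 0.5) / n
plot(p, sort(pd), xlab = "fraction of data", ylab = "ordered values")
abline(h = 1.1, col = "grey")
```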
In other examples, key points include
For two-sample comparisons, there are easy choices between superimposing traces, juxtaposing traces, or using related plots such as quantile-quantile plots.
The plot performs well over a range of sample sizes.
Outliers, granularity (lots of ties), gaps, bi- or multimodality will all be shown as clearly as, or much more clearly than, in box plots.
Quantile plots mesh well with monotonic transformations, which is not so true for box plots.
Some will want to point out that cumulative distribution plots or survival function plots show the same information, and that's fine by me.
See W.S. Cleveland's books (details at http://store.hobart.com/) for restrained but effective advocacy of quantile plots.
Another very useful plot is the dot or strip plot (which goes under many other names too), but I wanted to blow a small trumpet for quantile plots here.
R details I leave for others. I am focusing here on the more general statistical graphics question, which clearly cuts across statistical science and all software possibilities.
Incidentally, I don't know the background story but the name wilcox.test in R seems a poor choice to me. So, you save on typing two characters, but the name encourages confusion, not least because of past and present people in statistical fields called Wilcox. Lack of justice for Mann and Whitney is another detail. The person being honoured was Wilcoxon.
31,182 | How to best visualize one-sample test? | I would like to get Z-value instead of V-value. I know that if I use
coin package instead of basic stats I will have z-values, but coin
package seems not to be able to perform one-sample Wilcoxon test.
The {coin} package does handle one-sample tests, but they are treated as symmetry problems, and the reference vector must be supplied explicitly in formula format. The function is wilcoxsign_test(), the Wilcoxon-Pratt signed-rank test for symmetry in one or more samples, rather than wilcox_test(), which performs location tests among two or more samples.
wilcoxsign_test(
c(0.80, 0.83, 1.89, 1.04, 1.45, 1.38, 1.91, 1.64,
0.73, 1.46, 1.15, 0.88, 0.90, 0.74, 1.21) ~
rep(1.1, 15),
distribution = "exact")
Exact Wilcoxon-Pratt Signed-Rank Test
data: y by x (pos, neg)
stratified by block
Z = 0.68155, p-value = 0.5245
alternative hypothesis: true mu is not equal to 0
Then you can extract the z statistic through wilcoxsign_test(...)@statistic@standardizedlinearstatistic.
We can also recover the z statistic from wilcox.test() via the asymptotic p value without continuity correction.
wilcoxsign_test(
c(0.80, 0.83, 1.89, 1.04, 1.45, 1.38, 1.91, 1.64,
0.73, 1.46, 1.15, 0.88, 0.90, 0.74, 1.21) ~
rep(1.1, 15))
Asymptotic Wilcoxon-Pratt Signed-Rank Test
data: y by x (pos, neg)
stratified by block
Z = 0.68155, p-value = 0.4955
alternative hypothesis: true mu is not equal to 0
wilcox.test(
c(0.80, 0.83, 1.89, 1.04, 1.45, 1.38, 1.91, 1.64,
0.73, 1.46, 1.15, 0.88, 0.90, 0.74, 1.21),
mu = 1.1, exact = F, correct = F)
Wilcoxon signed rank test
data: c(0.8, 0.83, 1.89, 1.04, 1.45, 1.38, 1.91, 1.64,
0.73, 1.46, 1.15, 0.88, 0.9, 0.74, 1.21)
V = 72, p-value = 0.4955
alternative hypothesis: true location is not equal to 1.1
The z statistic is then qnorm(1 - wilcox.test(...)$p.value/2).
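As a numerical check of this relation (the p value is the one printed just above; note that the formula recovers $|z|$, so the sign has to come from the direction of the effect):

```r
# Recover |z| from the two-sided asymptotic p value reported by wilcox.test()
# with exact = FALSE and correct = FALSE.
p2 <- 0.4955
z  <- qnorm(1 - p2 / 2)   # about 0.68, matching coin's Z = 0.68155
```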
31,183 | Monte Carlo estimation of probabilities | I think it is a reasonable method. The difficulty is usually how to select the number of trials (which affects simulation time).
If you conduct $n$ trials, of which $N \leq n$ give a "positive" result, you can estimate the probability $p$ of a positive result as its relative frequency,
$$\hat p = N/n.$$
This estimator is unbiased and has mean-square error, or variance,
$$\mathrm E[(\hat p-p)^2] = \mathrm E[(N/n-p)^2] = \frac{p(1-p)}{n},$$
as follows from noting that $N$ is a binomial random variable with parameters $n$, $p$. The root-mean-square (RMS) error, or standard deviation, is just the square root of this.
In order to assess whether an RMS error value is acceptable or not, that value should be compared with the true $p$. For example, an RMS of $0.01$ may be acceptable for estimating a $p=0.1$, but not for $p=0.001$ (the error would be ten times the true value). The usual approach for this is to normalize the error by dividing it by $p$. So the normalized RMS error is
$$\frac{\sqrt{\mathrm E[(\hat p-p)^2]}}{p} = \sqrt\frac{1-p}{np}.$$
This can be approximated as $1/\sqrt{np}$ for $p$ small. As you can see, to maintain a certain normalized error you need to simulate a number of times $n$ inversely proportional to $p$. But $p$ is unknown, so it's difficult to know which $n$ you need.
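As a rough illustration of that inverse proportionality (the 10% error target and the $p$ values are arbitrary choices of mine):

```r
# Trials needed for a target normalized RMS error eps, from n = (1 - p)/(p * eps^2).
eps <- 0.1                          # aim for 10% normalized RMS error
p   <- c(1e-1, 1e-2, 1e-3)
n   <- round((1 - p) / (p * eps^2)) # 900, 9900, 99900: n grows like 1/p
```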
A solution is to use sequential estimation, in which $n$ is not fixed in advance, but is adaptively selected according to a stopping rule that guarantees the estimation error does not exceed a predefined level. A standard sequential method is inverse binomial sampling (also called negative-binomial Monte Carlo), which consists in the following: continue simulating until a target number $N$ of positive results is achieved. So now $N$ is fixed, and the number of trials, $n$, becomes the random variable.
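A sketch of that stopping rule, with Bernoulli draws standing in for the system being simulated; the estimator $\hat p = (N-1)/(n-1)$ is the standard unbiased choice under inverse binomial sampling:

```r
# Inverse binomial (negative-binomial) sampling: keep simulating until N
# positive results have occurred; the trial count n is now random.
set.seed(1)
p_true <- 0.02                        # unknown in practice; fixed here for the demo
N <- 50                               # target number of positives
n <- 0; pos <- 0
while (pos < N) {
  n <- n + 1
  pos <- pos + (runif(1) < p_true)    # one simulated trial
}
p_hat <- (N - 1) / (n - 1)            # unbiased under this stopping rule
```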
The nice aspect of this approach is that by selecting the target $N$ you can control the normalized error level that will be achieved, irrespective of the unknown $p$. That is, to each $N$ corresponds a certain guaranteed value of normalized error. This works whether you consider error defined as mean-square-error, mean-absolute-error, or in terms of a confidence interval. A description of the procedure in each case is given in the following papers (please bear with my self-referencing):
Normalized RMS error: http://arxiv.org/abs/0809.4047 (or see IEEE Communications Letters, November 2009)
Normalized mean-absolute error: http://oa.upm.es/3222/ (or see IEEE Transactions on Communications, November 2008)
Normalized confidence interval: http://arxiv.org/abs/0809.2402 (or see Bernoulli Journal, May 2010)
31,184 | Hausman test for panel data, fe and re. Error in the estimation, what to do? Stata | Your first test returns a negative test statistic (-8.32) which should not happen. Usually the reason for this is a too small sample or mis-specification of the model. As it stands the result of your first test cannot be used to infer much more. Certainly it is not advisable to reverse the order of the estimates in the test for the reasons highlighted in the Statalist post you linked.
You may want to try the command xtoverid which gives a positive test statistic and also works with panels (unlike suest). In Stata you can install it by typing
ssc install xtoverid
At the bottom of the help file you will also find an example of how to use the test for deciding between FE or RE models. Run the RE model and then use the xtoverid command after that. The interpretation is the same as with hausman, i.e. a significant test statistic rejects the null hypothesis that RE is consistent.
31,185 | Hausman test for panel data, fe and re. Error in the estimation, what to do? Stata | The negative sign can arise if different estimates of the error variance are used in forming variance of b and variance of B. In that case, you need to use the sigmamore option, which specifies that both covariance matrices are based on the (same) estimated disturbance variance from the efficient estimator.
hausman FE RE , sigmamore
Note: FE and RE are the estimates stored from the fixed-effects and random-effects models. The answer is based on Microeconometrics Using Stata by Cameron and Trivedi, p. 261.
31,186 | Sum of Products of Rademacher random variables | The algebraic relation
$$S = \sum_{i,j} x_i y_j = \sum_i x_i \sum_j y_j$$
exhibits $S$ as the product of two independent sums. Because $(x_i+1)/2$ and $(y_j+1)/2$ are independent Bernoulli$(1/2)$ variates, $X=\sum_{i=1}^a x_i$ is a Binomial$(a, 1/2)$ variable which has been doubled and shifted. Therefore its mean is $0$ and its variance is $a$. Similarly $Y=\sum_{j=1}^b y_j$ has a mean of $0$ and variance of $b$. Let's standardize them right now by defining
$$X_a = \frac{1}{\sqrt a} \sum_{i=1}^a x_i,$$
whence
$$S = \sqrt{ab} X_a X_b = \sqrt{ab}Z_{ab}.$$
To a high (and quantifiable) degree of accuracy, as $a$ grows large $X_a$ approaches the standard Normal distribution. Let us therefore approximate $S$ as $\sqrt{ab}$ times the product of two standard normals.
The next step is to notice that
$$Z_{ab} = X_aX_b = \frac{1}{2}\left(\left(\frac{X_a+X_b}{\sqrt 2}\right)^2 - \left(\frac{X_a-X_b}{\sqrt 2}\right)^2 \right) = \frac{1}{2}\left(U^2 - V^2\right)$$
is one-half the difference of the squares of independent standard Normal variables $U$ and $V$. The distribution of $Z_{ab}$ can be computed analytically (by inverting the characteristic function): its pdf is proportional to the modified Bessel function of order zero, $K_0(|z|)/\pi$. Because this function has exponential tails, we conclude immediately that for large $a$ and $b$ and fixed $t$, there is no better approximation to ${\Pr}_{a,b}(S \gt t)$ than given in the question.
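A quick Monte Carlo check of the $K_0(|z|)/\pi$ density claim for the product of two standard normals (tolerances deliberately loose):

```r
# Compare an empirical tail probability of a product of standard normals with
# the same probability computed by integrating the density K0(|z|)/pi
# (besselK(., 0) is R's modified Bessel function of the second kind).
set.seed(1)
z <- rnorm(1e5) * rnorm(1e5)
emp <- mean(z > 1)
num <- integrate(function(t) besselK(t, 0) / pi, lower = 1, upper = Inf)$value
# emp and num should agree to within Monte Carlo error
```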
There remains some room for improvement when one (at least) of $a$ and $b$ is not large or at points in the tail of $S$ close to $\pm a b$. Direct calculations of the distribution of $S$ show a curved tapering off of the tail probabilities at points much larger than $\sqrt{ab}$, roughly beyond $\sqrt{ab\max(a,b)}$. These log-linear plots of the CDF of $S$ for various values of $a$ (given in the titles) and $b$ (ranging roughly over the same values as $a$, distinguished by color in each plot) show what's going on. For reference, the graph of the limiting $K_0$ distribution is shown in black. (Because $S$ is symmetric around $0$, $\Pr(S \gt t) = \Pr(-S \lt -t)$, so it suffices to look at the negative tail.)
As $b$ grows larger, the CDF grows closer to the reference line.
Characterizing and quantifying this curvature would require a finer analysis of the Normal approximation to Binomial variates.
The quality of the Bessel function approximation becomes clearer in these magnified portions (of the upper right corner of each plot). We're already pretty far out into the tails. Although the logarithmic vertical scale can hide substantial differences, clearly by the time $a$ has reached $500$ the approximation is good for $|S| \lt a\sqrt{b}$.
R Code to Calculate the Distribution of $S$
The following will take a few seconds to execute. (It computes several million probabilities for 36 combinations of $a$ and $b$.) On slower machines, omit the larger one or two values of a and b and increase the lower plotting limit from $10^{-300}$ to around $10^{-160}$.
s <- function(a, b) {
# Returns the distribution of S as a vector indexed by its support.
products <- factor(as.vector(outer(seq(-a, a, by=2), seq(-b, b, by=2))))
probs <- as.vector(outer(dbinom(0:a, a, 1/2), dbinom(0:b, b, 1/2)))
tapply(probs, products, sum)
}
par(mfrow=c(2,3))
b.vec <- c(51, 101, 149, 201, 299, 501)
cols <- terrain.colors(length(b.vec)+1)
for (a in c(50, 100, 150, 200, 300, 500)) {
plot(c(-sqrt(a*max(b.vec)),0), c(10^(-300), 1), type="n", log="y",
xlab="S/sqrt(ab)", ylab="CDF", main=paste(a))
curve(besselK(abs(x), 0)/pi, lwd=2, add=TRUE)
for (j in 1:length(b.vec)) {
b <- b.vec[j]
x <- s(a,b)
n <- as.numeric(names(x))
k <- n <= 0
y <- cumsum(x[k])
lines(n[k]/sqrt(a*b), y, col=cols[j], lwd=2)
}
}
$$S = \sum_{i,j} x_i y_j = \sum_i x_i \sum_j y_j$$
exhibits $S$ as the product of two independent sums. Because $(x_i+1)/2$ and $(y_j+1)/2$ are independent Bernoulli$(1/2)$ var | Sum of Products of Rademacher random variables
The algebraic relation
$$S = \sum_{i,j} x_i y_j = \sum_i x_i \sum_j y_j$$
exhibits $S$ as the product of two independent sums. Because $(x_i+1)/2$ and $(y_j+1)/2$ are independent Bernoulli$(1/2)$ variates, $X=\sum_{i=1}^a x_i$ is a Binomial$(a, 1/2)$ variable which has been doubled and shifted. Therefore its mean is $0$ and its variance is $a$. Similarly $Y=\sum_{j=1}^b y_j$ has a mean of $0$ and variance of $b$. Let's standardize them right now by defining
$$X_a = \frac{1}{\sqrt a} \sum_{i=1}^a x_i,$$
whence
$$S = \sqrt{ab} X_a X_b = \sqrt{ab}Z_{ab}.$$
To a high (and quantifiable) degree of accuracy, as $a$ grows large $X_a$ approaches the standard Normal distribution. Let us therefore approximate $S$ as $\sqrt{ab}$ times the product of two standard normals.
The next step is to notice that
$$Z_{ab} = X_aX_b = \frac{1}{2}\left(\left(\frac{X_a+X_b}{\sqrt 2}\right)^2 - \left(\frac{X_a-X_b}{\sqrt 2}\right)^2 \right) = \frac{1}{2}\left(U^2 - V^2\right).$$
is a multiple of the difference of the squares of independent standard Normal variables $U$ and $V$. The distribution of $Z_{ab}$ can be computed analytically (by inverting the characteristic function): its pdf is proportional to the Bessel function of order zero, $K_0(|z|)/\pi$. Because this function has exponential tails, we conclude immediately that for large $a$ and $b$ and fixed $t$, there is no better approximation to ${\Pr}_{a,b}(S \gt t)$ than given in the question.
There remains some room for improvement when one (at least) of $a$ and $b$ is not large or at points in the tail of $S$ close to $\pm a b$. Direct calculations of the distribution of $S$ show a curved tapering off of the tail probabilities at points much larger than $\sqrt{ab}$, roughly beyond $\sqrt{ab\max(a,b)}$. These log-linear plots of the CDF of $S$ for various values of $a$ (given in the titles) and $b$ (ranging roughly over the same values as $a$, distinguished by color in each plot) show what's going on. For reference, the graph of the limiting $K_0$ distribution is shown in black. (Because $S$ is symmetric around $0$, $\Pr(S \gt t) = \Pr(-S \lt -t)$, so it suffices to look at the negative tail.)
As $b$ grows larger, the CDF grows closer to the reference line.
Characterizing and quantifying this curvature would require a finer analysis of the Normal approximation to Binomial variates.
The quality of the Bessel function approximation becomes clearer in these magnified portions (of the upper right corner of each plot). We're already pretty far out into the tails. Although the logarithmic vertical scale can hide substantial differences, clearly by the time $a$ has reached $500$ the approximation is good for $|S| \lt a\sqrt{b}$.
R Code to Calculate the Distribution of $S$
The following will take a few seconds to execute. (It computes several million probabilities for 36 combinations of $a$ and $b$.) On slower machines, omit the larger one or two values of a and b and increase the lower plotting limit from $10^{-300}$ to around $10^{-160}$.
s <- function(a, b) {
# Returns the distribution of S as a vector indexed by its support.
products <- factor(as.vector(outer(seq(-a, a, by=2), seq(-b, b, by=2))))
probs <- as.vector(outer(dbinom(0:a, a, 1/2), dbinom(0:b, b, 1/2)))
tapply(probs, products, sum)
}
par(mfrow=c(2,3))
b.vec <- c(51, 101, 149, 201, 299, 501)
cols <- terrain.colors(length(b.vec)+1)
for (a in c(50, 100, 150, 200, 300, 500)) {
  plot(c(-sqrt(a*max(b.vec)), 0), c(10^(-300), 1), type="n", log="y",
       xlab="S/sqrt(ab)", ylab="CDF", main=paste(a))
  curve(besselK(abs(x), 0)/pi, lwd=2, add=TRUE)
  for (j in 1:length(b.vec)) {
    b <- b.vec[j]
    x <- s(a, b)
    n <- as.numeric(names(x))
    k <- n <= 0
    y <- cumsum(x[k])
    lines(n[k]/sqrt(a*b), y, col=cols[j], lwd=2)
  }
}
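The same exact distribution is easy to compute outside R as well; here is a minimal Python sketch (the function name `dist_S` is made up for illustration) that mirrors the `s()` function above by combining the two centered binomial supports:

```python
from collections import defaultdict
from math import comb

def dist_S(a, b):
    # Exact pmf of S = (sum of a Rademacher signs) * (sum of b Rademacher signs),
    # mirroring the R function s(a, b) above.
    pa = {a - 2*m: comb(a, m) / 2**a for m in range(a + 1)}
    pb = {b - 2*m: comb(b, m) / 2**b for m in range(b + 1)}
    d = defaultdict(float)
    for u, p in pa.items():
        for v, q in pb.items():
            d[u * v] += p * q
    return dict(d)

d = dist_S(5, 7)
print(abs(sum(d.values()) - 1) < 1e-12)            # True: probabilities sum to one
print(all(abs(d[s] - d[-s]) < 1e-12 for s in d))   # True: symmetric about 0
```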
31,187 | Sum of Products of Rademacher random variables | Comment: I edited the title in an attempt to reflect better what kind of r.v.'s are considered in the question. Anyone feel free to re-edit.
Motivation: I guess there is no need to settle for an upper bound, if we can derive the distribution of $|S_{ab}|$. (UPDATE: We can't; see Whuber's comments and answer.)
Denote $Z_k = X_iY_j,\;\; k=1,...,ab$.
It is easy to verify that the $Z$'s have the same distribution as the $X$'s and the $Y$'s. The moment generating function is
$$M_Z(t) = E[e^{zt}]=\frac 12e^{-t}+\frac 12e^t = \cosh(t)$$
Moreover the $Z$'s are, to begin with, pair-wise independent: The variable $W = Z_1+Z_2$ (the indices can be any two, of course) has support $\{-2,0,2\}$ with corresponding probabilities $\{1/4,1/2,1/4\}$. Its moment generating function is
$$M_{W}(t) = E[e^{(z_1+z_2)t}] = \frac 14e^{-2t}+\frac 12 +\frac 14e^{2t}=\\
=\frac 14(e^{-2t}+1)+ \frac 14(e^{2t}+1) = \frac 14 2e^{-t}\cosh(t)+\frac 14 2e^{t}\cosh(t)\\
=\cosh(t)\cdot \cosh(t) = M_{Z_1}(t)M_{Z_2}(t)$$
I will attempt to argue that full independence holds, as follows (is it obvious to the wiser ones?):
For this part, denote $Z_{ij}=X_iY_j$. Then by the chain rule
$$P[Z_{ab},...,Z_{11}] = P[Z_{ab}\mid Z_{a,b-1},...,Z_{11}]\cdot ...\cdot P[Z_{13}\mid Z_{12},Z_{11}]\cdot P[Z_{12}\mid Z_{11}]\cdot P[Z_{11}]$$
By pair-wise independence we have $P[Z_{12}\mid Z_{11}] = P[Z_{12}]$.
Consider $P[Z_{13},Z_{12}\mid Z_{11}]$. $Z_{13}$ and $Z_{12}$ are independent conditional on $Z_{11}$, so we have
$$P[Z_{13}\mid Z_{12},Z_{11}] = P[Z_{13}\mid Z_{11}] = P[Z_{13}]$$
the second equality by pair-wise independence. But this implies that
$$P[Z_{13}\mid Z_{12},Z_{11}]\cdot P[Z_{12}\mid Z_{11}]\cdot P[Z_{11}] = P[Z_{13},\,Z_{12},\,Z_{11}] = P[Z_{13}]\cdot P[Z_{12}]\cdot P[Z_{11}]$$
Etc (I think). (UPDATE: I think wrong. Independence probably holds for any triplet, but not for the whole bunch. So what follows is just the derivation of the distribution of a simple random walk, and not a correct answer to the question - see Wolfies' and Whuber's answers.)
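The failure of mutual independence can be seen directly for $a=b=2$: the product $Z_{11}Z_{12}Z_{21}Z_{22}=X_1^2X_2^2Y_1^2Y_2^2$ is identically $1$, which mutually independent signs would not allow. A quick Python enumeration (illustrative only) confirms this, and also confirms the pairwise independence shown above:

```python
from collections import Counter
from itertools import product

# All 16 equally likely outcomes of (X1, X2, Y1, Y2) for a = b = 2.
outcomes = list(product([-1, 1], repeat=4))

# The product Z11*Z12*Z21*Z22 = (x1*y1)(x1*y2)(x2*y1)(x2*y2) over every outcome:
prods = {x1*y1 * x1*y2 * x2*y1 * x2*y2 for x1, x2, y1, y2 in outcomes}
print(prods)  # {1}: identically one, so the Z's are not mutually independent

# Pairwise independence still holds, e.g. for (Z11, Z12): the joint pmf factorizes.
joint = Counter((x1*y1, x1*y2) for x1, x2, y1, y2 in outcomes)
print(all(c == 4 for c in joint.values()))  # True: each sign pair has probability 4/16 = 1/4
```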
If full independence does indeed hold, we have the task of deriving the distribution of a sum of i.i.d dichotomous r.v.'s
$$S_{ab}=\sum_{k=1}^{ab}Z_k$$
which looks like a simple random walk, although without the clear interpretation of the latter as a sequence.
If $ab=even$ the support of $S$ will be the even integers in $[-ab,...,ab]$ including zero, while if $ab=odd$ the support of $S$ will be the odd integers in $[-ab,...,ab]$, without zero.
We treat the case of $ab=odd$.
Denote $m$ to be the number of $Z$'s taking the value $-1$. Then the support of $S$ can be written $S\in \{ab-2m;m\in \mathbb Z_+\cup\{0\};m\le ab\}$. For any given $m$, we obtain a unique value for $S$. Moreover, due to symmetric probabilities and independence (or just exchangeability?), all possible joint realizations of the $Z$-variables $\{Z_1=z_1,..., Z_{ab}=z_{ab}\}$ are equiprobable. So we count and we find that the probability mass function of $S$ is,
$$P(S=ab-2m)={ab \choose m}\cdot \frac 1{2^{ab}}, \qquad 0\le m\le ab$$
Defining $s\equiv ab-2m$, an odd number by construction and the typical element of the support of $S$, we have
$$P(S=s)={ab \choose \frac{ab-s}{2}}\cdot \frac 1{2^{ab}}$$
Moving to $|S|$: since $ab$ is odd, the distribution of $S$ is symmetric around zero without allocating probability mass to zero, so the distribution of $|S|$ is obtained by "folding" the density graph around the vertical axis, essentially doubling the probabilities for positive values,
$$P(|S|=|s|)={ab \choose \frac{ab-s}{2}}\cdot \frac 1{2^{ab-1}}$$
Then the distribution function is
$$P(|S|\le|s|)=\frac 1{2^{ab-1}}\sum_{1\le i\le s,\, i\,odd}{ab \choose \frac{ab-i}{2}}$$
Therefore, for any real $t$, $1\le t<ab$, we obtain the required probability
$$P(|S|> t) = 1- P(|S|\le t) = 1-\frac 1{2^{ab-1}}\sum_{1\le i\le t,\, i\,odd}{ab \choose {\frac{ab-i}{2}}} $$
Note that the indication $i=odd$ guarantees that the sum will run only up to values included in the support of $|S|$ - for example, if we set $t=10.5$, still $i$ will run up to $9$, since it is constrained to be odd, on top of being an integer.
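As the UPDATE notes, this is the distribution of a sum of $ab$ i.i.d. signs rather than of $S$ itself, but the closed form above can still be checked numerically; a small Python check (taking $ab=35$ and $t=10.5$ as in the example):

```python
from math import comb

n, t = 35, 10.5  # n plays the role of ab (odd)

# Tail probability from the closed form above: sum over odd i up to t.
tail_formula = 1 - sum(comb(n, (n - i) // 2) for i in range(1, int(t) + 1, 2)) / 2**(n - 1)

# Direct computation for W = a sum of n i.i.d. +-1 signs.
pmf = {n - 2*m: comb(n, m) / 2**n for m in range(n + 1)}
tail_direct = sum(p for s, p in pmf.items() if abs(s) > t)

print(abs(tail_formula - tail_direct) < 1e-12)  # True
```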
31,188 | Sum of Products of Rademacher random variables | Not an answer, but a comment on Alecos’s interesting answer that is too long to fit into a comment box.
Let $(X_1, ..., X_a)$ be independent Rademacher random variables, and let $(Y_1, ..., Y_b)$ be independent Rademacher random variables. Alecos notes that:
$$S_{ab}=\sum_{k=1}^{ab}Z_k \qquad \text{where} \qquad Z_k = X_i Y_j$$
"... looks like a simple random walk". If it were like a simple random walk, then the distribution of $S$ would be symmetric 'bell-shaped unimodal' around 0.
To illustrate that it is not a simple random walk, here is a quick Monte Carlo comparison of:
triangle dots: Monte Carlo simulation of the pmf of $S$ given $a = 5$ and $b = 7$
round dots: Monte Carlo simulation of a simple random walk with $n = 35$ steps
Clearly, $S$ is not a simple random walk; also note that $S$ is not distributed on all the even (or odd) integers.
Monte Carlo
Here is the code (in Mathematica) used to generate a single iteration of the sum $S$, given $a$ and $b$:
SumAB[a_, b_] := Outer[Times, RandomChoice[{-1, 1}, a], RandomChoice[{-1, 1}, b]] // Flatten // Total
Then, 500,000 such paths, say when $a = 5$ and $b = 7$, can be generated with:
data57 = Table[SumAB[5, 7], {500000}];
The domain of support for this combination of $a$ and $b$ is:
{-35, -25, -21, -15, -9, -7, -5, -3, -1, 1, 3, 5, 7, 9, 15, 21, 25, 35}
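That support is just the set of pairwise products of the two centered-binomial supports, which a short Python enumeration reproduces:

```python
# S = (sum of 5 signs) * (sum of 7 signs): the two factors live on
# {-5, -3, -1, 1, 3, 5} and {-7, -5, -3, -1, 1, 3, 5, 7}.
support = sorted({u * v for u in range(-5, 6, 2) for v in range(-7, 8, 2)})
print(support)
# [-35, -25, -21, -15, -9, -7, -5, -3, -1, 1, 3, 5, 7, 9, 15, 21, 25, 35]
```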
31,189 | Bounds on Cov(X, Y) given Var(X), Var(Y)? | For a single covariance you only need the bottom equation. The bounds are that the covariance cannot be greater than the product of the standard deviations (and cannot be less than the negative of the same value). However, for a covariance matrix of more than 2 variables there is an additional limit: the matrix has to be positive semi-definite (or positive definite in some cases). This eliminates cases like X and Y being strongly positively correlated and X and Z being strongly positively correlated, but Y and Z being strongly negatively correlated (this does not work in the real world, but could still fit the pairwise bounds). One way to check this is that if all the eigenvalues are positive then it is positive definite, and if the eigenvalues are all non-negative then it is positive semi-definite.
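Both conditions are easy to check numerically; a numpy sketch (illustrative, with random data) verifying the pairwise bound and the eigenvalue test:

```python
import numpy as np

rng = np.random.default_rng(0)
C = np.cov(rng.normal(size=(1000, 3)), rowvar=False)  # a sample covariance matrix

# Pairwise bound: |Cov(X, Y)| <= sd(X) * sd(Y) for every pair.
sd = np.sqrt(np.diag(C))
print(np.all(np.abs(C) <= np.outer(sd, sd) + 1e-12))  # True

# Matrix-level condition: positive semi-definite <=> all eigenvalues non-negative.
print(np.all(np.linalg.eigvalsh(C) >= -1e-12))  # True
```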
31,190 | Bounds on Cov(X, Y) given Var(X), Var(Y)? | In the multivariate case, you can use what is called the multivariate Cauchy-Schwarz inequality:
$$
\newcommand{\Var}{Var} \newcommand{\Cov}{Cov}
\Var(z) \ge \Cov(z,y) \Var(y)^{-1} \Cov(y,z)
$$
Here the inequality sign must be interpreted in the sense of the partial order on the cone of positive semi-definite matrices: $A \le B$ means that $B-A$ is positive semi-definite.
31,191 | Comparing 2 classifiers with unlimited training data | Do I understand you correctly that you want to measure whether C1 is a faster/slower learner than C2?
With unlimited training data, I'd definitely construct (measure) the learning curves. That allows you to discuss both questions you pose.
As Dikran already hints, the learning curve does have a variance as well as a bias component: training on smaller data gives systematically worse models but there is also higher variance between different models trained with smaller $n_{train}$ which I'd also include in a discussion which classifier is better.
Make sure you test with large enough test sample size: proportions of counts (such as classifier accuracy) suffer from high variance which can mess up your conclusions. As you have an unlimited data source, you are in the very comfortable situation that it is actually possible to measure the learning curves without too much additional testing error on them.
I just got a paper accepted that summarizes some thoughts and findings about Sample Size Planning for Classification Models. The DOI does not yet function, but anyways here's the accepted manuscript at arXiv.
Of course computation time is your consideration now. Some thoughts on this
how much computer time you are willing to spend will depend on what you need your comparison for.
if it's just about finding a practically working set-up, I'd be pragmatic also about the time to get to a decision.
if it's a scientific question, I'd quote my old supervisor: "Computer time is not a scientific argument". This is meant in the sense that saving a couple of days or even a few weeks of server time by compromising the conclusions you can draw is not a good idea*.
The more so, as having better calculations doesn't necessarily require more of your time here: your time to set up the calculations will take roughly the same time whether you calculate on a fine grid of training sample sizes or a rough one, or whether you measure variance by 1000 iterations or just by 10. This means that you can do calculations in an order that allows to get a "sneak-preview" on the results quite fast, then you can sketch the results, and at the end pull in the fine-grained numbers.
(*) I may add that I come from an experimental field where you easily spend months or years on sample collection and weeks or months measurements which don't do themselves in the way a simulation runs on a server, neither.
Update about bootstrapping / cross validation
It is certainly possible to use (iterated/repeated) cross validation or out-of-bootstrap testing to measure the learning curve. Using resampling schemes instead of a proper independent test set is sensible if you are in a small sample size situation, i.e. you do not have enough independent samples for training of a good classifier and properly measuring its performance. According to the question, this is not the case here.
Data-driven model optimization
One more general point: choosing a "working point" (i.e. training sample size here) from the learning curve is a data-driven decision. This means that you need to do another independent validation of the "final" model (trained with that sample size) with another independent test set. However, if your test data for measuring the learning curve was independent and had huge (really large) sample size, then your risk to overfit to that test set is minute. I.e. if you find a drop in performance for the final test data, that indicates either too small test sample size for determining the learning curve or a problem in your data analysis set up (data not independent, training data leaking into test data).
Update 2: limited test sample size
A limited test sample size is a real problem. Comparing many classifiers (each $n_{train}$ you evaluate ultimately leads to one classifier!) is a multiple testing problem from a statistics point of view. That means, judging by the same test set "skims" the variance uncertainty of the testing. This leads to overfitting.
(This is just another way to express the danger of cherry-picking Dikran commented about)
You really need to reserve an independent test set for final evaluation, if you want to be able to state the accuracy of the finally chosen model.
While it is hard to overfit to a test set of millions of instances, it is much easier to overfit to 350 samples per class.
Therefore, the paper I linked above may be of more interest for you than I initially thought: it also shows how to calculate how many test samples you need to show e.g. superiority of one classifier (with fixed hyperparameters) over another. As you can test all models with the same test set, you may be lucky so that you are able to somewhat reduce the required test sample size by doing paired tests here. For paired comparison of 2 classifiers, McNemar test would be a keyword.
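A minimal sketch of the whole procedure, with two deliberately simple stand-in classifiers (a nearest-centroid rule and a 1-NN rule, purely illustrative) on synthetic two-class data: draw fresh training sets of increasing size, evaluate each model on one large fixed test set, and average over repetitions to see both the bias and the variance component of each learning curve:

```python
import numpy as np

rng = np.random.default_rng(7)

def sample(n):
    # Balanced two-class 2-D Gaussian data, classes separated along the first axis.
    y = rng.permutation(np.repeat([0, 1], n // 2))
    X = rng.normal(size=(n, 2))
    X[:, 0] += np.where(y == 1, 1.5, -1.5)
    return X, y

def centroid_acc(Xtr, ytr, Xte, yte):
    # C1: nearest-centroid rule.
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = np.linalg.norm(Xte - c1, axis=1) < np.linalg.norm(Xte - c0, axis=1)
    return (pred == yte).mean()

def nn1_acc(Xtr, ytr, Xte, yte):
    # C2: 1-nearest-neighbour rule (squared distances via the expansion trick).
    d2 = (Xte**2).sum(1)[:, None] + (Xtr**2).sum(1)[None, :] - 2 * Xte @ Xtr.T
    return (ytr[d2.argmin(1)] == yte).mean()

Xte, yte = sample(5000)  # one large, fixed test set (data is "unlimited")
for n_train in (10, 40, 160, 640):
    a1 = [centroid_acc(*sample(n_train), Xte, yte) for _ in range(10)]
    a2 = [nn1_acc(*sample(n_train), Xte, yte) for _ in range(10)]
    print(n_train, round(float(np.mean(a1)), 3), round(float(np.mean(a2)), 3))
```

With real classifiers the loop body would train and test the actual models; the structure (fresh training draws per size, one big shared test set, repetitions for variance) stays the same.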
31,192 | Comparing 2 classifiers with unlimited training data | If you have unlimited training data, then the optimal training set size depends on computational considerations, rather than statistical ones. From a statistical point of view, there are many classifiers based on universal approximations, so if you trained on an infinite dataset you would get a classifier that approached the Bayes error and could do no better.
If the classifier performs worse as the size of the training set increases, that would be a rather worrying sign. If it still does this when you average over multiple random samples of training data, I would suspect there is something wrong with the implementation.
31,193 | How to determine which variables are statistically significant in multiple regression? | Yes, based on the output, sex and income are statistically significant.
sex and possibly status are nominal variables, so it's odd that they appear in the model as is. It could work, if they are 0/1 variables, but it still opens up the potential for error.
To be on the safe side, for sex and any other nominal variable, include it in the model like this: factor(sex):
fitted.model <- lm(spending ~ factor(sex) + status + income, data=spending)
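What `factor()` accomplishes, in effect, is dummy coding: the nominal codes are replaced by indicator columns rather than being treated as numbers. A numpy sketch with hypothetical data (all values below are made up for illustration):

```python
import numpy as np

# Hypothetical data: sex coded 1/2 (nominal), plus a numeric income.
sex = np.array([1, 2, 2, 1, 2, 1, 1, 2])
income = np.array([30.0, 42.0, 55.0, 28.0, 61.0, 33.0, 35.0, 48.0])
spending = np.array([12.0, 20.0, 24.0, 11.0, 27.0, 14.0, 15.0, 21.0])

# Dummy coding: an intercept, an indicator for sex == 2, and income.
X = np.column_stack([np.ones_like(income), (sex == 2).astype(float), income])
beta, *_ = np.linalg.lstsq(X, spending, rcond=None)
print(beta)  # intercept, sex effect, income slope
```

For a factor with more than two levels you would add one indicator column per non-reference level, which is exactly where feeding the raw numeric codes into the regression goes wrong.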
31,194 | How to determine which variables are statistically significant in multiple regression? | The p-value in the last column tells you the significance of the regression coefficient for a given parameter. If the p-value is small enough to claim statistical significance, that just means there is strong evidence that the coefficient is different from 0. But in the regression context it might be a little naive to think that it means that sex and income are the only significant factors. As we have seen (I think with this data set) the variables are correlated and their coefficients and t statistics can change a lot depending on which other variables are included in the regression. You should look at what those t-tests say when only sex and income are included in the model.
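The point about coefficients shifting with the set of included predictors is easy to see on simulated data; a numpy sketch (illustrative) where the effect attributed to $x_1$ changes once a correlated $x_2$ enters the model:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + 0.7 * rng.normal(size=n)   # correlated with x1
y = x1 + x2 + rng.normal(size=n)           # true coefficients: 1 and 1

def slope_of_x1(*predictors):
    # OLS coefficient of the first predictor, with an intercept.
    X = np.column_stack([np.ones(n), *predictors])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b[1]

print(slope_of_x1(x1))       # ~1.7: x1 alone absorbs part of x2's effect
print(slope_of_x1(x1, x2))   # ~1.0 once x2 is controlled for
```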
31,195 | How to determine which variables are statistically significant in multiple regression? | Who has asked you to determine this? This looks like homework, and if it is it should be tagged as such.
The answer to your question depends very much on what is meant by "statistically significant" in a regression context. Looking at the last column as you suggest will meet one definition, but a rather simplistic one.
Your quoted output above does not include the rest of the summary, which includes the overall F-test. That p-value should be examined before the individual tests: it is possible for the overall test to tell you that nothing is significant, but then have an individual test or two show significance due to alpha inflation from multiple testing.
If status and verbal are correlated with each other, then it is possible that either could be a very "significant" predictor of spending, yet show up as redundant given the other.
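The instability of coefficients under correlated predictors can be illustrated with a small simulation, sketched here in Python on made-up data (not the question's dataset; the variable names and numbers are hypothetical): two nearly collinear predictors split a single true effect between them, even though the response depends on only one of them.

```python
import random

random.seed(0)
n = 200
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [a + random.gauss(0, 0.1) for a in x1]  # x2 is nearly collinear with x1
y = [a + random.gauss(0, 1) for a in x1]     # y truly depends on x1 only

def center(v):
    m = sum(v) / len(v)
    return [a - m for a in v]

x1c, x2c, yc = center(x1), center(x2), center(y)
dot = lambda u, v: sum(a * b for a, b in zip(u, v))

# Simple regression of y on x1 alone: slope is close to the true value, 1.
b_alone = dot(x1c, yc) / dot(x1c, x1c)

# Joint regression of y on x1 and x2, via the 2x2 normal equations:
S11, S22, S12 = dot(x1c, x1c), dot(x2c, x2c), dot(x1c, x2c)
det = S11 * S22 - S12 ** 2
b1 = (S22 * dot(x1c, yc) - S12 * dot(x2c, yc)) / det
b2 = (S11 * dot(x2c, yc) - S12 * dot(x1c, yc)) / det

# The sum b1 + b2 stays near b_alone, but the split between the two
# correlated predictors is unstable, so neither t-test may look strong.
print(round(b_alone, 2), round(b1 + b2, 2))
```

The well-determined quantity is the combined slope; how it divides between the correlated predictors depends heavily on which variables are in the model, which is exactly why the individual t-tests should be read with care.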
31,196 | How to determine which variables are statistically significant in multiple regression? | Yes, you should look at the last column, which contains the p-value for each parameter.
Usually, we consider that if the p-value for a certain variable is below 0.05, then it is significant and has some relationship with the response.
In this case, sex with p-value 0.0101 and income with 1.79e-05 are both below 0.05 and are therefore significant.
The p-value can be obtained by looking the t-value (third column) up in a t-distribution table.
The t-value is given by dividing each coefficient (Estimate, first column) by its Std Error (second column).
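These last two steps can be sketched in Python (the Estimate and Std Error below are hypothetical numbers, and the standard normal is used as a large-sample stand-in for the t-distribution, which is reasonable when the residual degrees of freedom are large):

```python
from statistics import NormalDist

def t_and_p(estimate, std_error):
    """t-value and approximate two-sided p-value for one coefficient."""
    t = estimate / std_error
    # Normal approximation to the t-distribution (large residual df):
    p = 2 * (1 - NormalDist().cdf(abs(t)))
    return t, p

# Hypothetical row of a coefficient table: Estimate = 1.2, Std Error = 0.4
t, p = t_and_p(1.2, 0.4)
print(round(t, 1), round(p, 4))  # 3.0 0.0027
```

For small residual degrees of freedom you would look the t-value up in an actual t-distribution table instead, as the answer says.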
31,197 | Use of the Gamma parameter with support vector machines | I would suggest the following theoretical guidance. When you are using a Gaussian RBF kernel, your separating surface will be based on a combination of bell-shaped surfaces centered at each support vector. The width of each bell-shaped surface will be inversely proportional to $\gamma$. If this width is smaller than the minimum pair-wise distance for your data, you essentially have overfitting. If this width is larger than the maximum pair-wise distance for your data, all your points fall into one class and you don't have good performance either. So the optimal width should be somewhere between these two extremes.
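One way to turn this guidance into numbers, sketched in Python: since the RBF kernel $\exp(-\gamma\|x-y\|^2)$ has effective width about $1/\sqrt{\gamma}$, a width between the minimum and maximum pairwise distances brackets $\gamma$ between $1/d_{\max}^2$ and $1/d_{\min}^2$. This is a heuristic reading of the answer, not a formal rule, and the 2-D points below are made up.

```python
from itertools import combinations
from math import dist

def gamma_range(points):
    """Bracket plausible RBF gamma values from pairwise distances."""
    dists = [dist(a, b) for a, b in combinations(points, 2)]
    d_min, d_max = min(dists), max(dists)
    # Kernel width ~ 1/sqrt(gamma), so width in [d_min, d_max] means
    # gamma in [1/d_max^2, 1/d_min^2].
    return 1.0 / d_max ** 2, 1.0 / d_min ** 2

# Hypothetical 2-D data with pairwise distances 1, 3, and sqrt(10):
lo, hi = gamma_range([(0, 0), (1, 0), (0, 3)])
print(lo, hi)  # 0.1 1.0
```

In practice you would still tune $\gamma$ within this bracket on validation data rather than pick a single value from it.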
31,198 | Use of the Gamma parameter with support vector machines | No, it is essentially data dependent. Grid search (over log-transformed hyper-parameters) is a very good method if you only have a small number of hyper-parameters to tune, but don't make the grid resolution too fine or you are likely to over-fit the tuning criterion. For problems with a larger number of kernel parameters, I find the Nelder-Mead simplex method works well.
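A minimal sketch of such a log-spaced grid search in Python; the `score` callable stands in for whatever cross-validation criterion you use (the toy quadratic below is purely illustrative, and its peak location is made up):

```python
from itertools import product

def log_grid_search(score, c_exponents, gamma_exponents):
    """Grid search over log2-spaced C and gamma values.

    `score` is any callable (C, gamma) -> validation score, higher is
    better; e.g. a cross-validation wrapper (assumed, not defined here).
    """
    grid = [(2.0 ** i, 2.0 ** j) for i, j in product(c_exponents, gamma_exponents)]
    return max(grid, key=lambda cg: score(*cg))

# Toy score peaking at C = 4, gamma = 0.25 (purely illustrative):
best = log_grid_search(
    lambda C, g: -((C - 4) ** 2 + (g - 0.25) ** 2),
    range(-2, 5), range(-4, 3),
)
print(best)  # (4.0, 0.25)
```

Keeping the exponent ranges coarse (steps of one in log2) is one way to avoid the over-fitting of the tuning criterion that the answer warns about.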
31,199 | Why is the amount of variance explained by my 1st PC so close to the average pairwise correlation? | I believe a relationship between the mean correlation and the eigenvalue of the 1st PC exists but is not unique. I'm not enough of a mathematician to deduce it, but I can at least display the starting point from which one's intuition or thought might grow.
If you draw standardized variables as vectors in the Euclidean space that seats them (this is the reduced space where the axes are observations), the correlation between two variables is the cosine of the angle between their vectors.
And because the vectors are all of unit length (due to standardization), the cosines are the projections of the vectors on each other (as shown on the left picture with three variables). The 1st PC is the line in this space that maximizes the sum of squared projections onto it, the $a$'s, called loadings; and this sum is the 1st eigenvalue.
So, when you establish the relationship between the mean of the three projections on the left and the sum (or mean) of the three squared projections on the right, you answer your question about the relationship between the mean correlation and the eigenvalue.
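The loadings-eigenvalue identity can be checked numerically; here is a Python sketch with a hypothetical 3x3 correlation matrix, using power iteration to find the 1st PC. The loadings are $a_i = \sqrt{\lambda_1}\, v_i$, and their squares sum back to $\lambda_1$.

```python
def power_iteration(M, iters=500):
    """Leading eigenvalue and unit eigenvector of a symmetric matrix
    given as plain nested lists (a minimal sketch, no libraries)."""
    n = len(M)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient of the converged unit vector:
    lam = sum(v[i] * sum(M[i][j] * v[j] for j in range(n)) for i in range(n))
    return lam, v

# Hypothetical correlation matrix for three standardized variables:
R = [[1.0, 0.6, 0.5],
     [0.6, 1.0, 0.4],
     [0.5, 0.4, 1.0]]
lam1, v1 = power_iteration(R)
# Loadings a_i = sqrt(lam1) * v1_i; their squares sum to lam1 itself:
print(round(sum((lam1 ** 0.5 * x) ** 2 for x in v1), 6))
```

Here the average off-diagonal correlation is 0.5 and $\lambda_1$ comes out just above 2, i.e. $\lambda_1/n$ lands close to that mean correlation, previewing the next answer's derivation.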
31,200 | Why is the amount of variance explained by my 1st PC so close to the average pairwise correlation? | What I think happened here is that all variables were positively correlated with each other. In this case the 1st PC quite often turns out to be very close to the average of all the variables. If all variables are positively correlated with exactly the same correlation coefficient $c$, then the 1st PC is exactly proportional to the average of all the variables, as I explain here: Can averaging all the variables be seen as a crude form of PCA?
In this simple case one can actually mathematically derive the relationship you are asking about. Consider an $n\times n$ correlation matrix that looks like this: $$\left(\begin{array}{cccc}1&c&c&c\\c&1&c&c\\c&c&1&c\\c&c&c&1\end{array}\right).$$ Its first eigenvector is equal to $(1,1,1,1)^\top/\sqrt{n}$, which corresponds to the [scaled] average of all the variables. Its eigenvalue is $\lambda_1=1+(n-1)c$. The sum of all eigenvalues is of course given by the sum of all diagonal elements, i.e. $\sum \lambda_i=n$. So the proportion of explained variance by the first PC is equal to $$R^2=\frac{1}{n}+\frac{n-1}{n}c \approx c.$$
So in this most simple case the proportion of explained variance by the first PC is 100% correlated with the average correlation, and for large $n$ is approximately equal to it. Which is precisely what we see on your plot.
I expect that for large matrices, this result will approximately hold even if the correlations are not exactly identical.
Update. Using the figure posted in the question, one can even try to estimate the $n$ by noticing that $n=(1-c)/(R^2-c)$. If we take $c=0.5$ and $R^2-c=0.02$, then we get $n=25$. The OP said that the data was a "DAX stock index"; googling it, we see that it apparently consists of $30$ variables. Not a bad match.
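The derivation can be verified numerically with a short Python sketch (pure lists, no libraries): it checks that $(1,\dots,1)$ is indeed an eigenvector of the equicorrelation matrix with eigenvalue $1+(n-1)c$, then returns $\lambda_1/n$.

```python
def explained_by_pc1(n, c):
    """R^2 of the first PC for an n x n equicorrelation matrix (corr = c)."""
    R = [[1.0 if i == j else c for j in range(n)] for i in range(n)]
    v = [1.0] * n
    Rv = [sum(R[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam1 = 1 + (n - 1) * c
    # (1, ..., 1) is an eigenvector: each component of R v equals lam1:
    assert all(abs(x - lam1) < 1e-9 for x in Rv)
    return lam1 / n  # = 1/n + (n - 1) * c / n, approximately c for large n

# With n = 30 (the DAX has 30 constituents) and c = 0.5:
print(round(explained_by_pc1(30, 0.5), 4))  # 0.5167, just above c
```

For $n=30$ and $c=0.5$ the explained variance exceeds the mean correlation by only $0.0167$, matching the small gap visible in the question's plot.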