40,201
How can I use Propensity Scores to adjust for survey non-response bias?
The answer to this depends on whether you have a probability sample versus a nonprobability sample, where a probability sample refers to a sample selected using random sampling from the population.
If you have a probability sample
For probability samples, you know each sample member's sampling selection probability (i.e. the probability that they would be asked to take the survey), the inverse of which is the basic survey weight. If you know the weights for both responding and non-responding members of the sample, then you would typically apply a non-response adjustment to the weights for the responding members of the sample. A common approach is response propensity class adjustment, where you divide your sample into, say, three groups based on the estimated response propensities: low, middle, and high. For each group, you would adjust the weights by multiplying by the factor N_hat_full / N_hat_responding, where N_hat_full denotes the sum of sampling weights for the full sample in that group and N_hat_responding denotes the sum of sampling weights for the responding members of the sample in that group.
This R package vignette demonstrates how to do this in R:
'svrep' package vignette on nonresponse adjustments
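To make the class-adjustment arithmetic concrete, here is a small self-contained sketch (in Python rather than R; every base weight, response indicator, and propensity class below is made up purely for illustration):

```python
# Hypothetical sample: (base_weight, responded, propensity_class). In practice
# the classes come from an estimated response propensity model.
members = [
    (10.0, True,  "low"), (10.0, False, "low"), (12.0, True, "low"),
    (8.0,  True,  "mid"), (8.0,  True,  "mid"), (8.0,  False, "mid"),
    (5.0,  True,  "high"), (5.0, True, "high"), (5.0,  True, "high"),
]

# Within each class, multiply respondents' weights by the factor
# N_hat_full / N_hat_responding (both are sums of base weights).
factors = {}
for c in {c for _, _, c in members}:
    full = sum(w for w, _, cc in members if cc == c)
    resp = sum(w for w, r, cc in members if cc == c and r)
    factors[c] = full / resp

adjusted = [w * factors[c] for w, r, c in members if r]

# The adjusted respondent weights recover the full sample's weight total:
print(round(sum(adjusted), 6), sum(w for w, _, _ in members))   # 71.0 71.0
```

The point of the adjustment is visible in the final line: within each class, the responding members now "stand in" for the nonrespondents.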
The method you're referring to--where you use the inverse of the propensity scores as weights--is inverse propensity score (IPS) weighting, which is not typically used as a non-response adjustment method for probability surveys. Propensity class adjustment is often preferred because it produces less variation in the weights, which tends to reduce the variance of weighted estimates. Different methods for using propensity scores to adjust for nonresponse in probability surveys are explained and compared by Haziza and Lesage (2016).
If you have a non-probability sample
In the context of data from a non-probability sample, the inverse propensity score weighting (IPSW) approach is very commonly used to compensate for the fact that you don't know sampling probabilities since you didn't take a controlled sample of the population. Its use in this area was outlined and (I believe) first proposed by Lee and Valliant (2009); a good recent study discussing this approach for non-probability samples is a 2018 Pew Research report discussing different options for estimating the propensity scores.
In a nutshell, the process works as follows.
The data from a given sample of opt-in survey responses is combined with a dataset from a synthetic population (which can be created using data from a probability sample). The observations from your survey are stacked on top of the observations from the synthetic population.
A model is trained to predict, for each record in the stacked dataset, whether that record came from the opt-in sample or from the reference dataset.
For each respondent to the opt-in survey, the trained model is used to generate a probabilistic prediction for whether that respondent’s row in the stacked dataset
came from the synthetic population rather than the opt-in sample.
An inverse propensity weight is calculated for each respondent as w_i = p_i/(1−p_i), where p_i denotes the predicted probability that the respondent’s record was drawn from the reference dataset rather than the opt-in sample.
(Optional) Weights are rescaled to match the total population size
The weights are then used as if they were simple sampling weights: population totals are estimated as SUM(WEIGHT * X), and means are typically estimated as SUM(WEIGHT * X) / SUM(WEIGHT).
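The weighting and estimation steps above can be sketched in a few lines (the propensity predictions p_i, the population total, and the survey variable are all made up for illustration):

```python
# Hypothetical predicted probabilities p_i, from the stacked-classification
# model, that each opt-in respondent's record came from the reference dataset.
p = [0.8, 0.5, 0.9, 0.6]
w = [p_i / (1 - p_i) for p_i in p]     # inverse propensity weight w_i = p_i/(1 - p_i)

# (Optional) rescale the weights to match a known population total
population_total = 1000.0
scale = population_total / sum(w)
w_scaled = [w_i * scale for w_i in w]

# Use the weights like ordinary sampling weights
x = [3.0, 5.0, 4.0, 6.0]               # a made-up survey variable
total_est = sum(w_i * x_i for w_i, x_i in zip(w_scaled, x))   # SUM(WEIGHT * X)
mean_est = total_est / sum(w_scaled)                          # SUM(WEIGHT * X) / SUM(WEIGHT)
```

Note that the weighted mean is unaffected by the optional rescaling step, since the scale factor cancels in the ratio.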
References
Haziza and Lesage (2016). A Discussion of Weighting Procedures for Unit Nonresponse. Journal of Official Statistics, 32(1), 129-145. https://doi.org/10.1515/jos-2016-0006
Lee and Valliant (2009). Estimation for Volunteer Panel Web Surveys Using Propensity Score Adjustment and Calibration Adjustment. https://journals.sagepub.com/doi/abs/10.1177/0049124108329643
Mercer, Andrew, Arnold Lau, and Courtney Kennedy. 2018. “For Weighting Online Opt-in Samples, What Matters Most?” Pew Research Center. https://www.pewresearch.org/methods/2018/01/26/for-weighting-online-opt-in-samples-what-matters-most/.
Valliant, R., Dever, J., Kreuter, F. (2018). Practical Tools for Designing and Weighting Survey Samples, 2nd edition. New York: Springer. https://doi.org/10.1007/978-3-319-93632-1. Chapter 13 provides an excellent overview of nonresponse adjustment methods for probability surveys, and chapter 18 provides an overview of weighting methods for nonprobability surveys.
40,202
Is a time series which is a deterministic linear trend + white noise considered an ARIMA model?
No, $X_t$ is not considered an ARIMA model.
Walter Enders writes in his textbook (3rd edition, p. 191):
We have shown that differencing can sometimes be used to transform a nonstationary model into a stationary model with an ARMA representation. This does not mean that all nonstationary models can be transformed into well-behaved ARMA models by appropriate differencing. Consider, for example, a model that is the sum of a deterministic trend and a pure noise component
$y_t = y_0 + a_1 t + \epsilon_t$
The first difference of $y_t$ is not well-behaved because
$\Delta y_t = a_1 + \epsilon_t - \epsilon_{t-1}$
Here $\Delta y_t$ is not invertible in the sense that $\Delta y_t$ cannot be expressed in the form of an autoregressive process. Recall that invertibility of a stationary process requires that the MA component does not have a unit root.
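A quick numerical check of this (slope, intercept, and seed below are arbitrary): the differenced series behaves like an MA(1) with coefficient $-1$, whose lag-1 autocorrelation is $\theta/(1+\theta^2) = -1/2$.

```python
import random

random.seed(0)
n = 200_000
a1 = 0.5                                             # arbitrary deterministic slope
eps = [random.gauss(0, 1) for _ in range(n + 1)]     # white noise
y = [3.0 + a1 * t + eps[t] for t in range(n + 1)]    # y_t = y_0 + a_1 t + eps_t

dy = [y[t] - y[t - 1] for t in range(1, n + 1)]      # = a_1 + eps_t - eps_{t-1}
m = sum(dy) / len(dy)
d = [v - m for v in dy]
rho1 = sum(a * b for a, b in zip(d[1:], d[:-1])) / sum(v * v for v in d)
print(round(rho1, 2))   # near -1/2, the signature of a unit-root MA(1)
```

No finite AR representation can reproduce this, which is the non-invertibility being described.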
40,203
Is a time series which is a deterministic linear trend + white noise considered an ARIMA model?
I personally consider Rob Hyndman's forecast package for R the gold standard in ARIMA modeling and forecasting. And this package will quite happily deal with time series of the form of your differenced series and call them "ARIMA with non-zero mean".
> set.seed(4); forecast::auto.arima(rnorm(100,5,1))
Series: rnorm(100, 5, 1)
ARIMA(0,0,0) with non-zero mean
... snip ...
Similarly, a deterministic trend plus white noise is modeled as "ARIMA with drift":
> set.seed(4); forecast::auto.arima(1:100+rnorm(100,5,1))
Series: 1:100 + rnorm(100, 5, 1)
ARIMA(5,1,0) with drift
So yes, I would consider deterministic trends ARIMA processes.
In addition, Brockwell & Davis' Introduction to Time Series and Forecasting (3rd ed., 2016) also consider "ARMA(p,q) processes with mean" on p. 74. I couldn't find an explicit discussion of trends that upon differencing turn into such ARMA(p,q) processes with (nonzero) mean, but I would say this extension is obvious enough to be accepted.
And I agree that this is a question of convention.
40,204
F1 score, PR or ROC curve for regression
F1 score, PR or ROC curve are not specific to classification models only.
I have never seen the F1 score or ROC used to evaluate a numerical prediction. I am unfamiliar with "PR".
The definition of the F1 score crucially relies on precision and recall, or positive/negative predictive value, and I do not see how it can reasonably be generalized to a numerical forecast.
The ROC curve plots the true positive rate against the false positive rate as a threshold varies. Again, it relies on a notion of "true positive" and "false positive", and I don't see how these can be applied to numerical predictions.
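For concreteness, here is all the machinery that F1 and a single ROC point are built from. Every quantity starts from binary labels and a thresholded binary prediction, which a numerical forecast does not supply (the labels, scores, and threshold below are made up):

```python
# Made-up binary labels and classifier scores
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.4, 0.7, 0.2, 0.6, 0.1, 0.8, 0.3]
threshold = 0.5
y_pred = [1 if s >= threshold else 0 for s in scores]

tp = sum(p == 1 and t == 1 for p, t in zip(y_pred, y_true))  # true positives
fp = sum(p == 1 and t == 0 for p, t in zip(y_pred, y_true))  # false positives
fn = sum(p == 0 and t == 1 for p, t in zip(y_pred, y_true))  # false negatives
tn = sum(p == 0 and t == 0 for p, t in zip(y_pred, y_true))  # true negatives

precision = tp / (tp + fp)
recall = tp / (tp + fn)        # true positive rate (one ROC coordinate)
fpr = fp / (fp + tn)           # false positive rate (the other ROC coordinate)
f1 = 2 * precision * recall / (precision + recall)
```

Sweeping the threshold traces out the ROC curve; without a binary notion of "positive", none of the four counts is defined.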
All that is not to say that efforts have not been made to apply these concepts to numerical forecasts.
It would feel a lot like hammering square pegs into round holes to me, though. I would say that there is a reason why I (we?) haven't seen this a lot: it's unintuitive, and it does not provide the information that standard error measures like the MAE or the MSE do. Honestly, if I got a paper for review that used F1/ROC to evaluate numerical predictions, I would recommend that they throw these out and use more standard error measures.
My recommendation: ask the editor to communicate to the reviewer that you need more information on applying F1 and ROC in your case. Maybe the reviewer can provide a reference or two? You may want to provide a link to this CV thread as an indication that you did do your homework and asked statistical experts (cough), and that the experts were similarly bewildered.
The best possible outcome would be if your reviewer posted their thoughts here.
40,205
A theoretical question on fractional factorials
You can read off the additional assumptions from the alias structure. For your example a $\mathbf{2}^{15}$ design needs, as you said, $32768$ experimental runs, but with that you can estimate even the 15-factor full interaction. Such many-factor interactions are seldom interpretable, and if you restrict yourself to main effects and two-factor interactions, the total number of parameters to be estimated is $1+15+\binom{15}{2}=1+15+105=121$ parameters, which can be accommodated with a $\mathbf{2}^{15-8}$ design.
But there are many ways to choose such a design, so you need the concept of the resolution of a design; see Intuition to the Resolution of a fractional factorial design. You can find a resolution V $\mathbf{2}^{15-8}$ design; if that is too many observations and you can accept aliasing some two-factor interactions with each other, you can find a smaller resolution IV design. What you need to decide is which main effects and interactions you really need unaliased estimates for, and which you can ignore. It could also be wise to replicate the design twice (in two blocks) to admit a pure error variance estimate. See the linked post for references.
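The parameter count versus run count is easy to verify (a quick arithmetic check, not part of the design construction itself):

```python
from math import comb

k = 15
params = 1 + k + comb(k, 2)   # intercept + main effects + two-factor interactions
print(params)                 # 121
print(2 ** k)                 # 32768 runs in the full factorial
print(2 ** (k - 8))           # 128 runs in a 2^(15-8) fraction, enough for 121 parameters
```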
40,206
Is there a parametric equivalent to the chi-squared test?
Poisson regression.
Here is an example of a potential table you may be describing.
category
method 1 2 3 4
hand 101 210 590 99
machine 97 401 403 99
A Poisson regression with additive effects should yield the same expected cell counts as the chi-square procedure.
Here is how we would fit the model and make the expected cell counts
# build the contingency table directly (the original call was
# xtabs(~method + category, data = d), with d holding the raw data)
tabl = as.table(rbind(hand    = c(101, 210, 590,  99),
                      machine = c( 97, 401, 403,  99)))
dimnames(tabl) = list(method = c("hand", "machine"), category = 1:4)
model_data = as.data.frame(tabl)
model = glm(Freq~method + factor(category), data =model_data, family = poisson)
model_data$expec = predict(model, type = 'response')
And here is the Chi-square test
library(tidyverse)
model_data %>%
mutate(X = (Freq-expec)^2/expec) %>%
summarise(test_stat = sum(X))
>>>95.00335
This test has 3 degrees of freedom, and I don't need to look up the p value to tell you this is significant (since the test statistic is very far above the mean of a chi-squared distribution with 3 degrees of freedom, which is 3).
Here is the chi-square test itself. Note the test statistic
chisq.test(tabl)
Pearson's Chi-squared test
data: tabl
X-squared = 95.003, df = 3, p-value < 2.2e-16
So here, I used the predictions from the model to do the test. Another way to do this -- which I would count as a parametric test -- would be to do a deviance goodness of fit test for the Poisson model. The proof of why the deviance goodness of fit test is similar to the chi-square escapes me, but it is easy to show from directly computing it that the results are not too different.
The deviance goodness of fit test statistic is obtained via
model$deviance
>>>96.227
which is close enough. You can simulate some more examples to check that the deviance and the chi-square result in similar test stats.
EDIT:
Turns out the chi-square test is an approximation to the likelihood ratio test for these models, which is closely related to the deviance goodness of fit test. The approximation is made by taking a Taylor series expansion of some terms, which explains why the deviance GOF test statistic is larger than the chi-square.
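Both statistics can be cross-checked without R: for this table, the additive Poisson model's fitted counts coincide with the usual independence expected counts, so the Pearson statistic and the deviance can be computed directly (a pure-Python sketch of that check):

```python
import math

# Observed counts from the table above (rows: hand, machine)
obs = [[101, 210, 590, 99],
       [97, 401, 403, 99]]

row = [sum(r) for r in obs]
col = [sum(r[j] for r in obs) for j in range(4)]
n = sum(row)

# Expected counts under independence (same as the additive Poisson fit)
exp_ = [[row[i] * col[j] / n for j in range(4)] for i in range(2)]

# Pearson X^2 and deviance G^2 = 2 * sum O * log(O / E)
pearson = sum((obs[i][j] - exp_[i][j]) ** 2 / exp_[i][j]
              for i in range(2) for j in range(4))
deviance = 2 * sum(obs[i][j] * math.log(obs[i][j] / exp_[i][j])
                   for i in range(2) for j in range(4))
print(round(pearson, 3), round(deviance, 2))   # Pearson ~ 95.003, deviance ~ 96.2
```

The deviance is slightly larger than the Pearson statistic, consistent with the Taylor-expansion argument above.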
40,207
If $X_n \sim \text{Beta}(n, n)$ Show that $[X_n - \text{E}(X_n)]/\sqrt{\text{Var}(X_n)} \stackrel{D}{\longrightarrow} N(0,1)$
I was pondering how to formulate the simplest possible elementary solution to this problem and it occurred to me we can avoid any consideration of Beta functions (no Stirling's approximation needed; indeed, even information about the moments of Beta distributions is unnecessary). The result is extremely general and, I hope, interesting.
Here, for the record, is what I will show:
Let $f$ be a positive multiple of any probability density function that is
bounded, unimodal, and twice differentiable in a neighborhood of
its mode. Let the second derivative at the mode equal $-a$. Then any sequence
of random variables $X_n$ with densities proportional to
$$t\to f^n\left(\frac{t}{\sqrt{an}}\right)$$ converges in distribution to the
Standard Normal distribution.
Notation, assumptions, and preliminary simplifications
Permit me to use $n+1$ rather than $n$ as the index, so that $$f_n(t)\ \propto\ t^n(1-t)^n = (t(1-t))^n = f(t)^n$$ (for $0\le t\le 1$), thereby avoiding writing "$n-1$" too often. In the question $f(t) = t(1-t)$ for $0\le t \le 1$ (and otherwise equals zero). However, this formula is a distracting, irrelevant detail.
Here's all we need to assume about $f:$
There is a constant $c$ for which $cf$ is a probability density function. This means it is defined almost everywhere on all real numbers, integrable, with unit integral. Obviously $c^{-1}=\int f(t)\,\mathrm{d}t.$
$f$ is bounded and unimodal. That is, $f$ has a unique finite maximum value.
$f$ has a second derivative in a neighborhood of its mode.
These are clearly true of the $f$ in the question.
Letting $\mu$ be the mode, we may with no loss of generality analyze the function $t\to f(t-\mu),$ which has all the properties assumed of $f$ and whose mode is $0.$
Writing
$$f(t) = 1 - \frac{a}{2}\left(1 + g(t)\right)t^2,$$
the third assumption implies
$$\lim_{t\to 0} g(t) = 0$$
and there is some positive number $\epsilon$ for which whenever $|t|\le \epsilon,$ $g(t) \ge 0.$ Moreover, since $0$ is the unique mode, $a$ must be positive.
Without any loss of generality, replace $f$ by the function $t\to f(t)/f(0),$ making the largest value of $f$ exactly $1,$ attained at its mode $0.$
We are going to consider a sequence of probability density functions determined by powers of $f.$ First we need to normalize those powers, so let
$$c_n^{-1} = \int f^n (t)\,\mathrm{d}t.$$
This is always possible because
$$\int f^n(t)\,\mathrm{d}t \le \sup(f)\int f^{n-1}(t)\,\mathrm{d}t\ = \int f^{n-1}(t)\,\mathrm{d}t$$
shows recursively that the integrals of $f^n$ cannot increase and therefore are bounded.
A final preliminary manipulation is to standardize $f^n:$ we are going to analyze the sequence
$$f_n(t) = f\left(\frac{t}{\sqrt{an}}\right)^n.$$
The next few steps will show why this is effective at producing just the right cancellation of factors in the calculation. First, though, let's look at an example.
As $n$ grows, $f_n$ spreads out from its mode, pushing all "satellites" out and dampening them, leaving a graph that rapidly approaches a multiple of a Normal pdf.
Analysis
Let $t$ be any real number. Once $n$ exceeds $N(t)=t^2 / (a\epsilon^2),$ $|t|/\sqrt{an}\le \epsilon$ puts this value into the neighborhood where $f$ behaves nicely. From now on take $n\gt N(t).$
We are going to estimate the value of $f_n(t)$ by using logarithms. This is the crux of the matter and it is where all the algebra is done. Fortunately, it's easy:
$$\begin{aligned}
\log\left(f_n(t)\right) &= n \log f\left(\frac{t}{\sqrt{an}}\right) \\
&= n \log \left(1 - \frac{a}{2}\left(\frac{t}{\sqrt{an}}\right)^2\left(1 + g\left(\frac{t}{\sqrt{an}}\right) \right) \right) \\
&= n\log\left(1 - \frac{t^2}{2n}\left(1 + g\left(\frac{t}{\sqrt{an}}\right)\right)\right)
\end{aligned}$$
Because $g$ shrinks to $0$ for small arguments, a sufficiently large value of $n$ assures that the argument of the logarithm in that last expression is of the form $1-u$ for an arbitrarily small value of $u.$ This permits us to approximate the logarithm using Taylor's Theorem (with remainder), giving
$$\log\left(f_n(t)\right) = -\frac{t^2}{2}\left(1 + g\left(\frac{t}{\sqrt{an}}\right)\right) + \frac{R}{n}\, \tilde{t}^4 \left(1 + g\left(\frac{\tilde t}{\sqrt{an}}\right)\right)^2$$
where $0\le |\tilde{t}| \le |t|$ and $R$ is some number (related to the remainder term in the Taylor expansion). Taking the limit as $n\to\infty$ makes the remainder and all the $g()$ terms disappear, leaving
$$\lim_{n\to\infty} \log\left(f_n(t)\right) = -\frac{t^2}{2},$$
whence
$$\lim_{n\to\infty} f_n(t) = \exp\left(-\frac{t^2}{2}\right).$$
It follows (requiring only an intuitive, elementary proof) that the normalizing constants $\int f_n(t)\,\mathrm{d}t$ must approach the normalizing constant for the right hand side--which exists and, as is well known, equals $\sqrt{2\pi}.$ Consequently
$$\lim_{n\to\infty} \frac{f_n(t)}{\int f_n(u)\,\mathrm{d}u} = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{t^2}{2}\right),$$
which is the standard Normal density $\phi.$
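A quick numerical illustration of this pointwise limit for the Beta case (where $a = 8$ and the centered, rescaled kernel is $f(u) = 1 - 4u^2$), evaluating the standardized power at $t = 1$:

```python
import math

a = 8.0   # -f''(0) for the centered, rescaled Beta(n,n) kernel f(u) = 1 - 4u^2

def f_n(t, n):
    """The standardized power f(t / sqrt(a n))^n from the analysis above."""
    u = t / math.sqrt(a * n)
    return (1 - (a / 2) * u ** 2) ** n

for n in (10, 100, 10_000):
    print(n, round(f_n(1.0, n), 4))
print(round(math.exp(-0.5), 4))   # 0.6065, the limiting value exp(-t^2/2) at t = 1
```

Algebraically, $f_n(1) = (1 - 1/(2n))^n$, so the convergence to $e^{-1/2}$ is just the classical limit for the exponential.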
Conclusions
When $X_n$ is a sequence of random variables having densities proportional to $f_n,$ for every number $t$ the limit of their (normalized) densities is $\phi(t).$ It follows easily that the limit of their distribution functions is $\Phi,$ the standard Normal distribution.
In the case of the Beta$(n,n)$ distributions, $f(t)=t(1-t)$ has a unique mode at $\mu=1/2,$ where it can be expressed (up to a constant multiple) as
$$4f(t) = 1 - \frac{8}{2}(t-1/2)^2.$$
From this we can read off the value $a=8.$ Following our preliminary simplifications, this says the distribution of $\sqrt{an}(X_n - \mu) = \sqrt{8n}(X_n-\mu)$ converges to the standard Normal distribution. Because asymptotically the ratio of $\sqrt{8n}$ and $2\sqrt{2n+1}$ becomes unity, the statement in the original question is proven.
|
If $X_n \sim \text{Beta}(n, n)$ Show that $[X_n - \text{E}(X_n)]/\sqrt{\text{Var}(X_n)} \stackrel{D}
|
I was pondering how to formulate the simplest possible elementary solution to this problem and it occurred to me we can avoid any consideration of Beta functions (no Stirling's approximation needed; i
|
If $X_n \sim \text{Beta}(n, n)$ Show that $[X_n - \text{E}(X_n)]/\sqrt{\text{Var}(X_n)} \stackrel{D}{\longrightarrow} N(0,1)$
I was pondering how to formulate the simplest possible elementary solution to this problem and it occurred to me we can avoid any consideration of Beta functions (no Stirling's approximation needed; indeed, even information about the moments of Beta distributions is unnecessary). The result is extremely general and, I hope, interesting.
Here, for the record, is what I will show:
Let $f$ be a positive multiple of any probability density function that is
bounded, unimodal, and twice differentiable in a neighborhood of
its mode. Let the second derivative at the mode equal $-a$. Then any sequence
of random variables $X_n$ with density functions proportional to
$$t\to f^n\left(\frac{t}{\sqrt{an}}\right)$$ converges in distribution to the
Standard Normal distribution.
Notation, assumptions, and preliminary simplifications
Permit me to use $n+1$ rather than $n$ as the index, so that $$f_n(t)\ \propto\ t^n(1-t)^n = (t(1-t))^n = f(t)^n$$ (for $0\le t\le 1$), thereby avoiding writing "$n-1$" too often. In the question $f(t) = t(1-t)$ for $0\le t \le 1$ (and otherwise equals zero). However, this formula is a distracting, irrelevant detail.
Here's all we need to assume about $f:$
There is a constant $c$ for which $cf$ is a probability density function. This means it is defined almost everywhere on all real numbers, integrable, with unit integral. Obviously $c^{-1}=\int f(t)\,\mathrm{d}t.$
$f$ is bounded and unimodal. That is, $f$ has a unique finite maximum value.
$f$ has a second derivative in a neighborhood of its mode.
These are clearly true of the $f$ in the question.
Letting $\mu$ be the mode, we may with no loss of generality analyze the function $t\to f(t-\mu),$ which has all the properties assumed of $f$ and whose mode is $0.$
Writing
$$f(t) = 1 - \frac{a}{2}\left(1 + g(t)\right)t^2,$$
the third assumption implies
$$\lim_{t\to 0} g(t) = 0$$
and there is some positive number $\epsilon$ for which $1 + g(t) \ge 0$ whenever $|t|\le \epsilon.$ Moreover, since $0$ is the unique mode, $a$ must be positive.
Without any loss of generality, replace $f$ by the function $t\to f(t)/f(0),$ making the largest value of $f$ exactly $1,$ attained at its mode $0.$
We are going to consider a sequence of probability density functions determined by powers of $f.$ First we need to normalize those powers, so let
$$c_n^{-1} = \int f^n (t)\,\mathrm{d}t.$$
This is always possible because
$$\int f^n(t)\,\mathrm{d}t \le \sup(f)\int f^{n-1}(t)\,\mathrm{d}t\ = \int f^{n-1}(t)\,\mathrm{d}t$$
shows recursively that the integrals of $f^n$ cannot increase and therefore are bounded.
A final preliminary manipulation is to standardize $f^n:$ we are going to analyze the sequence
$$f_n(t) = f\left(\frac{t}{\sqrt{an}}\right)^n.$$
The next few steps will show why this is effective at producing just the right cancellation of factors in the calculation. First, though, let's look at an example.
As $n$ grows, the rescaled $f_n$ spreads out from its mode, pushing all "satellites" out and dampening them, leaving a graph that rapidly approaches a multiple of a Normal pdf. (In the accompanying figure, not reproduced here, the plot of $f$ in the upper left corner has not yet been rescaled to a height of $1$ at its mode; the next plot, of $f_1$, has been so scaled and is plotted on an $x$ axis expanded by a factor of $\sqrt{a}$ to show detail.)
Analysis
Let $t$ be any real number. Once $n$ exceeds $N(t)=t^2 / (a\epsilon^2),$ $|t|/\sqrt{an}\le \epsilon$ puts this value into the neighborhood where $f$ behaves nicely. From now on take $n\gt N(t).$
We are going to estimate the value of $f_n(t)$ by using logarithms. This is the crux of the matter and it is where all the algebra is done. Fortunately, it's easy:
$$\begin{aligned}
\log\left(f_n(t)\right) &= n \log f\left(\frac{t}{\sqrt{an}}\right) \\
&= n \log \left(1 - \frac{a}{2}\left(\frac{t}{\sqrt{an}}\right)^2\left(1 + g\left(\frac{t}{\sqrt{an}}\right) \right) \right) \\
&= n\log\left(1 - \frac{t^2}{2n}\left(1 + g\left(\frac{t}{\sqrt{an}}\right)\right)\right)
\end{aligned}$$
Because $g$ shrinks to $0$ for small arguments, a sufficiently large value of $n$ assures that the argument of the logarithm in that last expression is of the form $1-u$ for an arbitrarily small value of $u.$ This permits us to approximate the logarithm using Taylor's Theorem (with remainder), giving
$$\begin{aligned}
\log\left(f_n(t)\right) &= -\frac{t^2}{2}\left(1 + g\left(\frac{t}{\sqrt{an}}\right)\right) + \frac{R}{n}\, \tilde{t}^4 \left(1 + g\left(\frac{\tilde t}{\sqrt{an}}\right)\right)^2
\end{aligned}$$
where $0\le |\tilde{t}| \le |t|$ and $R$ is some number (related to the remainder term in the Taylor expansion). Taking the limit as $n\to\infty$ makes the remainder and all the $g()$ terms disappear, leaving
$$\lim_{n\to\infty} \log\left(f_n(t)\right) = -\frac{t^2}{2},$$
whence
$$\lim_{n\to\infty} f_n(t) = \exp\left(-\frac{t^2}{2}\right).$$
It follows (requiring only an intuitive, elementary proof) that the normalizing constants $\int f_n(s)\,\mathrm{d}s$ must approach the normalizing constant for the right hand side--which exists and, as is well known, equals $\sqrt{2\pi}.$ Consequently
$$\lim_{n\to\infty} \frac{f_n(t)}{\int f_n(s)\,\mathrm{d}s} = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{t^2}{2}\right),$$
which is the standard Normal density $\phi.$
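For the Beta example this limit can even be checked by direct computation: with $f(t)=t(1-t)$, normalizing the mode to $1$ gives $1-4u^2$, and with $a=8$ (as worked out in the Conclusions below) the rescaled kernel is exactly $(1 - t^2/(2n))^n$. A short Python sketch, standard library only; the particular values of $t$ and $n$ are arbitrary illustrations:

```python
import math

t = 1.3  # any fixed real number

# For f(t) = t(1-t), normalizing the mode to 1 gives 1 - 4u^2, and with
# a = 8 the rescaled kernel f_n(t) = f(t/sqrt(8n))^n equals (1 - t^2/(2n))^n.
target = math.exp(-t**2 / 2)
vals = [(1 - t**2 / (2 * n)) ** n for n in (10, 100, 10_000)]

errors = [abs(v - target) for v in vals]  # should shrink toward 0
```

The shrinking errors illustrate the pointwise convergence established above.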
Conclusions
When $X_n$ is a sequence of random variables having densities proportional to $f_n,$ for every number $t$ the limit of their (normalized) densities is $\phi(t).$ It follows easily (for example, by Scheffé's lemma) that the limit of their distribution functions is $\Phi,$ the standard Normal distribution.
In the case of the Beta$(n,n)$ distributions, $f(t)=t(1-t)$ has a unique mode at $\mu=1/2,$ where it can be expressed (up to a constant multiple) as
$$4f(t) = 1 - \frac{8}{2}(t-1/2)^2.$$
From this we can read off the value $a=8.$ Following our preliminary simplifications, this says the distribution of $\sqrt{an}(X_n - \mu) = \sqrt{8n}(X_n-\mu)$ converges to the standard Normal distribution. Because asymptotically the ratio of $\sqrt{8n}$ and $2\sqrt{2n+1}$ becomes unity, the statement in the original question is proven.
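As an empirical check of the full statement, one can draw standardized Beta$(n,n)$ samples and compare their moments and tail coverage with the standard Normal. A hedged Python sketch, standard library only; the sample size and tolerances are arbitrary choices:

```python
import math
import random
import statistics

random.seed(0)
n = 50                                   # Beta(n, n) with a moderately large n
mu = 0.5                                 # E(X_n)
sigma = 1 / (2 * math.sqrt(2 * n + 1))   # sqrt(Var(X_n)) = 1/(2*sqrt(2n+1))

z = [(random.betavariate(n, n) - mu) / sigma for _ in range(20_000)]

mean_z = statistics.fmean(z)                    # should be near 0
var_z = statistics.pvariance(z)                 # should be near 1
cover = sum(abs(v) < 1.96 for v in z) / len(z)  # should be near 0.95
```

The sample mean, variance, and central coverage all match the standard Normal to within simulation error.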
Expressing a marginal probability using copulas
The issue of notation seems crucial. I propose, therefore, to disambiguate the ubiquitous and overloaded "$f$" by means of subscripts. Thus, $f_{XYZ}$ will be the full density function and (therefore) the marginal density for $(X,Y)$ is
$$f_{XY}(x,y) = \int_{-\infty}^{\infty} f_{XYZ}(x,y,z)\,\mathrm{d}z.$$
If, for a sufficiently smooth version of $f_{XYZ}$ and real numbers $(x,y,z)$ you define a function $c$ on $[0,1]^3$ as
$$c\left(F_X(x),F_Y(y),F_Z(z)\right) = \left\{\begin{aligned}\frac{f_{XYZ}(x,y,z)}{f_X(x)f_Y(y)f_Z(z)} & & \text{if } f_X(x)f_Y(y)f_Z(z)\ne 0 \\ 0 && \text{otherwise,}\end{aligned}\right.$$
then indeed you may substitute this into the first expression for $f_{XY}$ to obtain
$$f_{XY}(x,y) = \int_{-\infty}^{\infty} f_X(x)f_Y(y)f_Z(z) c(F_X(x),F_Y(y),F_Z(z))\,\mathrm{d}z$$
and, because $\mathrm{d}F_Z(z) = f_Z(z)\,\mathrm{d}z$ by definition, substituting that into the foregoing does give
$$f_{XY}(x,y) = \int_{-\infty}^{\infty} f_X(x)f_Y(y)c(F_X(x),F_Y(y),F_Z(z))\,\mathrm{d}F_Z(z).$$
Concerning the calculation of such integrals, it comes down to what information you have and what form it's in; this is an unanswerable question in such generality.
Note that this $c$ is not the copula for $f_{XYZ}.$ The copula $C$ is given by
$$\begin{aligned}
C(F_X(x),F_Y(y),F_Z(z)) &= \Pr(X\le x,\,Y\le y,\,Z \le z) \\
&= F_{XYZ}(x,y,z) \\
&= \int_{-\infty}^x\int_{-\infty}^y\int_{-\infty}^z f_{XYZ}(x',y',z')\,\mathrm{d}z'\,\mathrm{d}y'\,\mathrm{d}x'.
\end{aligned}$$
Using a standard notation in the literature on copulas, define
$$DC(u,v,w) = \frac{\partial^3C(u,v,w)}{\partial u\partial v \partial w}$$
for $(u,v,w)\in[0,1]^3.$ Applying the Chain Rule (three times) we may relate that to the foregoing via
$$\begin{aligned}
f_{XYZ}(x,y,z) &= \frac{\partial^3C(F_X(x),F_Y(y),F_Z(z))}{\partial x\partial y \partial z} \\
&= DC(F_X(x),F_Y(y),F_Z(z))f_X(x)f_Y(y)f_Z(z),
\end{aligned}$$
revealing $c$ as
$$c(u,v,w) = (DC)(u,v,w).$$
A simple example to contrast $c$ and $C$ is the case of independence of the variables $(X,Y,Z),$ for which $C(u,v,w)=uvw$ (the "independence copula") and $c(u,v,w)=DC(u,v,w)=1.$
Finally, to address the question in the title, a simple expression for the marginal probability in terms of the copula is
$$F_{XY}(x,y) = \Pr(X\le x,\,Y\le y) = \lim_{z\to\infty}\Pr(X\le x,Y\le y,Z\le z) = C(F_X(x),F_Y(y),1).$$
Differentiate this with respect to $(x,y)$ to obtain the marginal density $f_{XY}.$
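To make the marginalization identity concrete, here is a minimal Python check (standard library only) using the independence copula $C(u,v,w)=uvw$; the standard Normal margins and the evaluation point are just illustrative assumptions:

```python
import math

def Phi(x):
    """Standard Normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def C(u, v, w):
    """Independence copula C(u, v, w) = u*v*w."""
    return u * v * w

x, y = 0.3, -0.7
joint = Phi(x) * Phi(y)            # F_XY(x, y) for independent Normal X, Y
via_copula = C(Phi(x), Phi(y), 1)  # C(F_X(x), F_Y(y), 1): Z marginalized out
```

Setting the third argument to $1$ marginalizes out $Z$, exactly as in the displayed formula.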
Simple example of the $\sigma$-field generated by a random variable (Concept check)
You're right, but you might appreciate knowing how to find this sigma field using the definition:
The sigma-field generated by a random variable $X:\Omega\to\mathbb{R}$ consists of all the inverse images $X^{-1}(B)$ of the Borel sets $B\subset \mathbb{R}.$
Because $y$ has only two possible values $b_1$ and $b_2,$ there are exactly four kinds of Borel sets $B$ relevant to $y:$
$b_1\in B$ and $b_2\in B.$ In this case, $y^{-1}(B) = \{\omega\in\Omega\mid y(\omega)\in B\}= \Omega.$
$b_1\in B$ but $b_2\notin B.$ Now $y^{-1}(B) = \{\omega\in\Omega\mid y(\omega)\in B\}=\{\omega_1\}.$
$b_1\notin B$ yet $b_2\in B.$ Now $y^{-1}(B) = \{\omega\in\Omega\mid y(\omega)\in B\}=\{\omega_2,\omega_3\}.$
$b_1\notin B$ and $b_2\notin B.$ Clearly $y^{-1}(B) = \emptyset.$
That's it--we have listed precisely the elements you gave for $\mathfrak F.$
(Implicitly, we have used the facts that the Borel sets form a sigma field; every real number is an element of some Borel set; and any two distinct real numbers can be separated by a Borel set, in the sense that one of them is inside the set and the other is outside it.)
Some things to observe and remember:
You don't have to demonstrate the properties $(1)-(4)$ (in your question) of a sigma field. Because the Borel sets of $\mathbb R$ form a sigma field, necessarily the collection of their inverse images under $y$ forms a sigma field. That's proven using basic set theory, and you only have to prove it once, not every time you deal with a random variable.
The sigma field for $y$ is generated by the inverse images of any pi-system that generates the Borel sets of $\mathbb R.$ A standard pi system consists of the sets of the form $(-\infty, a]$ that are used to define distribution functions. Although this observation wouldn't have simplified this exercise, it greatly simplifies the considerations involving more complicated random variables.
Sigma fields are logically prior to probabilities: you can't define a probability until you have a sigma field. Think of it this way: the sigma field is a declaration (by you, the modeler) of what events you may assign probabilities to. You can't make those assignments until you know what these events are! (The need for this comes to the fore in complex situations where there are infinitely many random variables to analyze: that is, for stochastic processes on infinite index sets.)
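The four cases above can also be enumerated mechanically. A small Python sketch (the sample-point and value names mirror the example): since only the membership pattern of $b_1$ and $b_2$ in a Borel set $B$ matters, ranging over all subsets of $\{b_1, b_2\}$ and taking preimages reproduces $\mathfrak F$:

```python
from itertools import chain, combinations

omega = {"w1", "w2", "w3"}
y = {"w1": "b1", "w2": "b2", "w3": "b2"}  # y(w1)=b1, y(w2)=y(w3)=b2

def subsets(s):
    s = sorted(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# Only which of b1, b2 lie in a Borel set B matters, so the preimages of
# all Borel sets coincide with the preimages of the subsets of {b1, b2}.
sigma_field = {frozenset(w for w in omega if y[w] in set(B))
               for B in subsets({"b1", "b2"})}
```

The result is $\{\emptyset, \{\omega_1\}, \{\omega_2,\omega_3\}, \Omega\}$, matching the listing above.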
Difference between Repeated measures ANOVA, ANCOVA and Linear mixed effects model
First there is the question of whether it is OK to use percent change as the outcome. In a regression model with baseline as a regressor this is a very bad idea, because the outcome is mathematically coupled to the regressor, which will induce correlation (i.e. statistically significant associations) where none is actually present (or mask actual change). This is easy to show with a simulation:
We simulate 2 groups of 100 each where, in the first instance, there is no change from baseline in either group:
set.seed(15)
N <- 200
x1 <- rnorm(N, 50, 10)
trt <- c(rep(0, N/2), rep(1, N/2)) # allocate to 2 groups
x2 <- rnorm(N, 50, 10) # no change from baseline
So we expect to find nothing of any interest:
summary(lm(x2 ~ x1 * trt))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 45.75024 5.37505 8.512 4.43e-15 ***
x1 0.06776 0.10342 0.655 0.513
trt 3.25135 7.12887 0.456 0.649
x1:trt -0.01689 0.13942 -0.121 0.904
as expected. But now we create a percent change variable and use it as the outcome:
pct.change <- 100*(x2 - x1)/x1
summary(lm(pct.change ~ x1 * trt))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 97.5339 12.7814 7.631 9.93e-13 ***
x1 -1.9096 0.2459 -7.765 4.44e-13 ***
trt 45.1394 16.9519 2.663 0.00839 **
x1:trt -0.7662 0.3315 -2.311 0.02188 *
Everything is significant! So we would interpret this as: the expected percent change in weight for a subject in the control group with zero baseline weight is 97; the expected change in the percent change in weight for a subject in the control group for each additional unit of baseline weight is -1.91; the expected difference in the percent change in weight between the control and treatment group for a subject with zero baseline weight is 45; and the expected difference in the percent change in weight between the treatment and control groups for each additional unit of baseline weight is -0.77. All completely spurious! Note also that with a "percent change" variable we have to use language like "expected change in the percent change", which does not help with understanding.
Now let's introduce an actual treatment effect of 10,
x3 <- x1 + rnorm(N, 0, 1) + trt*10
summary(lm(x3 ~ x1 * trt))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.95933 0.54404 -1.763 0.0794 .
x1 1.01921 0.01047 97.365 <2e-16 ***
trt 10.78643 0.72156 14.949 <2e-16 ***
x1:trt -0.01126 0.01411 -0.798 0.4260
...all good.
Now again, we create a percent change variable and use it as the outcome:
pct.change.trt <- 100*(x3 - x1)/x1
summary(lm(pct.change.trt ~ x1 * trt))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.77928 1.23337 -1.443 0.151
x1 0.03439 0.02373 1.449 0.149
trt 49.11734 1.63580 30.027 <2e-16 ***
x1:trt -0.54947 0.03199 -17.175 <2e-16 ***
...more spurious results.
As to the specific models:
Repeated measures ANOVA (with "Weight" as outcome, ["Group", "Time"] as within-factors and adjusting for "subject").
This is one option that could work.
ANCOVA (with "Percent reduction in weight" as outcome, "Group" as between-factor and "baseline weight" as a covariate)
Besides the mathematical coupling problem, this would not control for the repeated measures.
Linear mixed effects method with "Weight" as outcome, [group, time, group*time] as fixed effects and [subject] as random effect. Again, can we use "Percent reduction in weight" here?
This would be my preferred option, but again, not with percent reduction. This should be equivalent to repeated measures ANOVA. For example with your data:
lmer(wt ~ group*time + age + gender + (1 | Subject), data=mydata)
lme(wt ~ group*time + age + gender, random= ~ 1 | Subject, data=mydata)
You may want to add random slopes by placing one or more of the fixed effects that vary within subjects (only time in this case) to the left of the |, if justified by theory and the study design, and supported by the data. Personally I always start from a model with only random intercepts.
Linear model with interaction: "Percent reduction in weight" ~ "Group" * "Baseline weight"
This should be avoided due to the mathematical coupling problem. Even if baseline were removed as a regressor, this would then just be an ANOVA model, and while repeated measures are handled by the percent-change variable, the residuals may not be close to normal, so inference may be affected.
Analyzing a partially crossed design
As far as I can tell you are describing a partially crossed design. The good news is that this is one of Doug Bates's main development goals for lme4: efficiently fitting large, partially crossed linear mixed models. Disclaimer: I don't know that much about Rasch models nor how close a partially nested model like this gets to it: from a brief glance at this paper, it seems that it's pretty close.
Some general data checking and exploration:
dat <- read.csv('https://raw.githubusercontent.com/ilzl/i/master/d.csv')
plot(tt_item <- table(dat$item_id))
plot(tt_person <- table(dat$person_id))
table(tt_person)
tt <- with(dat,table(item_id,person_id))
table(tt)
Confirming that (1) items have highly variable counts; (2) persons have 21-32 counts; (3) person:item combinations are never repeated.
Examining the crossing structure:
library(lme4)
## run lmer without fitting (optimizer=NULL)
form <- y ~ item_type + (1| item_id) + (1 | person_id)
f0 <- lmer(form,
data = dat,
control=lmerControl(optimizer=NULL))
View the random effects model matrix:
image(getME(f0,"Zt"))
The lower diagonal line represents the indicator variable for persons: the upper stuff is for items. The fairly uniform fill confirms that there's no particular pattern to the combination of items with persons.
Re-do the model, this time actually fitting:
system.time(f1 <- update(f0, control=lmerControl(), verbose=TRUE))
This takes about 140 seconds on my (medium-powered) laptop. Check diagnostic plots:
plot(f1,pch=".", type=c("p","smooth"), col.line="red")
And the scale-location plot:
plot(f1,sqrt(abs(resid(.)))~fitted(.),
pch=".", type=c("p","smooth"), col.line="red")
So there do appear to be some problems with nonlinearity and heteroscedasticity here.
If you want to fit the (0,1) values in a more appropriate way (and maybe deal with the nonlinearity and heteroscedasticity problems), you can try a mixed Beta regression:
library(glmmTMB)
system.time(f2 <- glmmTMB(form,
data = dat,
family=beta_family()))
This is slower (~1000 seconds).
Diagnostics (I'm jumping through a few hoops here to deal with some slowness in glmmTMB's residuals() function.)
system.time(f2_fitted <- predict(f2, type="response", se.fit=FALSE))
v <- family(f2)$variance
resid <- (dat$y-f2_fitted)/sqrt(v(f2_fitted)) ## Pearson residuals (observed minus fitted)
f2_diag <- data.frame(fitted=f2_fitted, resid)
g1 <- mgcv::gam(resid ~ s(fitted, bs ="cs"), data=f2_diag)
xvec <- seq(0,1, length.out=201)
plot(resid~fitted, pch=".", data=f2_diag)
lines(xvec, predict(g1,newdata=data.frame(fitted=xvec)), col=2,lwd=2)
Scale-location plot:
g2 <- mgcv::gam(sqrt(abs(resid)) ~ s(fitted, bs ="cs"), data=f2_diag)
plot(sqrt(abs(resid))~fitted, pch=".", data=f2_diag)
lines(xvec, predict(g2,newdata=data.frame(fitted=xvec)), col=2,lwd=2)
A few more questions/comments:
the ranef() method will retrieve the random effects, which represent the relative difficulties of items (and the relative skill of persons)
you might want to worry about the remaining nonlinearity and heteroscedasticity, but I don't immediately see easy options (suggestions from commenters welcome)
adding other covariates (e.g. gender) might help the patterns or change the results ...
this is not the 'maximal' model (see Barr et al. 2013): i.e., since each individual gets multiple item types, you probably want a term of the form (item_type|person_id) in the model - however, beware that these fits will take even longer ...
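For reference, the hand-rolled Pearson residuals above rely on the Beta-family variance function; assuming the usual mean/precision parameterization $\operatorname{Var}(y)=\mu(1-\mu)/(1+\phi)$, the computation looks like this in Python (the fitted values, observations, and $\phi$ below are made-up illustrations, not output from the model):

```python
import math

phi = 5.0                 # assumed Beta precision parameter (illustrative)
fitted = [0.2, 0.5, 0.8]  # hypothetical fitted means
y = [0.25, 0.40, 0.90]    # hypothetical observed proportions

def beta_variance(mu, phi):
    """Variance of a Beta response under the mean/precision parameterization."""
    return mu * (1 - mu) / (1 + phi)

pearson = [(obs - mu) / math.sqrt(beta_variance(mu, phi))
           for obs, mu in zip(y, fitted)]
```

This is the same observed-minus-fitted-over-standard-deviation construction as in the R code, just written out explicitly.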
|
40,212
|
Is there a difference in interpretation between $Y|X = m(X) + \epsilon$ vs. $Y = m(X) + \epsilon$?
|
There is actually no such object as $Y|X$ --- whenever this notation appears, it is an abuse of notation which operates as shorthand for specifying the conditional distribution of a random variable conditional on another random variable.$^\dagger$ Thus, the statement $Y|X = m(X) + \epsilon$ actually doesn't make any sense; the conditionality notation $|X$ is used only in the context of stipulating the distribution of a random variable, not its functional relationship to other random variables. In a regression model, you would always say $Y = m(X) + \epsilon$, not $Y|X = m(X) + \epsilon$.
When you are referring to the distribution of either $Y$ or $\epsilon$, you could refer either to the marginal distribution or the conditional given $X$. In the context of regression analysis, analysis is done conditional on the explanatory variable $X$, and so it would be usual to refer to the distribution conditional on this. The notation $Y|X$ operates as shorthand for specifying a conditional distribution. For example, the statement:
$$Y|X \sim \text{N}(m(X), \sigma_\epsilon^2),$$
is actually shorthand for the conditional distribution:
$$p(Y=y|X=x) = \text{N}(y|m(x),\sigma_\epsilon^2).$$
$^\dagger$ Strictly speaking, it is possible to create a new object $Y|X$ which is a mapping from the range $\mathscr{X}$ to a set of random variables on different probability spaces, each conditional on the stipulated value of $X$. In most cases we do not want to bother with this, since the notation is just used as a shorthand for stipulating a conditional distribution.
|
40,213
|
Is there a difference in interpretation between $Y|X = m(X) + \epsilon$ vs. $Y = m(X) + \epsilon$?
|
In my opinion, the use of the equality makes dependence on $X$ explicit. The only place I have seen notation like $y \vert X$ is when the distribution of $y$ is being discussed. You see this frequently in Bayesian models like
$$ y\vert \mu , \sigma \sim \mathcal{N}(\mu, \sigma) $$
This notation tells me that there is a prior for $\sigma$ and that the distribution of $y$ depends on the value of $\sigma$ drawn in the data generating process.
In any case, I don't see much difference between $y = $ and $y\vert X = $.
|
40,214
|
Why `arima()` and `Arima()` give different AIC and sigma2 while giving the same coefficients and same likelihood?
|
stats::arima() estimates $\sigma^2$ using the MLE of the innovations variance, while forecast::Arima() uses the unbiased estimate $\sum e_i^2/(n-k)$ where $n$ is the number of observations available and $k$ is the number of parameters estimated.
stats::arima() does not count $\sigma^2$ as a parameter in the computation of the AIC, whereas forecast::Arima() does count it. Burnham and Anderson (Springer, 2002) recommend including $\sigma^2$ as per Arima().
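A sketch of the resulting bookkeeping (with $\ell$ the maximized log-likelihood and $k$ the number of parameters other than $\sigma^2$; the exact count depends on the model specification, so take this as illustrative rather than definitive):
$$\text{AIC}_{\texttt{arima}} = -2\ell + 2k, \qquad \text{AIC}_{\texttt{Arima}} = -2\ell + 2(k+1),$$
so for the same fitted model the two criteria differ by a constant 2, while the variance estimates differ by the factor $n/(n-k)$:
$$\hat\sigma^2_{\texttt{arima}} \approx \frac{1}{n}\sum_i e_i^2, \qquad \hat\sigma^2_{\texttt{Arima}} = \frac{1}{n-k}\sum_i e_i^2.$$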
|
40,215
|
How to integrate the marginal likelihood numerically?
|
The marginal log-likelihood in mixed models is typically written as:
$$\ell(\theta) = \sum_{i = 1}^n \log \int p(y_i \mid b_i) \, p(b_i) \, db_i.$$
In specific settings, e.g., in the linear mixed model, where both terms in the integrand are normal densities, this integral has a closed-form solution. But in general you need to approximate it using Monte Carlo, Gaussian quadrature, or a similar technique. In this case we get the form:
$$\ell(\theta) \approx \sum_{i = 1}^n \log \sum_{q = 1}^Q \varpi_q \{p(y_i \mid b_{iq}) \, p(b_{iq})\},$$
where in the case of Gaussian quadrature $\varpi_q$ are the weights.
Now, going specifically to your question, indeed it is better to work in the log-scale when doing computations, however you still need to calculate the exponent, i.e.,
$$\ell(\theta) \approx \sum_{i = 1}^n \log \sum_{q = 1}^Q \exp \{\log(\varpi_q) + \log p(y_i \mid b_{iq}) + \log p(b_{iq})\}.$$
There are some ways to more accurately calculate the logarithm of a sum of exponentials. If you happen to work in R, have a look at the logSumExp() function in the matrixStats package.
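The trick such functions use is easy to sketch in a few lines. Here is a minimal version (in Python for illustration; matrixStats::logSumExp applies the same max-shift idea):

```python
import math

def log_sum_exp(xs):
    """log(sum(exp(x) for x in xs)), computed without overflow.

    Shifting by the maximum keeps every exponentiated term <= 1,
    so exp() never overflows, even for very large inputs.
    """
    m = max(xs)
    if m == -math.inf:  # every term is zero on the original scale
        return -math.inf
    return m + math.log(sum(math.exp(x - m) for x in xs))

# Naively, exp(1000) overflows a double, but the shifted version
# recovers log(e^1000 + e^1001) = 1001 + log(1 + e^-1).
print(log_sum_exp([1000.0, 1001.0]))
```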
|
40,216
|
How to integrate the marginal likelihood numerically?
|
The log-sum-exp trick is a way to calculate sums over finite sets, operating in the log domain to avoid overflow. I'll show how to generalize this trick to integrals, giving a way to rewrite the log of your marginal likelihood.
The log marginal likelihood is:
$$\log \ell_m(\theta) =
\log \int \exp \big( \ell(\theta, b) \big) dF(b)$$
Let $\ell^*(\theta)$ be the maximum value the log joint likelihood can take, given $\theta$:
$$\ell^*(\theta) = \max_b \ \ell(\theta, b)$$
Use this to rewrite the log marginal likelihood:
$$\log \ell_m(\theta) =
\log \int \exp \Big( \ell(\theta, b) - \ell^*(\theta) + \ell^*(\theta) \Big) dF(b)$$
After a little algebra, we have:
$$\log \ell_m(\theta) =
\ell^*(\theta)
+ \log \int \exp \Big( \ell(\theta, b) - \ell^*(\theta) \Big) dF(b)$$
The above integral can be computed without overflow, since the values we need to exponentiate are smaller. Note that a precise solution for $\ell^*(\theta)$ isn't particularly necessary; all we really need is a value big enough to avoid overflow.
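To make this concrete, here is a small numerical sketch (in Python for illustration; the midpoint-quantile quadrature and the example $\ell(\theta, b) = 800 + 2b$ with $b \sim \text{N}(0,1)$ are my own assumptions, chosen so that the exact answer $800 + \theta^2/2 = 802$ is known and a naive computation would overflow):

```python
import math
from statistics import NormalDist

def log_marginal(log_lik, n=100_000):
    """Approximate log( integral of exp(log_lik(b)) dF(b) ) for b ~ N(0, 1),
    using midpoint quantiles and the max-shift trick described above."""
    q = NormalDist().inv_cdf
    # Evaluate the log integrand at n quantile midpoints.
    vals = [log_lik(q((2 * i - 1) / (2 * n))) for i in range(1, n + 1)]
    m = max(vals)  # plays the role of l*(theta); any value near the max works
    return m + math.log(sum(math.exp(v - m) for v in vals) / n)

# Example: log_lik(b) = 800 + 2*b. Naively, exp(800) already overflows a
# double; the shifted computation is fine. Exact answer: 800 + 2**2/2 = 802.
print(log_marginal(lambda b: 800.0 + 2.0 * b))
```

The result is close to 802 (the remaining discrepancy is mostly tail truncation from the finite quantile grid, not overflow).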
|
40,217
|
How to integrate the marginal likelihood numerically?
|
Accurate integration of this kind of function will require you to work in log-space, which means that you will use the $\text{logsumexp}$ function for sums of terms. You can then use any standard method of numerical integration (e.g., quadrature, importance sampling, etc.)
To give you a more specific idea of how to implement this technique, I will construct a simple method using an arbitrary weighting on the terms for the numerical integration, and working in log-space. For simplicity, I will assume you have a continuous distribution where you have a known quantile function $Q$ that you can compute. On this basis, we choose some odd value $n$ and split the unit interval into $n$ equal sub-intervals, and look at the midpoints of these sub-intervals. We consider the points $b_1,...,b_n$ (which are the values at the quantiles equal to those sub-interval midpoints) given by:
$$b_i \equiv Q \bigg( \frac{2i-1}{2n} \bigg).$$
Suppose we take some corresponding positive weights $w_1,...,w_n$ with mean $\bar{w} \equiv \sum w_i / n$, and we define the coefficients $\ell_{i*} \equiv \ell(\theta,b_i) + \log w_i$. The integral can be approximated (using these weights) as:
$$\begin{equation} \begin{aligned}
\log \ell_m(\theta)
&= \log \int \exp \left( \ell(\theta,b) \right) \ dF(b) \\[6pt]
&= \log \int \limits_0^1 \exp \left( \ell(\theta,Q(p)) \right) \ dp \\[6pt]
&\approx \log \Bigg( \sum_{i=1}^n \frac{w_i}{n \bar{w}} \cdot \exp \left( \ell(\theta,b_i) \right) \Bigg) \\[6pt]
&= -\log(n \bar{w}) + \log \Bigg( \sum_{i=1}^n \exp \left( \ell(\theta,b_i) + \log w_i \right) \Bigg) \\[6pt]
&= -\log(n \bar{w}) + \log \Bigg( \sum_{i=1}^n \exp \left( \ell_{i*} \right) \Bigg) \\[6pt]
&= -\log(n \bar{w}) + \text{logsumexp} \left( \ell_{1*}, ..., \ell_{n*} \right). \\[6pt]
\end{aligned} \end{equation}$$
Now, we will generally choose the weights $w_1,...,w_n$ to correspond to some appropriate quadrature rules. For example, we could implement the numerical integration using Simpson's rule by taking the weights to be $w_1,...,w_n = 1, 4, 2, 4, ..., 4, 2, 4, 1$.
We can program this method in R as follows. (Note that this is not a particularly efficient method. I am including it for purposes of exposition rather than recommending it as a method. There are much better numerical integration algorithms already programmed into R.) Here we program a function INTEGRATE that takes your function l and your quantile function Q and computes the integral using Simpson's rule with n terms. The option log.out specifies whether you want the logarithm of the integral as the output (or the actual integral).
INTEGRATE <- function(l, Q, n = 100001, log.out = TRUE) {
#Check the input n
if (length(n) != 1) { stop('Error: n should be a single value') }
if (n < 1) { stop('Error: n should be a positive integer') }
n <- as.integer(n);
if (n%%2 == 0) { n <- n+1; }
#Set Simpson weights
WW <- rep(0, n);
for (i in 1:n) { WW[i] <- ifelse(i%%2 == 0, 4, 2); }
WW[1] <- 1;
WW[n] <- 1;
#Set numerical terms
BB <- rep(0, n);
TT <- rep(-Inf, n);
for (i in 1:n) {
PROB <- (2*i-1)/(2*n);
BB[i] <- Q(PROB);
TT[i] <- l(BB[i]) + log(WW[i]); }
#Perform numerical integration
LOG_INT <- - log(sum(WW)) + matrixStats::logSumExp(TT);
#Give output
if (log.out) { LOG_INT } else { exp(LOG_INT) } }
We will implement this with an example where the true value of the integral is known. Taking $\ell(\theta, b) = \theta \cdot b$ means that the integral is just the moment generating function of the random variable $B$, evaluated at $\theta$. We can compute this integral numerically for the standard normal distribution and compare the result to the known value of the MGF. Taking $\theta = 2$ and using the standard normal distribution, the log-MGF value should be $\theta^2/2 = 2$. Our computation below shows that we get a numerical integral that is somewhat near this value by taking a sufficiently large value for $n$.
#Set an example function
theta <- 2;
l <- function(z) { theta*z; }
#Set the quantile function (standard normal)
Q <- function(p) { qnorm(p, 0, 1); }
#Compute the integral numerically
LOG_INT <- INTEGRATE(l, Q, n = 10^7, log.out = TRUE);
LOG_INT;
[1] 1.999567
We can see that our numerical integration in this case comes close to the true value of the integral. Note that this is not a particularly efficient numerical integration algorithm, and there are better numerical integration algorithms in R. Nevertheless, hopefully this illustrates a broad class of methods for integrating this kind of function in log-space.
|
40,218
|
Probability mass function of product of two binomial variables
|
There are various ways you could write the mass function of this distribution. All of them will be messy, since they involve checking the possible products that give a stipulated value for the product variable. Here is the most obvious way to write the distribution.
Let $X, Y \sim \text{IID Bin}(n, p)$ and let $Z=XY$ be their product. For any integer $0 \leqslant z \leqslant n^2$ we define the set of pairs of values:
$$\mathcal{S}(z) \equiv \{ (x,y) \in \mathbb{N}_{0+}^2 \mid \max(x,y) \leqslant n, xy=z \}.$$
This is the set of all pairs of values within the support of the binomial that multiply to the value $z$. (Note that it will be an empty set for some values of $z$.) We then have:
$$\begin{equation} \begin{aligned}
p_Z(z) \equiv \mathbb{P}(Z=z)
&= \mathbb{P}(XY=z) \\[6pt]
&= \sum_{(x,y) \in \mathcal{S}(z)} \text{Bin}(x\mid n,p) \cdot \text{Bin}(y\mid n, p) \\[6pt]
&= \sum_{(x,y) \in \mathcal{S}(z)} {n \choose x} {n \choose y} \cdot p^{x+y} (1-p)^{2n-x-y}.
\end{aligned} \end{equation}$$
Computing this probability mass function requires you to find the set $\mathcal{S}(z)$ for each $z$ in your support. The distribution has mean and variance:
$$\mathbb{E}(Z) = (np)^2
\quad \quad \quad \quad \quad
\mathbb{V}(Z) = (np)^2 [(1-p+np)^2 - (np)^2].$$
The distribution will be quite jagged, owing to the fact that it is the distribution of a product of discrete random variables. Notwithstanding this jaggedness, as $n \rightarrow \infty$ you have convergence in probability $Z/n^2 \rightarrow p^2$.
Implementation in R: The easiest way to code this mass function is to first create a matrix of joint probabilities for the underlying random variables $X$ and $Y$, and then allocate each of these probabilities to the appropriate resulting product value. In the code below I will create a function dprodbinom which is a vectorised function for the probability mass function of this "product-binomial" distribution.
#Create function for PMF of the product-binomial distribution
dprodbinom <- function(Z, size, prob, log = FALSE) {
#Check input vector is numeric
if (!is.numeric(Z)) { stop('Error: Input values are not numeric'); }
#Set parameters
n <- size;
p <- prob;
#Generate matrix of joint probabilities
SS <- matrix(-Inf, nrow = n+1, ncol = n+1);
XX <- dbinom(0:n, size = n, prob = p, log = TRUE);
for (x in 0:n) {
for (y in 0:n) {
SS[x+1, y+1] <- XX[x+1] + XX[y+1]; } }
#Compute the log-mass function of the product random variable
LOGPMF <- rep(-Inf, n^2+1);
for (x in 0:n) {
for (y in 0:n) {
LOGPMF[x*y+1] <- matrixStats::logSumExp(c(LOGPMF[x*y+1], SS[x+1, y+1])); } }
#Generate the output vector
OUT <- rep(-Inf, length(Z));
for (i in 1:length(Z)) {
if (Z[i] %in% 0:(n^2)) {
OUT[i] <- LOGPMF[Z[i]+1]; } }
#Give the output of the function
if (log) { OUT } else { exp(OUT) } }
We can now easily generate and plot the probability mass function of this distribution. For example, with $n=10$ and $p = 0.6$ we obtain the following probability mass function. As you can see, it is quite jagged, owing to the fact that the product values are distributed in a jagged pattern over the joint values of the underlying random variables.
#Load required libraries
library(matrixStats);
library(ggplot2);
#Generate the mass function
n <- 10;
p <- 0.6;
PMF <- dprodbinom(0:100, size = n, prob = p, log = FALSE);
#Plot the mass function
THEME <- theme(plot.title = element_text(hjust = 0.5, size = 14, face = 'bold'),
plot.subtitle = element_text(hjust = 0.5, face = 'bold'));
DATA <- data.frame(Value = 0:100, Probability = PMF);
FIGURE <- ggplot(aes(x = Value, y = Probability), data = DATA) +
geom_bar(stat = 'identity', colour = 'blue') +
THEME +
ggtitle('Product-binomial probability mass function') +
labs(subtitle = paste0('(n = ', n, ', p = ', p, ')'));
FIGURE;
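As a quick sanity check on the moment formulas above, one can enumerate the joint pmf exactly and accumulate the moments of $Z = XY$ (a sketch in Python rather than R, using the same $n = 10$, $p = 0.6$):

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial(n, p) probability mass at k."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n, p = 10, 0.6
EZ = EZ2 = 0.0
# Enumerate the joint pmf of (X, Y) and accumulate the moments of Z = X*Y.
for x in range(n + 1):
    for y in range(n + 1):
        pr = binom_pmf(x, n, p) * binom_pmf(y, n, p)
        EZ += x * y * pr
        EZ2 += (x * y) ** 2 * pr
VZ = EZ2 - EZ**2

# Closed forms from above: E(Z) = (np)^2 and
# V(Z) = (np)^2 * ((1 - p + np)^2 - (np)^2).
print(EZ, (n * p) ** 2)                                          # ~36 each
print(VZ, (n * p) ** 2 * ((1 - p + n * p) ** 2 - (n * p) ** 2))  # ~178.56 each
```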
|
Probability mass function of product of two binomial variables
|
There are various ways you could write the mass function of this distribution. All of them will be messy, since they involve checking the possible products that give a stipulated value for the produc
|
Probability mass function of product of two binomial variables
There are various ways you could write the mass function of this distribution. All of them will be messy, since they involve checking the possible products that give a stipulated value for the product variable. Here is the most obvious way to write the distribution.
Let $X, Y \sim \text{IID Bin}(n, p)$ and let $Z=XY$ be their product. For any integer $0 \leqslant z \leqslant n^2$ we define the set of pairs of values:
$$\mathcal{S}(z) \equiv \{ (x,y) \in \mathbb{N}_{0+}^2 \mid \max(x,y) \leqslant n, xy=z \}.$$
This is the set of all pairs of values within the support of the binomial that multiply to the value $z$. (Note that it will be an empty set for some values of $z$.) We then have:
$$\begin{equation} \begin{aligned}
p_Z(z) \equiv \mathbb{P}(Z=z)
&= \mathbb{P}(XY=z) \\[6pt]
&= \sum_{(x,y) \in \mathcal{S}(z)} \text{Bin}(x\mid n,p) \cdot \text{Bin}(y\mid n, p) \\[6pt]
&= \sum_{(x,y) \in \mathcal{S}(z)} {n \choose x} {n \choose y} \cdot p^{x+y} (1-p)^{2n-x-y}.
\end{aligned} \end{equation}$$
Computing this probability mass function requires you to find the set $\mathcal{S}(z)$ for each $z$ in your support. The distribution has mean and variance:
$$\mathbb{E}(Z) = (np)^2
\quad \quad \quad \quad \quad
\mathbb{V}(Z) = (np)^2 [(1-p+np)^2 - (np)^2].$$
The distribution will be quite jagged, owing to the fact that it is the distribution of a product of discrete random variables. Notwithstanding its jagged distribution, as $n \rightarrow \infty$ you will have convergence in probability to $Z/n^2 \rightarrow p^2$.
Implementation in R: The easiest way to code this mass function is to first create a matrix of joint probabilities for the underlying random variables $X$ and $Y$, and then allocate each of these probabilities to the appropriate resulting product value. In the code below I will create a function dprodbinom which is a vectorised function for the probability mass function of this "product-binomial" distribution.
#Create function for PMF of the product-binomial distribution
dprodbinom <- function(Z, size, prob, log = FALSE) {
#Check input vector is numeric
if (!is.numeric(Z)) { stop('Error: Input values are not numeric'); }
#Set parameters
n <- size;
p <- prob;
#Generate matrix of joint probabilities
SS <- matrix(-Inf, nrow = n+1, ncol = n+1);
XX <- dbinom(0:n, size = n, prob = p, log = TRUE);
for (x in 0:n) {
for (y in 0:n) {
SS[x+1, y+1] <- XX[x+1] + XX[y+1]; } }
#Compute the log-mass function of the product random variable
LOGPMF <- rep(-Inf, n^2+1);
for (x in 0:n) {
for (y in 0:n) {
LOGPMF[x*y+1] <- matrixStats::logSumExp(c(LOGPMF[x*y+1], SS[x+1, y+1])); } }
#Generate the output vector
OUT <- rep(-Inf, length(Z));
for (i in 1:length(Z)) {
if (Z[i] %in% 0:(n^2)) {
OUT[i] <- LOGPMF[Z[i]+1]; } }
#Give the output of the function
if (log) { OUT } else { exp(OUT) } }
We can now easily generate and plot the probability mass function of this distribution. For example, with $n=10$ and $p = 0.6$ we obtain the following probability mass function. As you can see, it is quite jagged, owing to the fact that the product values fall in an irregular pattern over the joint values of the underlying random variables.
#Load required libraries
library(matrixStats);
library(ggplot2);
#Generate the mass function
n <- 10;
p <- 0.6;
PMF <- dprodbinom(0:100, size = n, prob = p, log = FALSE);
#Plot the mass function
THEME <- theme(plot.title = element_text(hjust = 0.5, size = 14, face = 'bold'),
plot.subtitle = element_text(hjust = 0.5, face = 'bold'));
DATA <- data.frame(Value = 0:100, Probability = PMF);
FIGURE <- ggplot(aes(x = Value, y = Probability), data = DATA) +
geom_bar(stat = 'identity', colour = 'blue') +
THEME +
ggtitle('Product-binomial probability mass function') +
labs(subtitle = paste0('(n = ', n, ', p = ', p, ')'));
FIGURE;
Probability mass function of product of two binomial variables
Where can I find standards for statistical acronyms and whether they should be capitalized or lower case?
If you need to include a statistical acronym in a report or manuscript, the best thing you can do in my view is to define its meaning in that write-up the first time you use the acronym and then reference the acronym in the remainder of the write-up. This way, your readers will be clear on how YOU want them to interpret the meaning of the acronym. This is particularly important for acronyms like GLM which can be interpreted as "Generalized Linear Model(s)" or "General Linear Model".
For example, you would first declare in your Methods section that "We will use a generalized linear model (GLM) to analyze our data." and then state in your Results section that "Our GLM model produced the results stated in Table 2."
You can't possibly control consistency of use/meaning of acronyms by other people but you can control how YOU want your readers to relate to the acronyms you use and how consistent YOU are in your use of these acronyms.
If I had a choice, I would stick with upper case letters for acronyms, but that is a matter of personal preference.
Causality: Structural Causal Model and DAG
Your model statement specifies a class of DAGs, not a single DAG. That is, all DAGs in which $x_1, \dots, x_n$ are direct causes of $y$, and $e$ is exogenous, are DAGs compatible with your assumptions.
For instance, for simplicity, say we have only $x_1$ and $x_2$. Then, among several other alternatives, the following DAGs would be compatible with your model specification:
But the following DAG would not be compatible (since the error term of $Y$ is correlated with the error term of $x_2$; note, though, that in this DAG the causal effect of $x_1$ is still identified):
Definition of exponential family with dispersion parameter
The definition you quote which is used with generalized linear models (glm) is not an exponential family, it is an exponential dispersion family. For a fixed value of the dispersion parameter $\phi$ it is an exponential family (indexed by $\theta$), but when $\phi$ varies it is not.
When used in glm's, the exponential dispersion family is used for inference about $\theta$, but eventual inference about the dispersion parameter $\phi$ is done outside that framework.
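To make the distinction concrete, in the standard glm parameterisation (common textbook notation, not quoted from the question) the density is

```latex
f(y;\theta,\phi) = \exp\left\{ \frac{y\theta - b(\theta)}{a(\phi)} + c(y,\phi) \right\}.
```

For fixed $\phi$ this has the one-parameter exponential family form $h(y)\exp\{\eta y - A(\eta)\}$ with natural parameter $\eta = \theta/a(\phi)$. When $\phi$ is also unknown, the term $c(y,\phi)$ in general does not combine with $y\theta/a(\phi)$ into the form $\sum_j \eta_j(\theta,\phi)\,T_j(y) - A(\eta)$, so the family indexed jointly by $(\theta,\phi)$ is not in general an exponential family.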
Why do stabilized IPW weights give the same estimates and SEs as unstabilized weights?
This is because the marginal structural models you're fitting (objects fitw and fitsw) are so-called saturated models. For saturated models, using weight stabilization does not change the relative weights of outcomes in subgroups defined by the treatment levels, because in any subgroup the weight is multiplied by a factor that is constant within the subgroup (i.e. the numerator). Hope this helps!
You can find more details in section 6.1 of this excellent paper:
Robins JM, Hernán MÁ, Brumback B (2000) Marginal Structural Models and Causal Inference in Epidemiology. Epidemiology 11:550–560. https://doi.org/10.1097/00001648-200009000-00011
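A small deterministic toy example (Python, illustrative only; the data and propensities are invented) shows the mechanism: within each treatment group the stabilized weight is the unstabilized weight times a constant factor ($P(A=a)$), so the weighted group means, and hence the saturated MSM contrast, are identical.

```python
# Toy population: confounder L, treatment A, outcome Y, with known propensities
rows = []
for L, A, count, y in [(0, 0, 75, 0.0), (0, 1, 25, 1.0),
                       (1, 0, 25, 2.0), (1, 1, 75, 3.0)]:
    rows += [(L, A, y)] * count

p_treat = {0: 0.25, 1: 0.75}                   # P(A=1 | L)
p_a1 = sum(A for _, A, _ in rows) / len(rows)  # marginal P(A=1)

def weighted_mean(a_level, stabilized):
    num = den = 0.0
    for L, A, y in rows:
        if A != a_level:
            continue
        p = p_treat[L] if A == 1 else 1 - p_treat[L]
        w = ((p_a1 if A == 1 else 1 - p_a1) / p) if stabilized else 1 / p
        num += w * y
        den += w
    return num / den

ate_unstab = weighted_mean(1, False) - weighted_mean(0, False)
ate_stab = weighted_mean(1, True) - weighted_mean(0, True)
# Identical point estimates under both weighting schemes
```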
Relation between F test and T test. Are they mutually exclusive?
The terms t-test and F-test are ambiguous, because any test whose test statistic has a t-distribution (under the null hypothesis) is called a t-test, and any test whose test statistic has an F-distribution is called an F-test. There is more than one instance of each.
This is relevant to your question because there is an F-test that compares the variances of two samples, but this is not the F-test used in standard ANOVA-analysis. In fact the ANOVA F-test compares between-group and within-group variability, and between-group variability is in fact measured by squaring and summing up differences between group means, so in this setup both t- and F-tests are about comparing group means. In fact, if you have only two groups/factor levels, the F-test statistic is the square of the t-test statistic, and the F-test is equivalent to the two-sided t-test.
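A quick numerical illustration of the two-group case (a Python sketch using textbook formulas; the data values are arbitrary):

```python
def pooled_t(a, b):
    # Two-sample t statistic with pooled variance
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ss = sum((x - ma)**2 for x in a) + sum((x - mb)**2 for x in b)
    sp2 = ss / (na + nb - 2)
    return (ma - mb) / (sp2 * (1 / na + 1 / nb)) ** 0.5

def anova_f(groups):
    # One-way ANOVA F statistic: between-group vs within-group mean squares
    allx = [x for g in groups for x in g]
    gm = sum(allx) / len(allx)
    means = [sum(g) / len(g) for g in groups]
    ssb = sum(len(g) * (m - gm)**2 for g, m in zip(groups, means))
    ssw = sum(sum((x - m)**2 for x in g) for g, m in zip(groups, means))
    return (ssb / (len(groups) - 1)) / (ssw / (len(allx) - len(groups)))

a, b = [1, 2, 3, 4], [2, 4, 6, 8]
t = pooled_t(a, b)
F = anova_f([a, b])
# With exactly two groups, F equals t squared
```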
For more than two groups the issue with t-tests is that the t-test can only compare two groups at one time, meaning that you will need several t-tests to compare all groups, involving issues with multiple testing (i.e., if you test several hypotheses at 5% level, the probability to find at least one wrong significance assuming that the null hypotheses are all true can be substantially higher than 5%).
Additionally, you are right that one may be interested in exploring both differences between means and differences between variances, and groups with same mean may still have different variances. You may indeed check them both, though this again involves multiple testing; there's no free lunch. In many applications of ANOVA it is either fairly reasonable to assume equal variances, or only mean differences are of substantial interest (e.g., only wondering whether one group performs "better" than another), therefore differences in variances are often not explicitly investigated (I will abstain from a statement about whether this would be "good" or "correct"; or rather my answer would be "it depends"...).
Relation between F test and T test. Are they mutually exclusive?
If you are comparing more than two groups and are interested in comparing their means, then it is usual to do ANOVA, as you say, which tests the hypothesis that all group means are equal. Doing multiple $t$-tests is not quite equivalent because each test only tests whether the means in those two groups are equal. (Your point 1.)
The $F$ test is used because what you compare in ANOVA is the variance between the group means versus the variance within groups. (Your point 3.)
The remainder of your questions are hard to answer because, see my points above, I think you have some misconceptions about just what is going on.
Relation between F test and T test. Are they mutually exclusive?
It is simple and depends upon your problem statement. If you want a comparison based on averages, you can use a t-test; if you want to compare how the data vary within the groups, then you will use the F-test.
Relation between F test and T test. Are they mutually exclusive?
Consider these formulas:
Ho: group 1 and group 2 have the same average
(e.g. do they have the same average height)
t = (mean - k)/(s/sqrt(n)), under the basic assumption that the variance is known.
Ho: different levels of fertilizer (NPK) have no significant effect on the plants.
F = n(mean - k)^2 / s^2, which is simply t^2.
From a practicality point of view this could be correct.
2. If you have a control and a treated group from the same population then they will be the same. But say if you have boys vs girls, or location 1 vs location 2, they could be different.
Correct.
Possibly, depending on your objective. If you simply want to know whether the groups have different characteristics (like the average), then use a t-test. If you want to know whether certain applied factors (like different levels of nicotine in cigarettes) have significant effects, then use the F-test.
The formulas are related but the application differs depending on your goal.
No: since the t- and F-tests have different goals or problems they are solving, combining them doesn't make sense.
Is it appropriate to estimate a random slope without estimating the overall mean slope?
Fitting random slopes with the population-level slope fixed to zero is not out of the question - it's not mathematically or statistically ill-posed - but it's a rather weird model that would require some extra justification. Why would you expect that the average slope across cities would be exactly zero (which is what is implied by the model that omits the fixed effect)? The only cases where I've seen fitting such models make sense are
as a(n) (admittedly silly) null model, for doing a likelihood-ratio test of the significance of the population-level slope [not relevant in your case as you're using Bayesian methods]
in cases where the effect is zero based on the experimental design, e.g. when samples are randomly assigned to test and treatment conditions in a pre-treatment condition (this would be eliminating a fixed effect of treatment in the "before" period, not a fixed slope, but the idea is similar).
If you have a fixed-effect slope and among-city variation in the slope, you do indeed need to add the population-level slope to the individual-city slope deviation, and use the posterior distribution of the sum for inference - I don't know exactly how this is done in rstanarm (related) ... the tidybayes package might help.
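The "sum the draws" step can be sketched generically (Python stand-in for MCMC output; the draw count, means, and standard deviations below are invented, and in practice you would extract the actual draws from the fitted rstanarm object):

```python
import random

random.seed(0)
n_draws = 4000
# Stand-ins for posterior draws of the population-level slope and of one
# city's slope deviation (in reality these come from the fitted model)
beta = [random.gauss(0.5, 0.10) for _ in range(n_draws)]
dev = [random.gauss(0.2, 0.15) for _ in range(n_draws)]

# The city-specific slope is the draw-by-draw sum; summarise that posterior
city_slope = sorted(b + d for b, d in zip(beta, dev))
post_mean = sum(city_slope) / n_draws
ci95 = (city_slope[int(0.025 * n_draws)], city_slope[int(0.975 * n_draws)])
```

The key point is that the interval is computed from the posterior of the sum, not by summing interval endpoints of the two components separately.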
Kolmogorov Smirnov test vs. Anderson Darling test
The Kolmogorov-Smirnov looks for the largest difference between the cdf and the empirical cdf of the data.
There's another test -- the Cramér-von Mises test -- which looks at the sum of squares of the differences in cdf (at the data). It's often somewhat more sensitive than the Kolmogorov-Smirnov to the kind of differences we tend to want to pick up (because it can "accumulate" small but consistent differences rather than needing a single large one).
The problem with both of those tests is that the tail of the cdf is more precise than the middle (in the same sense that a sample estimate of a population proportion near 0 or 1 has lower variance than one near 0.5).
The algebra to show this is not particularly difficult, but we can observe this quite easily without doing the mathematics; we don't need to perform it to obtain intuition about what's going on; there's something even simpler that we can do.
Here I simulate 100 sets of data drawn from a standard uniform, each data set has a sample size of 25. I then draw the empirical cdf of each one (the first such data set is shown in blue; all the rest are in grey and for those I don't plot the point at the left of each step, just the step itself):
As we see in the plot, the (vertical) spread is widest when the population cdf is close to 0.5 and narrowest when the population cdf is close to 0 or 1; the population cdf ($F$) is a diagonal line from (0,0) to (1,1). This pattern of changing spread in the sampling distribution of the empirical cdf happens for every distribution; the spread (specifically, the standard deviation of $\hat{F}$) relates only to $n$ and $F(1-F)$.
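This spread pattern is easy to verify by simulation (a Python sketch paralleling the plot; the sample size and evaluation points are chosen arbitrarily): the standard deviation of the empirical cdf at a point $t$ is $\sqrt{F(t)(1-F(t))/n}$, which for the standard uniform is largest at $t = 0.5$.

```python
import random

random.seed(1)
n, reps = 25, 2000
points = [0.1, 0.5, 0.9]
sd = {}
for t in points:
    vals = []
    for _ in range(reps):
        sample = [random.random() for _ in range(n)]
        vals.append(sum(x <= t for x in sample) / n)   # ECDF at t
    m = sum(vals) / reps
    sd[t] = (sum((v - m)**2 for v in vals) / reps) ** 0.5
# Compare with the theoretical value sqrt(t * (1 - t) / n)
```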
We can make use of this additional information that simple tests like the Kolmogorov-Smirnov and the Cramér-von Mises test ignore.
If you calculate a weighted version of the Cramér-von Mises (with weights inversely proportional to this variance) then you end up with the Anderson-Darling statistic; which is to say, it correctly (optimally in a particular sense) accounts for the fact that the cdf in the tail is more precisely estimated; this makes it more sensitive to the differences in the tail than the first two statistics, which don't use the fact that we can estimate the cdf of the tail more precisely.
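In symbols (standard definitions, not taken from the original answer): writing $\hat F_n$ for the empirical cdf, the Cramér–von Mises and Anderson–Darling statistics are

```latex
W^2 = n \int \left( \hat F_n(x) - F(x) \right)^2 \, dF(x),
\qquad
A^2 = n \int \frac{\left( \hat F_n(x) - F(x) \right)^2}{F(x)\left(1 - F(x)\right)} \, dF(x),
```

so $A^2$ is the weighted version of $W^2$ with weight $1/[F(1-F)]$, inversely proportional to the variance $F(x)(1-F(x))/n$ of $\hat F_n(x)$.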
Logistic regression BIC: what's the right N?
The BIC (and the AIC) are relative measures for comparing models. However, it makes no sense to compare what is otherwise the same model between using an aggregated vs. a disaggregated response. Nor would it make sense to compare models that would otherwise be different (e.g., different regressors), but where one model uses an aggregated response and the other model uses a disaggregated version of the response. As long as the two models being compared both represent the response variable in the same format, everything will be fine. Note that the two formats are ultimately equivalent—they contain the same information and mostly just look different on the outside, see: Input format for response in binomial glm in R.
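The "same information" point can be made concrete with a small Python sketch (the numbers are arbitrary): for a covariate pattern with $n$ trials and $k$ successes, the aggregated (binomial) and disaggregated (Bernoulli) log-likelihoods differ only by the constant $\log \binom{n}{k}$, which does not involve the fitted probability and therefore shifts AIC/BIC by the same amount for every model compared within one format.

```python
from math import comb, log

n, k = 10, 7                 # one covariate pattern: 10 trials, 7 successes
diffs = []
for p in (0.3, 0.5, 0.7):    # any candidate fitted probability
    ll_bernoulli = k * log(p) + (n - k) * log(1 - p)             # disaggregated
    ll_binomial = log(comb(n, k) * p**k * (1 - p)**(n - k))      # aggregated
    diffs.append(ll_binomial - ll_bernoulli)
# Every difference equals log C(10, 7), regardless of p
```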
Logistic regression BIC: what's the right N?
Interesting question! Coming at this from an applied setting, I think you need to remember that both BIC and AIC are measures of relative model fit.
In other words, these measures don't tell you much when you examine them for a single model, but can help you to select an appropriate model among a set of competing models. In particular:
If your goal is to find the 'best' among those competing models for prediction of the outcome variable, then select the model with the lowest AIC value;
If your goal is to find the 'best' among those competing models for understanding and describing the effects of the predictor variables included in the model on the outcome variable, then select the model with the lowest BIC value.
In defining your set of competing models, you would have to make sure the models follow the same conceptual framework. Thus, you would either compare several binomial logistic regression models or several binary logistic models, but not a mixture of both. (It is important to compare like with like, otherwise you won't know if a model won the competition based on its own merits or simply because you changed the model specification/fitting procedure.)
From this perspective, the only thing that matters is that R is consistent when computing the AIC and BIC across models of the same type (e.g., binomial logistic regression models).
Just to clarify: g_bern is a binary logistic regression model, whereas g_binom is a binomial logistic regression model. While they both model the probability of success in one trial, you wouldn't mix together variations of these models when defining your set of competing models (for the reasons explained above and also covered by @gung).
|
40,231
|
How is it possible for both the likelihood and log-likelihood to be asymptotically normal?
|
I think you just have to be precise about what you mean by "asymptotically normal." For example, when people say that "a sum of random variables is asymptotically normal by the central limit theorem," they usually really mean a precise statement about convergence in distribution, e.g.,
Central Limit Theorem (Lindeberg–Lévy version).
Suppose $(X_n)_{n=1}^\infty$ is a sequence of i.i.d. random variables with mean $\mu$ and variance $\sigma^2 < \infty$. Let $S_n = n^{-1}(X_1 + \cdots + X_n)$ (the $n$th sample mean). Then
$$
\sqrt{n} (S_n - \mu) \Rightarrow N(0, \sigma^2)
$$
as $n \to \infty$ (here $\Rightarrow$ denotes convergence in distribution).
This doesn't say that $S_n \Rightarrow N(\mu, \sigma^2/n)$ as $n \to \infty$, which is formally impossible because the expression on the right-hand side involves $n$, but it is often informally stated as $S_n \approx N(\mu, \sigma^2/n)$ for large $n$ (the symbol $\approx$ should be read "is approximately distributed as").
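The informal statement is easy to check by simulation. This sketch (my addition, not part of the original answer) draws standardized sample means from Exp(1) data, for which $\mu = \sigma^2 = 1$, and verifies they look like $N(0, \sigma^2)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1_000, 5_000
mu, sigma = 1.0, 1.0  # Exp(1) has mean 1 and variance 1

# Each row is an i.i.d. sample of size n; S_n is its sample mean
s_n = rng.exponential(1.0, size=(reps, n)).mean(axis=1)
z = np.sqrt(n) * (s_n - mu)  # standardized sample means

# By the CLT, z should be approximately N(0, sigma^2)
print(z.mean().round(3), z.std().round(3))
```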
In your case, you have a sequence $(L_n)_{n=1}^\infty$ of log-likelihoods that, after appropriate standardization, become a sequence $(S_n)_{n=1}^\infty$ that satisfies
$$
\sqrt{n}(S_n - \theta) \Rightarrow N(0, \sigma^2)
$$
as $n \to \infty$ (for some $\theta$ and $\sigma^2$). Now you can recall the delta method:
Delta Method.
Suppose $(S_n)_{n=1}^\infty$ is a sequence of random variables satisfying
$$
\sqrt{n} (S_n - \theta) \Rightarrow N(0, \sigma^2)
$$
as $n \to \infty$ for some constants $\theta$ and $\sigma^2$.
Let $g : \mathbb{R} \to \mathbb{R}$ be a function such that $g^\prime(\theta)$ exists and is nonzero.
Then
$$
\sqrt{n}(g(S_n) - g(\theta)) \Rightarrow N(0, \sigma^2 \left(g^\prime(\theta)\right)^2)
$$
as $n \to \infty$.
The hand-wavy interpretation of this is that if
$$
S_n \approx N(\theta, \sigma^2 / n)
$$
for large $n$, then
$$
g(S_n) \approx N(g(\theta), \sigma^2\left(g^\prime(\theta)\right)^2/n)
$$
for large $n$ (provided that $g^\prime(\theta)$ exists and is nonzero).
In particular, it shouldn't be surprising that the sequences $(S_n)_{n=1}^\infty$ and $(\exp(S_n))_{n=1}^\infty$ are simultaneously "asymptotically normal."
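A simulation sketch of the last point (my addition, with the arbitrary choices $\theta = 0.5$, $\sigma = 1$, $g = \exp$): the empirical standard deviation of $g(S_n)$ should match the delta-method prediction $\sigma\,|g'(\theta)|/\sqrt{n}$.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma, n, reps = 0.5, 1.0, 400, 10_000

# Sample means S_n of N(theta, sigma^2) data, transformed by g(x) = exp(x)
s_n = rng.normal(theta, sigma, size=(reps, n)).mean(axis=1)
g_sn = np.exp(s_n)

# Delta method: sd of g(S_n) is approximately sigma * |g'(theta)| / sqrt(n)
predicted_sd = sigma * np.exp(theta) / np.sqrt(n)
print(g_sn.std(), predicted_sd)
```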
|
40,232
|
On masked multi-head attention and layer normalization in transformer model
|
This is answered in the Attention is All You Need paper by Vaswani et al (see also recording of the talk by one of the co-authors, and those three blogs: here, here, and here).
How is it possible to mask out illegal connections in decoder multi-head attention?
This is pretty simple. Attention can be defined as
$$
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\Big(\frac{QK^T}{\sqrt{d_k}}\Big)V
$$
where $Q$ are queries, $K$ are keys, $V$ are values, and $\sqrt{d_k}$ is a scaling constant equal to the square root of the dimension of the keys. The role of the product $QK^T$ is to calculate the similarity matrix between the words in $Q$ and $K$ (where each word is a row encoded using embeddings). In the encoder, each of $Q$, $K$, $V$ comes from the same document. In the decoder, $Q$ comes from the target document, while $K$ and $V$ come from the source document.
In the Transformer network (and similar ones), there is no direct mechanism that records time dependence. It is recorded indirectly in the embeddings (by summing word embeddings and position embeddings), but at the cost of leaking "future" values when making predictions. Notice that in $QK^T$ we look at the similarity of each word in $Q$ with each word in $K$. To prevent the future leak, we use masking: before the softmax, the entries of $QK^T$ that correspond to future positions (a triangular portion of the matrix) are set to $-\infty$, so their softmax weights are exactly zero. (Simply multiplying $QK^T$ pointwise by a triangular matrix of ones and zeros would not suffice, since a zero score still receives positive weight, $e^0 = 1$, after the softmax.)
This zeroes out the attention between each word and the words that appear after it ("in the future"), preventing predictions from depending on knowing the answer before it is predicted. Since such information is removed, it cannot be used by the model, and we guarantee that only similarity to the preceding words is considered.
Is it alright to set some arbitrary max_length for layer normalization?
In the paper, all the inputs and outputs have a fixed size of $d_\text{model}$, if this is what you ask. However, I can't see why this would be a problem, since what the normalization does is make the features have the same mean and standard deviation between the layers. So something that was relatively large locally will be mapped to what is considered large globally. See the Layer Normalization paper by Ba et al. for details. Moreover, this is applied per feature, so excess zeros have no impact.
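For concreteness, here is a minimal NumPy sketch of causal masking (my illustration, not the paper's code), using the standard additive $-\infty$ mask before the softmax:

```python
import numpy as np

def masked_attention(Q, K, V):
    """Scaled dot-product self-attention with a causal (look-ahead) mask.

    Entries of QK^T above the diagonal correspond to future positions;
    setting them to -inf before the softmax gives them zero weight.
    """
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)  # strictly upper part
    scores = np.where(mask, -np.inf, scores)
    # Numerically stable softmax over each row
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(4, 8))  # self-attention over 4 positions
out, w = masked_attention(Q, K, V)
# Each position attends only to itself and earlier positions
print(np.triu(w, k=1))  # all zeros
```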
|
40,233
|
Fit data to parametric distribution
|
The histogram, as presented by the OP, gives the impression that the data is symmetrical. Given that the data is noticeably more peaked than Normal, and if the data is roughly symmetrical, then a natural suggestion to try is the Student's t with location parameter $\mu$, scale parameter $\sigma$, and $v$ degrees of freedom, and pdf $f(x)$:
$$f = \frac{1}{\sigma \sqrt{v} \; B\left(\frac{v}{2},\frac{1}{2}\right)} \left(\frac{v}{v+\frac{(x-\mu )^2}{\sigma ^2}}\right)^{\frac{v+1}{2}} \quad \text{defined on the real line}$$
Student t fit
The following diagram shows a sample fit using the Student's t, with $\mu = 5.45$, $\sigma = 6.61$ and $v = 2.97$:
In the diagram:
the dashed red curve is the fitted Student's t pdf
the squiggly blue curve is the empirical pdf (frequency polygon) of the raw data
On the upside, this appears to be a significantly better fit than the Normal, using the same raw data set provided.
On the possible downside, I am not sure I would fully agree with the OP's opening statement: "I have data with nice bell-shaped histogram PDF". In particular, if one looks more closely at your data set (which contains 100,000 samples), the maximum is 37.45, while the minimum is -910. Moreover, there is not just one large negative value, but a whole bunch of them. This suggests that your data set is not symmetrical, but negatively skewed ... and that there are other things going on in the tails; if so, other distributions may perhaps be better suited. Zooming out, again with the same Student's t fit, we can see this feature of the data in the right and left tails:
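For readers who want to reproduce this kind of fit, here is a sketch using SciPy's maximum-likelihood fitter on synthetic data drawn with roughly the parameters reported above (my illustration; the OP's actual data set is not used here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in for the data: heavy-tailed draws from a Student's t with
# df=3, loc=5.45, scale=6.61, close to the fit reported above
data = stats.t.rvs(df=3, loc=5.45, scale=6.61, size=50_000, random_state=rng)

# Maximum-likelihood fit of a location-scale Student's t
df_hat, loc_hat, scale_hat = stats.t.fit(data)
print(df_hat, loc_hat, scale_hat)
```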
|
40,234
|
Fit data to parametric distribution
|
In short: your two plots show a big discrepancy; the smallest value shown in the histogram is about $-30$, while the qqplot shows values down to around $-900$. All those long-tailed outliers make up only about 0.7% of the sample, but they dominate the qqplot. So you need to ask yourself what produces those outliers! That should guide what to do with your data. If I make a qqplot after eliminating that long tail, it looks much closer to normal, but not perfect. Look at these:
mean(Y)
[1] 3.9657
mean(Y[Y>= -30])
[1] 4.414797
but the effect on standard deviation is larger:
sd(Y)
[1] 10.92237
sd(Y[Y>= -30])
[1] 8.006223
and that explains the strange form of your first plot (histogram): the fitted normal curve you show is influenced by that long tail you omitted from the plot.
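The same effect is easy to reproduce on synthetic data. This sketch is mine; the contamination fraction and tail range are made up to mimic the situation, not taken from the OP's file:

```python
import numpy as np

rng = np.random.default_rng(0)
# A near-normal body plus a small long left tail (~0.7% of the sample)
body = rng.normal(4.4, 8.0, size=99_300)
tail = rng.uniform(-300, -30, size=700)
y = np.concatenate([body, tail])

y_trim = y[y >= -30]
print(y.mean(), y_trim.mean())            # the mean shifts only a little
print(y.std(ddof=1), y_trim.std(ddof=1))  # the sd drops a lot after trimming
```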
|
40,235
|
Fit data to parametric distribution
|
You might try a Gaussian mixture, which is easy using Mclust in the mclust library of R.
library(mclust)
mc.fit = Mclust(data$V1)
summary(mc.fit,parameters=TRUE)
This gives a three-component Gaussian mixture (8 parameters total), with components
1: N(-69.269908, 6995.71627), p1 = 0.003970506
2: N( -4.314187, 171.76873), p2 = 0.115329209
3: N( 5.380137, 46.26587), p3 = 0.880700285
The log likelihood is -352620.4, which you can use to compare other possible fits such as those suggested.
The long left tail is captured by the first two components, especially the first.
The cumulative distribution estimate at "x" is (in R form)
p1*pnorm(x, -69.269908, sqrt(6995.71627)) + p2*pnorm(x, -4.314187, sqrt(171.76873))
+ p3*pnorm(x, 5.380137, sqrt(46.26587))
I tried various quantiles (x) from .0001 to .9999 and the accuracy of the estimate seems reasonable to me.
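An equivalent fit can be sketched in Python with scikit-learn's GaussianMixture (my illustration, on synthetic data drawn from roughly the three-component mixture reported above, not the OP's file):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in drawn from approximately the reported mixture
comps = rng.choice(3, p=[0.004, 0.115, 0.881], size=50_000)
means = np.array([-69.27, -4.31, 5.38])
sds = np.sqrt(np.array([6995.7, 171.8, 46.3]))
x = rng.normal(means[comps], sds[comps]).reshape(-1, 1)

gm1 = GaussianMixture(n_components=1, random_state=0).fit(x)
gm3 = GaussianMixture(n_components=3, random_state=0).fit(x)

# The three-component fit should beat a single Gaussian on log-likelihood,
# mirroring the comparison suggested in the answer
print(gm3.means_.ravel(), gm3.weights_)
print(gm1.score(x), gm3.score(x))  # average log-likelihood per point
```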
|
40,236
|
Proof of Approximate / Exact Bayesian Computation
|
This case is the original version of the algorithm, as in Rubin (1984) and Tavaré et al. (1997). Assuming that $$\mathbb{P}_\theta(X=D)>0$$ the values of $\theta$ that come out of the algorithm are distributed according to a density proportional to
$$\pi(\theta) \times \mathbb{P}_\theta(X=D)$$
since the algorithm generates the pair $(\theta,\mathbb{I}_{X=D})$ with joint distribution
$$\pi(\theta) \times \mathbb{P}_\theta(X=D)^{\mathbb{I}_{X=D}} \times
\mathbb{P}_\theta(X\ne D)^{\mathbb{I}_{X\ne D}}$$
Conditioning on $\mathbb{I}_{X=D}=1$ leads to
$$\theta|\mathbb{I}_{X=D}=1 \sim \pi(\theta) \times \mathbb{P}_\theta(X=D)\Big/\int \pi(\theta) \times \mathbb{P}_\theta(X=D) \,\text{d}\theta$$
which is the posterior distribution.
On the side, I gave this very proof in class a few hours ago!
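The argument is also easy to verify numerically. This sketch (mine, with an arbitrary three-point prior and a Binomial likelihood) runs the exact rejection algorithm and compares the accepted frequencies with the posterior computed from the formula above:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
thetas = np.array([0.2, 0.5, 0.8])  # uniform prior over three values
n, D = 5, 4                         # observe D = 4 successes in 5 trials

# Exact rejection-ABC: keep theta whenever the simulated data equals D
draws = rng.choice(thetas, size=200_000)
x = rng.binomial(n, draws)
accepted = draws[x == D]

# Compare empirical frequencies with pi(theta) * P(X = D | theta), normalized
lik = np.array([comb(n, D) * t**D * (1 - t)**(n - D) for t in thetas])
posterior = lik / lik.sum()
empirical = np.array([(accepted == t).mean() for t in thetas])
print(posterior.round(3), empirical.round(3))
```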
|
40,237
|
How to handle a changing action space in Reinforcement Learning
|
You don't need to do anything special to handle this. The only thing you need to change is to not take any illegal actions.
The typical Q-learning greedy policy is $\pi(s) = \text{argmax}_{a \in \mathcal{A}} \hat q(s,a)$ and the epsilon-greedy rollout policy is very similar. Simply replace the action space $\mathcal{A}$ with just the legal actions $\mathcal{A}_\text{legal}(s)$.
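A minimal sketch of that substitution (my illustration, with hypothetical state and action names):

```python
import random

def epsilon_greedy(q, state, legal_actions, epsilon=0.1):
    """Epsilon-greedy policy restricted to the legal actions in this state.

    q maps (state, action) -> estimated value; actions that are illegal
    in `state` are simply never considered.
    """
    if random.random() < epsilon:
        return random.choice(legal_actions)
    return max(legal_actions, key=lambda a: q.get((state, a), 0.0))

q = {("s0", "up"): 1.0, ("s0", "down"): 2.0, ("s0", "left"): 5.0}
# "left" is currently illegal in s0, so it is excluded from the argmax
print(epsilon_greedy(q, "s0", ["up", "down"], epsilon=0.0))  # -> "down"
```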
|
40,238
|
Acceptance-Rejection Method Acceptance Probability Proof
|
If you target $f$ with $g$, and you know $f(x) \le g(x)c$, then
\begin{align*}
P(\text{accept proposal}) &= P\left( U \le \frac{f(X)}{g(X)c} \right) \\
&= E\left( \mathbf{1}\left[U \le \frac{f(X)}{g(X)c}\right] \right) \tag{*}\\
&= E\left( E\left[ \mathbf{1}\left\{U \le \frac{f(X)}{g(X)c}\right\} \bigg\rvert X \right]\right) \\
&= E\left( P\left[ U \le \frac{f(X)}{g(X)c} \bigg\rvert X \right]\right) \\
&= E\left( \frac{f(X)}{g(X)c} \right) \\
&= c^{-1}\int f(x)g(x)/g(x) \text{d}x \\
&= c^{-1}.
\end{align*}
Note that (*) is a double integral because both $U$ and $X$ are random, so you must integrate their joint density over the appropriate region. They are independent, so their joint density factors as $f_U(u)g_X(x) = 1 \cdot g(x) = g(x)$. There is no $u$ in this expression because $U$ is uniform on $[0,1]$, so its density is constant at $1$--just make sure you integrate over the right bounds.
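A quick numerical check of the result $P(\text{accept}) = 1/c$ (my sketch, with target Beta(2,2), proposal Uniform(0,1), and $c = 1.5 = \sup_x f(x)/g(x)$):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):  # target: Beta(2, 2) density
    return 6 * x * (1 - x)

c = 1.5    # sup of f(x)/g(x) with g = Uniform(0,1); attained at x = 0.5

N = 200_000
x = rng.uniform(size=N)   # proposals from g
u = rng.uniform(size=N)
accept = u <= f(x) / c    # g(x) = 1 on [0, 1]

# Empirical acceptance rate should be close to 1/c = 2/3
print(accept.mean())
```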
|
40,239
|
Converting the beta coefficient from matrix to scalar notation in OLS regression
|
Solution
The matrix algebra can be dismaying and, if not carried out elegantly, can require an awful lot of (superfluous) algebraic manipulation. However, the situation is much simpler than it looks, because (creating the matrix $X$ by putting a column of ones in first, and then the column of independent values $(x_i)$ after it)
$$X^\prime X = \pmatrix{n & S_x \\ S_x & S_{xx}}$$
and
$$X^\prime y = \pmatrix{S_y \\ S_{xy}}$$
(The $S_{*}$ are handy--and fairly common--abbreviations for sums of the variables and their products). Thus, the normal equations for the estimates $\hat\beta = (\hat\beta_0, \hat\beta_1)$ are--when written out as simultaneous linear equations--merely
$$\matrix{n \hat\beta_0 + S_x\hat\beta_1 = S_y \\
S_x \hat\beta_0 + S_{xx}\hat\beta_1 = S_{xy},}$$
which are to be solved for $\hat\beta_0$ and $\hat\beta_1.$ Indeed, you don't really need to solve this ab initio: all you have to do at this point is check which formula for $\hat \beta_1$ actually works. That requires only elementary algebra. I won't show it because there's a better way that produces the same result in a much more illuminating and generalizable fashion.
Motivation and Generalization
Recall that the normal equations are derived by considering the problem of minimizing the sum of squares of residuals,
$$\operatorname{SSR} = \sum_i \left(y_i - (\beta_0 + \beta_1 x_i)\right)^2.$$
The appearance of $\beta_0$ corresponds to a column of ones in $X$ while the appearance of $\beta_1$ corresponds to a column $(x_i)$ in $X$. In general, those columns are not orthogonal. (Recall that we say two vectors are orthogonal when their dot product is zero. Geometrically, this means they are perpendicular. See the references for more about this.) We can make them orthogonal by subtracting some multiple of one of them from the other. The easiest choice is to subtract a constant from each $x_i$ to make the result orthogonal to the constant column; that is, we seek a number $c$ for which
$$0 = (1,1,\ldots, 1) \cdot (x_1-c, x_2-c, \ldots, x_n-c) = \sum_{i} (x_i-c) = S_x - nc.$$
The unique solution clearly is $c = S_x/n = \bar x,$ the mean of the $x_i.$ Accordingly, let's rewrite the model in terms of the "centered" variables $x_i-\bar x.$ It asks us to minimize
$$\operatorname{SSR} = \sum_i \left(y_i - (\beta_0 + \beta_1\bar x + \beta_1 (x_i-\bar x))\right)^2.$$
For simplicity, write the unknown constant term as
$$\alpha = \beta_0 + \beta_1 \bar x,$$
understanding that once solutions $\hat\alpha$ and $\hat\beta_1$ are obtained, we easily find the estimate
$$\hat\beta_0 = \hat\alpha - \hat\beta_1\bar x.$$
In terms of the unknowns $(\hat\alpha,\hat\beta_1)$ the Normal equations are now
$$\pmatrix{n & 0 \\ 0 & \sum_i(x_i-\bar x)^2}\pmatrix{\hat\alpha\\\hat\beta_1}=\pmatrix{S_y \\ \sum_i (x_i-\bar x)y_i}.$$
When written out as two simultaneous linear equations, each unknown is isolated in its own equation, which is simple to solve: this is what having orthogonal columns in $X$ achieves. In particular, the equation for $\hat\beta_1$ is
$$\sum_i(x_i-\bar x)^2\ \hat\beta_1 = \sum_i (x_i-\bar x)y_i.$$
It's a short and simple algebraic step from this to the desired result. (Use the fact that $\sum_i (x_i-\bar x)\bar y = 0.$)
The generalization to multiple variables proceeds in the same manner: at the first step, subtract suitable multiples of the first column of $X$ from each of the other columns so that all the resulting columns are orthogonal to the first column. (Recall this comes down to solving a linear equation for one unknown constant $c,$ which is easy.) Repeat by subtracting suitable multiples of the second column from the (new) third, fourth, ..., etc. columns to make them orthogonal to the first two columns simultaneously. Continue "sweeping out" the columns in this fashion until they are mutually orthogonal. The resulting normal equations will involve at most one variable at a time and therefore are simple to solve. Finally, the solutions have to be converted back to the original variables (just like you have to convert the estimates $\hat\alpha$ and $\hat\beta_1$ back into an estimate of $\hat\beta_0$ in the ordinary regression case). At each step of the way, all you are doing is creating new equations from old ones and solving for a single variable at a time.
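The single-variable case above can be checked numerically. This sketch (mine, on simulated data) computes $\hat\beta_1$ from the centered normal equation and confirms it matches a full least-squares solve:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 + 3.0 * x + rng.normal(size=200)

# Slope from the orthogonalized (centered) normal equation
xc = x - x.mean()
beta1 = (xc * y).sum() / (xc ** 2).sum()
# Recover the intercept: beta0-hat = alpha-hat - beta1-hat * x-bar, alpha-hat = y-bar
beta0 = y.mean() - beta1 * x.mean()

# Agreement with the full least-squares solution to the normal equations
X = np.column_stack([np.ones_like(x), x])
b = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta0, beta1, b)
```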
References
For a more formal account of this approach to solving the normal equations, see Gram-Schmidt orthogonalization.
Its use in multiple regression is discussed by Lynne Lamotte in The Gram-Schmidt Construction as a Basis for Linear Models, The American Statistician 68(1), February 2014.
To see how to find just a single coefficient estimate without having to compute the others, see the analysis at https://stats.stackexchange.com/a/166718/919.
For a geometric interpretation, see my answers at https://stats.stackexchange.com/a/97881/919, https://stats.stackexchange.com/a/113207/919,
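The sweeping procedure is easy to check numerically. Here is a minimal NumPy sketch (my own illustration with simulated data, not code from the references above): it orthogonalizes the columns of $X$ one at a time while recording the subtracted multiples, solves each one-unknown normal equation, and converts back.

```python
import numpy as np

# Simulated data: an intercept column plus two regressors.
rng = np.random.default_rng(1)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

# "Sweep out" the columns: make each column orthogonal to the earlier
# ones, recording the subtracted multiples in a unit upper-triangular R.
Q = X.copy()
R = np.eye(p)
for j in range(1, p):
    for k in range(j):
        c = (Q[:, k] @ Q[:, j]) / (Q[:, k] @ Q[:, k])
        Q[:, j] -= c * Q[:, k]
        R[k, j] = c

# With mutually orthogonal columns, each normal equation has one unknown.
gamma = np.array([(Q[:, j] @ y) / (Q[:, j] @ Q[:, j]) for j in range(p)])

# Convert back to the original variables: X = Q R, so R beta = gamma.
beta = np.linalg.solve(R, gamma)
```

`beta` agrees with the least-squares solution obtained any other way (e.g. `np.linalg.lstsq`).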
|
40,240
|
Converting the beta coefficient from matrix to scalar notation in OLS regression
|
If you regress on a constant and $x_i$, your matrix $X$ is
\begin{pmatrix}1 &x_1\\
\vdots&\vdots\\1 &x_n
\end{pmatrix}
Hence,
$$
X'X=\begin{pmatrix}n &\sum_ix_i\\
\sum_ix_i&\sum_ix_i^2\\
\end{pmatrix}
$$
and
$$(X'X)^{-1}=\frac{1}{n\sum_ix_i^2-(\sum_ix_i)^2}
\begin{pmatrix}\sum_ix_i^2 &-\sum_ix_i\\
-\sum_ix_i&n\\
\end{pmatrix}
$$
Can you take it from here?
|
40,241
|
Converting the beta coefficient from matrix to scalar notation in OLS regression
|
For anyone else out there that might be struggling with this, I've written it all out below step by step.
Suppose for the ease of explanation we have a minimum sample of 1 $x$ variable ($k=1$) and only 2 observations ($n=2$); Our estimation in scalar is $\hat{y_i} = \hat{\beta_0}+\hat{\beta_1}x_i$
\begin{equation*}
\boldsymbol{\hat{\beta}}=\begin{pmatrix}
\hat{\beta_0} \\
\hat{\beta_1}
\end{pmatrix}
\end{equation*}
\begin{equation*}
\boldsymbol{y}=\begin{pmatrix}
y_1 \\
y_2
\end{pmatrix}
\end{equation*}
\begin{equation*}
\boldsymbol{X}=\begin{pmatrix}
1 & x_1 \\
1 & x_2
\end{pmatrix}
\end{equation*}
Therefore
\begin{equation*}
\boldsymbol{X'}=\begin{pmatrix}
1 & 1 \\
x_1 & x_2
\end{pmatrix}
\end{equation*}
and;
\begin{equation*}
\boldsymbol{X'X}=\begin{pmatrix}
n & \sum_{i=1}^nx_i \\
\sum_{i=1}^nx_i & \sum_{i=1}^nx_i^2
\end{pmatrix}
\end{equation*}
Remember the rules of \textbf{inverse matrices}, where det[.] is the determinant of the matrix and adj[.] is the adjugate (sometimes called adjoint) of the matrix:
\begin{equation*}
\boldsymbol{(X'X)^{-1}}= \frac{1}{\textrm{det[$\boldsymbol{X'X}$]}}\times \textrm{adj[$X'X$]} \\
\end{equation*}
\begin{equation*}
\textrm{det[$\boldsymbol{X'X}$]}= ad-bc= n\sum_{i=1}^nx_i^2-\left(\sum_{i=1}^nx_i\right)^2
\end{equation*}
\begin{equation*}
\textrm{adj[$\boldsymbol{X'X}$]} =
\begin{pmatrix}
d & -b \\
-c & a
\end{pmatrix}
=\begin{pmatrix}
\sum_{i=1}^nx_i^2 & -\sum_{i=1}^nx_i \\
-\sum_{i=1}^nx_i & n
\end{pmatrix}
\end{equation*}
Therefore
\begin{equation}
\boldsymbol{(X'X)^{-1}}=\frac{1}{\textrm{det[$\boldsymbol{X'X}$]}}\times \textrm{adj[$X'X$]} = \begin{pmatrix}
\frac{\sum_{i=1}^nx_i^2}{n\sum_{i=1}^nx_i^2-(\sum_{i=1}^nx_i)^2} & \frac{-\sum_{i=1}^nx_i}{n\sum_{i=1}^nx_i^2-(\sum_{i=1}^nx_i)^2} \\
\frac{-\sum_{i=1}^nx_i}{n\sum_{i=1}^nx_i^2-(\sum_{i=1}^nx_i)^2} & \frac{n}{n\sum_{i=1}^nx_i^2-(\sum_{i=1}^nx_i)^2}
\end{pmatrix}
\end{equation}
\begin{equation*}
\boldsymbol{X'y}=\begin{pmatrix}
1 & 1 \\
x_1 & x_2
\end{pmatrix}
\times \begin{pmatrix}
y_1 \\
y_2
\end{pmatrix} =
\begin{pmatrix}
\sum_{i=1}^ny_i \\
\sum_{i=1}^nx_iy_i
\end{pmatrix}
\end{equation*}
Therefore
\begin{align*}
\boldsymbol{\hat{\beta}} & =\boldsymbol{(X'X)^{-1}}\boldsymbol{X'y}\\
\begin{pmatrix}
\hat{\beta_0} \\
\hat{\beta_1}
\end{pmatrix} & = \begin{pmatrix}
\frac{\sum_{i=1}^nx_i^2}{n\sum_{i=1}^nx_i^2-(\sum_{i=1}^nx_i)^2} & \frac{-\sum_{i=1}^nx_i}{n\sum_{i=1}^nx_i^2-(\sum_{i=1}^nx_i)^2} \\
\frac{-\sum_{i=1}^nx_i}{n\sum_{i=1}^nx_i^2-(\sum_{i=1}^nx_i)^2} & \frac{n}{n\sum_{i=1}^nx_i^2-(\sum_{i=1}^nx_i)^2}
\end{pmatrix} \times \begin{pmatrix}
\sum_{i=1}^ny_i \\
\sum_{i=1}^nx_iy_i
\end{pmatrix}
\end{align*}
\begin{align*}
\hat{\beta_1} & =\frac{-\sum_{i=1}^nx_i \times \sum_{i=1}^ny_i}{n\sum_{i=1}^nx_i^2-(\sum_{i=1}^nx_i)^2} + \frac{n \times \sum_{i=1}^nx_iy_i}{n\sum_{i=1}^nx_i^2-(\sum_{i=1}^nx_i)^2} \\
%
\hat{\beta_1} & =\frac{n\sum_{i=1}^nx_iy_i - \sum_{i=1}^nx_i\sum_{i=1}^ny_i}{n\sum_{i=1}^nx_i^2-(\sum_{i=1}^nx_i)^2} \\
\end{align*}
Remembering $\frac{1}{n}\sum_{i=1}^nx_i = \bar{x}$, therefore $\sum_{i=1}^nx_i = n\bar{x}$ (likewise for $y_i$);
%
\begin{align*}
\hat{\beta_1} & =\frac{n\sum_{i=1}^nx_iy_i - n\bar{x}n\bar{y}}{n\sum_{i=1}^nx_i^2-(n\bar{x})^2} \\
\hat{\beta_1} & =\frac{n\sum_{i=1}^nx_iy_i - n^2\bar{x}\bar{y}}{n\sum_{i=1}^nx_i^2-n^2(\bar{x})^2} \\
\textrm{Dividing numerator and denominator by $n$;} \\
\hat{\beta_1} & =\frac{\sum_{i=1}^nx_iy_i - n\bar{x}\bar{y}}{\sum_{i=1}^nx_i^2-n(\bar{x})^2} \\
\end{align*}
Finally, using the identities $\sum_{i=1}^n(x_i-\bar{x})(y_i-\bar{y}) = \sum_{i=1}^nx_iy_i - n\bar{x}\bar{y}$ and $\sum_{i=1}^n(x_i-\bar{x})^2 = \sum_{i=1}^nx_i^2 - n\bar{x}^2$;
\begin{equation}
\hat{\beta_1} = \frac{\sum_{i=1}^n(x_i-\bar{x})(y_i-\bar{y})}{\sum_{i=1}^n(x_i-\bar{x})^2}
\end{equation}
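As a numerical sanity check (a small NumPy sketch of my own with simulated data, not part of the derivation), the matrix solution $\boldsymbol{(X'X)^{-1}X'y}$ and the final scalar formula agree:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=30)
y = 2.0 + 3.0 * x + rng.normal(size=30)

# Matrix form: beta_hat = (X'X)^{-1} X'y, with a column of ones for beta_0.
X = np.column_stack([np.ones_like(x), x])
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Scalar form derived above.
b1 = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
b0 = y.mean() - b1 * x.mean()
```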
|
40,242
|
Why does Deep Q-Learning have "do nothing" actions?
|
At the start of every game, a random number of no-op actions are played (with the maximum number of such actions controlled by that parameter) to introduce variety in the initial game states.
If an agent starts from exactly the same initial state every time it plays the same game, they're afraid that the Reinforcement Learning agent will simply learn to memorize a good sequence of actions from that initial state, rather than learning to observe the current state and select a good action based on that observation (which is the more interesting thing we're interested in learning). The idea is that by introducing randomness in the state we "start playing from", it should become impossible / more difficult for the agent to "cheat" and simply memorize a complete sequence of actions from a single specific initial state.
Note that, in 2017, it has been argued in the Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents paper that these kinds of sequences of no-op actions are not as effective at the goal described above as we would like, and an alternative solution is proposed which consists of introducing stochasticity throughout the entire game through "sticky actions" (basically, sometimes randomly continuing with the most recently-selected action rather than executing a new action selected by the agent).
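The two randomization schemes can be sketched as follows (a minimal Python illustration under my own naming, not code from either paper — `agent_policy` and `env_step` are hypothetical stand-ins for whatever the agent and environment provide):

```python
import random

NOOP = 0  # by convention, action 0 is "do nothing" in the Atari action set

def random_noop_start(env_step, max_noops=30, seed=0):
    """Play a random number of no-op actions before handing control to the agent."""
    rng = random.Random(seed)
    for _ in range(rng.randint(1, max_noops)):
        env_step(NOOP)

def make_sticky_policy(agent_policy, p_sticky=0.25, seed=0):
    """With probability p_sticky, repeat the previous action instead of querying the agent."""
    rng = random.Random(seed)
    last = [NOOP]
    def act(state):
        if rng.random() >= p_sticky:
            last[0] = agent_policy(state)
        return last[0]
    return act
```

With `p_sticky = 0` the wrapper is transparent; with `p_sticky = 1` it repeats the initial no-op forever, which makes the randomized in-between behavior easy to reason about.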
|
40,243
|
Is Cross Validation useless unless the Hypotheses are nested?
|
My logic tells me the answer is yes.
And, as @amoeba pointed out: your logic is right.
how is Cross Validation different than this procedure?
CV in itself has nothing to do with your overfitting. CV is just a scheme for retaining independent cases with which to test some model.
Note that if you select a model based on the CV results, this model selection procedure (including the CV) is actually part of your training.
You need to do an independent validation (rather, verification) of that final model (for which you can again use another CV as a strategy to retain cases independent of the training - see nested cross validation) in order to obtain a reliable estimate of its generalization performance.
To reiterate: the problem is not the CV, the problem is the data-driven model optimization (selection).
From this perspective random model generation should in theory overfit less than a penalized regression as my evaluation is on a bigger unseen data segment.
This I don't understand: why would the unseen data size differ?
Is there something in CV procedure that somehow mitigates the multiple testing problem?
No.
The only property of CV that slightly helps with multiple testing compared to a single split is that CV eventually tests all available cases, and its performance estimate is thus subject to somewhat smaller variance (the uncertainty due to the limited number of tested cases). This won't help much compared to limiting the search space (i.e. restricting the number of comparisons), though.
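To make the nested structure concrete, here is a minimal NumPy sketch (my own toy data and helper names, not a library API): the inner CV that picks the ridge penalty is part of training, and only the outer folds yield the estimate of generalization performance.

```python
import numpy as np

def ridge_fit(X, y, lam):
    # Closed-form ridge solution, fit on the training portion only.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def kfold(n, k, rng):
    return np.array_split(rng.permutation(n), k)

def cv_mse(X, y, lam, k, rng):
    errs = []
    for test_idx in kfold(len(y), k, rng):
        train = np.setdiff1d(np.arange(len(y)), test_idx)
        b = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((y[test_idx] - X[test_idx] @ b) ** 2))
    return np.mean(errs)

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))
y = X @ rng.normal(size=5) + rng.normal(size=120)
lambdas = [0.01, 0.1, 1.0, 10.0]

outer_errs = []
for test_idx in kfold(len(y), 4, rng):
    train = np.setdiff1d(np.arange(len(y)), test_idx)
    # Inner CV: the data-driven selection of lambda is part of *training*.
    best_lam = min(lambdas, key=lambda lam: cv_mse(X[train], y[train], lam, 3, rng))
    b = ridge_fit(X[train], y[train], best_lam)
    outer_errs.append(np.mean((y[test_idx] - X[test_idx] @ b) ** 2))

outer_estimate = np.mean(outer_errs)  # honest estimate of the whole procedure
```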
|
40,244
|
Is Cross Validation useless unless the Hypotheses are nested?
|
EDIT:
Tuning or selecting a model based on cross-validation is essentially trying to minimize the prediction error (e.g., mean-squared prediction error). You select a model conditional on some subset of input data and predict the output at the left out locations. Intuitively, it is a prediction because you are evaluating the model at out of sample locations. Your question is what happens if your set of candidate models are independent of the input data (i.e., you don't use any data when randomly generating models).
This assumption is not that different from any other model fitting procedure. For example, if I start with a parameterized model, and the parameters could be any real number, then I also have an infinite set of candidate models. We both still need to select the best model from the set of possible models by minimizing some error metric. Therefore, both of our model choices are conditional on some training data (perhaps a subset of all the training data if using cross-validation). You don't specify an error metric so let's assume it is mean-squared error (MSE). I pick model parameters and thereby my model using some black box procedure assuming the MSE metric conditional on training data. You pick your model from your set of random models assuming the MSE metric conditional on training data.
Do we choose the same model? It depends on if you started with different sets of candidate models.
Do we overfit the data? It depends on the set of candidate models we started with and the training data.
Do we know we overfit the data? If we do cross-validation then we can check the prediction error.
ORIGINAL RESPONSE:
In a broad sense, there is some signal in the data and some noise. When we overfit we are essentially fitting the noise.
In cross-validation, we leave out portions of the data when fitting and assess the error when predicting the left out points. It is similar to having training and test data in that we are measuring an out of sample error. The model must generalize well regardless of what points are omitted. If we fit the noise the model will not generalize well. The set of models we are comparing likely does not include those that try to interpolate a data point when it is omitted from the training data. If the model behaves this way (e.g., random behavior to improve fit) then it is likely we do not have a reasonable general model fitting procedure and cross-validation can't help us.
If you have an infinite set of models and an infinite amount of time then I guess in theory you could generate a model that was as good as or better than any model generated through any other procedure. How will you know which model from your infinite set it is, though? If it is the model that interpolates the training data, then yes, it will overfit when the training data is noisy.
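A tiny simulation (my own, pure NumPy) makes the large-candidate-set worry tangible: with pure-noise candidates, the best in-sample fit keeps improving as the candidate set grows, even though no candidate has any real predictive power.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
y = rng.normal(size=n)               # the target is pure noise
cands = rng.normal(size=(n, 2000))   # 2000 random candidate predictors

# Absolute in-sample correlation of each candidate with y.
yc = y - y.mean()
cc = cands - cands.mean(axis=0)
corr = np.abs(cc.T @ yc) / (np.linalg.norm(cc, axis=0) * np.linalg.norm(yc))

best_of_10 = corr[:10].max()
best_of_2000 = corr.max()  # grows with the number of candidates tried
```

Selecting the winner by in-sample (or even CV) error and then reporting that same error is exactly the optimism that an independent verification set is needed to correct.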
|
40,245
|
Distribution of $-\log f_X(X)$
|
The book mentioned by Xi'an is from 2004. It refers to an article from the year 1991 in which the following theorem appears.
From: Troutt, M.D. (1991), "A theorem on the density of the density ordinate and an alternative interpretation of the Box-Muller method".
If a random variable $X$ has a density $f(x)$, $x \in \mathbb{R}^n$, and if the random variable $v = f(x)$ has a density $g(v)$, then $$g(v) = -vA^\prime(v),$$ where $A(v)$ is the Lebesgue measure of the set $$S(v) = \lbrace x: f(x) \geq v \rbrace $$
Intuitively and non-formal:
$$\begin{array}\\
f_Z(z) dz = P(z<Z<z+dz) &= P(x(z)<X<x(z+dz)) \\
&= P(x(z)<X<x(z)+dz \frac{dx}{dz}) \\
&= f_X(X) \frac{dx}{dz} dz = z \frac{-dA(z)}{dz} dz \end{array}$$
In a similar way when we use a transformed variable $Y = g(f_x(x))$ then:
$$\begin{array}\\
f_Y(y) dy = P(y<Y<y+dy) &= P(x(y)<X<x(y+dy)) \\
&= P(x(y)<X<x(y)+dy \frac{dx}{dy}) \\
&= f_X(X) \frac{dx}{dy} dy = g^{-1}(y) \frac{-dA(y)}{dy} dy \end{array}$$
So
$$f_Y(y) = -e^{-y} \frac{dA(y)}{dy}$$
example standard normal distribution:
$$f_X(x) = \frac{1}{\sqrt{2\pi}} e^{-0.5 x^2}$$
$$y = \log(\sqrt{2\pi}) + 0.5 x^2$$
$$A(y) = C-\sqrt{8(y-\log(\sqrt{2\pi}))} $$
thus
$$f_Y(y) = \frac{\sqrt{2} e^{-y}}{\sqrt{y-\frac{\log(2\pi)}{2}}} $$
example a multivariate normal distribution:
$$f_X(x_1,x_2) = \frac{1}{2\pi} e^{-0.5 (x_1^2 + x_2^2)}$$
$$y = \log(2\pi) + 0.5 (x_1^2+x_2^2)$$
$$A(y) = C-2\pi(y-\log(2\pi)) $$
thus
$$f_Y(y) = 2\pi e^{-y} \qquad \qquad \text{for $y \geq \log(2\pi)$}$$
computational check:
# random draws/simulation
x_1 = rnorm(100000,0,1)
x_2 = rnorm(100000,0,1)
y = -log(dnorm(x_1,0,1)*dnorm(x_2,0,1))
# display simulation along with theoretic curve
hist(y,breaks=c(0,log(2*pi)+c(0:(max(y+1)*5))/5),
main = "computational check for distribution f_Y")
y_t <- seq(1,10,0.01)
lines(y_t,2*pi*exp(-y_t),col=2)
|
Distribution of $-\log f_X(X)$
|
The book mentioned by Xi'an is from 2004. It refers to an article from the year 1991 in which the following theorem appears.
From: Troutt M.D. 1991 A theorem on the density of the density ordinate
|
Distribution of $-\log f_X(X)$
The book mentioned by Xi'an is from 2004. It refers to an article from the year 1991 in which the following theorem appears.
From: Troutt M.D. 1991 A theorem on the density of the density ordinate
and an alternative interpretation of the Box-Muller
method
If a random variable X has a density $f(x)$, $x \in \mathbb{R}^n$, and if the random variable $v = f(x)$ has a density $g(v)$, then $$g(v) = -vA^\prime(v),$$ where $A(v)$ is the Lebesgue measure of the set $$S(v) = \lbrace x: f(x) \geq v \rbrace $$
Intuitively and non-formal:
$$\begin{array}\\
f_Z(z) dz = P(z<Z<z+dz) &= P(x(z)<X<x(z+dz)) \\
&= P(x(z)<X<x(z)+dz \frac{dx}{dz}) \\
&= f_X(X) \frac{dx}{dz} dz = z \frac{-dA(z)}{dz} dz \end{array}$$
In a similar way when we use a transformed variable $Y = g(f_x(x))$ then:
$$\begin{array}\\
f_Y(y) dy = P(y<Y<y+dy) &= P(x(y)<X<x(y+dy)) \\
&= P(x(y)<X<x(y)+dy \frac{dx}{dy}) \\
&= f_X(X) \frac{dx}{dy} dy = g^{-1}(y) \frac{-dA(y)}{dy} dy \end{array}$$
So
$$f_Y(y) = -e^{-y} \frac{A(y)}{dy}$$
example standard normal distribution:
$$f_X(x) = \frac{1}{\sqrt{2\pi}} e^{-0.5 x^2}$$
$$y = \log(\sqrt{2\pi}) + 0.5 x^2$$
$$A(y) = C-\sqrt{8(y-\log(\sqrt{2\pi}))} $$
thus
$$f_Y(y) = \frac{\sqrt{2} e^{-y}}{\sqrt{y-\frac{\log(2\pi)}{2}}} $$
example a multivariate normal distribution:
$$f_X(x_1,x_2) = \frac{1}{2\pi} e^{-0.5 (x_1^2 + x_2^2)}$$
$$y = \log(2\pi) + 0.5 (x_1^2+x_2^2)$$
$$A(y) = C-2\pi(y-\log(2\pi)) $$
thus
$$f_Y(y) = 2\pi e^{-y} \qquad \qquad \text{for $y \geq \log(2\pi)$}$$
computational check:
# random draws/simulation
x_1 = rnorm(100000,0,1)
x_2 = rnorm(100000,0,1)
y = -log(dnorm(x_1,0,1)*dnorm(x_2,0,1))
# display simulation along with theoretic curve
hist(y,breaks=c(0,log(2*pi)+c(0:(max(y+1)*5))/5),
main = "computational check for distribution f_Y")
y_t <- seq(1,10,0.01)
lines(y_t,2*pi*exp(-y_t),col=2)
Why does sklearn Ridge not accept warm start?
Ridge regression can be solved in one shot as a system of linear equations:
$$ \hat \beta = (X^t X + \lambda I)^{-1} X^t y $$
So ridge regression is usually solved with a linear equation solver, just like linear regression.
For example, sklearn uses the singular value decomposition of the matrix $X$:
$$ X = UDV^{t} $$
To re-express this system as
$$ \hat \beta = V (D^2 + \lambda I)^{-1} DU^{t}y $$
For a derivation of this equation, see The proof of shrinking coefficients using ridge regression through spectral decomposition.
Notice that this equation is rather nicer than it may seem. The $D^2 + \lambda I$ matrix is diagonal, so inverting it is just taking the reciprocal of the entries. Then $(D^2 + \lambda I)^{-1} D$ is also diagonal, and the matrix product is just the product of the diagonal entries.
Here's the source code from sklearn:
def _solve_svd(X, y, alpha):
U, s, Vt = linalg.svd(X, full_matrices=False)
idx = s > 1e-15 # same default value as scipy.linalg.pinv
s_nnz = s[idx][:, np.newaxis]
UTy = np.dot(U.T, y)
d = np.zeros((s.size, alpha.size), dtype=X.dtype)
d[idx] = s_nnz / (s_nnz ** 2 + alpha)
d_UT_y = d * UTy
return np.dot(Vt.T, d_UT_y).T
Except for the small amount of gymnastics to deal with the zero singular values, this code lines up exactly with the equation above.
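To see concretely that the SVD route and the direct linear solve agree, here is a small numerical check (a sketch in plain NumPy, not sklearn's actual code path; all names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = rng.normal(size=50)
lam = 2.0

# Direct solve of (X'X + lambda I) beta = X'y
beta_direct = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)

# SVD route: beta = V (D^2 + lambda I)^{-1} D U' y, with the diagonal
# inverse reduced to elementwise arithmetic on the singular values
U, s, Vt = np.linalg.svd(X, full_matrices=False)
beta_svd = Vt.T @ ((s / (s**2 + lam)) * (U.T @ y))

print(np.allclose(beta_direct, beta_svd))  # the two solutions coincide
```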
LASSO, on the other hand, has no simple closed-form solution in terms of linear algebraic operations. For LASSO we need some kind of iterative solver, and so the concept of restarting the iteration at the previous solution (a warm start) makes sense.
Is the LASSO really applicable for binary classification problems?
It is valid. Note the family="binomial" argument, which is appropriate for a classification problem. An ordinary lasso regression would use the gaussian family (identity link).
In this setting, it allows you to estimate the parameters of the binomial GLM by optimising the binomial likelihood whilst imposing the lasso penalty on the parameter estimates. The dichotomous response is perfectly fine here.
This is useful because it allows feature selection or parameter shrinkage to avoid overfitting.
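For instance, the same idea in Python is L1-penalised logistic regression (a sketch, assuming scikit-learn is available; glmnet's family="binomial" with the lasso penalty is the R equivalent, and the data here is simulated by me):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
# Only the first two features carry signal for the binary response
y = (2 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=200) > 0).astype(int)

# C is the inverse penalty strength; smaller C means stronger shrinkage
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
print(np.sum(model.coef_ != 0))  # most noise coefficients are shrunk to exactly zero
```

The exact zeros in `coef_` are what makes the lasso useful for feature selection with a dichotomous response.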
Standard Error of the MLE for Laplace Distribution
Appeal to the Fisher information gives you an asymptotic approximation to the standard error. As whuber correctly points out in the comments, so long as this function is almost surely differentiable, that should be sufficient to obtain your result.
However, in the present case it is also possible to obtain the exact distribution of the MLE via first principles methods, without appeal to the asymptotic theory of MLEs. Your MLE is the median, so its distribution can be obtained using standard distributional results for order statistics, where the underlying distribution is continuous. To derive the result we will assume an odd number of observations for simplicity in dealing with the median.
In this case we have $n = 2k+1$ for some non-negative integer $k$ and the MLE is $\hat{\mu} = X_{(k+1)}$. We let $f$ and $F$ be the respective density and distribution functions for the (zero-mean) sample distribution. For IID values from a continuous distribution we then have:
$$\begin{equation} \begin{aligned}
\mathbb{V}( \hat{\mu}) = \mathbb{E}( (\hat{\mu} - \mu)^2 )
&= \int \limits_{-\infty}^\infty t^2 f(t) \text{Beta}(F(t) | k+1, k+1) dt \\[6pt]
&= \frac{(2k+1)!}{k! k!} \int \limits_{-\infty}^\infty t^2 f(t) F(t)^k (1-F(t))^k dt \\[6pt]
&= \frac{(2k+1)!}{k! k!} \int \limits_{-\infty}^\infty t^2 f(t) (F(t)-F(t)^2)^k dt \\[6pt]
&= \frac{(2k+1)!}{k! k!} \sum_{i=0}^k {k \choose i} (-1)^{k-i} \int \limits_{-\infty}^\infty t^2 f(t) F(t)^i F(t)^{2k-2i} dt \\[6pt]
&= \frac{(2k+1)!}{k! k!} \sum_{i=0}^k {k \choose i} (-1)^{k-i} \int \limits_{-\infty}^\infty t^2 f(t) F(t)^{2k-i} dt. \\[6pt]
\end{aligned} \end{equation}$$
For the Laplace distribution we have $F(t)(1-F(t)) = \tfrac{1}{2} \exp(- |t|/b) - \tfrac{1}{4} \exp(- 2|t|/b)$ and $f(t) = \tfrac{1}{2b} \exp(- |t|/b)$ so that:
$$\begin{equation} \begin{aligned}
\mathbb{V}( \hat{\mu})
&= \frac{(2k+1)!}{k! k!} \sum_{i=0}^k {k \choose i} (-1)^{k-i} \int \limits_{-\infty}^\infty t^2 f(t) F(t)^{2k-i} dt \\[6pt]
&= 2 \frac{(2k+1)!}{k! k!} \sum_{i=0}^k {k \choose i} (-1)^{k-i} \int \limits_0^\infty t^2 f(t) F(t)^{2k-i} dt \\[6pt]
&= b^2 \frac{(2k+1)!}{k! k!} \sum_{i=0}^k {k \choose i} (-1)^{k-i} \Big( \frac{1}{2} \Big)^{2k-i} \int \limits_0^\infty t^2 \exp (-(2k-i+1)t) dt \\[6pt]
&= b^2 \frac{(2k+1)!}{2^{k-1} k! k!} \sum_{i=0}^k {k \choose i} \frac{(-1 /2)^{k-i} }{(2k-i+1)^3}. \\[6pt]
\end{aligned} \end{equation}$$
This gives us a closed form (finite sum) expression for the variance of the MLE estimator (i.e., the sample median). (It is important to be careful of rounding error when evaluating the variance expression, since it involves a product of very large and very small terms.) A related expression has been investigated in some excellent analysis by Claude Leibovici in this related question, which shows that $\mathbb{V}( \hat{\mu}) \approx b^2 / n$ as $n \rightarrow \infty$. This limiting result accords with the asymptotic theory of the sample median, which has $\mathbb{V}(\hat{\mu}) \rightarrow 1 / (4n f(\mu)^2) = b^2 / n$.
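A Monte Carlo sketch (my own addition, in Python; the helper name and the choices of $k$ and $b$ are mine) confirming the closed-form sum against simulated sample medians:

```python
import numpy as np
from math import comb, factorial

def median_var_exact(k, b=1.0):
    # V(mu_hat) = b^2 (2k+1)! / (2^(k-1) k! k!) * sum_i C(k,i) (-1/2)^(k-i) / (2k-i+1)^3
    s = sum(comb(k, i) * (-0.5) ** (k - i) / (2 * k - i + 1) ** 3
            for i in range(k + 1))
    return b**2 * factorial(2 * k + 1) / (2 ** (k - 1) * factorial(k) ** 2) * s

# Simulated variance of the median of n = 2k+1 IID Laplace(0, b) draws
rng = np.random.default_rng(1)
k, b = 5, 1.0
medians = np.median(rng.laplace(0.0, b, size=(200_000, 2 * k + 1)), axis=1)
print(median_var_exact(k, b), medians.var())  # should agree to about 1%
```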
Stochastic gradient descent Vs Mini-batch size 1
Standard gradient descent and batch gradient descent were originally used to describe taking the gradient over all data points, and by some definitions, mini-batch corresponds to taking a small number of data points (the mini-batch size) to approximate the gradient in each iteration. Then officially, stochastic gradient descent is the case where the mini-batch size is 1.
However, perhaps in an attempt to not use the clunky term "mini-batch", stochastic gradient descent almost always actually refers to mini-batch gradient descent, and we talk about the "batch-size" to refer to the mini-batch size. Gradient descent with > 1 batch size is still stochastic, so I think it's not an unreasonable renaming, and pretty much no one uses true SGD with a batch size of 1, so nothing of value was lost.
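A minimal sketch of the terminology in code (NumPy, least squares; all names and the data are mine): one loop covers full-batch gradient descent, mini-batch, and batch-size-1 SGD, depending only on batch_size.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=256)

def sgd(X, y, batch_size, lr=0.02, epochs=100):
    """batch_size=1 is 'textbook' SGD; batch_size=len(X) is full-batch gradient
    descent; anything in between is mini-batch (what 'SGD' usually means)."""
    w = np.zeros(X.shape[1])
    n = len(X)
    for _ in range(epochs):
        order = rng.permutation(n)              # shuffle rows each epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            grad = 2.0 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
            w -= lr * grad
    return w

print(sgd(X, y, batch_size=1))    # batch size 1
print(sgd(X, y, batch_size=32))   # mini-batch; both end up close to w_true
```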
Stochastic gradient descent Vs Mini-batch size 1
Is stochastic gradient descent basically the name given to mini-batch
training where batch size = 1 and selecting random training rows?
Yes. Though shuffling the rows isn't necessarily implied.
Generalized Normal Distribution
One option is Lambert W random variables (skewed, heavy-tailed), which can be parameterized either as $f(y \mid \mu_x, \sigma_x, \gamma)$ or $f(y \mid \mu_x, \sigma_x, \delta_{\ell}, \delta_r, \alpha)$, respectively (Disclaimer: I am the author of these, so I am biased on whether they are interpretable or not -- I find them definitely much more interpretable than an asinh() function ;) ).
As you care about 3rd and 4th moments, double heavy-tailed Lambert W x Gaussian (or Tukey's h / hh as special case) might be useful to look at. They arise as a non-linear transformation of $N(\mu_x, \sigma_x^2)$ random variable $X$ to (setting $\alpha = 1$ for simplicity)
$$
Y = \mu_x + \sigma_x \cdot \left( U \exp\left(\frac{\delta}{2} \cdot U^{2}\right) \right), \quad U := \frac{X - \mu_x}{\sigma_x} \sim N(0, 1)
$$
It can be extended to a skewed version, by allowing $\delta$ to be different for the left side ($X < \mu_x$) vs the right side ($X > \mu_x$); hence $\delta \rightarrow (\delta_l, \delta_r)$. Clearly, $Y \sim N(\mu_x, \sigma_x^2)$ if $\delta = 0$.
The interpretation is that there is a latent process $X$ that is Gaussian; however, we only observe & measure the extreme skewed / heavy-tailed version of it through $Y$. As an example take the stock market: here you could think of $X$ as "news" occurring in the world (Gaussian), but we can only observe / measure them through the lens of collective market actions -- and as we know people freak out over unlikely events (adding heavy-tails); and people react more extremely to negative news than to positive news (adding skewness). This collective response is captured via the $\delta_l$ and $\delta_r$ parameters, which push events far from the mean even further away (generating heavy tails). Obviously, this should not be taken as a literal explanation of the market, but as (one) interpretation (see Table 4 & Figure 7 for an illustration on SP500 returns).
The distribution of $Y$, $f(y \mid \mu_x, \sigma_x, \delta_l, \delta_r)$, has the properties you request in 1. & 2. (set $\delta \equiv 0$) and 3. (see Eq. (23) here); re 4.: I assume you mean that you want to exclude pathological cases that are theoretically interesting, but practically useless. For that matter several applications in the original papers as well as several posts here illustrating applications of it with simulations and real world examples (How to transform data to normality?, How to transform leptokurtic distribution to normality?, Transformations to approximate normality with high kurtosis data) should suffice.
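The transformation is easy to play with directly (a sketch in Python; the value of $\delta$ and the simple moment-based kurtosis estimate are my choices):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=500_000)        # latent Gaussian input U
delta = 0.1                         # tail parameter; delta = 0 recovers U itself
y = u * np.exp(delta / 2 * u**2)    # heavy-tailed Lambert W x Gaussian (mu=0, sigma=1)

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return (z**4).mean() - 3.0

print(excess_kurtosis(u))   # near 0 for the Gaussian input
print(excess_kurtosis(y))   # clearly positive: the transform fattened the tails
```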
Generating t-distributed random numbers: lesser numbers near zero.
Short version: the problem lies with NumPy's x.std(), which does not
divide by the right degrees of freedom.
Repeating the experiment in R shows no discrepancy: either by comparing the histogram with the theoretical Student's $t$ density with three degrees of freedom
or the uniformity of the transform of the sample by the theoretical Student's $t$ cdf with three degrees of freedom
or the corresponding QQ-plot:
The sample of size 10⁵ was produced as follows in R:
X=matrix(rnorm(4*1e5),ncol=4)
Z=sqrt(4)*apply(X,1,mean)/apply(X,1,sd)
A Kolmogorov-Smirnov test likewise fails to reject the null:
> ks.test(Z,"pt",df=3)
One-sample Kolmogorov-Smirnov test
data: Z
D = 0.0039382, p-value = 0.08992
alternative hypothesis: two-sided
for one sample and
> ks.test(Z,"pt",df=3)
One-sample Kolmogorov-Smirnov test
data: Z
D = 0.0019529, p-value = 0.8402
alternative hypothesis: two-sided
for the next.
However..., the reason is much more mundane: it just happens that NumPy does not compute the sample standard deviation in the standard (Gosset's) way! Indeed it uses instead the root of
$$\frac{1}{n}\sum_{i=1}^n (x_i-\bar{x})^2$$
which leads to a $t$ distribution inflated by $$\sqrt\frac{n}{n-1}$$ and hence to the observed discrepancy:
> ks.test(sqrt(4/3)*Z,"pt",df=3)
One-sample Kolmogorov-Smirnov test
data: Z
D = 0.030732, p-value < 2.2e-16
alternative hypothesis: two-sided
While I have no personal objection to using $n$ instead of $n-1$ in the denominator, this definition clashes with Gosset's one and hence with the definition of the Student's $t$ distribution.
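The NumPy behaviour is controlled by the ddof argument of std(); a sketch reproducing the discrepancy directly in Python (SciPy's kstest plays the role of R's ks.test here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=(100_000, 4))
n = x.shape[1]

# NumPy's default std() divides by n (ddof=0), inflating the t statistic
t_bad = np.sqrt(n) * x.mean(axis=1) / x.std(axis=1)
# ddof=1 gives Gosset's sample standard deviation (divide by n - 1)
t_good = np.sqrt(n) * x.mean(axis=1) / x.std(axis=1, ddof=1)

p_bad = stats.kstest(t_bad, "t", args=(n - 1,)).pvalue
p_good = stats.kstest(t_good, "t", args=(n - 1,)).pvalue
print(p_bad, p_good)   # p_bad is tiny (rejection); p_good is orders of magnitude larger
```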
Why does t-SNE not separate linearly separable classes?
Yes.
You can use the following code to convince yourself.
N <- 1000
P <- 3
# Generates some random data
data <- matrix(data = rnorm(N*P), nrow = N, ncol = P)
# Assign linearly separable classes
labels <- (data[,1]+data[,2]>0)+1.
# Make sure that the data can be separated
plot(data[,1],data[,2], col = labels, xlab = 'x1', ylab = 'x2')
require(Rtsne)
model <- Rtsne::Rtsne(data)
# Observe this result while varying P
plot(model$Y, col = labels, type = 'p', pch = 21, xlab = 'tSNE x_1', ylab = 'tSNE x_2')
This is what you would observe when P is 3 (only one irrelevant attribute with respect to the linear separation, we are close to reproducing the linear separation).
And for P is 15, the tSNE cannot reproduce the linear separation.
What happened?
This is simple. The tSNE method relies on pairwise distances between points to produce clusters and is therefore totally unaware of any possible linear separability of your data.
If your points are "close" to each other, on different sides of a "border", a tSNE will consider that they belong to the same cluster.
This was exactly the point of the simulations above. When the number of dimensions is large, points look close to each other, no matter which side of the border they belong to. This is what tSNE fails to capture here.
On the other hand, when the number of irrelevant dimensions is low, close points have "no other choice" but to be on the same side of the border.
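"Points look close to each other" in high dimension can be quantified: the spread of pairwise distances shrinks relative to their mean as irrelevant dimensions are added, so the distances tSNE works from become nearly uninformative. A NumPy sketch (my addition; the function name is mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_distance_spread(p, n=200):
    # coefficient of variation of all pairwise Euclidean distances
    # between n standard Gaussian points in p dimensions
    x = rng.normal(size=(n, p))
    d = np.sqrt(((x[:, None, :] - x[None, :, :]) ** 2).sum(axis=-1))
    d = d[np.triu_indices(n, k=1)]
    return d.std() / d.mean()

print(relative_distance_spread(3))    # distances vary a lot relative to their mean
print(relative_distance_spread(100))  # distances concentrate: everything is "equally far"
```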
Side note.
Even though you may have a nice performance with a neural network, it may not mean that your data is linearly separable (unless there is only one unit in your neural, hum, network). Indeed neural networks can recognize non linear boundaries. If you want to test how "linearly separable" a data set is, you should use linear Support Vector Machines or regressions.
What's the point of reporting bootstrap bias?
The bootstrap bias estimate is an estimate of $E(\hat \theta_n) - \theta$, where $\theta$ is some function of the population and $\hat \theta_n$ is that function evaluated in your sample of size $n$. It estimates the bias in approximating $\theta$ with $\hat \theta_n$, for which you only have $n$ observations. A well-known example of this type of bias is the variance estimate $\frac{\sum (x_i - \bar x)^2}{n}$, which has expectation $\frac{n-1}{n}\sigma^2$. This is not the same as sampling bias, which has to do with how the sample was gathered (let's say you mistakenly sampled 99 women and one man when this ratio usually should be closer to 50-50). Sampling bias cannot be estimated by this method.
How this works:
If you had the population at hand you could calculate the true $\theta$ directly and draw several samples of size $n$ to estimate $\hat {E}[\hat \theta_n]$ empirically. Then $\hat E[\hat \theta_n] - \theta$ is an estimate of the bias due to having an $n$-sized sample.
When bootstrapping you use your sample to approximate this process. The empirical distribution function $\hat F$ is an estimate of the true distribution function $F$. The act of sampling from $\hat F$ is in a sense an estimate of the act of sampling from $F$. The idea is then to push the whole sample-from-the-population process above one step down:
Calculate $\hat \theta$ in your original sample as an estimate of $\theta$.
Calculate $\hat E[\hat \theta^*_n]$ from bootstrap samples as an estimate of $\hat E[\hat \theta_n]$.
The idea is that the bootstrapped $\hat \theta_n^*$ should be biased from the "true" $\hat \theta_n$ in the same way that the sampled $\hat \theta_n$ is biased from the true $\theta$. Here is a link to a previous answer of mine where I show that the bootstrap bias estimate for the $1/n$ estimate of variance is pretty good.
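A sketch of the variance example (in Python; all names and the sample size are mine): the bootstrap bias estimate of the $1/n$ variance estimator lands near its known bias under $\hat F$, which is $-\hat\theta/n$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
x = rng.normal(size=n)
theta_hat = x.var()      # the biased 1/n variance estimate (NumPy's default ddof=0)

# Bootstrap bias estimate: mean of theta* over resamples, minus theta_hat
boot = np.array([rng.choice(x, size=n, replace=True).var() for _ in range(20_000)])
bias_boot = boot.mean() - theta_hat

print(bias_boot)        # negative, as the 1/n estimator underestimates the variance
print(-theta_hat / n)   # the known bias of the 1/n estimator under F-hat
```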
Is spline basis orthogonal?
|
Computationally, sometimes; conceptually, rarely. (This started as comment...)
As already presented here (upvote it if you haven't already), when we use a spline in the context of a generalised additive model, as soon as the spline basis is created, fitting reverts to standard GLM modelling of the basis coefficients for each separate basis function. This insight is important because we can generalise it further.
Let's say we have a B-spline that is very constrained. Something like an order 1 B-spline so we can see the knot locations exactly:
library(splines)
set.seed(123)
myX = sort(runif(1000, max = 10))
myKnots = c(1,3)
Bmatrix <- bs(x = myX, degree = 1, knots = myKnots, intercept = FALSE)
matplot( myX, Bmatrix, type = "l");
This is a trivial B-spline basis $B$ that is clearly non-orthogonal (just do crossprod(Bmatrix) to check the inner products). So, conceptually, B-spline bases are non-orthogonal by construction.
An orthogonal series method would represent the data with respect to a series of orthogonal basis functions, like sines and cosines (e.g. a Fourier basis). Notably, an orthogonal method would allow us to select only the "low frequency" terms for further analysis. This brings us to the computational part.
Because the fitting of a spline is an expensive process we try to simplify the fitting procedure by employing low-rank approximations. An obvious case of these are the thin plate regression splines used by default in the s function from mgcv::gam where the "proper" thin plate spline would be very expensive computationally (see ?smooth.construct.tp.smooth.spec). We start with the full thin plate spline and then truncate this basis in an optimal manner, dictated by the truncated eigen-decomposition of that basis. In that sense, computationally, yes, we will have an orthogonal basis for our spline basis even when the basis itself is not orthogonal.
The spline is the "smoothest" function passing near our sampled values $X$. As the spline basis now provides an equivalent representation of our $X$ in the space spanned by $B$, further transforming that basis $B$ to another equivalent basis $Q$ does not alter our original results.
Going back to our trivial example, we can get the equivalent orthogonal basis $Q$ through SVD and then use it to get the equivalent results (depending on the order of the approximation). For example:
svdB = svd(t(Bmatrix));
Q = svdB$v;
Working now with this new system $Q$ is more desirable than with the original system $B$ because numerically $Q$ is far more stable (OK, $B$ is well-behaved here).
Base R also tries to exploit these orthogonality properties. If we use poly, by default we get the equivalent orthogonal polynomials rather than the raw polynomials of our predictor (see the raw argument).
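To make the computational point concrete, here is a sketch (Python rather than R; the degree-1 "hat" basis is hand-built for illustration, not produced by a spline library) showing that such a basis has a non-diagonal Gram matrix, while its SVD yields an orthonormal basis $Q$ spanning the same space:

```python
import numpy as np

rng = np.random.default_rng(123)
x = np.sort(rng.uniform(0, 10, size=500))

# hand-built degree-1 "hat" function basis with interior knots at 1 and 3
knots = np.array([0.0, 1.0, 3.0, 10.0])
B = np.column_stack([
    np.interp(x, knots, np.eye(len(knots))[j])   # hat centred at knot j
    for j in range(1, len(knots))
])

gram = B.T @ B                                   # non-diagonal: B is not orthogonal
Q = np.linalg.svd(B, full_matrices=False)[0]     # orthonormal columns, same span
```

The columns of $Q$ are orthonormal and span the same column space as $B$, so fitting in either system gives equivalent results.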
|
Is spline basis orthogonal?
|
Computationally, sometimes; conceptually, rarely. (This started as comment...)
As already presented here (upvote it if you don't have already) when we use a spline in the context a generalised additiv
|
Is spline basis orthogonal?
Computationally, sometimes; conceptually, rarely. (This started as comment...)
As already presented here (upvote it if you haven't already), when we use a spline in the context of a generalised additive model, as soon as the spline basis is created, fitting reverts to standard GLM modelling of the basis coefficients for each separate basis function. This insight is important because we can generalise it further.
Let's say we have a B-spline that is very constrained. Something like an order 1 B-spline so we can see the knot locations exactly:
library(splines)
set.seed(123)
myX = sort(runif(1000, max = 10))
myKnots = c(1,3)
Bmatrix <- bs(x = myX, degree = 1, knots = myKnots, intercept = FALSE)
matplot( myX, Bmatrix, type = "l");
This is a trivial B-spline basis $B$ that is clearly non-orthogonal (just do crossprod(Bmatrix) to check the inner products). So, conceptually, B-spline bases are non-orthogonal by construction.
An orthogonal series method would represent the data with respect to a series of orthogonal basis functions, like sines and cosines (e.g. a Fourier basis). Notably, an orthogonal method would allow us to select only the "low frequency" terms for further analysis. This brings us to the computational part.
Because the fitting of a spline is an expensive process we try to simplify the fitting procedure by employing low-rank approximations. An obvious case of these are the thin plate regression splines used by default in the s function from mgcv::gam where the "proper" thin plate spline would be very expensive computationally (see ?smooth.construct.tp.smooth.spec). We start with the full thin plate spline and then truncate this basis in an optimal manner, dictated by the truncated eigen-decomposition of that basis. In that sense, computationally, yes, we will have an orthogonal basis for our spline basis even when the basis itself is not orthogonal.
The spline is the "smoothest" function passing near our sampled values $X$. As the spline basis now provides an equivalent representation of our $X$ in the space spanned by $B$, further transforming that basis $B$ to another equivalent basis $Q$ does not alter our original results.
Going back to our trivial example, we can get the equivalent orthogonal basis $Q$ through SVD and then use it to get the equivalent results (depending on the order of the approximation). For example:
svdB = svd(t(Bmatrix));
Q = svdB$v;
Working now with this new system $Q$ is more desirable than with the original system $B$ because numerically $Q$ is far more stable (OK, $B$ is well-behaved here).
Base R also tries to exploit these orthogonality properties. If we use poly, by default we get the equivalent orthogonal polynomials rather than the raw polynomials of our predictor (see the raw argument).
|
Is spline basis orthogonal?
Computationally, sometimes; conceptually, rarely. (This started as comment...)
As already presented here (upvote it if you don't have already) when we use a spline in the context a generalised additiv
|
40,256
|
What is steepest descent? Is it gradient descent with exact line search?
|
Steepest descent is a special case of gradient descent where the step length is chosen to minimize the objective function value. Gradient descent refers to any of a class of algorithms that calculate the gradient of the objective function, then move "downhill" in the indicated direction; the step length can be fixed, estimated (e.g., via line search), or ... (see this link for some examples).
Gradient-based optimization is, as Cliff AB points out in comments to the OP, more general still, referring to any method that uses gradients to optimize a function. Note that this does not mean you necessarily move in the direction that would be indicated by the gradient (see, for example, Newton's method.)
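For a quadratic objective the exact line-search step length has a closed form, which makes steepest descent easy to sketch. Below is a Python illustration on a made-up 2×2 symmetric positive-definite problem (all numbers are arbitrary):

```python
import numpy as np

# f(x) = 0.5 x^T A x - b^T x, minimized at x* = A^{-1} b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

x = np.zeros(2)
for _ in range(100):
    g = A @ x - b                       # gradient of f at x
    if np.linalg.norm(g) < 1e-12:
        break
    alpha = (g @ g) / (g @ A @ g)       # exact line search: argmin_a f(x - a*g)
    x = x - alpha * g                   # move downhill by the optimal step

x_star = np.linalg.solve(A, b)
```

Replacing the `alpha` line with a fixed constant turns this steepest-descent sketch back into plain fixed-step gradient descent.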
|
What is steepest descent? Is it gradient descent with exact line search?
|
Steepest descent is a special case of gradient descent where the step length is chosen to minimize the objective function value. Gradient descent refers to any of a class of algorithms that calculate
|
What is steepest descent? Is it gradient descent with exact line search?
Steepest descent is a special case of gradient descent where the step length is chosen to minimize the objective function value. Gradient descent refers to any of a class of algorithms that calculate the gradient of the objective function, then move "downhill" in the indicated direction; the step length can be fixed, estimated (e.g., via line search), or ... (see this link for some examples).
Gradient-based optimization is, as Cliff AB points out in comments to the OP, more general still, referring to any method that uses gradients to optimize a function. Note that this does not mean you necessarily move in the direction that would be indicated by the gradient (see, for example, Newton's method.)
|
What is steepest descent? Is it gradient descent with exact line search?
Steepest descent is a special case of gradient descent where the step length is chosen to minimize the objective function value. Gradient descent refers to any of a class of algorithms that calculate
|
40,257
|
What is steepest descent? Is it gradient descent with exact line search?
|
The gradient is a multi-variable generalization of the derivative (at a point): while a derivative is defined for functions of a single variable, the gradient is defined for functions of several variables.
Since gradient descent minimizes the error by moving downhill, the direction of steepest descent is the direction with the most negative slope.
|
What is steepest descent? Is it gradient descent with exact line search?
|
Gradient is a multi-variable generalization of the derivative (at a point). While a derivative can be defined on functions of a single variable, for functions of several variables.
Since descent is ne
|
What is steepest descent? Is it gradient descent with exact line search?
The gradient is a multi-variable generalization of the derivative (at a point): while a derivative is defined for functions of a single variable, the gradient is defined for functions of several variables.
Since gradient descent minimizes the error by moving downhill, the direction of steepest descent is the direction with the most negative slope.
|
What is steepest descent? Is it gradient descent with exact line search?
Gradient is a multi-variable generalization of the derivative (at a point). While a derivative can be defined on functions of a single variable, for functions of several variables.
Since descent is ne
|
40,258
|
General principles for extending the Elo system to games in which the margin of victory matters
|
A simple version of ELO can be cast as a logistic regression: for players $i,j$ with ratings $R_i,R_j$,
$$P(i\mbox{ beats }j)=\frac{1}{1+\exp(-\beta(R_i-R_j))}.$$
So you could just as easily predict the score instead by using a different link function, for example a Lorentzian or Gaussian:
$$P(\mbox{Game score}=x)=a\exp(-\alpha|\beta(R_i-R_j)-x|^\gamma)$$,
where the game score can be positive (in favor of $i$) or negative (in favor of $j$). So you don't need to compute the probability of winning; you can model the game score directly.
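As a sketch of the win/loss version cast this way (Python; the scaling $\beta=\ln 10/400$ and $K=32$ are conventional Elo choices assumed here, not taken from the answer):

```python
import math

def win_prob(r_i, r_j, beta=math.log(10) / 400):
    # logistic link; with this beta a 100-point gap gives roughly 0.64
    return 1.0 / (1.0 + math.exp(-beta * (r_i - r_j)))

def update(r_i, r_j, outcome, k=32):
    # standard win/loss Elo update; outcome is 1, 0.5 or 0 for player i
    delta = k * (outcome - win_prob(r_i, r_j))
    return r_i + delta, r_j - delta

p = win_prob(2100, 2000)   # the favourite's win probability
```

Replacing `win_prob` with a score-link such as the Lorentzian/Gaussian above would let the same update scheme target the margin instead of the binary outcome.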
|
General principles for extending the Elo system to games in which the margin of victory matters
|
A simple version of ELO can be cast as a logistic regression: for players $i,j$ with ratings $R_i,R_j$,
$$P(i\mbox{ beats }j)=\frac{1}{1+\exp(-\beta(R_i-R_j))}.$$
So you could just as easily predict s
|
General principles for extending the Elo system to games in which the margin of victory matters
A simple version of ELO can be cast as a logistic regression: for players $i,j$ with ratings $R_i,R_j$,
$$P(i\mbox{ beats }j)=\frac{1}{1+\exp(-\beta(R_i-R_j))}.$$
So you could just as easily predict the score instead by using a different link function, for example a Lorentzian or Gaussian:
$$P(\mbox{Game score}=x)=a\exp(-\alpha|\beta(R_i-R_j)-x|^\gamma)$$,
where the game score can be positive (in favor of $i$) or negative (in favor of $j$). So you don't need to compute the probability of winning; you can model the game score directly.
|
General principles for extending the Elo system to games in which the margin of victory matters
A simple version of ELO can be cast as a logistic regression: for players $i,j$ with ratings $R_i,R_j$,
$$P(i\mbox{ beats }j)=\frac{1}{1+\exp(-\beta(R_i-R_j))}.$$
So you could just as easily predict s
|
40,259
|
General principles for extending the Elo system to games in which the margin of victory matters
|
There are some works intended to include the margin of victory in a rating system (e.g. FiveThirtyEight for the NFL), but usually ranking systems (e.g. Elo, Glicko, or our rankade - here's a comparison) don't incorporate the margin of victory.
In most sports/games the margin of victory is not significant.
In chess the goal is to checkmate your opponent's king (and it doesn't matter how many pieces you and your opponent have on the board when you're able to do this), in basketball - like in most sports - winning 89-60, or 86-85, or 90-23 gives the team just a victory (and the score doesn't matter - except for mostly unused tiebreaker), and so on.
You say: "Consider a rating system where Chelsea and Man City have ratings 2000 and 2100. I'm looking for a rating system which not only predicts the score (around 0.64 for City) but also the margin of victory. Considering that the rating somehow gives us an expected margin of +3.2 for Manchester City, and the team wins 2-0, I'd also expect the system to reduce City's rating for not winning by a large enough margin."
Unlike rugby, in which you get a (little) bonus if you score 4+ tries, in soccer City gets the same 3 points even if it wins 8-0 (and probably, while leading 4-0, the City coach wants their best players to rest for the next matches...). The margin of victory could be significant (showing that there's a big difference between the teams), but it also could not, for many reasons. And, in a structure in which the goal is winning (no matter the score), it's not a good idea to build a ranking system that rewards a 'useless' large victory (3 points in the championship standings) and subtracts points for a 1-0 win against the last team in the ranking (the same 3 points!).
If anything, you can somehow reward a bigger-than-expected win, but you can't 'punish' a team for not winning by a large enough margin. They won, so they did their job.
Sure, there are (a few) games in which the margin of victory matters, but soccer (and nearly all sports, both with round robins and brackets) is not on this list.
|
General principles for extending the Elo system to games in which the margin of victory matters
|
There are some works intended to include the margin of victory in rating system (e.g. FiveThirtyEight for NFL), but usually ranking systems (e.g. Elo, Glicko, or our rankade - here's a comparison) don
|
General principles for extending the Elo system to games in which the margin of victory matters
There are some works intended to include the margin of victory in a rating system (e.g. FiveThirtyEight for the NFL), but usually ranking systems (e.g. Elo, Glicko, or our rankade - here's a comparison) don't incorporate the margin of victory.
In most sports/games the margin of victory is not significant.
In chess the goal is to checkmate your opponent's king (and it doesn't matter how many pieces you and your opponent have on the board when you're able to do this), in basketball - like in most sports - winning 89-60, or 86-85, or 90-23 gives the team just a victory (and the score doesn't matter - except for mostly unused tiebreaker), and so on.
You say: "Consider a rating system where Chelsea and Man City have ratings 2000 and 2100. I'm looking for a rating system which not only predicts the score (around 0.64 for City) but also the margin of victory. Considering that the rating somehow gives us an expected margin of +3.2 for Manchester City, and the team wins 2-0, I'd also expect the system to reduce City's rating for not winning by a large enough margin."
Unlike rugby, in which you get a (little) bonus if you score 4+ tries, in soccer City gets the same 3 points even if it wins 8-0 (and probably, while leading 4-0, the City coach wants their best players to rest for the next matches...). The margin of victory could be significant (showing that there's a big difference between the teams), but it also could not, for many reasons. And, in a structure in which the goal is winning (no matter the score), it's not a good idea to build a ranking system that rewards a 'useless' large victory (3 points in the championship standings) and subtracts points for a 1-0 win against the last team in the ranking (the same 3 points!).
If anything, you can somehow reward a bigger-than-expected win, but you can't 'punish' a team for not winning by a large enough margin. They won, so they did their job.
Sure, there are (a few) games in which the margin of victory matters, but soccer (and nearly all sports, both with round robins and brackets) is not on this list.
|
General principles for extending the Elo system to games in which the margin of victory matters
There are some works intended to include the margin of victory in rating system (e.g. FiveThirtyEight for NFL), but usually ranking systems (e.g. Elo, Glicko, or our rankade - here's a comparison) don
|
40,260
|
Gradient Descent (GD) vs Stochastic Gradient Descent (SGD)
|
Gradient Descent is an iterative method for solving an optimization problem. There is no concept of "epoch" or "batch" in classical gradient descent. The key ideas of gradient descent are:
Update the weights by the gradient direction.
The gradient is calculated precisely from all the data points.
Stochastic Gradient Descent can be explained as: a quick-and-dirty way to "approximate the gradient" from one single data point. If we relax this "one single data point" to "a subset of the data", then the concepts of batch and epoch arise.
I have a related answer here (with code and plot for the demo)
How could stochastic gradient descent save time comparing to standard gradient descent?
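The distinction can be sketched on a toy least-squares problem (Python; step sizes, iteration counts, and the simulated data are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
w_true = np.array([2.0, -1.0])
y = X @ w_true + 0.01 * rng.normal(size=200)

def full_gradient(w):
    # classical GD: the gradient is computed precisely from all data points
    return 2.0 * X.T @ (X @ w - y) / len(y)

w_gd = np.zeros(2)
for _ in range(500):
    w_gd = w_gd - 0.05 * full_gradient(w_gd)

w_sgd = np.zeros(2)
for _ in range(5000):
    i = rng.integers(len(y))            # SGD: one randomly chosen data point
    g = 2.0 * X[i] * (X[i] @ w_sgd - y[i])
    w_sgd = w_sgd - 0.01 * g
```

Each SGD step is far cheaper (one row instead of the whole matrix), at the cost of a noisier path toward the solution.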
|
Gradient Descent (GD) vs Stochastic Gradient Descent (SGD)
|
Gradient Descent is an iterative method for solving an optimization problem. There is no concept of "epoch" or "batch" in classical gradient descent. The key ideas of gradient descent are:
Update the weights b
|
Gradient Descent (GD) vs Stochastic Gradient Descent (SGD)
Gradient Descent is an iterative method for solving an optimization problem. There is no concept of "epoch" or "batch" in classical gradient descent. The key ideas of gradient descent are:
Update the weights by the gradient direction.
The gradient is calculated precisely from all the data points.
Stochastic Gradient Descent can be explained as: a quick-and-dirty way to "approximate the gradient" from one single data point. If we relax this "one single data point" to "a subset of the data", then the concepts of batch and epoch arise.
I have a related answer here (with code and plot for the demo)
How could stochastic gradient descent save time comparing to standard gradient descent?
|
Gradient Descent (GD) vs Stochastic Gradient Descent (SGD)
Gradient Descent is an iterative method for solving an optimization problem. There is no concept of "epoch" or "batch" in classical gradient descent. The key ideas of gradient descent are:
Update the weights b
|
40,261
|
On Stationarity and Invertibility of a process
|
The confusion comes from the fact that these conditions (that you state under the label "can be easily proven") pertain to the $Y_t=\varepsilon_t-\theta_1\varepsilon_{t-1}-\theta_2\varepsilon_{t-2}$ formulation. In your case this means $\theta_1=1.3$, but $\theta_2=-0.4$ (not $0.4$!). Substituting these into the conditions, you'll see that all of them are fulfilled.
|
On Stationarity and Invertibility of a process
|
The confusion comes from the fact that these conditions (that you state under the label "can be easily proven") pertain to the $Y_t=\varepsilon_t-\theta_1\varepsilon_{t-1}-\theta_2\varepsilon_{t-2}$ f
|
On Stationarity and Invertibility of a process
The confusion comes from the fact that these conditions (that you state under the label "can be easily proven") pertain to the $Y_t=\varepsilon_t-\theta_1\varepsilon_{t-1}-\theta_2\varepsilon_{t-2}$ formulation. In your case this means $\theta_1=1.3$, but $\theta_2=-0.4$ (not $0.4$!). Substituting these into the conditions, you'll see that all of them are fulfilled.
|
On Stationarity and Invertibility of a process
The confusion comes from the fact that these conditions (that you state under the label "can be easily proven") pertain to the $Y_t=\varepsilon_t-\theta_1\varepsilon_{t-1}-\theta_2\varepsilon_{t-2}$ f
|
40,262
|
On Stationarity and Invertibility of a process
|
It's causal and stationary because the AR roots are outside the unit circle.
polyroot(c(1,-.5)) # 2+0i
It's invertible because the MA roots are outside the unit circle.
polyroot(c(1,-1.3,.4)) # 1.25-0i 2.00+0i
You got these already, so nice.
The other restrictions you're writing down must be equivalent to the above. I haven't checked, but my guess is that, because quadratic polynomials have explicit formulas for their roots (the quadratic formula), you can just require those roots to have modulus greater than $1$, and then voila.
However, in this case, your AR polynomial is linear. So there's no reason why that should apply. And for the MA part, well, it's probably the above thing I mentioned. Although, you should check, because I haven't.
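The same root check can be done outside R; here is a Python sketch using numpy (note that numpy.roots takes coefficients highest degree first, the reverse order of R's polyroot):

```python
import numpy as np

# AR polynomial 1 - 0.5 z and MA polynomial 1 - 1.3 z + 0.4 z^2,
# written highest degree first for numpy.roots
ar_roots = np.roots([-0.5, 1.0])
ma_roots = np.roots([0.4, -1.3, 1.0])

# stationarity/invertibility require all roots outside the unit circle
outside = np.all(np.abs(ar_roots) > 1) and np.all(np.abs(ma_roots) > 1)
```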
|
On Stationarity and Invertibility of a process
|
It's causal and stationary because the AR roots are outside the unit circle.
polyroot(c(1,-.5)) # 2+0i
It's invertible because the MA roots are outside the unit circle.
> polyroot(c(1,-1.3,.4)) #1.2
|
On Stationarity and Invertibility of a process
It's causal and stationary because the AR roots are outside the unit circle.
polyroot(c(1,-.5)) # 2+0i
It's invertible because the MA roots are outside the unit circle.
polyroot(c(1,-1.3,.4)) # 1.25-0i 2.00+0i
You got these already, so nice.
The other restrictions you're writing down must be equivalent to the above. I haven't checked, but my guess is that, because quadratic polynomials have explicit formulas for their roots (the quadratic formula), you can just require those roots to have modulus greater than $1$, and then voila.
However, in this case, your AR polynomial is linear. So there's no reason why that should apply. And for the MA part, well, it's probably the above thing I mentioned. Although, you should check, because I haven't.
|
On Stationarity and Invertibility of a process
It's causal and stationary because the AR roots are outside the unit circle.
polyroot(c(1,-.5)) # 2+0i
It's invertible because the MA roots are outside the unit circle.
> polyroot(c(1,-1.3,.4)) #1.2
|
40,263
|
Empirical PDF from Empirical CDF
|
One of two things:
1) make fixed histogram bucket sizes and then count the number of points you get that occur in each bucket. In other words, break up the range of $x$ into n equal intervals, and then the count for each interval is the number of times your CDF has a 'step' up in that interval, for each interval. Caveat: you will need to normalize, when done, so that all buckets add to 100% probability.
2) Just take the differences between each pair of CDF points (thus the change in height between them), divide by $\delta x_i$ to get the slope of the CDF at that point along the $x$ axis, and use lines of those slopes to connect the points of a PDF plot. Essentially, you are taking and using the numerical approximation to the derivative of the CDF, which is the PDF. Warning: you will need to think through very carefully whether the way you do this accidentally shifts the distribution up or down by something like $\delta x_i/2$ at each point. In other words, centering each segment will be important to get right.
If you have a good number of points, method 1 will be a lot less error-prone - e.g., with 1000 points you can probably get a good discrete histogram representation to something like a normal distribution with 20-50 buckets which you can do numerical statistics on easily (mean, moments). Since that is usually what you want, it does the job.
I sense your desire to do something that looks more like a continuous function, which method 2 would give, but I would warn you away from that unless you have a small number of data points. You will find that: (1) it is going to be hard to represent somehow (i.e., on a spreadsheet or as a data structure); (2) it will be hard to work with even a good representation; and (3) it will take a lot of thought to get right.
I do a lot of numerical methods with unknown distributions, and method 1 is surprisingly accurate most of the time (again, with enough points).
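Both methods can be sketched in a few lines (Python; the sample size and bin count are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.sort(rng.normal(size=1000))
ecdf = np.arange(1, len(x) + 1) / len(x)     # empirical CDF at the sorted points

# method 1: bucket counts, normalised so the bars integrate to 1
density, edges = np.histogram(x, bins=30, density=True)

# method 2: finite differences of the ECDF (numerical derivative)
centers = (x[:-1] + x[1:]) / 2               # centre each segment
pdf_est = np.diff(ecdf) / np.diff(x)
```

As the answer warns, the raw finite differences of method 2 are much noisier than the histogram of method 1 for a sample this size.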
|
Empirical PDF from Empirical CDF
|
One of two things:
1) make fixed histogram bucket sizes and then count the number of points you get that occur in each bucket. In other words, break up the range of $x$ into n equal intervals, and th
|
Empirical PDF from Empirical CDF
One of two things:
1) make fixed histogram bucket sizes and then count the number of points you get that occur in each bucket. In other words, break up the range of $x$ into n equal intervals, and then the count for each interval is the number of times your CDF has a 'step' up in that interval, for each interval. Caveat: you will need to normalize, when done, so that all buckets add to 100% probability.
2) Just take the differences between each pair of CDF points (thus the change in height between them), divide by $\delta x_i$ to get the slope of the CDF at that point along the $x$ axis, and use lines of those slopes to connect the points of a PDF plot. Essentially, you are taking and using the numerical approximation to the derivative of the CDF, which is the PDF. Warning: you will need to think through very carefully whether the way you do this accidentally shifts the distribution up or down by something like $\delta x_i/2$ at each point. In other words, centering each segment will be important to get right.
If you have a good number of points, method 1 will be a lot less error-prone - e.g., with 1000 points you can probably get a good discrete histogram representation to something like a normal distribution with 20-50 buckets which you can do numerical statistics on easily (mean, moments). Since that is usually what you want, it does the job.
I sense your desire to do something that looks more like a continuous function, which method 2 would give, but I would warn you away from that unless you have a small number of data points. You will find that: (1) it is going to be hard to represent somehow (i.e., on a spreadsheet or as a data structure); (2) it will be hard to work with even a good representation; and (3) it will take a lot of thought to get right.
I do a lot of numerical methods with unknown distributions, and method 1 is surprisingly accurate most of the time (again, with enough points).
|
Empirical PDF from Empirical CDF
One of two things:
1) make fixed histogram bucket sizes and then count the number of points you get that occur in each bucket. In other words, break up the range of $x$ into n equal intervals, and th
|
40,264
|
Empirical PDF from Empirical CDF
|
The empirical PDF of a random sample is a discrete probability distribution which assigns probability mass $1/N$ to each observed value if there are no ties, $2/N$ to a value with 2 tied observations, $3/N$ with 3, and so on.
|
Empirical PDF from Empirical CDF
|
The empirical PDF of a random sample is a discrete probability distribution which assigns probability mass $1/N$ to each observation if there are no ties, 2 if there are 2 tied observations, 3 and so
|
Empirical PDF from Empirical CDF
The empirical PDF of a random sample is a discrete probability distribution which assigns probability mass $1/N$ to each observed value if there are no ties, $2/N$ to a value with 2 tied observations, $3/N$ with 3, and so on.
|
Empirical PDF from Empirical CDF
The empirical PDF of a random sample is a discrete probability distribution which assigns probability mass $1/N$ to each observation if there are no ties, 2 if there are 2 tied observations, 3 and so
|
40,265
|
Variance of residuals vs. MLE of the variance of the error term
|
If $Y \sim \mathcal N(X\beta, \sigma^2 I)$ then the log likelihood is
$$
l(\beta, \sigma^2|y) = -\frac n2 \log (2\pi) - \frac n2 \log(\sigma^2) - \frac 1{2\sigma^2}||y-X\beta||^2
$$
where we assume non-stochastic, full-rank predictors. From this we find that
$$
\frac{\partial l}{\partial \sigma^2} = 0 \implies \hat \sigma^2 = \frac 1n ||Y-X\hat \beta||^2.
$$
We want to know when the MLE $\hat \sigma^2$ is equal to the sample variance of the residuals
$$
\tilde \sigma^2 = \frac{1}{n}\sum_{i=1}^n (e_i - \bar e)^2
$$
where $e = Y - \hat Y$ are the residuals. We know
$$
n\tilde \sigma^2 = e^Te - n\bar e^2
$$
while
$$
n\hat \sigma^2 = e^Te
$$
so this tells us the two are equal when the constant vector $\mathbf 1$ is in the column space of $X$, which means $\bar e = 0$. If that is not the case then the two won't be exactly equal.
I'm leaving the rest of my answer here but as I understand OP's question better I don't think it applies.
Note
$$
||Y - X\hat \beta||^2 = (Y - HY)^T(Y - HY) = Y^T(I-H)Y
$$
where $H = X(X^TX)^{-1}X^T$. This means that we have a Gaussian quadratic form, so
$$
Var\left(Y^T (I-H)Y\right) = 2\sigma^4 \text{tr}(I-H) + 4\sigma^2 \beta^T X^T(I-H)X\beta.
$$
$X^T(I-H)X = X^TX - X^TX(X^TX)^{-1}X^TX = 0$ and $\text{tr}(I-H) = n-p$ so we have
$$
Var(\hat \sigma^2) = \frac{2\sigma^4(n-p)}{n^2}.
$$
The standard estimate of $\sigma^2$ is probably $s^2 := \frac{1}{n-p}||Y - X\hat \beta||^2$ (written $s^2$ here to avoid clashing with the $\tilde \sigma^2$ defined above; it is unbiased, as we can see by computing $E\left(Y^T(I-H)Y\right)$), so
$$
Var(s^2) = \frac{2\sigma^4}{n-p}.
$$
I'm not entirely sure what more than this you're looking for, as technically what you asked for was the variance of the residuals which is
$$
Var(e) = Var\left((I-H)Y\right) =\sigma^2 (I-H)
$$
but I don't think that's what you mean. Or if that is what you mean, then we can directly compare this to $Var(\varepsilon) = \sigma^2 I$ and the difference comes down to $\sigma^2 H$.
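The first part of the argument is easy to verify numerically: with an intercept column (so the constant vector $\mathbf 1$ is in the column space of $X$), the residuals average to zero and the MLE equals the sample variance of the residuals. A Python sketch with arbitrary simulated data:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
x1 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1])   # intercept puts 1 in the column space
y = 1.0 + 2.0 * x1 + rng.normal(size=n)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta_hat                    # residuals, with mean zero here

sigma2_mle = np.mean(e ** 2)                 # (1/n) ||y - X beta_hat||^2
resid_var = np.mean((e - e.mean()) ** 2)     # sample variance of residuals
```

Dropping the intercept column generally makes $\bar e \neq 0$, and the two quantities then differ by exactly $\bar e^2$.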
|
Variance of residuals vs. MLE of the variance of the error term
|
If $Y \sim \mathcal N(X\beta, \sigma^2 I)$ then the log likelihood is
$$
l(\beta, \sigma^2|y) = -\frac n2 \log (2\pi) - \frac n2 \log(\sigma^2) - \frac 1{2\sigma^2}||y-X\beta||^2
$$
and assuming non-s
|
Variance of residuals vs. MLE of the variance of the error term
If $Y \sim \mathcal N(X\beta, \sigma^2 I)$ then the log likelihood is
$$
l(\beta, \sigma^2|y) = -\frac n2 \log (2\pi) - \frac n2 \log(\sigma^2) - \frac 1{2\sigma^2}||y-X\beta||^2
$$
where we assume non-stochastic, full-rank predictors. From this we find that
$$
\frac{\partial l}{\partial \sigma^2} = 0 \implies \hat \sigma^2 = \frac 1n ||Y-X\hat \beta||^2.
$$
We want to know when the MLE $\hat \sigma^2$ is equal to the sample variance of the residuals
$$
\tilde \sigma^2 = \frac{1}{n}\sum_{i=1}^n (e_i - \bar e)^2
$$
where $e = Y - \hat Y$ are the residuals. We know
$$
n\tilde \sigma^2 = e^Te - n\bar e^2
$$
while
$$
n\hat \sigma^2 = e^Te
$$
so this tells us the two are equal when the constant vector $\mathbf 1$ is in the column space of $X$, which means $\bar e = 0$. If that is not the case then the two won't be exactly equal.
I'm leaving the rest of my answer here but as I understand OP's question better I don't think it applies.
Note
$$
||Y - X\hat \beta||^2 = (Y - HY)^T(Y - HY) = Y^T(I-H)Y
$$
where $H = X(X^TX)^{-1}X^T$. This means that we have a Gaussian quadratic form, so
$$
Var\left(Y^T (I-H)Y\right) = 2\sigma^4 \text{tr}(I-H) + 4\sigma^2 \beta^T X^T(I-H)X\beta.
$$
$X^T(I-H)X = X^TX - X^TX(X^TX)^{-1}X^TX = 0$ and $\text{tr}(I-H) = n-p$ so we have
$$
Var(\hat \sigma^2) = \frac{2\sigma^4(n-p)}{n^2}.
$$
The standard estimate of $\sigma^2$ is probably $\tilde \sigma^2 := \frac{1}{n-p}||Y - X\hat \beta||^2$ (which is unbiased, as we can see by computing $E\left(Y^T(I-H)Y\right)$) so
$$
Var(\tilde \sigma^2) = \frac{2\sigma^4}{n-p}.
$$
I'm not entirely sure what more than this you're looking for, as technically what you asked for was the variance of the residuals which is
$$
Var(e) = Var\left((I-H)Y\right) =\sigma^2 (I-H)
$$
but I don't think that's what you mean. Or if that is what you mean, then we can directly compare this to $Var(\varepsilon) = \sigma^2 I$ and the difference comes down to $\sigma^2 H$.
|
Variance of residuals vs. MLE of the variance of the error term
If $Y \sim \mathcal N(X\beta, \sigma^2 I)$ then the log likelihood is
$$
l(\beta, \sigma^2|y) = -\frac n2 \log (2\pi) - \frac n2 \log(\sigma^2) - \frac 1{2\sigma^2}||y-X\beta||^2
$$
and assuming non-s
|
40,266
|
Faster R-CNN: How to avoid multiple detection in same area?
|
This is a common property of object detectors such as Faster R-CNN: they predict every object several times. It is the job of a non-maximum suppression (NMS) function to filter out the duplicates. Loosely explained, NMS takes pairs of overlapping boxes of the same class and, if their overlap is greater than some threshold, keeps only the one with the higher probability. This procedure continues until there are no more boxes with sufficient overlap. This minimum overlap ratio is one of the hyperparameters you can tune.
The second hyperparameter you can tune is the threshold for the class probability (e.g. 70%). All the objects predicted with lower probability are simply ignored.
Tuning these two hyperparameters should give you a satisfactory prediction quality.
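The procedure above can be sketched in a few lines of Python (my own illustration; real detectors run it per class, and the box format and threshold values here are assumptions):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.7):
    """Greedy NMS: keep the highest-scoring box, drop its near-duplicates, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= iou_thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores, iou_thresh=0.5))  # -> [0, 2]: the 0.8 duplicate is suppressed
```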
|
40,267
|
Faster R-CNN: How to avoid multiple detection in same area?
|
As mentioned before, NMS is used to remove false positives.
Since you are using Faster R-CNN, the NMS threshold is automatically set to 0.7.
Therefore, you can use the OpenCV function cv2.dnn.NMSBoxes(boxes, confidences, confid, thresh).
You can dig into it deeper for any further information.
|
40,268
|
Clustering with Latent dirichlet allocation (LDA): Distance Measure
|
LDA does not have a distance metric
The intuition behind the LDA topic model is that words belonging to a topic appear together in documents. Unlike typical clustering algorithms such as K-Means, it does not assume any distance measure between topics. Instead, it infers topics purely from word counts, using the bag-of-words representation of documents.
This can be appreciated from the Gibbs sampler described in the paper by Griffiths et al.:
$$
P(z_i=j \mid \textbf{z}_{-i} , \textbf{w} ) \propto \frac{n^{(w_i)}_{-i,j}+\beta}{n^{(.)}_{-i,j}+W\beta} \times \frac{n^{(d_i)}_{-i,j}+\alpha}{n^{(d_i)}_{-i,.}+T\alpha}
$$
$P(z_i=j \mid \textbf{z}_{-i} , \textbf{w} )$ refers to the probability of assigning topic $j$ to $i^{th}$ word, given all other assignments. This depends on two probabilities:
Probability of word $w_i$ in topic $j$
Probability of topic $j$ in document $d_i$
These probabilities can be easily computed using the following counts:
$n^{(w_i)}_{-i,j}:$ number of times word $w_i$ was assigned to topic $j$
$n^{(.)}_{-i,j}:$ total number of words assigned to topic $j$
$n^{(d_i)}_{-i,j}:$ number of times topic $j$ was assigned in document $d_i$
$n^{(d_i)}_{-i,.}:$ total number of topics assigned in document $d_i$
$T:$ number of topics
$W:$ number of words in vocabulary
$\alpha, \beta:$ Dirichlet hyperparameters
Note that all counts exclude the current assignment, denoted by the $-i$ subscript.
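As an illustration (array names and shapes are my own, not from the paper), the conditional above can be evaluated directly from the count arrays:

```python
import numpy as np

def topic_conditional(w_i, d_i, n_wt, n_t, n_dt, n_d, alpha, beta):
    """Return P(z_i = j | z_{-i}, w) for all topics j.

    n_wt[w, j]: times word w is assigned to topic j (excluding position i)
    n_t[j]:     total words assigned to topic j    (excluding position i)
    n_dt[d, j]: times topic j appears in document d (excluding position i)
    n_d[d]:     total topic assignments in document d (excluding position i)
    """
    W, T = n_wt.shape
    word_term = (n_wt[w_i, :] + beta) / (n_t + W * beta)    # P(word w_i | topic j)
    doc_term = (n_dt[d_i, :] + alpha) / (n_d[d_i] + T * alpha)  # P(topic j | doc d_i)
    p = word_term * doc_term
    return p / p.sum()  # normalize the proportionality into a distribution

# Toy counts: W = 2 words, T = 2 topics, one document.
n_wt = np.array([[2.0, 0.0], [1.0, 3.0]])
n_t = n_wt.sum(axis=0)
n_dt = np.array([[3.0, 1.0]])
n_d = n_dt.sum(axis=1)
p = topic_conditional(0, 0, n_wt, n_t, n_dt, n_d, alpha=0.1, beta=0.01)
```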
Why does LDA work?
Referring to these Video Lectures, David Blei attributes it to the following:
|
40,269
|
Can we think of a Random Variable as an instantiation of its distribution?
|
Yes, it is a value, but no, it doesn't necessarily have to be realized. A random variable can be realized or unrealized, just as a house can be built or unfinished. The analogy is meant to emphasize that a random variable can be thought of as the value, while a distribution is a function that describes the probability of those values. A random variable is not the thing doing the generating (blueprint, probability distribution); rather it is the thing being generated (house, random variable).
You can take this a step further. A random variable can be "looked at" in a few ways. All of these entities are separate things but "describe" the same phenomenon. Depending on the question you want to answer, you might use a random variable's
value/label/representation, usually denoted by capital letters at the end of the alphabet. This is what he means when he talks about a random variable. This describes the outcome of one draw. Even though this convention is not always followed, usually it is capitalized if it has not been observed concretely, and it is written with a lower-case letter if it has.
probability density/mass function. This is usually what is meant by a random variable's "distribution." A random variable will have one of these if it is discrete (pmf) or continuous (pdf). Sometimes it is denoted by $f_X(x; \theta)$ or $p_X(x;\theta)$, or something similar. They are useful for finding a random variable's expected value, variance, or other expectations. They can also be summed (discrete rvs) or integrated (continuous rvs) to give you probabilities of certain events or outcomes of the random variable.
cumulative distribution function. This is a function that gives you probabilities that a random variable can be in a certain range.
moment generating function. When it exists, it "completely defines a random variable" and is good for finding the distribution of linear combinations of independent random variables. It is also another way to find a random variable's moments.
characteristic function, similar to the mgf above.
|
40,270
|
Can we think of a Random Variable as an instantiation of its distribution?
|
I'm going through the course, too. The Aha moment came with the distinction that a random variable is a function. Blitzstein isn't the only one who says this, but it was the first time I finally got it.
An r.v. is not an algebraic variable. In fact, it even makes sense if you make up privately, for didactic purposes only, a new name for it instead of variable. Just for one minute, you can beneficially lose any preconception you have for what a variable is in another context.
An r.v. maps one or more outcomes in the sample space to the real number line. It is therefore a function. The domain of an r.v. (a function) is the sample space, i.e. possible outcomes. The range of an r.v. (a function) is the support, namely the possible values of the r.v.
Sample space to real number support. That function is the r.v.
Support to probability. That function is the Probability Mass Function for a discrete r.v. or a Probability Density Function for a continuous r.v. The support (the real number the r.v. mapped to) was the range of the r.v., and it is now the domain of the PMF or PDF.
Until you run an experiment, you have no outcomes. You have probabilities of outcomes. The probability distribution tells you what those are for the r.v.'s support. When you run an experiment, you have outcomes. The name for that is an event. An expression like the random variable $X = 7$ in a probability formula is not an expression of algebraic equality. It is an expression of an event. The experiment had 1 or more outcomes which r.v. $X$ mapped to the number 7.
I can see the inclination to say this "instantiated" the r.v. Maybe the analogy of a programmatic class being allocated to memory as an instantiated object is a helpful visualization. However, the most helpful visualization for me has been the distinction that an r.v. is a function.
I think what gets "instantiated" in an experiment is the outcome! The sample space expressed the potentiality. The experiment realizes outcomes from the sample space, yielding events, which are subsets of the sample space. Before the experiment you had a function that said how you would map an event to the number line. That's the r.v. You could describe the probabilities of those events using a PMF or CDF. Once you have an outcome, you don't have a "concrete r.v.," you have an event. The function is still an abstraction. The outcome is concrete. The mapping tells you the output of the r.v.
Interestingly, the mapped value is not to be mistaken as the outcome.
If my experiment is flipping two coins, the outcomes in the sample space are: HH, HT, TH, TT. If I define r.v. $X$ as the number of heads in the outcome, then the range of the r.v. (called its support) is {0, 1, 2}. If the outcome of my flip is TH, that's an event, namely a subset of the sample space. The r.v. maps that to 1. However, the event $X = 1$ encompasses 2 outcomes, TH and HT. The probability of this event is: $P(X = 1) = 0.5$. I picked that one on purpose to highlight that an outcome (like TH) is not necessarily the same as a support value (like 1), and to highlight that the meaningful action of an r.v. is this mapping.
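The two-coin example can be made concrete in a few lines of Python (my own illustration of the mapping, not from the original answer): the r.v. is literally a function on the sample space.

```python
from itertools import product
from fractions import Fraction

# Sample space of two fair coin flips: ['HH', 'HT', 'TH', 'TT']
sample_space = [''.join(t) for t in product('HT', repeat=2)]

def X(outcome):
    """The random variable: a function mapping an outcome to a real number."""
    return outcome.count('H')

support = sorted({X(s) for s in sample_space})            # [0, 1, 2]
event = [s for s in sample_space if X(s) == 1]            # ['HT', 'TH']
prob = Fraction(len(event), len(sample_space))            # equally likely outcomes

assert support == [0, 1, 2]
assert prob == Fraction(1, 2)                             # P(X = 1) = 0.5
```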
In summary, an r.v. is a function.
|
40,271
|
Can we think of a Random Variable as an instantiation of its distribution?
|
Yes you can --- this is both technically feasible and it can also aid intuition
Intuition: Probabilistic intuition is best when it is built on an epistemic foundation that views probability as a belief based on available information. For this reason, it is generally a bad idea to try to build up intuition by thinking about whether a random variable is a concrete "realised" value or a random "unrealised" value. Instead, it is more useful to think about a random variable as always having a true value, but you may or may not know that value. The random variable has either been "observed" in which case its value is known, or it is "unobserved" in which case its value is not known.
Now let's step back and look at the "house blueprint" analogy for the probability distribution. If I show you a house blueprint then you will have a fair idea of what the house will look like, but there are a lot of little random aspects that you don't know (e.g., minor variations in craftsmanship, paint-job, etc.). Suppose that I build a large number of houses from that blueprint and then I show you one of those houses. The house I am showing you is now "observed" and so you can see the structure without having to rely on the blueprint. Moreover, you can see a lot of aspects of how the house is that were not clear from the blueprint. For example, you can see what colour the house is painted, you can see if there are any cracks or imperfections in the building, and where they are, etc. For these things that you have seen with your own eyes, the blueprint is no longer giving you any information about this house. Now think about one of the houses you have not seen. For that house you are still relying on the blueprint for what you think it looks like. You are not sure what colour I have painted it, you are not sure if or where there are imperfections, cracks, etc.
This is (imperfectly) analogous to a random variable and its distribution. Once you have observed the random variable, its probability distribution is no longer giving you any information on its value, because you can now see its value. Conversely, if you have not observed the random variable, your beliefs about it are based on its probability distribution. Now, this analogy is slightly imperfect, insofar as looking at a house does not show you every aspect of the house (there are still some things you can't see where you still rely on the blueprint). A slightly better probabilistic analogy here would be to consider a house as a random vector composed of a number of random variables, and you observe some of those random variables when you look at the house.
Notwithstanding this slight imperfection in the analogy, it still serves to aid intuition, and one can imagine a "perfected" version of the analogy where it is assumed that your inspection of the house is so thorough that you observe everything about it. The value of this analogy lies in the fact that it shows when the blueprint/distribution is giving you information about the house/random variable and when it is not.
Technical feasibility: Every univariate probability distribution corresponds to a probability measure $\mathbb{P}$ that maps subsets of the real numbers to a probability value between zero and one.$^\dagger$ From any distribution you can form a probability measure $\mathbb{P}_\infty$ corresponding to a sequence of independent and identically distributed random variables with that distribution. This means that if you have an initial distribution for a scalar random variable, it is always possible to define a sequence of IID random variables with that distribution. Technically speaking, if you start with any distribution $D$ then you can map this to a sequence $\mathbf{x} = (x_1,x_2,x_3,...) \sim \text{IID } D$.
This technical result ensures that we are on solid ground when we transition from thinking about a distribution to thinking about a sequence of "instantiations" of that distribution. We know that we will never encounter a situation where there is a technical impediment to transitioning from the distribution to an infinite number of "instantiations".
$^\dagger$ For technical reasons that are beyond the scope of this post, the domain of the probability measure does not include all subsets of the real numbers. Instead, the domain of the probability measure is the class of Borel sets, which includes sets that are made up from countable unions, intersections and negations of some initial real intervals.
|
40,272
|
Can we think of a Random Variable as an instantiation of its distribution?
|
One intuitive distribution is the Bernoulli distribution. It describes the outcome of tossing a coin that lands heads with probability $p$ and tails with probability $q=1-p$.
If you toss the coin once, you will observe either heads or tails. However, this outcome is the random variable; it is not the distribution. The distribution, however, defines with which probability you observe heads or tails. The same is true for all distributions -- continuous and discrete.
Blitzstein's analogy goes a bit further, because there exists not a single Bernoulli distribution, but a family of Bernoulli distributions: For each value of $p$ you will get a different Bernoulli distribution.
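A minimal simulation sketch (my own, with an assumed $p=0.3$): each draw is one realization of the random variable, while $p$ picks out one member of the Bernoulli family.

```python
import random

def bernoulli_draw(p, rng=random):
    """One realization of a Bernoulli(p) random variable: 1 = heads, 0 = tails."""
    return 1 if rng.random() < p else 0

random.seed(0)
draws = [bernoulli_draw(0.3) for _ in range(10_000)]
freq = sum(draws) / len(draws)
# The empirical frequency of heads approaches p as the number of draws grows.
assert abs(freq - 0.3) < 0.02
```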
|
40,273
|
Can we think of a Random Variable as an instantiation of its distribution?
|
No. In a loose sense, a random sample can be seen as some sort of instantiation of the random variable. However, the RV itself is not an instance of its distribution in any meaningful context. The distribution function doesn't have an instance.
|
40,274
|
L1 (MAE) vs L2 (MSE) when data is normalized between 0 and 1
|
Scaling does not change the relations between the values, because min-max normalization subtracts a constant and divides all the values by the same constant. Outliers will stay outliers. If there is a big difference between two values $x_1, x_2$ squared and a smaller difference between $x_1, x_3$ squared, then after normalizing the values the differences between them will change, but their relative ordering will stay the same. The same holds for differences of absolute values. Obviously, after normalizing, in the squared case the distances will still be on a squared scale, and in the absolute-value case they will still be linear.
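A small Python/numpy sketch (an illustration of my own, with made-up numbers) shows that min-max scaling divides every error by the same constant, so the outlier dominates the squared loss before and after normalization alike:

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 50.0])   # last point is an outlier
y_pred = np.array([1.5, 2.5, 2.0, 10.0])

def minmax(x, lo, hi):
    # scale with the same constants for both arrays so relations are preserved
    return (x - lo) / (hi - lo)

lo, hi = y_true.min(), y_true.max()
t, p = minmax(y_true, lo, hi), minmax(y_pred, lo, hi)

abs_err_raw = np.abs(y_true - y_pred)
abs_err_scaled = np.abs(t - p)
# absolute errors all shrink by the same factor hi - lo = 49 ...
print(abs_err_raw / abs_err_scaled)

sq_err_raw = (y_true - y_pred) ** 2
sq_err_scaled = (t - p) ** 2
# ... and squared errors by (hi - lo)^2, so relative weights are unchanged
print(sq_err_raw / sq_err_scaled)
```

In particular, the index of the largest error is the same before and after scaling, for both L1 and L2.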
|
40,275
|
What is the backpropagation formula for Selu activation function?
|
Ok, let's try it myself. The derivative of the SELU activation is
$$
\frac{\partial Y}{\partial X} = \lambda, \quad x > 0
$$
$$
\frac{\partial Y}{\partial X} = \lambda \alpha e^x, \quad x \leq 0
$$
with $\lambda = 1.0507$, $\alpha = 1.6733$. For the backward pass, this gets multiplied by the incoming gradient: $\frac{\partial E}{\partial X} = \frac{\partial E}{\partial Y} \frac{\partial Y}{\partial X}$.
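As a quick numerical check (a Python/numpy sketch of my own, using the constants quoted above), the analytic derivative agrees with a central finite difference:

```python
import numpy as np

LAMBDA, ALPHA = 1.0507, 1.6733

def selu(x):
    # selu(x) = lambda * x for x > 0, lambda * alpha * (e^x - 1) otherwise
    return LAMBDA * np.where(x > 0, x, ALPHA * np.expm1(x))

def selu_grad(x):
    # dY/dX = lambda for x > 0, lambda * alpha * e^x for x <= 0
    return LAMBDA * np.where(x > 0, 1.0, ALPHA * np.exp(x))

# compare against a central finite difference, avoiding x = 0
# (the derivative jumps there, so a straddling difference would disagree)
x = np.linspace(-3.0, 3.0, 60)
eps = 1e-6
numeric = (selu(x + eps) - selu(x - eps)) / (2 * eps)
print(np.max(np.abs(numeric - selu_grad(x))))  # tiny
```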
|
40,276
|
What is the backpropagation formula for Selu activation function?
|
The derivative (d) of the Selu function can be computed either from the input (x) to the Selu function or from its output (y).
To find derivative from the input:
$$
d = seluDerivative(x) =
\begin{cases}
\lambda & \text{if } x > 0\\
\lambda\alpha e^x & \text{if } x \leqslant 0\\
\end{cases}
$$
The problem with using this function in backprop is that you might not want to store the intermediate x value from the forward pass, and recomputing it would be slow.
Luckily the Selu function is fully invertible, so we can use this to find the derivative from the output y of the Selu function:
$$
y = selu(x) = \lambda
\begin{cases}
x & \text{if } x > 0\\
\alpha e^x - \alpha & \text{if } x \leqslant 0\\
\end{cases}
$$
$$
x = seluInverse(y) =
\begin{cases}
\frac{y}{\lambda} & \text{if } y > 0\\
\ln \left( \frac{y + \lambda\alpha}{\lambda\alpha} \right) & \text{if } y \leqslant 0\\
\end{cases}
$$
$$
d = seluInverseDerivative(y) =
\begin{cases}
\lambda & \text{if } y > 0\\
y + \lambda\alpha & \text{if } y \leqslant 0\\
\end{cases}
$$
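To sanity-check that the output-based derivative matches the input-based one, here is a small Python/numpy sketch (an illustration of my own, with the usual lambda and alpha constants):

```python
import numpy as np

LAMBDA, ALPHA = 1.0507, 1.6733

def selu(x):
    return LAMBDA * np.where(x > 0, x, ALPHA * np.expm1(x))

def selu_grad_from_x(x):
    # seluDerivative(x) from above
    return LAMBDA * np.where(x > 0, 1.0, ALPHA * np.exp(x))

def selu_grad_from_y(y):
    # seluInverseDerivative(y): for x <= 0, lambda*alpha*e^x = y + lambda*alpha,
    # so the derivative can be recovered from the output alone
    return np.where(y > 0, LAMBDA, y + LAMBDA * ALPHA)

x = np.linspace(-5.0, 4.0, 37)
y = selu(x)
# both routes give the same gradient (up to float rounding)
print(np.max(np.abs(selu_grad_from_y(y) - selu_grad_from_x(x))))
```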
|
40,277
|
Forward-backward model selection: What is the starting model?
|
I believe "forward-backward" selection is another name for "forward-stepwise" selection. This is the default approach used by stepAIC. In this procedure, you start with an empty model and build up sequentially just like in forward selection. The only caveat is that every time you add a new variable, $X_{new}$, you have to check to see if any of the other variables that are already in the model should be dropped after $X_{new}$ is included. In this approach, you can end up searching "nonlinearly" through all the different models.
-------- EDIT --------
The following R code illustrates the difference between the three selection strategies:
library(MASS)  # stepAIC is in MASS
set.seed(1)
N <- 200000
y <- rnorm(N)
x1 <- y + rnorm(N)
x2 <- y + rnorm(N)
x3 <- y + rnorm(N)
x4 <- rnorm(N)
x5 <- rnorm(N)
x6 <- x1 + x2 + x3 + rnorm(N)
data <- data.frame(y, x1, x2, x3, x4, x5, x6)
fit1 <- lm(y ~ ., data)
fit2 <- lm(y ~ 1, data)
stepAIC(fit1,direction="backward")
stepAIC(fit2,direction="forward",scope=list(upper=fit1,lower=fit2))
stepAIC(fit2,direction="both",scope=list(upper=fit1,lower=fit2))
I've modified your example just slightly in this code. First, I set a seed so that you can see the same data I used. I also made N smaller so the algorithm runs a little faster. I kept all your variables the same except for x6. x6 is now the most predictive of y individually - this will make it the first variable chosen in forward and forward-stepwise selection. But once x1, x2 and x3 enter the model, x6 becomes independent of y and should be excluded. You'll see that forward-stepwise does exactly this. It starts with x6, proceeds to include x1, x2 and x3, then it goes back and drops x6 and terminates. If you just use forward, then x6 will stay in the model because the algorithm never goes back to this sort of multicollinearity check.
|
40,278
|
Forward-backward model selection: What is the starting model?
|
Forward and backward model selection are two greedy approaches to solving the combinatorial optimization problem of finding the optimal combination of features (which is known to be NP-complete). Hence you need to look for suboptimal, computationally efficient strategies. See for example Floating search methods in feature selection by Pudil et al.
In the forward method you start with an empty model and iterate over all features. For each feature you train a model and select the feature which yields the best model according to your metric. In a similar fashion you proceed, adding at each step the feature that yields the best improvement when combined with the already selected ones.
In the backward method you just invert the procedure: start with all features, and iteratively remove the one whose removal hurts performance the least, or leads to the biggest improvement.
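The forward procedure described above can be sketched in a few lines of Python (a toy illustration of my own, scoring candidate features by residual sum of squares of an OLS fit on synthetic data):

```python
import numpy as np

def rss(X, y, cols):
    # residual sum of squares of an OLS fit on the chosen columns (plus intercept)
    A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return float(resid @ resid)

def forward_select(X, y, k):
    """Greedy forward selection: at each step, add the feature whose
    inclusion gives the best fit together with the already chosen ones."""
    chosen = []
    for _ in range(k):
        remaining = [c for c in range(X.shape[1]) if c not in chosen]
        best = min(remaining, key=lambda c: rss(X, y, chosen + [c]))
        chosen.append(best)
    return chosen

# synthetic data: only features 0 and 2 actually drive y
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = 3 * X[:, 0] + 2 * X[:, 2] + rng.normal(size=500)
print(forward_select(X, y, 2))  # [0, 2]: the strongest feature enters first
```

The backward variant simply starts from `list(range(X.shape[1]))` and at each step drops the feature whose removal increases the RSS the least.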
|
40,279
|
How to select hyperparameters for SVM regression after grid search?
|
Though I haven't fully understood the problem, I am answering as per my understanding of the question.
Have you tried including epsilon in the param_grid dictionary of GridSearchCV?
I see you have only used C and gamma as the parameters in the param_grid dict.
Then I think the search would itself pick the best epsilon for you.
Example:
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
import numpy as np
n_samples, n_features = 10, 5
np.random.seed(0)
y = np.random.randn(n_samples)
X = np.random.randn(n_samples, n_features)
parameters = {'kernel': ('linear', 'rbf', 'poly'), 'C': [1.5, 10], 'gamma': [1e-7, 1e-4], 'epsilon': [0.1, 0.2, 0.5, 0.3]}
svr = SVR()
clf = GridSearchCV(svr, parameters)
clf.fit(X, y)
clf.best_params_
output: {'C': 1.5, 'epsilon': 0.1, 'gamma': 1e-07, 'kernel': 'poly'}
|
40,280
|
Is there a counterexample to the claim that throwing away "insignificant" predictors doesn't generally harm a model?
|
I think your question(s) have four answers in total:
1) Will dropping non-significant predictors increase the root-mean-square error? Yes, virtually always, in the same way and for the same reason that it will always decrease the R-squared: a model will only ever use a predictor to improve its predictions (or, rather, its retrodictions, which I'll return to shortly). If the predictor's regression coefficient with the dependent variable is exactly zero, to infinite decimal places, then including it had no effect on the errors, and dropping it won't either, but that's about as realistic a scenario as flipping a coin and having it land on its edge. So generally speaking, the error will always increase when you drop a predictor.
2) Can it increase to some substantively meaningful degree even if the predictor you drop is insignificant? Yes, though the increase will typically be smaller than if you dropped a significant predictor. By way of illustration/proof, here's some R code that will (somewhat) quickly produce variables where one predictor is significant while the other is not, using the same dependent variable, and yet the RMSE for the insignificant variable is only worse than that for the significant one by an arguably trivial degree (less than half a percent increase).
# Package that has the rmse function
require(hydroGOF)
# Predefine some placeholders
pvalx1 <- 0
rmsex1 <- 0
pvalx2 <- 0
rmsex2 <- 1
# Redraw these three variables (x1, x2, and y) until x1 is significant as a predictor of y
#and x2 is not, but x2's RMSE is less than 0.5% higher
while(pvalx1 > 0.05 | pvalx2 < 0.05 | rmsex2/rmsex1 > 1.005) {
y <<- runif(100, 0, 100)
x1 <<- y + rnorm(100, sd=300)
x2 <<- y + rnorm(100, sd=500)
pvalx1 <- summary(lm(y ~ x1))$coefficients[2,4] # P-value for x1
pvalx2 <- summary(lm(y ~ x2))$coefficients[2,4] # P-value for x2
rmsex1 <<- rmse(predict(lm(y ~ x1)), y)
rmsex2 <<- rmse(predict(lm(y ~ x2)), y)
}
# Output the results
summary(lm(y ~ x1))
summary(lm(y ~ x2))
print(rmsex1, digits=10); print(rmsex2, digits=10)
You can change the 1.005 to a 1.001 and eventually produce an example where the RMSE is less than a tenth of a percent higher for the non-significant predictor. Of course, this is mostly due to the fact that "significance" is defined using some arbitrary P-value cutpoint, so the difference in RMSE is tiny usually because the two variables are almost identical and just barely on different sides of the 0.05 significance threshold.
This leads me to an important point about the relationship between multicollinearity and the effect that dropping predictors has on overall prediction error/model quality: the relationship is inverse, not direct as you implied. That is to say, when there is high multicollinearity, dropping any variable will have less of an effect on prediction error, because the other predictor(s), which were highly correlated with the dropped one, will pick up the slack, as it were, and happily take credit for the extra predictive power they now have, whether they are causal factors of the DV or just functioning as measurements for the actual causal factors which are not being measured and/or included. The error will still increase, but if the dropped predictor was strongly correlated with one or more of the remaining predictors, then much, or even most, of the increase in error that would otherwise occur will be prevented due to the increase in predictive power that one or more of the remaining predictors will now exhibit. This all is made clearest, I think, by an introduction to multivariate regression that includes ballantine graphs (basically Venn diagrams), such as the one in McClendon's fantastic book: https://books.google.com/books/about/Multiple_Regression_and_Causal_Analysis.html?id=kSgFAAAACAAJ
3) Does any of this matter if we only care about prediction and not causal inference? Yes, if only because it is always perfectly possible - especially if you have a lot of time on your hands - to build a model that retrodicts amazingly and yet predicts no better than chance. Consider one of the popular spurious correlations we all like to talk about, such as the near-perfect historical correlation between Miss America's age and the number of heat-related murders.
Sure, you can hand-wave to some degree when it comes to causal inference, and say that you don't care why you can predict heat-related murders using just Miss America's age, so long as you can - but the thing is, you can't, can you? You can only retrodict it, i.e. accurately guess what the rate of heat-related murders was in a given past year based on Miss America's age that year. Unless there is some unfathomable causal chain that produced this correlation and that will continue to drive it in the future, then this robust observed correlation is useless to you, "even" if you "only" care about prediction. So even if your RMSE (or other goodness-of-fit measure) is excellent and/or made better by some predictor, you need, at a minimum, the general causal inference theory that there is some persistent process driving the observed correlation into the future as well as throughout the observed past.
4) Can dropping a non-significant predictor lead to false causal inferences and/or false inferences about what is driving a successful forecasting model? Yes, absolutely - in fact, the significance level of a predictor's coefficient in a multivariate model tells you nothing at all about what dropping that predictor will do to the coefficients and significance levels of other predictors. Whether or not a given predictor is significant, dropping it from a multivariate regression may, or may not, make any other predictors significant that weren't before, or insignificant when they were significant before. Here's an R example of a randomly-generated situation where one variable (x1) is a significant predictor of the DV (y) but this can only be seen when we include x2 in our model, even though x2 is not significant as an independent predictor of y.
# Predefine placeholders
brpvalx1 <- 0 # This will be the p-value for x1 in a bivariate regression of y
mrpvalx1 <- 0 # This will be the p-value for x1 in a multivariate regression
# of y alongside x2
mrpvalx2 <- 0 # This will be the x2's p-value in the multivariate model
# Redraw all the variables until x1 does correlate with y, and this can
# only be seen when we control for x2,
# even though x2 is not significant in the multivariate model
while(brpvalx1 < 0.05 | mrpvalx1 > 0.05 | mrpvalx2 < 0.05) {
x1 <- runif(1000, 0, 100)
y <- x1 + rnorm(1000, sd=500)
x2 <- x1 + rnorm(1000, sd=500)
brpvalx1 <- summary(lm(y ~ x1))$coefficients[2,4]
mrpvalx1 <- summary(lm(y ~ x1 + x2))$coefficients[2,4]
mrpvalx2 <- summary(lm(y ~ x1 + x2))$coefficients[3,4]
}
# Output the results
summary(lm(y ~ x1 + x2))
summary(lm(y ~ x1))
The significance level on any coefficient, including the predictor you're considering dropping, in a multivariate model tells you about that variable's correlation not with the DV but with what's left of the DV - or, rather, of its variance - after all the other predictors are given their shot at explaining the DV and its variance. A variable x2 can easily have no independent correlation with the DV in this sense, when other, better predictors are present, and yet have a very strong bivariate correlation with the DV and with the other predictors, in which case x2's inclusion in the model can drastically change the correlation that the other predictors appear to have with what's left of the DV and its variance after x2 has explained what it can as if in a bivariate regression. In terms of a ballantine graph, x2 can have large overlap with y but most or all of this overlap can be within the overlap of x1 and y, while much of the other overlap between x1 and y remains outside x2's overlap. That verbal description may not be clear, but I can't find online the kind of really appropriate graph that McClendon has.
I think the tricky thing here is that it is the case that, in order for the inclusion of some additional predictor to change the results for the other predictors' coefficients and significance levels, it is necessary that the new predictor be correlated with both the dependent variable and the predictor it's affecting. But those are both bivariate relationships with everything else left to vary, which a single multivariate model won't tell you anything about unless you include interaction terms. Again, though, all that refers to the causal-inference dynamic of appraising individual coefficients and testing their non-zero-ness - if you just care about the overall goodness of fit, then the story is relatively simple in that the exclusion of a given variable will lower the goodness of fit, but the decrease will be large if and only if the variable was not strongly correlated with any of the other predictors, and was correlated both consistently (low p-value) and substantially (large coefficient) with the dependent variable. This does not mean, though, that dropping a significant predictor will always have a much larger increase in error than dropping an insignificant one - a barely significant variable, especially one with a small coefficient, might not matter much either.
|
40,281
|
What's the receptive field of a stack of dilated convolutions?
|
I think it should be roughly 1024*3 - to be exact, 3*1024 - 2 = 3070, because the stacked blocks' receptive fields overlap by one position.
After the first block, the indices of the receptive fields of the outputs should be 1-1024, 2-1025, 3-1026, etc. (assuming no padding, but receptive field size should be same with padding anyways).
When you make the second block with a receptive field size of 1024, the first output of that block will "see" the outputs that had receptive field indices 1-1024, 2-1025, ... 1024-2047. So its receptive field covers 1-2047. So each additional block adds 1023 (one less than its own receptive field, since neighbouring receptive fields overlap) to the overall receptive field size I think.
In general, I think the formula for the receptive field size s of a layer l should be:
$s_{l_0} = 1$
$s_{l_i}=s_{l_{i-1}} + (\text{kernel size} - 1) \cdot \text{dilation factor}_i$
If this is correct, their kernel size seems to be 2 (to arrive at 1024 receptive field size), which is a bit surprising, I hope it is not due to some fault of my logic :)
Stacking of the blocks might be also more useful to refine outputs at a more finegrained level after having processed larger receptive fields in the previous block, rather than just maximally increasing receptive field size.
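As a sanity check on the recurrence above, here is a small Python sketch (my own illustration, not from the question or paper) that computes the receptive field of stacked blocks, assuming kernel size 2 and dilations doubling from 1 to 512 within each block:

```python
def receptive_field(num_blocks, kernel_size=2, dilations=None):
    # Apply the recurrence s_i = s_{i-1} + (kernel_size - 1) * dilation_i
    # over every layer of every block, starting from s_0 = 1.
    if dilations is None:
        dilations = [2 ** j for j in range(10)]  # 1, 2, 4, ..., 512
    s = 1
    for _ in range(num_blocks):
        for d in dilations:
            s += (kernel_size - 1) * d
    return s

print(receptive_field(1))  # 1024
print(receptive_field(3))  # 3070, i.e. 3*1024 - 2
```

Each ten-layer block contributes 1023 to the receptive field on top of the initial 1, because consecutive blocks' receptive fields overlap by one position.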
|
What's the receptive field of a stack of dilated convolutions?
|
I think it should be roughly 1024*3 - to be exact, 3*1024 - 2 = 3070, because the stacked blocks' receptive fields overlap by one position.
After the first block, the indices of the receptive fields of the outputs should be 1-1024, 2-1025, 3-1026, etc. (assuming no padding, but receptive field size should be s
|
What's the receptive field of a stack of dilated convolutions?
I think it should be roughly 1024*3 - to be exact, 3*1024 - 2 = 3070, because the stacked blocks' receptive fields overlap by one position.
After the first block, the indices of the receptive fields of the outputs should be 1-1024, 2-1025, 3-1026, etc. (assuming no padding, but receptive field size should be same with padding anyways).
When you make the second block with a receptive field size of 1024, the first output of that block will "see" the outputs that had receptive field indices 1-1024, 2-1025, ... 1024-2047. So its receptive field covers 1-2047. So each additional block adds 1023 (one less than its own receptive field, since neighbouring receptive fields overlap) to the overall receptive field size I think.
In general, I think the formula for the receptive field size s of a layer l should be:
$s_{l_0} = 1$
$s_{l_i}=s_{l_{i-1}} + (\text{kernel size} - 1) \cdot \text{dilation factor}_i$
If this is correct, their kernel size seems to be 2 (to arrive at 1024 receptive field size), which is a bit surprising, I hope it is not due to some fault of my logic :)
Stacking of the blocks might be also more useful to refine outputs at a more finegrained level after having processed larger receptive fields in the previous block, rather than just maximally increasing receptive field size.
|
What's the receptive field of a stack of dilated convolutions?
I think it should be roughly 1024*3 - to be exact, 3*1024 - 2 = 3070, because the stacked blocks' receptive fields overlap by one position.
After the first block, the indices of the receptive fields of the outputs should be 1-1024, 2-1025, 3-1026, etc. (assuming no padding, but receptive field size should be s
|
40,282
|
What's the receptive field of a stack of dilated convolutions?
|
I created a script (github gist) that uses the equation above and plots the receptive field for a pixel on a 2D image.
The same can be understood for 1D signals.
Some examples
|
What's the receptive field of a stack of dilated convolutions?
|
I created a script (github gist) that uses the equation above and plots the receptive field for a pixel on a 2D image.
The same can be understood for 1D signals.
Some examples
|
What's the receptive field of a stack of dilated convolutions?
I created a script (github gist) that uses the equation above and plots the receptive field for a pixel on a 2D image.
The same can be understood for 1D signals.
Some examples
|
What's the receptive field of a stack of dilated convolutions?
I created a script (github gist) that uses the equation above and plots the receptive field for a pixel on a 2D image.
The same can be understood for 1D signals.
Some examples
|
40,283
|
Use loess regression with many zero values
|
A Loess confidence interval doesn't mean much unless the Loess parameters have been cross-validated (which usually is not the case). When you use Loess for exploration, as it was originally intended, understanding how to control it will help you guide your exploration and interpret its results better.
Consider this small study of a synthetic dataset which has only $0$ or $1$ as responses: it is an extreme example of your situation. The data, plotted as black points, are outcomes of Bernoulli$(p)$ variables ("coin flips") where $p$ varies in a damped sinusoidal manner with the horizontal coordinate $x$, as shown by the white reference curve in each panel. The panels vary only by the "span" of the Loess smooth, which determines how local each Loess estimate is: smaller spans produce estimates that are more localized; that is, they reflect the responses for the closest neighbors of each $x$ value much more than for distant neighbors. The smooth is shown in blue and its surrounding confidence band in dark gray.
The lefthand panel uses the default span of $0.75$. This causes the Loess estimate at each point to depend on most of the points in the plot: it is a heavy smooth for these data. In many cases the white plot lies outside the shaded confidence band, showing this confidence band may be misleading.
It is clear that only with the final span of $0.25$ does the smooth come at all close to the true values: here, the white graph is contained within the shaded gray area. Unfortunately, in practice we do not have access to any true underlying curve: that's precisely what we're trying to estimate.
All three of these smooths are perfectly valid, insofar as they are efforts to sketch out the overall trend in the response ("y") relative to the regressor ("x"). The heavy smooth at the left suggests the response rate is approximately stable (which, on average, it is). The lighter smooth at the right captures higher-frequency variation. In practice, it might not be apparent whether what it shows is "real" or is "noise."
In practice, we never accept just one default level of smoothing: we vary the amount of smoothing, exactly as illustrated here, in order to learn about the data at varying levels of local resolution. We might also vary the smoothing in order to create different kinds of visual descriptions of the data, guiding the viewer's eye to global trends (as at the left) or local behaviors (as at the right), as we see appropriate.
The best tool for "checking appropriateness" is to study the residuals of the smooth in the context of a particular analytical or visualization objective. Good books on Exploratory Data Analysis, such as John Tukey's EDA, provide a wealth of techniques for computing and analyzing smooths and their residuals.
If you would like to experiment, here is the R code that created these illustrations.
#
# Generate data.
#
n <- 2e2
x <- 1:n
p <- (sin(x/100 * 2*pi)^2 - 1/2)*exp(-x/n) + 1/2
set.seed(17)
y <- rbinom(n, 1, p)
df <- data.frame(x=x, y=y, p=p)
#
# Set up for drawing.
#
library(ggplot2)
spans <- c(0.75, 0.5, 0.25)
k <- length(spans)
viewports <- lapply(1:k, function(i)
grid::viewport(width=1/k, height=1, x=(i-1/2)/k, y=1/2))
names(viewports) <- spans
#
# Create the plots.
#
g <- ggplot(df, aes(x, y)) + geom_point(aes(x,y), df, alpha=0.25) +
coord_cartesian(ylim=c(0,1))
for (i in 1:k) {
print(g + geom_smooth(method="loess", span=spans[i]) +
geom_line(aes(x,p), df, color="White", lwd=1) +
labs(title=paste("Span =", spans[i])),
vp=viewports[[i]])
}
References
John W. Tukey, EDA. Addison-Wesley, 1977.
|
Use loess regression with many zero values
|
A Loess confidence interval doesn't mean much unless the Loess parameters have been cross-validated (which usually is not the case). When you use Loess for exploration, as it was originally intended,
|
Use loess regression with many zero values
A Loess confidence interval doesn't mean much unless the Loess parameters have been cross-validated (which usually is not the case). When you use Loess for exploration, as it was originally intended, understanding how to control it will help you guide your exploration and interpret its results better.
Consider this small study of a synthetic dataset which has only $0$ or $1$ as responses: it is an extreme example of your situation. The data, plotted as black points, are outcomes of Bernoulli$(p)$ variables ("coin flips") where $p$ varies in a damped sinusoidal manner with the horizontal coordinate $x$, as shown by the white reference curve in each panel. The panels vary only by the "span" of the Loess smooth, which determines how local each Loess estimate is: smaller spans produce estimates that are more localized; that is, they reflect the responses for the closest neighbors of each $x$ value much more than for distant neighbors. The smooth is shown in blue and its surrounding confidence band in dark gray.
The lefthand panel uses the default span of $0.75$. This causes the Loess estimate at each point to depend on most of the points in the plot: it is a heavy smooth for these data. In many cases the white plot lies outside the shaded confidence band, showing this confidence band may be misleading.
It is clear that only with the final span of $0.25$ does the smooth come at all close to the true values: here, the white graph is contained within the shaded gray area. Unfortunately, in practice we do not have access to any true underlying curve: that's precisely what we're trying to estimate.
All three of these smooths are perfectly valid, insofar as they are efforts to sketch out the overall trend in the response ("y") relative to the regressor ("x"). The heavy smooth at the left suggests the response rate is approximately stable (which, on average, it is). The lighter smooth at the right captures higher-frequency variation. In practice, it might not be apparent whether what it shows is "real" or is "noise."
In practice, we never accept just one default level of smoothing: we vary the amount of smoothing, exactly as illustrated here, in order to learn about the data at varying levels of local resolution. We might also vary the smoothing in order to create different kinds of visual descriptions of the data, guiding the viewer's eye to global trends (as at the left) or local behaviors (as at the right), as we see appropriate.
The best tool for "checking appropriateness" is to study the residuals of the smooth in the context of a particular analytical or visualization objective. Good books on Exploratory Data Analysis, such as John Tukey's EDA, provide a wealth of techniques for computing and analyzing smooths and their residuals.
If you would like to experiment, here is the R code that created these illustrations.
#
# Generate data.
#
n <- 2e2
x <- 1:n
p <- (sin(x/100 * 2*pi)^2 - 1/2)*exp(-x/n) + 1/2
set.seed(17)
y <- rbinom(n, 1, p)
df <- data.frame(x=x, y=y, p=p)
#
# Set up for drawing.
#
library(ggplot2)
spans <- c(0.75, 0.5, 0.25)
k <- length(spans)
viewports <- lapply(1:k, function(i)
grid::viewport(width=1/k, height=1, x=(i-1/2)/k, y=1/2))
names(viewports) <- spans
#
# Create the plots.
#
g <- ggplot(df, aes(x, y)) + geom_point(aes(x,y), df, alpha=0.25) +
coord_cartesian(ylim=c(0,1))
for (i in 1:k) {
print(g + geom_smooth(method="loess", span=spans[i]) +
geom_line(aes(x,p), df, color="White", lwd=1) +
labs(title=paste("Span =", spans[i])),
vp=viewports[[i]])
}
References
John W. Tukey, EDA. Addison-Wesley, 1977.
|
Use loess regression with many zero values
A Loess confidence interval doesn't mean much unless the Loess parameters have been cross-validated (which usually is not the case). When you use Loess for exploration, as it was originally intended,
|
40,284
|
Do I have to add the seasonal effect and trend back to ARIMA forecast?
|
No, you do not need to remove trend and/or seasonality before fitting an ARIMA model.
These models can handle certain types of trends and certain types of seasonality by themselves, or by including external regressors (the xreg argument, where you could include more complicated related effects like moving holidays, or non-polynomial trends, breaks in the trend, etc).
Including these regressors is intuitively similar to "removing trend and seasonality first, then ARIMA" (regression-with-ARIMA-errors model), but is done efficiently in one fitting step.
Yes, if you have removed trend and seasonality before fitting an ARIMA model, you will need to "add them back in" to get a forecast of your original series; that is, you need a forecast of the trend and seasonality to add back to your forecast of the rest. In some cases, the "forecast" of the seasonality is known exactly (e.g. the future dates of a moving holiday), which makes it easier.
If you have removed trend or seasonality by some process which does not define dynamics for these components (e.g. the X11 procedure for seasonality adjustment), then there is no canonical way to do this; you would need to estimate a new model to forecast these components.
Edit: Here's a simple R example for a classic series (AirPassengers) which has both trend and seasonality of a kind which can be reasonably well-captured by a standard (seasonal) ARIMA model, without additional regressors:
library(forecast)
mod <- auto.arima(AirPassengers)
fc <- forecast(mod, h=12)
plot(fc)
|
Do I have to add the seasonal effect and trend back to ARIMA forecast?
|
No, you do not need to remove trend and/or seasonality before fitting an ARIMA model.
These models can handle certain types of trends and certain types of seasonality by themselves, or by including e
|
Do I have to add the seasonal effect and trend back to ARIMA forecast?
No, you do not need to remove trend and/or seasonality before fitting an ARIMA model.
These models can handle certain types of trends and certain types of seasonality by themselves, or by including external regressors (the xreg argument, where you could include more complicated related effects like moving holidays, or non-polynomial trends, breaks in the trend, etc).
Including these regressors is intuitively similar to "removing trend and seasonality first, then ARIMA" (regression-with-ARIMA-errors model), but is done efficiently in one fitting step.
Yes, if you have removed trend and seasonality before fitting an ARIMA model, you will need to "add them back in" to get a forecast of your original series; that is, you need a forecast of the trend and seasonality to add back to your forecast of the rest. In some cases, the "forecast" of the seasonality is known exactly (e.g. the future dates of a moving holiday), which makes it easier.
If you have removed trend or seasonality by some process which does not define dynamics for these components (e.g. the X11 procedure for seasonality adjustment), then there is no canonical way to do this; you would need to estimate a new model to forecast these components.
Edit: Here's a simple R example for a classic series (AirPassengers) which has both trend and seasonality of a kind which can be reasonably well-captured by a standard (seasonal) ARIMA model, without additional regressors:
library(forecast)
mod <- auto.arima(AirPassengers)
fc <- forecast(mod, h=12)
plot(fc)
|
Do I have to add the seasonal effect and trend back to ARIMA forecast?
No, you do not need to remove trend and/or seasonality before fitting an ARIMA model.
These models can handle certain types of trends and certain types of seasonality by themselves, or by including e
|
40,285
|
What does one mean by ARCH effect?
|
If the squared residuals/errors of your time series model exhibit autocorrelation, then ARCH effects are present.
A quick google search offers a clear definition:
A time series exhibiting conditional heteroscedasticity—or autocorrelation in the squared series—is said to have autoregressive conditional heteroscedastic (ARCH) effects. Engle's ARCH test is a Lagrange multiplier test to assess the significance of ARCH effects
Source: https://www.mathworks.com/help/econ/engles-arch-test.html?requestedDomain=www.mathworks.com
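To make the test concrete, here is a hand-rolled sketch of Engle's LM statistic in plain NumPy (my own illustration, not code from the linked MathWorks page): regress the squared residuals on their own lags; n * R^2 is approximately chi-squared with nlags degrees of freedom under the null of no ARCH effects.

```python
import numpy as np

def arch_lm_stat(resid, nlags=5):
    # Engle's LM statistic: n * R^2 from regressing squared residuals
    # on their own first `nlags` lags; ~ chi^2(nlags) under "no ARCH".
    e2 = np.asarray(resid) ** 2
    y = e2[nlags:]
    X = np.column_stack(
        [np.ones(len(y))] + [e2[nlags - k:-k] for k in range(1, nlags + 1)]
    )
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r2 = 1 - ((y - X @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return len(y) * r2

rng = np.random.default_rng(0)
iid = rng.standard_normal(2000)   # no ARCH effects
e = np.zeros(2000)                # ARCH(1): sigma_t^2 = 0.2 + 0.5 * e_{t-1}^2
for t in range(1, 2000):
    e[t] = rng.standard_normal() * np.sqrt(0.2 + 0.5 * e[t - 1] ** 2)
print(arch_lm_stat(iid))  # small: consistent with chi^2(5)
print(arch_lm_stat(e))    # large: ARCH effects present
```

In practice you would use a packaged implementation of the test (and its p-value) rather than this sketch.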
|
What does one mean by ARCH effect?
|
If the squared residuals/errors of your time series model exhibit autocorrelation, then ARCH effects are present.
A quick google search offers a clear definition:
A time series exhibiting conditiona
|
What does one mean by ARCH effect?
If the squared residuals/errors of your time series model exhibit autocorrelation, then ARCH effects are present.
A quick google search offers a clear definition:
A time series exhibiting conditional heteroscedasticity—or autocorrelation in the squared series—is said to have autoregressive conditional heteroscedastic (ARCH) effects. Engle's ARCH test is a Lagrange multiplier test to assess the significance of ARCH effects
Source: https://www.mathworks.com/help/econ/engles-arch-test.html?requestedDomain=www.mathworks.com
|
What does one mean by ARCH effect?
If the squared residuals/errors of your time series model exhibit autocorrelation, then ARCH effects are present.
A quick google search offers a clear definition:
A time series exhibiting conditiona
|
40,286
|
What does one mean by ARCH effect?
|
I think that by ARCH effect they mean the correlation between the volatility of a time series, measured by its conditional variance, and its values or innovations in the past. The letters AR stand for autoregressive, C for conditional (i.e. conditional variance), and H for heteroskedasticity. So if the non-constant conditional variance of x(t) has some correlation with its own past values or past innovations, then we say an ARCH effect exists.
|
What does one mean by ARCH effect?
|
I think that by ARCH effect they mean the correlation between volatility of a time series, measured by conditional variance, and its values or innovations in the past. The letter AR stands for auto re
|
What does one mean by ARCH effect?
I think that by ARCH effect they mean the correlation between the volatility of a time series, measured by its conditional variance, and its values or innovations in the past. The letters AR stand for autoregressive, C for conditional (i.e. conditional variance), and H for heteroskedasticity. So if the non-constant conditional variance of x(t) has some correlation with its own past values or past innovations, then we say an ARCH effect exists.
|
What does one mean by ARCH effect?
I think that by ARCH effect they mean the correlation between volatility of a time series, measured by conditional variance, and its values or innovations in the past. The letter AR stands for auto re
|
40,287
|
What is Sequential MNIST, Permuted MNIST?
|
As in comment by @Batman, sequential MNIST is explained in section 4.3 of your link: https://arxiv.org/abs/1610.09038.
We evaluated Professor Forcing on the task of sequentially generating the pixels in MNIST digits.
As far as I am aware, sequential MNIST always implies the model does not get to see/generate the whole image at once (like for example a normal 2d-ConvNet would), but only one pixel at a time sequentially. So sequential MNIST should have the same meaning also in other non-generative contexts.
Also in section 4.3. they explain permuted mnist:
Applying a fixed random permutation to the pixels makes the problem even harder but IRNNs on the permuted pixels are still better than LSTMs on the non-permuted pixels.
The problem should be harder after permuting the pixels in all images with the same permutation, because you have to learn more long-range patterns: distinctive shapes, like the horizontal bar of the 7, are typically spread further apart in the input after permutation than before.
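For illustration, a small NumPy sketch (hypothetical, not from the paper) of turning a batch of images into permuted pixel sequences with one shared permutation:

```python
import numpy as np

rng = np.random.default_rng(42)
# One fixed permutation shared by every image (assuming 28x28 inputs).
perm = rng.permutation(28 * 28)

def to_permuted_sequence(img):
    # Flatten the image to a length-784 pixel sequence, then apply
    # the same fixed permutation used for all images.
    return img.reshape(-1)[perm]

batch = rng.random((2, 28, 28))
seqs = np.stack([to_permuted_sequence(im) for im in batch])
print(seqs.shape)  # (2, 784)
```

Because `perm` is drawn once and reused, every image is scrambled the same way, so the task remains learnable while the local 2-D structure is destroyed.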
|
What is Sequential MNIST, Permuted MNIST?
|
As in comment by @Batman, sequential MNIST is explained in section 4.3 of your link: https://arxiv.org/abs/1610.09038.
We evaluated Professor Forcing on the task of sequentially generating the pixels
|
What is Sequential MNIST, Permuted MNIST?
As in comment by @Batman, sequential MNIST is explained in section 4.3 of your link: https://arxiv.org/abs/1610.09038.
We evaluated Professor Forcing on the task of sequentially generating the pixels in MNIST digits.
As far as I am aware, sequential MNIST always implies the model does not get to see/generate the whole image at once (like for example a normal 2d-ConvNet would), but only one pixel at a time sequentially. So sequential MNIST should have the same meaning also in other non-generative contexts.
Also in section 4.3. they explain permuted mnist:
Applying a fixed random permutation to the pixels makes the problem even harder but IRNNs on the permuted pixels are still better than LSTMs on the non-permuted pixels.
The problem should be harder after permuting the pixels in all images with the same permutation, because you have to learn more long-range patterns: distinctive shapes, like the horizontal bar of the 7, are typically spread further apart in the input after permutation than before.
|
What is Sequential MNIST, Permuted MNIST?
As in comment by @Batman, sequential MNIST is explained in section 4.3 of your link: https://arxiv.org/abs/1610.09038.
We evaluated Professor Forcing on the task of sequentially generating the pixels
|
40,288
|
What is Sequential MNIST, Permuted MNIST?
|
Permuted Sequential MNIST is introduced in the "A Simple Way to Initialize Recurrent Networks of Rectified Linear Units" paper from 2015, which has Hinton as a co-author.
Sequential MNIST: "classify the MNIST digits [21] when the 784 pixels
are presented sequentially to the recurrent net"
Permuted Sequential MNIST: same thing with "a fixed random permutation of the pixels of the MNIST digits"
Permuted Sequential MNIST is not a real dataset; it's just a transformation of the MNIST dataset used to evaluate recurrent neural networks.
|
What is Sequential MNIST, Permuted MNIST?
|
Permuted Sequential MNIST is introduced in the "A Simple Way to Initialize Recurrent Networks of Rectified Linear Units" paper from 2015, which has Hinton as a co-author.
Sequential MNIST: "classify
|
What is Sequential MNIST, Permuted MNIST?
Permuted Sequential MNIST is introduced in the "A Simple Way to Initialize Recurrent Networks of Rectified Linear Units" paper from 2015, which has Hinton as a co-author.
Sequential MNIST: "classify the MNIST digits [21] when the 784 pixels
are presented sequentially to the recurrent net"
Permuted Sequential MNIST: same thing with "a fixed random permutation of the pixels of the MNIST digits"
Permuted Sequential MNIST is not a real dataset; it's just a transformation of the MNIST dataset used to evaluate recurrent neural networks.
|
What is Sequential MNIST, Permuted MNIST?
Permuted Sequential MNIST is introduced in the "A Simple Way to Initialize Recurrent Networks of Rectified Linear Units" paper from 2015, which has Hinton as a co-author.
Sequential MNIST: "classify
|
40,289
|
How can I calculate the AICc if the number of samples equals the number of parameters plus one
|
The case $n = k + 1$ corresponds (once the error variance is counted among the estimated parameters) to a saturated model,
$$
\# \textrm{parameters} = \# \textrm{observations}
$$
which is why the correction term $2k(k+1)/(n-k-1)$ blows up and you are seeing an effectively "infinite" penalisation.
One of the contexts in which Akaike's Information Criterion along with a host of others were developed, and is used frequently today, is linear regression. It's not always clear when the intercept or noise variance are counted or not, hence the "off by one" confusion.
Reference:
Two different formulas for AICc
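For reference, a minimal sketch of the usual small-sample correction (assuming the common formula AICc = AIC + 2k(k+1)/(n-k-1), and treating n <= k+1 as an infinite penalty):

```python
import math

def aicc(loglik, k, n):
    # AICc = AIC + 2k(k+1)/(n - k - 1); the correction term's
    # denominator hits zero when n = k + 1 (saturated model).
    aic = 2 * k - 2 * loglik
    denom = n - k - 1
    if denom <= 0:
        return math.inf  # infinite penalisation: the model cannot be ranked
    return aic + 2 * k * (k + 1) / denom

print(aicc(loglik=0.0, k=3, n=100))  # 6.25
print(aicc(loglik=0.0, k=3, n=4))    # inf
```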
|
How can I calculate the AICc if the number of samples equals the number of parameters plus one
|
The case $n = k + 1$ corresponds to a saturated model,
$$
\# \textrm{parameters} = \# \textrm{observations}
$$
which is why you are seeing effectively an "infinite" penalisation.
One of the contexts
|
How can I calculate the AICc if the number of samples equals the number of parameters plus one
The case $n = k + 1$ corresponds (once the error variance is counted among the estimated parameters) to a saturated model,
$$
\# \textrm{parameters} = \# \textrm{observations}
$$
which is why the correction term $2k(k+1)/(n-k-1)$ blows up and you are seeing an effectively "infinite" penalisation.
One of the contexts in which Akaike's Information Criterion along with a host of others were developed, and is used frequently today, is linear regression. It's not always clear when the intercept or noise variance are counted or not, hence the "off by one" confusion.
Reference:
Two different formulas for AICc
|
How can I calculate the AICc if the number of samples equals the number of parameters plus one
The case $n = k + 1$ corresponds to a saturated model,
$$
\# \textrm{parameters} = \# \textrm{observations}
$$
which is why you are seeing effectively an "infinite" penalisation.
One of the contexts
|
40,290
|
Comparing t-SNE solutions using their Kullback-Leibler divergences
|
Unfortunately, no; comparing the optimality of a perplexity parameter through the corresponding $KL(P||Q)$ divergence is not a valid approach. As I explained in this question: "The perplexity parameter increases monotonically with the variance of the Gaussian used to calculate the distances/probabilities $P$. Therefore as you increase the perplexity parameter as a whole you will get smaller distances in absolute terms and, subsequently, smaller KL-divergence values." This is described in detail in the original 2008 JMLR paper of Visualizing Data using $t$-SNE by Van der Maaten and Hinton.
You can easily see this programmatically with a toy dataset too. Say for example you want to use $t$-SNE for the famous iris dataset and you try different perplexities, e.g. $10, 20, 30, 40$. What would the empirical distribution of the scores look like? Well, we are lazy, so let's have the computer do the job for us and run the Rtsne routine with a few ($50$) different starting values and see what we get:
(Note: I use the Barnes-Hut implementation of $t$-SNE (van der Maaten, 2014) but the behaviour is the same.)
library(Rtsne)
REPS = 50; # Number of random starts
per10 <- sapply(1:REPS, function(u) {set.seed(u);
Rtsne(iris, perplexity = 10, check_duplicates= FALSE)}, simplify = FALSE)
per20 <- sapply(1:REPS, function(u) {set.seed(u);
Rtsne(iris, perplexity = 20, check_duplicates= FALSE)}, simplify = FALSE)
per30 <- sapply(1:REPS, function(u) {set.seed(u);
Rtsne(iris, perplexity = 30, check_duplicates= FALSE)}, simplify = FALSE)
per40 <- sapply(1:REPS, function(u) {set.seed(u);
Rtsne(iris, perplexity = 40, check_duplicates= FALSE)}, simplify = FALSE)
costs <- c( sapply(per10, function(u) min(u$itercosts)),
sapply(per20, function(u) min(u$itercosts)),
sapply(per30, function(u) min(u$itercosts)),
sapply(per40, function(u) min(u$itercosts)))
perplexities <- c( rep(10,REPS), rep(20,REPS), rep(30,REPS), rep(40,REPS))
plot(density(costs[perplexities == 10]), xlim= c(0,0.3), ylim=c(0,250), lwd= 2,
main='KL scores from difference perplexities on the same dataset'); grid()
lines(density(costs[perplexities == 20]), col='red', lwd= 2);
lines(density(costs[perplexities == 30]), col='blue', lwd= 2)
lines(density(costs[perplexities == 40]), col='magenta', lwd= 2);
legend('topright', col=c('black','red','blue','magenta'),
c('perp. = 10', 'perp. = 20', 'perp. = 30','perp. = 40'), lwd = 2)
Looking at the picture, it is clear that smaller perplexity values correspond to higher $KL$ scores, as expected from the original paper discussed above. Using the $KL$ scores to pick the optimal perplexity is therefore not very helpful. You can still use them to pick the best run for a given perplexity setting, though!
|
Comparing t-SNE solutions using their Kullback-Leibler divergences
|
Unfortunately, no; comparing the optimality of a perplexity parameter through the corresponding $KL(P||Q)$ divergence is not a valid approach. As I explained in this question: "The perplexity parameter i
|
Comparing t-SNE solutions using their Kullback-Leibler divergences
Unfortunately, no; comparing the optimality of a perplexity parameter through the corresponding $KL(P||Q)$ divergence is not a valid approach. As I explained in this question: "The perplexity parameter increases monotonically with the variance of the Gaussian used to calculate the distances/probabilities $P$. Therefore as you increase the perplexity parameter as a whole you will get smaller distances in absolute terms and, subsequently, smaller KL-divergence values." This is described in detail in the original 2008 JMLR paper of Visualizing Data using $t$-SNE by Van der Maaten and Hinton.
You can easily see this programmatically with a toy dataset too. Say for example you want to use $t$-SNE for the famous iris dataset and you try different perplexities, e.g. $10, 20, 30, 40$. What would the empirical distribution of the scores look like? Well, we are lazy, so let's have the computer do the job for us and run the Rtsne routine with a few ($50$) different starting values and see what we get:
(Note: I use the Barnes-Hut implementation of $t$-SNE (van der Maaten, 2014) but the behaviour is the same.)
library(Rtsne)
REPS = 50; # Number of random starts
per10 <- sapply(1:REPS, function(u) {set.seed(u);
Rtsne(iris, perplexity = 10, check_duplicates= FALSE)}, simplify = FALSE)
per20 <- sapply(1:REPS, function(u) {set.seed(u);
Rtsne(iris, perplexity = 20, check_duplicates= FALSE)}, simplify = FALSE)
per30 <- sapply(1:REPS, function(u) {set.seed(u);
Rtsne(iris, perplexity = 30, check_duplicates= FALSE)}, simplify = FALSE)
per40 <- sapply(1:REPS, function(u) {set.seed(u);
Rtsne(iris, perplexity = 40, check_duplicates= FALSE)}, simplify = FALSE)
costs <- c( sapply(per10, function(u) min(u$itercosts)),
sapply(per20, function(u) min(u$itercosts)),
sapply(per30, function(u) min(u$itercosts)),
sapply(per40, function(u) min(u$itercosts)))
perplexities <- c( rep(10,REPS), rep(20,REPS), rep(30,REPS), rep(40,REPS))
plot(density(costs[perplexities == 10]), xlim= c(0,0.3), ylim=c(0,250), lwd= 2,
main='KL scores from difference perplexities on the same dataset'); grid()
lines(density(costs[perplexities == 20]), col='red', lwd= 2);
lines(density(costs[perplexities == 30]), col='blue', lwd= 2)
lines(density(costs[perplexities == 40]), col='magenta', lwd= 2);
legend('topright', col=c('black','red','blue','magenta'),
c('perp. = 10', 'perp. = 20', 'perp. = 30','perp. = 40'), lwd = 2)
Looking at the picture, it is clear that smaller perplexity values correspond to higher $KL$ scores, as expected from the original paper discussed above. Using the $KL$ scores to pick the optimal perplexity is therefore not very helpful. You can still use them to pick the best run for a given perplexity setting, though!
|
40,291
|
Finding variance of AR process
|
Since $\varepsilon_{t}$ is independent of $y_{t-1}$,
$$\text{Var}(y_t)=\text{Var}(\phi y_{t-1}) + \text{Var}(\varepsilon_{t}).$$
As we know, $E(\varepsilon_{t}^2)=\sigma^2$. Then we have:
$$\text{Var}(y_t)=\text{Var}(\phi y_{t-1}) + \sigma^2.$$
Now using variance properties we take out $\phi$ from the variance:
$$\text{Var}(y_t)=\phi^2\text{Var}(y_{t-1}) + \sigma^2.$$
Given stationarity, $\text{Var}(y_t)=\text{Var}(y_{t-1})$, so we solve to get:
$$\text{Var}(y)=\frac{\sigma^2}{1-\phi^2}.$$
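As a quick sanity check (not part of the derivation), a Python Monte Carlo sketch with illustrative values $\phi = 0.5$, $\sigma = 1$ recovers $\sigma^2/(1-\phi^2) = 4/3$:

```python
import numpy as np

# Simulate a stationary AR(1): y_t = phi * y_{t-1} + eps_t, eps_t ~ N(0, sigma^2).
# phi, sigma and n are arbitrary illustrative choices.
rng = np.random.default_rng(0)
phi, sigma, n = 0.5, 1.0, 200_000

y = np.empty(n)
y[0] = rng.normal(0.0, sigma / np.sqrt(1 - phi**2))  # draw y_0 from the stationary law
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.normal(0.0, sigma)

theoretical = sigma**2 / (1 - phi**2)  # = 4/3
print(y.var(), theoretical)
```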
|
40,292
|
Finding variance of AR process
|
$$y_t = \varepsilon_{t} +\phi y_{t-1}= \varepsilon_{t} +\phi (\varepsilon_{t-1} +\phi y_{t-2}) = \sum_{j=0}^{\infty}\phi^j \varepsilon_{t-j},$$ and if $|\phi|<1$, then $$\text{Var}(y_t) = \sum_{j=0}^{\infty}(\phi^j)^2 \text{Var}(\varepsilon_{t-j})=\sigma^2 (1+\phi^2+\phi^4+\dots) = \sigma^2\frac{1-\lim_{n \rightarrow \infty} \phi^{2n}}{1-\phi^2} = \frac{\sigma^2}{1-\phi^2}.$$
P.S. From here, we can conclude that $\text{Var}(y_t)$ isn't a function of time.
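The geometric-series step can also be checked numerically; a small Python sketch with illustrative values $\phi = 0.8$, $\sigma = 2$:

```python
import numpy as np

# Partial sums sigma^2 * sum_{j=0}^{m-1} phi^(2j) should approach sigma^2 / (1 - phi^2).
phi, sigma = 0.8, 2.0
limit = sigma**2 / (1 - phi**2)
partial = sigma**2 * np.cumsum(phi ** (2 * np.arange(50)))
print(partial[-1], limit)
```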
|
40,293
|
Constructing features from k-means
|
There is a great paper relating k-means to sparse coding of features and showing how to address some of its weaknesses to produce good features. Even though it focuses on the particular case of image processing, it has valuable advice for the general case (how to do pre-whitening to decorrelate the data, and so on).
Finally, it is well known that algorithms like k-means and k-NN, which (in their original formulations) use Euclidean distance as a metric, perform poorly in high-dimensional settings. Here is an important reference addressing this point.
Edit: I came across this (IMHO really interesting) paper, Deterministic Feature Selection for k-Means Clustering, which provides a deterministic algorithm with theoretical analysis and performance guarantees. See also some of the references therein, especially those by the first author.
Just to make one thing clear: what is the problem you are addressing (number of samples, dimensionality, etc.)? The motivation for feature selection in this paper is the poor performance of k-means in high-dimensional spaces.
Often one assumption is made: only a few of the many features are relevant. Many approaches, like greedy search and randomized search, are suboptimal in some way, and not all have guarantees on their performance.
So what you do is iterate over a number of trials/alternative heuristics until you find a satisfactory result.
So in case you need to build new features, you could try to generate new, sensible features and then perform feature selection on the whole set of features.
Hope this helps.
|
40,294
|
Constructing features from k-means
|
Basically, there is no rule saying you must add this or that feature; you should add features that verifiably improve your classification.
You can be creative here and try several things.
E.g., you can create statistics of the variables (mean, sd, ...) for each cluster and add these. You can then also add, for example, the difference from this new mean/median.
Adding cluster 'quality measures' might also be an idea, like the
intra-cluster distance for each cluster, ...
You can also try different clustering methods to create additional features.
Keep in mind that just creating these variables is not everything; you also have to check whether your classification improves.
From my own experience:
Most of the time I could not improve classification results with new features created by clustering (but this is of course highly dataset-dependent).
Another important thing:
Make sure you do not include the target variable of your later test set for classification in the clustering. This will give misleading estimates of classification performance.
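To make the ideas above concrete, here is a hedged Python/scikit-learn sketch (the data and all parameter values are illustrative, not recommendations) that appends the cluster id, the distance to the assigned centroid, and the per-variable difference from the centroid as new features:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))  # stand-in for your feature matrix

km = KMeans(n_clusters=5, n_init=10, random_state=1).fit(X)
assigned = km.cluster_centers_[km.labels_]        # centroid of each row's cluster
dist_to_centroid = np.linalg.norm(X - assigned, axis=1)
diff_from_centroid = X - assigned                 # per-variable difference

# Augmented matrix: original features + cluster id + distance + differences
X_aug = np.hstack([X,
                   km.labels_[:, None],
                   dist_to_centroid[:, None],
                   diff_from_centroid])
```

Whether any of these columns actually helps must still be verified against held-out classification performance, as stressed above.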
|
40,295
|
K-Means Cluster has over 50% of the points in one cluster. How to optimize it?
|
This is very typical behavior of k-means when applied to non-continuous data. It's not what k-means is designed for; you are essentially operating it outside its specifications. Also, k-means is very sensitive to noise. You probably have a lot of one-element clusters, too?
Also, Spark is one of the worst tools for clustering. Consider getting the C code from BaylorML / Greg Hamerly. You will be surprised by how much faster it is. People always assume Spark would be the fastest, but the only thing it actually outperforms is Hadoop MapReduce. Depending on your sparsity, 1.3 million points should still fit into the main memory of a single machine. Then tools such as BaylorML and ELKI will just shine and be a lot faster than Spark.
But that doesn't really help you with your clustering problem, because it most likely is a data problem.
I suggest you A) visualize your data and the clustering results (PCA is more appropriate than t-SNE because it preserves distances better, so you see the outliers!) and B) start with a sample rather than all 1.3 million points at once! Only scale up once you have a working approach. And you may need to use clustering algorithms other than k-means...
|
40,296
|
K-Means Cluster has over 50% of the points in one cluster. How to optimize it?
|
I am using k-means clustering on the "words" matrix from an SVD of a tf-idf matrix and got similar results. I computed the sum of squares of the features for this large cluster and found they were all low-magnitude words.
Also, similar to your situation, I got a lot of one-word clusters. To combat this, I only chose data points with magnitudes between 0.025 and 1 (you could try magnitudes that fit your scale; mine was based on an orthonormal matrix with 400 columns).
I can't say this is the best approach, but it has helped.
|
40,297
|
K-Means Cluster has over 50% of the points in one cluster. How to optimize it?
|
It's very interesting that you are getting a giant cluster with 400k entries using bisecting k-means.
Bisecting k-means iteratively breaks down the cluster with the highest dissimilarity into smaller clusters. Since you are already producing 100+ clusters, it seems to me that maybe the 400k entry cluster has a very high similarity score.
I'd try to visualize the clusters via stratified sampling and then t-SNE. It might be that the 400k entries are more homogeneous than we think.
|
40,298
|
K-Means Cluster has over 50% of the points in one cluster. How to optimize it?
|
When you say "optimise the clustering", I take this to mean that you wish to divide your clusters in an efficient manner.
Before running k-means or bisecting k-means, it is advisable to run a principal component analysis (PCA) on your data to reduce its dimensionality. You can then make a scree-style (elbow) plot of the number of clusters against the within-groups sum of squares; the point at which the within-groups SSE levels off suggests a reasonable number of clusters.
The following link could also be of use to you:
https://spark.apache.org/docs/1.2.1/mllib-dimensionality-reduction.html#principal-component-analysis-pca
Run this test and see if you still get such a high concentration of observations in one cluster. It could be that an estimate of 150-200 clusters is in fact significantly different from the more realistic estimate.
|
40,299
|
K-Means Cluster has over 50% of the points in one cluster. How to optimize it?
|
In several situations like this, I have found it helpful to cluster the big cluster into subclusters. It doesn't make sense every time, but if your data has a lot of separate little islands, your $K$ will probably never be enough to cover the biggest "island".
Another approach would be dropping the smallest centroids with their neighbors and putting the same amount of new random centroids into the game.
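A minimal Python/scikit-learn sketch of the first idea (splitting the biggest cluster with a second k-means pass; the data and cluster counts are made up for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(500, 2)),    # one big "island"
               rng.normal(8, 0.3, size=(50, 2))])  # one small island

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_
big = np.bincount(labels).argmax()                 # id of the largest cluster

# Re-cluster only the members of the big cluster into subclusters
sub = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X[labels == big])
new_labels = labels.copy()
new_labels[labels == big] = labels.max() + 1 + sub.labels_  # fresh ids for the subclusters
```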
|
40,300
|
How to get the confidence interval of a Bernoulli trial if $\hat{p} = 0$?
|
The reason the usual "CLT" confidence interval becomes 0 is that when $p$ is very close to 0 or 1 (and the number of samples is relatively low), the CLT becomes a bad approximation. This is because when $p=0$ or $1$, your random variable is constant. On the other hand, when $p$ is very close to 1 or 0, you need a very large number of samples to distinguish $p$ from exactly 1 or 0.
There are a couple of approaches to get the true confidence interval. The easy way is to appeal to the Wilson score interval:
$$\frac{1}{1 + \frac{1}{n} z^2}
\left[
\hat{p} + \frac{1}{2n} z^2 \pm
z \sqrt{
\frac{1}{n}\hat{p} \left(1 - \hat{p}\right) +
\frac{1}{4n^2}z^2
}
\right].$$
The second option is to numerically estimate the true confidence interval by explicitly using the binomial distribution, as opposed to appealing to the normal distribution.
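A small Python implementation of the Wilson interval above ($z = 1.96$ for a 95% interval; the example counts are made up):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = p_hat + z**2 / (2 * n)
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (center - half) / denom, (center + half) / denom

# Even with zero observed successes the interval no longer collapses to [0, 0]:
lo, hi = wilson_interval(0, 40)
```

With $\hat{p} = 0$ and $n = 40$ this gives roughly $[0, 0.088]$, instead of the degenerate $[0, 0]$ from the normal-approximation interval.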
|