idx int64 1 56k | question stringlengths 15 155 | answer stringlengths 2 29.2k ⌀ |
|---|---|---|
44,801 | Is it confounding variable? | Here, smoking is the confounder.
The exposure is coffee drinking and the outcome is heart attack.
To be a confounder, the variable has to be a cause, or a proxy for a cause, of both the exposure and the outcome. It does not have to be a direct cause.
So here, it is sufficient for there simply to be a correlation between co... |
44,802 | Is it confounding variable? | As a somewhat less technical answer, and not necessarily going into strict definitions of what counts as proper causality and what does not:
The word 'to confound' itself means "to mistake / confuse something for something else".
A confounding variable, therefore, putting strict technical contexts asid... |
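A minimal simulation (with hypothetical probabilities, not taken from either answer) can illustrate the smoking/coffee/heart-attack structure: smoking raises both coffee drinking and heart-attack risk, so a crude comparison mistakes smoking's effect for a coffee effect, while stratifying on smoking removes the association.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# smoking causes both the "exposure" (coffee) and the "outcome" (heart attack);
# coffee itself has no causal effect on heart attacks in this simulation
smoker = rng.random(n) < 0.3
coffee = rng.random(n) < np.where(smoker, 0.8, 0.3)
attack = rng.random(n) < np.where(smoker, 0.10, 0.02)

# crude comparison: coffee drinkers appear to have a higher risk
crude_gap = attack[coffee].mean() - attack[~coffee].mean()

# within non-smokers the apparent coffee effect disappears
stratified_gap = (attack[coffee & ~smoker].mean()
                  - attack[~coffee & ~smoker].mean())
```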
44,803 | Demeaning with two (n) fixed effects in panel regressions | If you use the demeaning approach (which is theoretically correct), then you have to demean your data both cross-sectionally and over time (irrespective of the order). Here is how it works.
Assume the following regression model:
$$y_{it} = u_i + \nu_t + \beta X_{it} + e_{it} \,\,\,\,\, i = 1, 2, \dots, n \,\,\,\,\, T = 1, 2, ... |
44,804 | Demeaning with two (n) fixed effects in panel regressions | I have been trying to figure this out myself and saw that the only other answer to this question is wrong. You need to subtract the time and group means but then add the overall mean back in. See Greene (2012) on Fixed Time and Group Effects (Section 11.4.4). You can try it out yourself and see that just subtracting the time and ... |
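A sketch of this transformation with synthetic balanced-panel data (variable names are illustrative): subtract entity means and time means, add back the overall mean, then run OLS on the transformed data. The slope recovers the true β.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n, T, beta = 50, 10, 2.0
i = np.repeat(np.arange(n), T)                 # entity index
t = np.tile(np.arange(T), n)                   # time index
alpha = rng.normal(size=n)[i]                  # entity fixed effects
theta = rng.normal(size=T)[t]                  # time fixed effects
x = rng.normal(size=n * T) + 0.5 * alpha       # regressor correlated with the FE
y = alpha + theta + beta * x + rng.normal(size=n * T)
df = pd.DataFrame({"i": i, "t": t, "x": x, "y": y})

def two_way_demean(s: pd.Series) -> pd.Series:
    # subtract entity and time means, then add the overall mean back in
    return (s - s.groupby(df["i"]).transform("mean")
              - s.groupby(df["t"]).transform("mean") + s.mean())

xt, yt = two_way_demean(df["x"]), two_way_demean(df["y"])
beta_hat = (xt @ yt) / (xt @ xt)               # OLS slope on transformed data
```

Note this equivalence with the two-way fixed-effects estimator relies on the panel being balanced.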
44,805 | Demeaning with two (n) fixed effects in panel regressions | The point is to rid the process of the individual-specific and time-specific nuisance parameters (in this case the $\alpha_i$ and $\theta_t$ in the notation below). This idea dates back all the way to Neyman and Scott's (1948) Econometrica paper. To do so, you should subtract the individual-specific and time-specific means... |
44,806 | Demeaning with two (n) fixed effects in panel regressions | There also exists an iteration procedure using the equation shown by Neeraj (source: https://doi.org/10.1177/1536867X1501500318). This procedure also seems to lead to the same results if there is an unbalanced panel. I assume that the formula shown by Greene is also subject to some restriction, as addressed by Helix123. |
44,807 | Is there a name for a moving average when it is done not across time but some other variable? | A moving average filter is a special case of a Finite Impulse Response (FIR) filter, where equal weights are used that add up to unity.
Note that in the case of time-sampled data the result of the averaging is written for the time index of the most recent data point in the averaging window, hence the name filter. If a... |
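The equal-weight FIR view can be sketched directly; the index variable need not be time, only an ordering along which the window slides:

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])   # samples ordered by any variable
k = 3
weights = np.ones(k) / k       # equal FIR weights that add up to unity

# each output is the average of k consecutive inputs
y = np.convolve(x, weights, mode="valid")
```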
44,808 | Is there a name for a moving average when it is done not across time but some other variable? | Terminology can differ between fields, even fields apparently sharing applications. Based on statistical theory and practice in several fields (time series, spatial series, any application where a response may be smoothed as a function of predictors), I propose simply that a moving average is still a moving average outside a t... |
44,809 | Chi square test when sample sizes are different? | You can use a chi-squared test in your example with different sample sizes. Your "another verb type" would be verbs that are not oral verbs, i.e. all the other verbs.
Suppose in your example, $10$ of the $82$ verbs in sample one were oral verbs and $72$ were not, while $20$ of the $89$ verbs in sample two were oral verb... |
44,810 | Chi square test when sample sizes are different? | Just in case anyone is looking for the Python version of this, you can use scipy's chi2_contingency: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chi2_contingency.html
Using the same example as @Henry:
import numpy as np
from scipy.stats import chi2_contingency
obs = np.array([[10, 72], [20, 69]])
chi2... |
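The snippet is cut off above; a complete runnable version, using the documented return values of `scipy.stats.chi2_contingency`, looks like this:

```python
import numpy as np
from scipy.stats import chi2_contingency

# rows = the two samples, columns = (oral verbs, other verbs)
obs = np.array([[10, 72], [20, 69]])

# a 2x2 table has (2-1)*(2-1) = 1 degree of freedom
chi2, p, dof, expected = chi2_contingency(obs)
```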
44,811 | How to generate samples of Poisson-Lognormal distribution | You can generate a sample by first generating a normally distributed value, then taking the exponent of that, then using that as the parameter of a Poisson distribution and taking a sample from that distribution. The resulting samples of this three-step process will be Poisson-Lognormally distributed. |
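The three-step recipe can be sketched as follows (the parameter values are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(42)

def rpoisson_lognormal(mu, sigma, size, rng):
    """Poisson-lognormal draws: normal -> exponentiate -> Poisson."""
    log_rate = rng.normal(mu, sigma, size)   # step 1: normal draw
    rate = np.exp(log_rate)                  # step 2: lognormal rate
    return rng.poisson(rate)                 # step 3: Poisson draw per rate

sample = rpoisson_lognormal(mu=1.0, sigma=0.5, size=100_000, rng=rng)
# the distribution's mean equals E[rate] = exp(mu + sigma**2 / 2)
```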
44,812 | Is causal inference only from data possible? | Suppose we are given a dataset but not the capability of performing
some AB testing. We do some regression using X as predictor and Y as
response and get a model. Can we actually say something about the
causal relationship between X and Y?
No, you can't, even when all variables are observed; see here for instanc... |
44,813 | Is causal inference only from data possible? | From data alone, it's impossible. There could always be some factor outside the model that could influence both $X$ and $Y$ (or one of them). It's impossible to control for literally everything.
The closest we have is a randomized controlled experiment, but even that has problems with external validity (e.g. we assume tha... |
44,814 | Is causal inference only from data possible? | Potentially. Your intuition about the necessity to "resort to some physical/mechanical mechanism" is correct, but that does not mean that an explicit definition of such a mechanism is required. We can relax this problem.
There is a lot of work on causal inference from observational data where we do not explicitly formula... |
44,815 | How is it that an ML estimator might not be unique or consistent? | A multimodal likelihood function can have two modes of exactly the same value. In this case, the MLE may not be unique, as there may be two possible estimators that can be constructed from the equation $\partial l(\theta; x) / \partial \theta = 0$.
Example of such a likelihood from Wikipedia:
Here, see that there's no ... |
44,816 | How is it that an ML estimator might not be unique or consistent? | One example arises from rank deficiency. Suppose that you're conducting an OLS regression but your design matrix is not of full rank. In this case, there are any number of solutions which attain the maximum likelihood value. This problem isn't unique to OLS regression, but OLS regression is a simple enough example.
Anothe... |
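A tiny sketch of the rank-deficient case: with a collinear design matrix, distinct coefficient vectors give identical fitted values and hence identical Gaussian likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=20)
X = np.column_stack([x1, 2 * x1])   # second column is a multiple of the first: rank 1
y = 3 * x1 + rng.normal(size=20)

# two different coefficient vectors produce the same fitted values,
# so they attain exactly the same likelihood
b1 = np.array([3.0, 0.0])
b2 = np.array([1.0, 1.0])           # 1*x1 + 1*(2*x1) = 3*x1
same_fit = np.allclose(X @ b1, X @ b2)
```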
44,817 | How is it that an ML estimator might not be unique or consistent? | One additional example of non-uniqueness of the ML estimator:
To estimate the location parameter $\mu$ of the Laplace distribution through ML, you need a value $\hat{\mu}$ such that
$$ \sum_{i=1}^n \frac{|x_i - \hat{\mu}|}{x_i - \hat{\mu}} = \sum_{i=1}^n \mathrm{sgn}\left(x_i - \hat{\mu}\right) = 0,$$
so $\hat{\mu}$ must... |
44,818 | How is it that an ML estimator might not be unique or consistent? | Another simple example that shows that the ML estimator is not always unique is the model $U(\theta, \theta + 1)^n$.
If your sample is $(x_1, \dots, x_n)$, the likelihood $f(x_1, \dots, x_n \mid \theta)$ for this sample is $1$ if $x_i \in [\theta, \theta + 1] \,\, \forall i = 1, \dots, n$ and $0$ otherwise. |
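A quick numerical check of this: any $\theta$ in $[\max(x) - 1, \min(x)]$ attains likelihood $1$, so the maximizer is a whole interval rather than a single point.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.7, 1.7, size=8)   # sample from U(theta, theta + 1) with theta = 0.7

def likelihood(theta, x):
    # the U(theta, theta+1) density is 1 on its support, so the joint
    # likelihood is 1 iff every observation lies in [theta, theta + 1]
    return float(np.all((x >= theta) & (x <= theta + 1)))

lo, hi = x.max() - 1, x.min()       # every theta in [lo, hi] is an MLE
```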
44,819 | What is the state of the art in statistics tests for distinguishing good from bad random number generators? | In addition to the Dieharder suite that Stephan Kolassa mentioned, other well-known test suites include TestU01 and the NIST Statistical Test Suite (STS).
The PractRand library you mentioned rates Dieharder and STS as "bad" and TestU01 as "good". But, unlike the other test suites, PractRand is not as well known, and the... |
44,820 | What is the state of the art in statistics tests for distinguishing good from bad random number generators? | In 1995, the Diehard suite of tests was distributed. This is no longer state of the art; one limitation is that Diehard only uses about 10 million random numbers in each test, but modern uses of random numbers may consume many more, so tests should base their conclusions on larger samples.
A successor to the Diehard s... |
44,821 | What is the intuition behind getting a slope distribution in linear regression? | Consider the difference between a population and a sample taken from that population.
You are correct that standard linear regression provides a unique best-fitting line for the given data: for this one sample from a population of cases.
We are generally, however, interested in the characteristics of the population, no... |
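This can be made concrete by simulation (a synthetic population with a true slope of 2): each fresh sample yields a different fitted slope, and the collection of fitted slopes is the slope's sampling distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
true_slope, n_reps, n = 2.0, 2000, 30

slopes = np.empty(n_reps)
for r in range(n_reps):
    # draw a fresh sample of size n from the same population each time
    x = rng.normal(size=n)
    y = 1.0 + true_slope * x + rng.normal(size=n)
    slopes[r] = np.polyfit(x, y, 1)[0]   # fitted slope for this sample

# the slopes scatter around the true value; that spread is the
# sampling distribution of the slope estimator
```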
44,822 | What is the intuition behind getting a slope distribution in linear regression? | The true parameter/regression coefficients
Linear regression assumes the model:
$$y_i = \boldsymbol{\beta} \mathbf{x}_i + \epsilon_i$$
where $\boldsymbol\beta$ is assumed fixed and only the residual term $\epsilon_i$ is assumed to be distributed according to some distribution.
So the true parameter/coefficient is assume... |
44,823 | A huge gap between training and validation accuracy, confusion with the concept of Overfitting | Sounds like you are severely overfitting. Basically, you need to use a simpler model than the one you are currently using, or collect (a lot) more data. Generally, the more data you have, the more complex a model you can fit without overfitting.
I do not think you are going to get meaningful results using a CNN on such ... |
44,824 | How does eigenvalues measure variance along the principal components in PCA? [duplicate] | We start from the data covariance matrix
$$ S = \mathbb E(XX^{T}) - \mathbb E(X)\, \mathbb E(X)^{T}. $$
Say $\mu$ is a column vector of the same dimension as $X$ with $\mu^{T}\mu = 1$; then
$$\mu^{T}S\mu = \mu^{T}\left(\mathbb E(XX^{T}) - \mathbb E(X)\mathbb E(X)^{T}\right)\mu = \mathbb E\left((\mu^{T} X)(\mu^{T} X)^{T}\right) - \mathbb E(\mu^{T} X) \m... |
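A numerical check of this identity (using an arbitrary 2-D covariance): the variance of the data projected onto each unit eigenvector of the sample covariance matrix equals the corresponding eigenvalue, i.e. $\mu^{T}S\mu = \lambda$.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0.0, 0.0], [[4.0, 1.5], [1.5, 1.0]], size=10_000)

S = np.cov(X, rowvar=False)             # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(S)    # columns of eigvecs are unit vectors

# variance of the data projected onto each eigenvector
proj_var = np.var(X @ eigvecs, axis=0, ddof=1)
```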
44,825 | How does eigenvalues measure variance along the principal components in PCA? [duplicate] | Variance:
Variance is the squared deviation from zero (for mean-centered data), so the total variance of a vector is the sum of its squared values.
https://en.wikipedia.org/wiki/Variance
Eigenvectors and Eigenvalues:
Eigenvectors are basis vectors that capture the inherent patterns that make up a dataset. By convention these are unit vec... |
44,826 | Regression when output is in a specific interval | The appropriate technique depends on your goal.
If you are building a model for inference, you should focus on the properties of the distribution of your target conditional on covariates, $p(y|x)$.
For example, the value $0.5(y+1)$ may be distributed as $\mathrm{Beta}(\alpha(x), \beta(x))$. In this case, you may perform maximu... |
44,827 | Regression when output is in a specific interval | The simple linear regression theory is more developed for normal variables than for other distributions. When we have to deal with a problem like yours, we can use change of variables. In your case, I will use a change like:
$$ z = \frac {2y} {1-y^2} = \frac {1} {1-y} - \frac {1} {1+y} $$
This function is increasing: i...
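A hedged numpy sketch of the change-of-variables route from the second answer, on made-up data: map the bounded response $y \in (-1, 1)$ to the real line with $z = 2y/(1-y^2)$, fit ordinary least squares there, and map predictions back with the inverse (the root of $zy^2 + 2y - z = 0$ lying in $(-1,1)$).

```python
import numpy as np

def to_real(y):
    # The answer's change of variables: maps (-1, 1) onto the whole real line.
    return 2 * y / (1 - y ** 2)

def to_interval(z):
    # Inverse transform: the root of z*y^2 + 2*y - z = 0 that lies in (-1, 1).
    return np.where(z == 0, 0.0,
                    (np.sqrt(1 + z ** 2) - 1) / np.where(z == 0, 1.0, z))

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=500)
y = np.tanh(0.8 * x + 0.1 * rng.normal(size=500))  # synthetic response in (-1, 1)

# Fit least squares on the transformed scale, then map predictions back.
z = to_real(y)
slope, intercept = np.polyfit(x, z, 1)
y_hat = to_interval(slope * x + intercept)

assert np.all((y_hat > -1) & (y_hat < 1))       # predictions respect the bounds
assert np.allclose(to_interval(to_real(y)), y)  # transform and inverse agree
```

The synthetic data generator and the OLS fit on the $z$ scale are assumptions for illustration; the transform and its inverse are the ones implied by the answer.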
44,828 | How to find the expectation of the maximum of independent exponential variables? | The answer referenced in the comments is great, because it is based on straightforward probabilistic thinking. But it is possible to obtain the answer through elementary means, beginning from definitions.
Because $x_{(n)}$ is the largest of $n$ independent variables, the event $x_{(n)}\le x$ is the event that all the ...
44,829 | How to find the expectation of the maximum of independent exponential variables? | Method of Moments approach
Given a set of $n$ exponentially distributed i.i.d variables $X_i \sim EXP(1)$ the expected value of an ordered statistic $X_{i:n}$ is found in a straightforward fashion with the method of moments which gives the expected value as,
\begin{equation*}
\begin{aligned}[b]
E[X] = \left[\frac{\parti...
44,830 | How to find the expectation of the maximum of independent exponential variables? | A different approach is that we can view the order statistic as a sum statistic. An explanation is given here: https://math.stackexchange.com/a/4283180
If $X_k \sim Exp(1)$ then
$$max(X_1, X_2, \dots , X_n) \qquad \sim \qquad \sum_{k=1}^n Y_k$$
with $Y_k \sim Exp(n+1-k)$.
And then you can compute the expectation value ...
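All three derivations above give $E[x_{(n)}] = \sum_{k=1}^{n} 1/k$, the $n$-th harmonic number (for the sum representation, $E[\sum Y_k] = \sum 1/(n+1-k)$). A quick Monte Carlo check, with illustrative sample sizes:

```python
import numpy as np

n, reps = 5, 200_000
rng = np.random.default_rng(0)

# Maximum of n independent Exp(1) draws, repeated many times.
sample_max = rng.exponential(scale=1.0, size=(reps, n)).max(axis=1)

# Expected value of the maximum: the n-th harmonic number (137/60 for n = 5).
harmonic = sum(1.0 / k for k in range(1, n + 1))

assert abs(sample_max.mean() - harmonic) < 0.02
```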
44,831 | Why variance of OLS estimate decreases as sample size increases? | If we assume that $\sigma^2$ is known, the variance of the OLS estimator only depends on $X'X$ because we do not need to estimate $\sigma^2$. Here is a purely algebraic proof that the variance of the estimator decreases with any additional observation if $\sigma^2$ is known. Suppose $X$ is your current design matrix an...
44,832 | Why variance of OLS estimate decreases as sample size increases? | Assumptions:
(1) There exists a population from which infinite draws of $X$ and $y$ may be made, and each of those draws is characterized by the exact same distribution parameters.
(2) $n$ is sufficiently large that the variance of a sample of length $n$ is always the same, or may be approximated as such.
Let's start ...
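A small simulation (synthetic data, single regressor, no intercept — all illustrative choices) showing the point of both answers: the sampling variance of the OLS slope shrinks as $n$ grows, since $\operatorname{Var}(\hat\beta) = \sigma^2/(x'x)$ and $x'x$ grows with every observation.

```python
import numpy as np

rng = np.random.default_rng(42)
beta, sigma = 2.0, 1.0

def slope_estimates(n, reps=2000):
    # OLS slope (no intercept) over many replications at sample size n.
    out = np.empty(reps)
    for r in range(reps):
        x = rng.normal(size=n)
        y = beta * x + sigma * rng.normal(size=n)
        out[r] = (x @ y) / (x @ x)      # beta_hat = (x'x)^{-1} x'y
    return out

var_small = slope_estimates(50).var()
var_large = slope_estimates(500).var()

# Var(beta_hat) = sigma^2 / (x'x) shrinks roughly like 1/n.
assert var_large < var_small
```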
44,833 | Mixture models vs Mixed models | Tim gives a good answer describing the conceptual differences between the two model classes. In the interest of completeness, since you asked for a practical example, here is some R code for generating data from a mixture model. More specifically, this is a Gaussian mixture model with two components; to adapt this to T...
44,834 | Mixture models vs Mixed models | Besides similar sounding names, they are completely different kinds of models.
Finite mixture models are models that describe your data in terms of mixture distribution,
$$
g(x) = \sum_{k=1}^K \pi_k f_k(x; \vartheta_k)
$$
where the final distribution $g$ is a mixture of $K$ component-distributions $f_k$ parametrized by...
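The first answer above refers to R code for generating data from a two-component Gaussian mixture; an analogous Python sketch (weights, means, and scales are illustrative) makes the two-stage structure of $g(x)=\sum_k \pi_k f_k$ explicit: draw a component label with probability $\pi_k$, then draw from that component.

```python
import numpy as np

rng = np.random.default_rng(0)
pis = np.array([0.3, 0.7])    # mixture weights, sum to 1
mus = np.array([-2.0, 3.0])   # component means
sds = np.array([1.0, 0.5])    # component standard deviations
n = 100_000

# Two-stage sampling: a latent component label, then a normal draw.
labels = rng.choice(2, size=n, p=pis)
x = rng.normal(loc=mus[labels], scale=sds[labels])

# The mixture mean is the weighted average of the component means.
assert abs(x.mean() - (pis * mus).sum()) < 0.05
```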
44,835 | Which mean to use in a one sample t-test on transformed data | I don't think squaring will necessarily do what you want even if it makes things look normal.
If you want to test equality of a population mean to a hypothesized mean then by testing a transformed variable you can be highly likely to reject when the original population mean is the one given in the null (that is, you w...
44,836 | Which mean to use in a one sample t-test on transformed data | As @user20637 points out in the comment below, the result of a t-test of your squared data against the squared US population mean will not necessarily imply that your data are shifted relative to the US population. You cannot assess that from what you have. Instead, you are just testing if your mean is above a fixed ...
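A small simulation (made-up numbers, normal population for simplicity) of the pitfall both answers warn about: even when $E[X]$ equals the hypothesized mean exactly, $E[X^2] = \mu_0^2 + \sigma^2 \ne \mu_0^2$, so a t-test on the squared data against $\mu_0^2$ rejects — it answers a different question.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu0, sigma = 2.0, 1.0
x = rng.normal(mu0, sigma, size=2000)   # population mean really equals mu0

# Testing the squared data against mu0**2 ignores that E[X^2] = mu0^2 + sigma^2,
# so it rejects even though the mean of X is exactly the hypothesized value.
p_squared = stats.ttest_1samp(x ** 2, mu0 ** 2).pvalue
assert p_squared < 0.01
```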
44,837 | Function with multiple local minima | Regarding example of functions with multiple local minima I would suggest visiting a website like the Virtual Library of Simulation Experiments: Test Functions and Datasets - Optimization Test Problems from Simon Fraser University. It contains many examples of functions with many local minima. A trivial two-factor exam...
44,838 | Function with multiple local minima | The functions that you are looking for are known as te...
44,839 | Function with multiple local minima | If they are only a few and you can estimate a range where they will lie, you can try descent methods with different starting points that will converge to each of them.
This practice works some of the time, but as we increase the number of dimensions (or as we know less about the shape of the function), this simplistic ...
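A minimal sketch of the multi-start descent idea from the last answer, on a made-up one-dimensional test function $f(x)=\sin(3x)+0.1x^2$ (the function, learning rate, and starting points are all illustrative): plain gradient descent from different starts lands in different local minima.

```python
import numpy as np

def f(x):          # a one-dimensional function with several local minima
    return np.sin(3 * x) + 0.1 * x ** 2

def fprime(x):     # its derivative, used for gradient descent
    return 3 * np.cos(3 * x) + 0.2 * x

minima = set()
for x0 in [-4.0, -2.0, 0.0, 2.0, 4.0]:    # different starting points
    x = x0
    for _ in range(5000):                  # plain gradient descent
        x -= 0.01 * fprime(x)
    minima.add(round(float(x), 3))

# Descent from different starts reaches different local minima.
assert len(minima) >= 2
```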
44,840 | If we primarily use LSTMs over RNNs to solve the vanishing gradient problem, why can't we just use ReLUs/leaky ReLUs with RNNs instead? | I think there's some confusion here. The reason you have vanishing gradients in neural networks (with say, softmax) is wholly different from RNNs. With neural networks, you get vanishing gradients because most initial conditions make your outputs end up on either the far left or far right of your softmax layer, giving ...
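A scalar toy recurrence (not a trained network; weights and input are illustrative) showing the multiplicative Jacobian product behind this discussion: with a saturating activation the per-step factor $w(1-h^2)$ is below one and the gradient through time collapses, while a ReLU unit on its active side contributes exactly one per step — though with a recurrent weight $|w|>1$ the same product would instead explode, which is part of why ReLU alone is not a full fix.

```python
import numpy as np

T, w, x = 50, 0.9, 0.5          # steps, recurrent weight, constant input

# tanh recurrence: gradient of h_T w.r.t. h_0 is a product of per-step Jacobians.
h, grad_tanh = 0.0, 1.0
for _ in range(T):
    h = np.tanh(w * h + x)
    grad_tanh *= w * (1 - h ** 2)        # d tanh(u)/du = 1 - tanh(u)^2

# ReLU recurrence with identity recurrent weight and positive pre-activations.
h, grad_relu = 0.0, 1.0
for _ in range(T):
    pre = 1.0 * h + x
    h = max(pre, 0.0)
    grad_relu *= 1.0 if pre > 0 else 0.0  # ReLU derivative is 1 on the active side

assert grad_tanh < 1e-3 and grad_relu == 1.0
```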
44,841 | Difference between confounding and interaction | A confounding variable is a variable that correlates with both your regressor and the dependent variable. In some way, this second predictor variable explains all or part of the dependent variable and also is reflected in the independent variable. In essence they share a common quality that means when both are included...
44,842 | Why is a Normal Mixture Model not identifiable and why does it matter? | Consider the case where $\theta_1 = (w_1=0.5, \mu_1 = 0, \sigma_1^2 = 1)$ and $\theta_2 = (w_2=0.5, \mu_2 = 1, \sigma_2^2 = 1)$ We get exactly the same fit to the data if $\theta_1 = (w_1=0.5, \mu_1 = 1, \sigma_1^2 = 1)$ and $\theta_2 = (w_2=0.5, \mu_2 = 0, \sigma_2^2 = 1)$ Thus, there is no way to empirically learn th...
44,843 | Why is a Normal Mixture Model not identifiable and why does it matter? | The mixture sum is well-defined and almost always identifiable, while the elements of the sum can be switched with one another without changing the sum, by mere commutativity: a+b=b+a.
For instance, here is the surface of a log-likelihood associated with the mixture $$\frac{1}{2}\mathrm{N}(\mu_1,1)+\frac{1}{2}\mathrm{N}(\mu_2,1)$$ ...
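The label-switching symmetry both answers describe can be checked directly on synthetic data (component parameters here are the illustrative ones from the first answer): swapping $(\mu_1, \mu_2)$ leaves the mixture log-likelihood unchanged.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(1, 1, 100)])

def mixture_loglik(mu1, mu2):
    # Log-likelihood of the equal-weight two-component normal mixture.
    dens = 0.5 * stats.norm.pdf(x, mu1, 1) + 0.5 * stats.norm.pdf(x, mu2, 1)
    return np.log(dens).sum()

# Swapping the component labels leaves the likelihood unchanged: a + b = b + a.
assert np.isclose(mixture_loglik(0.0, 1.0), mixture_loglik(1.0, 0.0))
```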
44,844 | Deriving exponential distribution from sum of two squared normal random variables | First use the joint probability density function of $X$ and $Y$ and switch to polar coordinates, then
$$ \mathbb{P}(Z\leq z)=\mathbb{P}(X^2+Y^2\leq z)=\frac{1}{\pi}\int_{x^2+y^2\leq z}e^{-x^2-y^2}\;dxdy=\frac{1}{\pi}\int_{0}^{2\pi}\int_0^{\sqrt{z}}e^{-r^2}r\;drd\theta$$
$$=2\int_0^{\sqrt{z}}re^{-r^2}\;dr $$
Now if we s...
44,845 | Deriving exponential distribution from sum of two squared normal random variables | $Z$ has a chi-square distribution with 2 degrees of freedom, which is the special case that coincides with an exponential distribution. Here $X$ and $Y$ are required to be independent.
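Note that the joint density $e^{-x^2-y^2}/\pi$ used in the first derivation corresponds to independent $X, Y \sim \mathrm{N}(0, 1/2)$. Under that scaling, a Monte Carlo sketch (sample size is illustrative) checks that $Z = X^2 + Y^2$ behaves like $\mathrm{Exp}(1)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# The joint density exp(-x^2 - y^2)/pi corresponds to X, Y ~ N(0, 1/2).
x = rng.normal(0.0, np.sqrt(0.5), size=n)
y = rng.normal(0.0, np.sqrt(0.5), size=n)
z = x ** 2 + y ** 2

# For Exp(1): E[Z] = 1 and P(Z <= 1) = 1 - e^{-1}.
assert abs(z.mean() - 1.0) < 0.02
assert abs((z <= 1.0).mean() - (1 - np.exp(-1))) < 0.01
```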
44,846 | A p-value greater than 0.05 means that my results are meaningless? | A p-value above 0.05 doesn't necessarily say 'your correlation is meaningless'.
However, there's more than a 5% chance that you could see a sample correlation at least as far from zero when the population correlation is zero.
Loosely this means you can't confidently distinguish the population correlation your sample...
44,847 | A p-value greater than 0.05 means that my results are meaningless? | It depends on what you are trying to do. I frequently estimate models where I literally do not care about the "p" values because I believe my model. The best estimate of the model is the estimate, not the value that the estimate may or may not be significantly different from.
On the other hand if the purpose is the ...
44,848 | A p-value greater than 0.05 means that my results are meaningless? | The p-value is a measure of the evidence against the null hypothesis provided
by the data: the smaller the p-value, the stronger the evidence against the null. Typically, researchers use the following evidence scale:
p(X) < 0.01 very strong evidence,
p(X) ∈ (0.01, 0.05) strong evidence,
p(X) ∈ (0.05, 0.1) weak eviden...
44,849 | A p-value greater than 0.05 means that my results are meaningless? | You can use an expression like "marginally significant under 0.06 significance level". 0.05 is popular but not absolute.
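A deterministic illustration of the point running through these answers — "not significant" is not "meaningless": the same sample correlation of 0.3 gives $p \approx 0.2$ at $n = 20$ but $p < 0.001$ at $n = 200$, using the standard t statistic for a Pearson correlation (the values 0.3, 20, and 200 are illustrative).

```python
import numpy as np
from scipy import stats

def corr_pvalue(r, n):
    # Two-sided p-value for a sample Pearson correlation r at sample size n,
    # via the standard t statistic t = r * sqrt((n - 2) / (1 - r^2)).
    t = r * np.sqrt((n - 2) / (1 - r ** 2))
    return 2 * stats.t.sf(abs(t), df=n - 2)

# The same effect size can be "not significant" or highly significant:
# the p-value reflects evidence at a given n, not the size of the effect.
p_small, p_large = corr_pvalue(0.3, 20), corr_pvalue(0.3, 200)
assert p_small > 0.05 and p_large < 0.001
```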
44,850 | Removing intercept from GLM for multiple factorial predictors only works for first factor in model | That trick of getting a parameter for each level of the factor by removing the intercept only works when there is only one factor, as you have seen. You can understand why by counting degrees of freedom: Let factor $a$ have $a$ levels, factor $b$ with $b$ levels. Then factor $a$ has $a-1$ degrees of freedom, which mea...
44,851 | Removing intercept from GLM for multiple factorial predictors only works for first factor in model | @kjetil b halvorsen has done a good job outlining the main ideas here. Let me add a couple supplementary points.
With a categorical variable, suppressing the intercept results in level means coding, instead of the default reference level coding. I explain this in greater detail here: How can logistic regression ha...
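The degrees-of-freedom argument can be seen in the design matrix itself: with full (one-hot) dummies for two factors and no intercept, the rows of each factor's block sum to one, so the columns are linearly dependent and one level is aliased. A numpy sketch on a hypothetical crossed design with a 3-level factor and a 2-level factor:

```python
import numpy as np

# A 3-level factor a and a 2-level factor b, fully crossed over 6 observations.
a = np.repeat([0, 1, 2], 2)       # [0, 0, 1, 1, 2, 2]
b = np.tile([0, 1], 3)            # [0, 1, 0, 1, 0, 1]
Da = np.eye(3)[a]                 # full dummy coding: 3 columns, one per level of a
Db = np.eye(2)[b]                 # full dummy coding: 2 columns, one per level of b

# One factor alone: dropping the intercept leaves a full-rank design.
assert np.linalg.matrix_rank(Da) == 3

# Both factors with full dummies: rows of Da and of Db each sum to one,
# so the 5 columns only span a 4-dimensional space -- one level is aliased.
X = np.hstack([Da, Db])
assert np.linalg.matrix_rank(X) == 3 + 2 - 1
```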
44,852 | Robust methods and penalized regression | The majority of the development of Ridge and LASSO relates to estimation of OLS parameters. Recent work has expanded this to GLMs for exponential families, unified under the notion that it's the likelihood that's penalized.
In robust statistics, one views maximum likelihood as a special case of the general optimization...
44,853 | Robust methods and penalized regression | Sure, you can combine $l_1$ (or $l_2$) penalty with robust regression.
Consider for example Alfons et al. 2013 [0] which combines $l_1$ sparsity penalty with the LTS loss function (and a FastLTS like algorithm). Their Lasso-LTS estimator is defined as:
$$(1)\quad\hat{\pmb\beta}_{\text{LLTS}} = \arg\min_{\pmb\beta}\su...
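As a concrete illustration of penalizing a robust loss (a numpy sketch; this uses a Huber loss with an L2 penalty as a simpler stand-in, not the sparse LTS estimator of Alfons et al.): iteratively reweighted least squares downweights an outlier that would badly distort a plain ridge fit. The values of `delta` and `lam` are arbitrary choices for the illustration.

```python
import numpy as np

x = np.arange(10, dtype=float)
y = 2.0 * x
y[9] += 50.0                      # one gross outlier

delta, lam = 1.0, 0.1             # Huber threshold and ridge penalty (arbitrary)

# Plain ridge (squared-loss) slope through the origin: dragged by the outlier.
b_ridge = (x @ y) / (x @ x + lam)

# IRLS for Huber loss + L2 penalty: downweight large residuals, re-solve.
b = b_ridge
for _ in range(50):
    r = y - b * x
    w = np.minimum(1.0, delta / np.maximum(np.abs(r), 1e-12))
    b = (x @ (w * y)) / (x @ (w * x) + lam)

print(b_ridge, b)                 # the robust fit ends up near the true slope 2
```

The squared-loss estimate is pulled well above the true slope of 2, while the penalized Huber estimate settles close to it.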
44,854 | What is the name for the distribution shape of a histogram with this kind of curvature? | It could be a bimodal distribution
Or then it could just be a run-of-the-mill normal distribution, as the dip in the middle doesn't appear to be that big.
Image Bimodal.png by Maksim, from Wikimedia commons; licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license
44,855 | What is the name for the distribution shape of a histogram with this kind of curvature? | A bimodal distribution. You could also say it's an almost-normal curve with negative kurtosis. (Kurtosis refers to the spikiness of a normal curve; a bell-shaped curve that is very tall and elongated in height would have positive kurtosis). Your curve also appears to have a long right tail, so it is skewed to the ri...
44,856 | How to interpret Quadratic Terms | Let's consider an example (here I use Stata, but the logic works the same in any other package):
. sysuse nlsw88, clear
(NLSW, 1988 extract)
. reg wage c.tenure##c.tenure grade i.race
Source | SS df MS Number of obs = 2,229
-------------+---------------------------------- F(5,...
44,857 | How to interpret Quadratic Terms | 1) Adding quadratic terms allows for non-linearity (in a linear model). If you think that the relation between your target variable and a feature is possibly non-linear, you can add quadratic terms. (Or, you could consider log transformation.)
2) Significance of quadratic terms could signal that the relation is non-lin...
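The key interpretive point, that in a quadratic model the marginal effect of $x$ depends on where you evaluate it, can be sketched in Python (numpy, with illustrative noise-free data rather than the Stata dataset above):

```python
import numpy as np

# Simulate y with a known quadratic relation: y = 1 + 2x - 0.5x^2
x = np.linspace(0, 6, 50)
y = 1 + 2 * x - 0.5 * x**2

# Fit y = b0 + b1*x + b2*x^2 by least squares.
X = np.column_stack([np.ones_like(x), x, x**2])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

# The marginal effect dy/dx = b1 + 2*b2*x changes sign across the range,
# and the turning point sits at x = -b1 / (2*b2).
print(b1 + 2 * b2 * 0.0)   # slope at x=0  -> about  2.0
print(b1 + 2 * b2 * 4.0)   # slope at x=4  -> about -2.0
print(-b1 / (2 * b2))      # turning point -> about  2.0
```

So neither coefficient alone is "the effect" of $x$; you report the slope at chosen values, or the turning point, exactly as the Stata margins-style output above does.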
44,858 | Rescale predictions of regression model fitted on scaled predictors | The scale function stores the scale and center values it uses to scale the data in an attribute. These can be used to convert predictions on the scaled data back to the original data scale.
# Scale cars data:
scars <- scale(cars)
# Save scaled attributes:
scaleList <- list(scale = attr(scars, "scaled:scale"),
center...
44,859 | Rescale predictions of regression model fitted on scaled predictors | I have built on skaluzny's answer for those who want a more intuitive way to do this: rather than saving the scale attributes, you can use knowledge of what the scale() function does by default (you really only need the last couple of lines of this answer).
The scale function centers (subtracts mean value), and then scales (di...
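The same unscaling logic is language-agnostic. A Python sketch (not the original R code; the data and the "predictions" are stand-ins): a prediction made on a standardized response is mapped back with `pred * sd + mean`, using the center and scale that were applied before fitting.

```python
import numpy as np

y = np.array([10.0, 12.0, 14.0, 16.0, 18.0])
mean, sd = y.mean(), y.std(ddof=1)   # R's scale() uses the sample sd (n-1)

y_scaled = (y - mean) / sd           # what the model was trained on
pred_scaled = y_scaled.copy()        # stand-in for model predictions

# Undo the scaling: multiply by the stored sd, then add back the mean.
pred_original = pred_scaled * sd + mean
print(pred_original)  # -> [10. 12. 14. 16. 18.]
```

The only thing you must store from the training step is the pair (mean, sd); applying them in the wrong order (adding the mean before multiplying) gives the wrong answer.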
44,860 | Arima Model with weekday dummy variables Forecast | To start with, we will explore different ways that repeating patterns can appear in time series data and how we can model those patterns. This may be overkill for the question; however, I do think that this answer will help you think about what is happening in the models and design better experiments to model your data...
44,861 | Arima Model with weekday dummy variables Forecast | Hyndman's docs say the xreg vector needs to have the same number of rows as the time series. In your code, when defining 'Weekdays' you are missing a comma before the closing square bracket.
If this external regressor approach doesn't work I'd try fitting a seasonal ARIMA model with m=7 manually.
44,862 | p values and significance in RLM (MASS package) R | The sfsmisc package offers a helpful function for conducting a Wald test:
library(MASS)
library(sfsmisc)
summary(rsl <- rlm(stack.loss ~ ., stackloss))
#Call: rlm(formula = stack.loss ~ ., data = stackloss)
#Residuals:
# Min 1Q Median 3Q Max
#-8.91753 -1.73127 0.06187 1.54306 6.50163
#
#Coef...
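Behind a Wald test of a single coefficient (whatever package reports it) is the same arithmetic: the estimate divided by its standard error, compared to a reference distribution. A generic Python sketch with made-up numbers, not the stackloss fit above:

```python
import math

# Wald test for H0: beta = 0, given an estimate and its standard error.
beta_hat, se = 0.72, 0.13            # illustrative values, not real output

z = beta_hat / se                     # Wald statistic
# Two-sided p-value from the standard normal, via the error function.
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(z, p)                           # large z, p essentially zero
```

Robust-regression routines differ mainly in how the standard error is computed (robust covariance estimates), not in this final step.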
44,863 | How To quickly do derivatives with respect to matrices | There is something called the Matrix Cookbook, which includes a lot of identities and matrix derivatives. So if we look at eq. (88) of the Matrix Cookbook,
$$\frac{\partial}{\partial A} (\mathbf{x} -\mathbf{A}\mathbf{s})^T\mathbf{W}(\mathbf{x} -\mathbf{A}\mathbf{s}) = -2\mathbf{W}(\mathbf{x}-\mathbf{A}\mathbf{s})\mathb...
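The identity quoted above (its right-hand side is truncated in the text; for symmetric $\mathbf{W}$ it is $-2\mathbf{W}(\mathbf{x}-\mathbf{A}\mathbf{s})\mathbf{s}^T$) can be verified numerically by finite differences, which is a good habit whenever you use a Cookbook identity. A Python/numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
x, s = rng.normal(size=3), rng.normal(size=2)
A = rng.normal(size=(3, 2))
W = rng.normal(size=(3, 3)); W = W @ W.T      # symmetric (PSD) weight matrix

f = lambda A: (x - A @ s) @ W @ (x - A @ s)

# Analytic gradient from the Cookbook identity: -2 W (x - A s) s^T
g_analytic = -2 * np.outer(W @ (x - A @ s), s)

# Central finite differences, entry by entry.
g_num = np.zeros_like(A)
h = 1e-6
for i in range(A.shape[0]):
    for j in range(A.shape[1]):
        E = np.zeros_like(A); E[i, j] = h
        g_num[i, j] = (f(A + E) - f(A - E)) / (2 * h)

print(np.allclose(g_analytic, g_num, atol=1e-4))  # -> True
```

If `W` were not symmetric, the identity would involve $\mathbf{W} + \mathbf{W}^T$ instead, and this check would catch the discrepancy.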
44,864 | How is an ROC curve constructed for a set of data? | Here's an example of calculating an ROC curve. There are many things that ROC curves are used for, but this will give an overall idea of how an ROC curve is created.
Let's say that we want to use white blood cell counts to diagnose appendicitis. We'd like to collect a white blood cell count from a patient and then tell...
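The threshold-sweeping construction this answer describes can be written in a few lines of Python (made-up scores and labels, not the white blood cell data): for each cutoff, classify everything above it as positive and record the resulting (false positive rate, true positive rate) pair.

```python
# Manual ROC construction: sweep a cutoff over the scores and record
# (false positive rate, true positive rate) at each cutoff.
scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.2]   # model scores (made up)
labels = [0,   0,   1,    1,   1,    0]      # true classes

points = []
for cut in sorted(set(scores), reverse=True):
    pred = [s >= cut for s in scores]
    tp = sum(p and l for p, l in zip(pred, labels))
    fp = sum(p and not l for p, l in zip(pred, labels))
    tpr = tp / sum(labels)
    fpr = fp / (len(labels) - sum(labels))
    points.append((fpr, tpr))
print(points)   # the curve ends at (1.0, 1.0) when everything is called positive
```

Plotting `points` with FPR on the x-axis and TPR on the y-axis gives the ROC curve; the lowest cutoff always lands at (1, 1) and a cutoff above every score would land at (0, 0).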
44,865 | How is an ROC curve constructed for a set of data? | A ROC curve is calculated from an independent risk prediction or risk score that has been merged to validation data containing observed binary outcome variables, e.g. life or death, recurrence or remission, guilty or innocent, etc.
The ranges of possible values for that risk prediction/score are sorted and enumerated...
44,866 | How is an ROC curve constructed for a set of data? | So say your model gives some % prediction between 0 and 100%. Let's call that Y. And your classifier is either A or B.
My understanding is that an ROC curve is built by ranging a variable, let's call it "K", for values between 0 and 100%. For every value K and every estimate Y, you say something like ... if K is gre...
44,867 | How is an ROC curve constructed for a set of data? | A ROC Curve is not constructed for a set of data; it is constructed for the results of a classification performed on a set of data.
There are models (or methods of implementing them) that produce multiple ROC curves for a single model and set- say, one for the results of the model applied to the training set itself an...
44,868 | Why can't I simulate variables with negative correlation? How can I fix it? | Your correlation matrix is not positive definite. This means that it is not possible for a real dataset to have generated it.
> det(M)
[1] -0.2496
This works and has a negative correlation:
> M=matrix(c(1.0, 0.6, 0.6, 0.6,
0.6, 1.0, -0.2, 0.3,
0.6, -0.2, 1.0, 0.3,
0.6, 0...
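The positive-definiteness point can be checked directly. A Python/numpy sketch with small made-up matrices (not the 4x4 from the answer): a correlation matrix may contain negative entries and still be valid, but some combinations of pairwise correlations are jointly impossible, and the Cholesky factorization detects this.

```python
import numpy as np

# A valid correlation matrix CAN contain negative entries...
ok = np.array([[1.0, -0.5],
               [-0.5, 1.0]])
ev_ok = np.linalg.eigvalsh(ok)
print(ev_ok)                    # all eigenvalues positive -> usable

# ...while this combination of pairwise correlations is impossible:
bad = np.array([[1.0,  0.9,  0.9],
                [0.9,  1.0, -0.9],
                [0.9, -0.9,  1.0]])
ev_bad = np.linalg.eigvalsh(bad)
print(ev_bad)                   # contains a negative eigenvalue -> not PSD

np.linalg.cholesky(ok)          # succeeds
try:
    np.linalg.cholesky(bad)
    failed = False
except np.linalg.LinAlgError:
    failed = True               # Cholesky rejects the non-PD matrix
print(failed)                   # -> True
```

Checking eigenvalues (or the determinant, as the answer does) before simulating saves a confusing downstream error.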
44,869 | Why can't I simulate variables with negative correlation? How can I fix it? | Cholesky method works with negative correlations. It does require a positive definite matrix, of course, but the matrix can have negative elements in it; see this example.
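A quick numeric sketch of that point in Python (numpy, with an arbitrary target correlation of -0.7): the Cholesky factor of a positive definite matrix with a negative off-diagonal entry exists, and transforming independent normals with it reproduces the negative correlation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Target correlation matrix with a negative entry -- but positive definite.
R = np.array([[1.0, -0.7],
              [-0.7, 1.0]])

L = np.linalg.cholesky(R)                 # succeeds despite the -0.7
z = rng.standard_normal((2, 100_000))
corr = np.corrcoef(L @ z)[0, 1]
print(corr)                               # close to the target -0.7
```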
44,870 | Are LOESS and GAM with one covariate the same? | Not really a full answer, but too long for a comment: s sets up a spline, whereas loess does a local regression.
In the gam package (maybe mgcv too, not too familiar with that one) you can also feed a local regression, as in
library(gam)
set.seed(1234)
# generate data
x <- sort(runif(100))
y <- sin(2*pi*x) + rnorm...
44,871 | Are LOESS and GAM with one covariate the same? | "LOESS" uses local kernel regression but is not a pure local kernel regression.
Local regression for a pre-specified bandwidth or pre-specified set of varying bandwidths can be written as a linear function of the data.
LOESS is, however, non-linear, in that it attempts to introduce a degree of "robustification" to outli...
44,872 | Are LOESS and GAM with one covariate the same? | If your link function is identity (i.e., the error's PDF is Gaussian), a one-covariate GAM is nothing other than the smooth version of your scatter plot. And this is generally a locally weighted scatterplot smoother. Read Hastie and Tibshirani 1986, particularly their section 5.2: They fit the GAMs by Fisher local scori...
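For the identity-link case, "smooth version of your scatter plot" can be made concrete with a tiny local smoother. A Python sketch of a Nadaraya-Watson kernel average (an illustration of local smoothing in general, not the Fisher scoring algorithm of the paper; the bandwidth 0.05 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(size=200))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=200)

def smooth(x0, x, y, h=0.05):
    """Local (Gaussian-kernel) weighted average of y around x0."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return (w * y).sum() / w.sum()

fitted = np.array([smooth(x0, x, y) for x0 in x])

# The smoothed values track the underlying sin curve much more closely
# than the raw noisy observations do.
print(np.mean((fitted - np.sin(2 * np.pi * x)) ** 2))
print(np.mean((y - np.sin(2 * np.pi * x)) ** 2))   # roughly the noise variance
```

LOESS replaces the local constant here with a local linear (or quadratic) fit and optionally reweights against outliers, which is where the non-linearity discussed in the previous answer comes from.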
44,873 | Random Forests overfitting/unbalanced classes? | In highly unbalanced datasets, how do you detect overfitting?
Use metrics that are robust against unbalanced datasets, like Precision, Recall or F1-score. In your example with 99% 1's and 1% 0's, a classifier that always predicts positive samples will have an Accuracy of 0.99, but a Precision, Recall or F1-score of 0.00.
What...
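That accuracy-vs-precision/recall gap is easy to demonstrate. A pure-Python sketch (taking the 1% minority as the positive class, with made-up counts): a "classifier" that only ever predicts the majority class scores 99% accuracy yet has zero precision, recall, and F1.

```python
# 1000 samples: 990 negatives, 10 positives (the class of interest).
labels = [0] * 990 + [1] * 10
preds = [0] * 1000          # a "classifier" that only ever predicts majority

accuracy = sum(p == l for p, l in zip(preds, labels)) / len(labels)
tp = sum(p == 1 and l == 1 for p, l in zip(preds, labels))
fp = sum(p == 1 and l == 0 for p, l in zip(preds, labels))
fn = sum(p == 0 and l == 1 for p, l in zip(preds, labels))

precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0

print(accuracy)             # -> 0.99
print(precision, recall)    # -> 0.0 0.0
```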
44,874 | Random Forests overfitting/unbalanced classes? | I personally think Random Forests are not impervious to overfitting. Overfitting is always a possibility, for any model. One possible way to counteract overfitting is by always using cross-validation.
If you want to detect overfitting, you can plot learning curves. Here, you are going to train the model multiple times,...
44,875 | Degrees of freedom | There is a sentence prior to the passage quoted by the OP that I believe helps to interpret this:
In statistics, the number of degrees of freedom (d.o.f.) is the number
of independent pieces of data being used to make a calculation. (...).
The number of degrees of freedom is a measure of how certain we are
that ...
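The "independent pieces of data" idea has a standard concrete instance: once the sample mean has been used, the $n$ deviations from it satisfy one linear constraint (they sum to zero), leaving $n - 1$ free pieces. A quick numeric check in Python:

```python
import numpy as np

x = np.array([4.0, 7.0, 1.0, 9.0, 3.0])
dev = x - x.mean()

# The deviations always sum to zero: one constraint, hence n-1 d.o.f.
print(dev.sum())                          # -> 0 (up to rounding)

# That is why the sample variance divides by n-1 rather than n:
print(np.var(x, ddof=1), dev @ dev / (len(x) - 1))   # identical
```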
44,876 | Regession diagnostics | Consider one of the simplest possible cases: one independent variable (so 2 parameters, including the constant), and one data point.
Plot your one data point
Draw a straight line through that one point. Draw a different straight line through the same point. Draw a third one. ... and so on.
$\hspace{3cm}$
They all fit the d...
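A numeric companion to the picture (a Python sketch): with as many parameters as observations the fit is exact, so the residuals are all zero and carry no information about the error variance. Here two points and two parameters (slope plus intercept):

```python
import numpy as np

# Two data points, two parameters: the fitted line passes exactly through both.
x = np.array([1.0, 3.0])
y = np.array([2.0, 8.0])
slope, intercept = np.polyfit(x, y, 1)

residuals = y - (slope * x + intercept)
print(residuals)   # effectively zero: zero residual degrees of freedom,
                   # so the error variance cannot be estimated
```

With one point and two parameters (the case in the answer) the situation is even worse: infinitely many lines fit, and the parameters themselves are not identified.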
44,877 | Distributions similar to Normal distribution | You can also use heavy tail Lambert W x Gaussian random variables Y with tail parameter $\delta \geq 0$ and $\alpha \geq 0$. Similar to the $t_{\nu}$ distribution, the Normal distribution is nested for $\delta = 0$ (in this case the input $X$ equals output $Y$). In R you can simulate, estimate, plot, etc. several Lam...
44,878 | Distributions similar to Normal distribution | This touches on the notion of kurtosis (from the ancient Greek for curved, or arching), which was originally used by Karl Pearson to describe the greater or lesser degree of peakedness (more or less sharply curved) seen in some distribution when compared to the normal.
It's often the case that - at a fixed variance - a...
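Excess kurtosis can be computed directly from standardized fourth moments to compare a flat-topped and a heavy-tailed sample against the normal benchmark of 0. A Python/numpy sketch (simulated samples, sample sizes chosen only to make the estimates stable):

```python
import numpy as np

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4) - 3.0   # the normal distribution gives ~0

rng = np.random.default_rng(0)
k_uniform = excess_kurtosis(rng.uniform(size=100_000))  # flat-topped
k_normal = excess_kurtosis(rng.normal(size=100_000))
k_laplace = excess_kurtosis(rng.laplace(size=100_000))  # heavy-tailed

print(k_uniform, k_normal, k_laplace)  # roughly -1.2, 0, +3
```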
44,879 | Distributions similar to Normal distribution | Chernoff's distribution (https://en.wikipedia.org/wiki/Chernoff%27s_distribution) is a distribution that has the characteristics I believe you are interested in: on the tails, the density is approximately proportional to
$|x| e^{-a|x|^3 + b|x|}$
for constants $a$ and $b$.
Noting that a normal density is proportional t...
44,880 | Normalization to non-degenerate distribution | Consider the most basic example, the sample mean from an i.i.d. sample of size $n$, $\bar X_n$.
We know that as $n \rightarrow \infty$, $\bar X_n \rightarrow \mu$, where $\mu$ is the common mean, the expected value, of the random variables from which the sample is generated.
So at the limit, $\bar X$ has a degenerate...
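The degenerate limit and its fix can be checked numerically: $\bar X_n$ collapses to a constant, while the normalized quantity $\sqrt{n}(\bar X_n - \mu)/\sigma$ keeps a non-degenerate (standard normal) limit. A small sketch, using an exponential population as an arbitrary example:

```python
import numpy as np

# CLT normalization: sqrt(n) * (Xbar_n - mu) / sigma converges to N(0, 1),
# while Xbar_n itself collapses to the constant mu (a degenerate limit).
rng = np.random.default_rng(42)
mu = sigma = 2.0                      # an exponential with scale 2 has mean 2 and sd 2

for n in (10, 100, 2500):
    samples = rng.exponential(scale=2.0, size=(5_000, n))
    xbar = samples.mean(axis=1)       # 5,000 replications of the sample mean
    z = np.sqrt(n) * (xbar - mu) / sigma
    print(f"n={n:>5}  sd(xbar)={xbar.std():.4f}  sd(z)={z.std():.4f}")
```

The spread of `xbar` shrinks like $\sigma/\sqrt{n}$ toward zero, while the spread of `z` stays near 1 for every $n$.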
44,881 | Normalization to non-degenerate distribution | Normalization is used to mean a variety of things - which usually relate to scaling in some way. In this case it's just a matter of finding constants to subtract and divide by such that the resulting sequence of random variables converges to a distribution that isn't degenerate.
Presumably in the situation under discus...
44,882 | Tool for generating correlated data sets | You could do it in any variety of places. Excel, R, ... almost anything capable of doing basic statistical calculations.
Population correlation. This is a simple matter in the bivariate case of taking independent random variables with the same standard deviation and creating a third variable from those two that has t...
44,883 | Tool for generating correlated data sets | Package mvtnorm in R produces random multivariate normals. You can specify the correlations.
If M is your matrix of random normals, do write.csv(M, file="mydata.csv") to write it out to a file.
44,884 | Tool for generating correlated data sets | To avoid specifying correlations that are "impossible" as a whole set (the matrix of correlations can become non-positive-definite) - for instance, you can't define two highly correlated variables and a third one close to one of them but far from the other - it might be more useful to begin with a "factor loadings...
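The factor-loadings route sidesteps the positive-definiteness problem by construction: if each variable is given loadings on a few latent factors (rows of $L$, rescaled to unit length), then $R = LL^{\top}$ is automatically a valid correlation matrix. A sketch under that assumption, with loadings chosen arbitrarily:

```python
import numpy as np

# Each row of L holds one variable's loadings on two latent factors.
L = np.array([[0.90, 0.10],
              [0.85, 0.20],
              [0.10, 0.95]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)   # unit-length rows -> unit diagonal

R = L @ L.T                                        # implied correlation matrix
print(np.round(R, 3))
print("eigenvalues:", np.round(np.linalg.eigvalsh(R), 3))  # all >= 0 by construction
```

Note that you cannot build an "impossible" set this way: any $LL^{\top}$ is positive semi-definite.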
44,885 | Does testing for assumptions affect type I error? | Generally speaking, the answer is yes, both type I and type II error rates are impacted by choosing tests on the basis of tests of assumptions.
This is pretty well established with testing of equality of variance (for which several papers point it out), and testing normality. It should be expected that it will be the c...
44,886 | Does testing for assumptions affect type I error? | Just a thought on this topic.
It's certainly true that when testing for assumptions with many different tests, you are going to end up with a type I error rate higher than $\alpha$ (where $\alpha$ is the significance level for each individual test), just by standard multiple testing issues.
But when testing ass...
44,887 | What can't be expressed as a linear model? | The parameters need to enter linearly into the equation. So something like $E(Y)=\beta_1 \cos(\beta_2 x_i + \beta_3)$ would not qualify. But you can take functions of the independent variables as follows:
$E(Y)=\beta_0 + \beta_1 X_i + \beta_2 X_i^2 + \beta_3 e^{X_i}$
for example.
So the limits of linear regressions are: th...
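"Linear" means linear in the $\beta$'s, so the design matrix may contain arbitrary transformations of $x$ and ordinary least squares still applies. A quick sketch fitting exactly the example model above (noiseless, so the coefficients are recovered essentially exactly):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, size=200)
beta_true = np.array([1.0, -2.0, 0.5, 0.3])

# Columns 1, x, x^2, e^x: nonlinear in x, but the model is linear in beta.
X = np.column_stack([np.ones_like(x), x, x ** 2, np.exp(x)])
y = X @ beta_true                       # noiseless response

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta_hat, 6))            # recovers beta_true
```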
44,888 | What can't be expressed as a linear model? | (Almost) Everything can be expressed as a linear model, if you don't restrict it to a finite number of parameters.
This is the basis of functional analysis and kernel regression (as in SVMs with kernels). For instance, Fourier series - you can produce an infinite sine/cosine series, where the amplitude of the wave of ...
44,889 | What happens to adjusted R squared as sample size increases? | Adjusted r-squared is intended to be an unbiased estimate of population variance explained using the population regression equation. There are several different formulas for adjusted r-squared and there are various definitions of population variance explained (e.g., fixed versus random-x assumptions). Most commonly, st...
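With the most common (Ezekiel) formula, adjusted $R^2 = 1 - (1 - R^2)\frac{n-1}{n-p-1}$; for fixed $R^2$ and $p$, the penalty vanishes and the adjusted value climbs toward $R^2$ as $n$ grows. A sketch:

```python
def adjusted_r2(r2, n, p):
    """Ezekiel formula: penalize R^2 for fitting p predictors with n cases."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

r2, p = 0.50, 3
for n in (10, 30, 100, 1000):
    print(f"n={n:>4}  adjusted R^2 = {adjusted_r2(r2, n, p):.4f}")
# 0.2500, 0.4423, 0.4844, 0.4985 -- climbing toward 0.5
```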
44,890 | What happens to adjusted R squared as sample size increases? | Here's a simple function in R that simulates two Gaussian variables and copies them to inflate sample size without changing their correlation. It plots adjusted $R^2$ over increasing copies with a line at $R^2$.
Adj.R.Squared=function(Sample.Size=10,Max.Copies=30,Noise=1){Adj.R²=c();y=rnorm(Sample.Size)
x=y+Noise*rnorm...
44,891 | How is the formula for the Standard error of the slope in linear regression derived? [duplicate] | There are a couple of rules to start with:
If $X$ is a random vector from $N(\mu,\Sigma)$ and $A$ is a constant matrix, then $AX \sim N(A\mu, A\Sigma A^T)$.
And in a regression we assume $Y = X\beta + \epsilon$ where $\epsilon \sim N(0,\sigma^2 I)$.
We estimate $\hat\beta = (X^T X)^{-1}X^T Y$
So: $\hat\beta = (X^T X)^{...
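Where this derivation is heading is $\hat\beta \sim N(\beta, \sigma^2 (X^T X)^{-1})$, so the standard error of the slope is the square root of the corresponding diagonal entry. That can be checked numerically against simulation; a sketch with an arbitrary design:

```python
import numpy as np

rng = np.random.default_rng(11)
n, sigma = 50, 2.0
x = rng.uniform(0, 10, size=n)
X = np.column_stack([np.ones(n), x])          # intercept + slope design
beta = np.array([1.0, 0.5])

# Theoretical covariance of beta_hat: sigma^2 (X^T X)^{-1}
cov_theory = sigma ** 2 * np.linalg.inv(X.T @ X)

# Monte Carlo: refit on many fresh noise draws, keep the slope estimates
slopes = []
for _ in range(10_000):
    y = X @ beta + rng.normal(0, sigma, size=n)
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    slopes.append(b[1])

print(f"theoretical SE(slope) = {np.sqrt(cov_theory[1, 1]):.4f}")
print(f"simulated   SE(slope) = {np.std(slopes):.4f}")
```

The two printed values agree to Monte Carlo accuracy.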
44,892 | How is the formula for the Standard error of the slope in linear regression derived? [duplicate] | To elaborate on Greg Snow's answer: suppose your data is in the form of $t$ versus $y$, i.e. you have a vector of $t$'s $(t_1,t_2,...,t_n)^{\top}$ as inputs, and corresponding scalar observations $(y_1,...,y_n)^{\top}$.
We can model the linear regression as $Y_i \sim N(\mu_i, \sigma^2)$ independently over $i$, where $\mu_...
44,893 | Are pairwise Wilcoxon tests a valid non-parametric alternative to Tukey's HSD test? | No, it is not a valid nonparametric alternative.
The rank sum test (either original Wilcoxon flavor, or New Improved Mann-Whitney $U$ varieties):
- ignores the rankings used by the Kruskal-Wallis test, and
- does not employ pooled variance for the pairwise tests.
See, for example, Kruskal-Wallis Test and Mann-Whitney U Tes...
44,894 | Do discriminative models overfit more than generative models? | This is a fun question as it provides good context for why the often used heuristic that more parameters $\implies$ more risk of overfitting is just that, a heuristic. To ground the discussion let's consider what is in some sense the simplest problem, binary classification. As a specific example we will take the canoni...
44,895 | Do discriminative models overfit more than generative models? | A generative model typically overfits less because it allows the user to put in more side information in the form of class conditionals.
Consider a generative model $p(c|x) \propto p(c)p(x|c)$. If the class conditionals are multivariate normals with shared covariance, this will have a linear decision boundary. Thus, th...
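That shared-covariance Gaussian class conditionals give a linear boundary can be verified directly: the quadratic terms in the log-odds cancel, leaving $w^{\top}x + b$ with $w = \Sigma^{-1}(\mu_1 - \mu_0)$. A sketch with arbitrary parameters:

```python
import numpy as np

mu0 = np.array([0.0, 0.0])
mu1 = np.array([2.0, 1.0])
cov = np.array([[1.0, 0.3],
                [0.3, 1.0]])                  # shared covariance
prior1 = 0.5
cov_inv = np.linalg.inv(cov)

def log_odds(x):
    """log p(c=1|x) - log p(c=0|x) for equal-covariance Gaussian conditionals."""
    q0 = -0.5 * (x - mu0) @ cov_inv @ (x - mu0)
    q1 = -0.5 * (x - mu1) @ cov_inv @ (x - mu1)
    return q1 - q0 + np.log(prior1 / (1 - prior1))

# The quadratic terms cancel, leaving w^T x + b:
w = cov_inv @ (mu1 - mu0)
b = -0.5 * (mu1 @ cov_inv @ mu1 - mu0 @ cov_inv @ mu0) + np.log(prior1 / (1 - prior1))

for x in (np.array([0.5, -1.0]), np.array([3.0, 2.0])):
    print(f"{log_odds(x):.6f}  vs linear  {w @ x + b:.6f}")   # identical
```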
44,896 | Sparse parameters when computing AIC, BIC, etc | Degrees of freedom do not depend on the outcome alone but on the fitting procedure. If it's maximum likelihood, all parameters count.
There is an interesting case where zero weights do not count, and that's lasso:
H Zou, T Hastie, R Tibshirani, "On the 'degrees of freedom' of the lasso", The Annals of Statistics, 2007
44,897 | Sparse parameters when computing AIC, BIC, etc | This is a really difficult question to answer without precise knowledge of the fitting algorithm, nor is it clear cut that there is a reasonable definition of the "number of parameters" that will justify AIC, BIC or other "information criteria" in general.
If estimation is done by $\ell_1$-penalized maximum-likelihood...
44,898 | Is the null model for binary logistic regression just the natural log function? | The full model is
$$\ln \frac {\pi}{1-\pi}=\beta_0 +\beta_1 x_1 +\beta_2 x_2+\ldots$$
where $x_i$ is the $i$th predictor, $\beta_i$ its coefficient, & $$\pi=\Pr(Y=1)$$
where $Y$ is the response (coded 1 for "success" & 0 for "failure")
The null model, as @Michael says, contains just the intercept:
$$\ln \frac {\pi}{1-\pi}=\beta_0$$ ...
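For the intercept-only model, the maximum-likelihood estimate of $\beta_0$ is simply the log-odds of the sample proportion, so the fitted $\pi$ equals the overall success rate. A sketch with toy data of my own:

```python
import math

y = [1, 0, 0, 1, 1, 1, 0, 1, 0, 1]          # toy binary responses
p_hat = sum(y) / len(y)                       # sample proportion of successes

beta0 = math.log(p_hat / (1 - p_hat))         # MLE intercept of the null model
pi = 1 / (1 + math.exp(-beta0))               # fitted probability of success

print(f"beta0 = {beta0:.4f}, fitted pi = {pi:.4f}, sample proportion = {p_hat}")
```

So the null model is not "just the natural log function": it is the logit link applied to a constant probability.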
44,899 | Is the absolute value of the difference between two Poisson distributions a Poisson distribution? | Two quite different questions!
Is the absolute value of the difference between two Poisson distributions a Poisson distribution?
This one is easily answered: clearly no, since the relationship between the mean and variance doesn't hold.
What is the distribution of the absolute value of the Skellam distribution?
...
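The mean-variance argument is easy to see numerically: $X = N_1 - N_2$ has mean $a-b$ and variance $a+b$, and $|X|$ inherits a variance nowhere near its mean in general, so it cannot be Poisson. A simulation sketch with arbitrary rates:

```python
import numpy as np

rng = np.random.default_rng(9)
a, b = 5.0, 2.0
x = rng.poisson(a, 500_000) - rng.poisson(b, 500_000)   # Skellam(a, b) draws
y = np.abs(x)

print(f"X:   mean={x.mean():.3f} (a-b={a-b}), var={x.var():.3f} (a+b={a+b})")
print(f"|X|: mean={y.mean():.3f}, var={y.var():.3f}  -> mean != var, so not Poisson")
```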
44,900 | Is the absolute value of the difference between two Poisson distributions a Poisson distribution? | OP: what is the distribution of the absolute value of the Skellam distribution
Let $X$ ~ SkellamDistribution$(a,b)$, with pmf $f(x)$:
$$f(x) = e^{-a-b} \left(\frac{a}{b}\right)^{x/2} I_x\left(2 \sqrt{a b}\right)$$
Then, the pmf of $Y=|X|$ will be, say $g(y)$:
$$g(y) = \begin{cases}f(0) & y = 0 \\ f(y) + f(-y) & y > 0\end{cases}$$
...
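The folded pmf can be sanity-checked by summing it over its support; a sketch using SciPy's modified Bessel function `iv` (which handles the negative integer orders here, since $I_{-n} = I_n$), with rates of my own choosing:

```python
import numpy as np
from scipy.special import iv

def skellam_pmf(x, a, b):
    """Skellam pmf: e^{-a-b} (a/b)^{x/2} I_x(2 sqrt(ab))."""
    return np.exp(-a - b) * (a / b) ** (x / 2.0) * iv(x, 2.0 * np.sqrt(a * b))

def folded_pmf(y, a, b):
    """pmf g(y) of Y = |X| for X ~ Skellam(a, b)."""
    return skellam_pmf(0, a, b) if y == 0 else skellam_pmf(y, a, b) + skellam_pmf(-y, a, b)

a, b = 3.0, 1.5
total = sum(folded_pmf(y, a, b) for y in range(0, 60))
print(f"sum of folded pmf over y = 0..59 ~ {total:.10f}")   # ~ 1
```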