idx int64 1 56k | question stringlengths 15 155 | answer stringlengths 2 29.2k ⌀ |
|---|---|---|
19,201 | How to choose optimal bin width while calibrating probability models? | In my experience binning is good for visualizing probability distributions, but it is usually a bad idea if one wants to use it for statistical tests and/or parameter inference, primarily because it immediately limits the precision to the bin width. Another common problem arises when the variable is not bounded, i.e. one has to introduce low and high cutoffs.
Working with cumulative distributions in the Kolmogorov–Smirnov spirit circumvents many of these problems, and there are many good statistical methods available in this case (see, e.g., https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test). |
19,202 | You observe k heads out of n tosses. Is the coin fair? | The standard Bayesian way to solve this problem (without Normal approximations) is to explicitly state your prior and combine it with your binomial likelihood; with a Beta prior the posterior is again Beta-distributed. Then integrate your posterior around 50%, say within two standard deviations, or from 49%–51%, or whatever you like.
If your prior belief is continuous on [0,1] — e.g. Beta(100,100), which puts a lot of mass on roughly fair coins — then the posterior probability that the coin is exactly fair is zero, since the posterior is also a continuous distribution on [0,1].
Even if the probability that the coin is fair is zero, you can usually answer whatever question you were going to answer with the posterior over the bias. For example, what is the casino edge given the posterior distribution over coin probabilities. |
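As a sketch of the mechanics (SciPy assumed; the Beta(100,100) prior and the 220-of-400 data are illustrative):

```python
from scipy import stats

k, n = 220, 400          # illustrative data: 220 heads in 400 tosses
a0, b0 = 100, 100        # Beta(100,100) prior: heavy mass near p = 0.5

# Conjugacy: Beta prior + binomial likelihood -> Beta posterior.
posterior = stats.beta(a0 + k, b0 + (n - k))

# Posterior mass on "roughly fair", here taken to be 49%-51%:
prob_roughly_fair = posterior.cdf(0.51) - posterior.cdf(0.49)
print(posterior.mean(), prob_roughly_fair)
# P(p = 0.5 exactly) is 0 under any continuous posterior, as noted above.
```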
19,203 | You observe k heads out of n tosses. Is the coin fair? | Let's consider the Bernoulli distribution, in this case the toss of a coin.
Clearly this is a binomial distribution $B(n=400,p=0.5)$, and it is indeed close to $N(\mu=200,\sigma^2=100)$.
Obviously the interviewer is asking whether $k$ falls within the $95\%$ interval of $B(n=400,p=0.5)$, or for the $p$-value of $k=220$ under $B(n=400,p=0.5)$.
In the naive Bayesian approach, your prior is a point mass at $p=0.5$, rather than, say, $\pi(p=0.5)=0.5$ and $\pi(p\neq0.5)=0.5$.
Let's use a fairer prior: $\pi(0.49\leq p\leq0.51)=0.9$ and $\pi(p<0.49 \cup p>0.51)=0.1$, where we assume $p$ is uniformly distributed within each interval.
We then can calculate the posterior $P(0.49\leq p\leq0.51|k=220)$.
Alternatively, the prior could be a normal distribution, $p \sim N(\mu=0.5,\sigma^2=0.25)$, or we may assume a much smaller variance such as $\sigma^2=0.1$.
Then we calculate the posterior distribution of $p$ as $f(p|k=220)$.
My reputation is not enough to comment under the question, so I'm going to write something here regarding You Can't Bias a Coin. @Adrian
Here's what we have
The experiment result $B(n=400,k=220,p=\theta)$
The theoretical and experiment study You Can't Bias a Coin
Here's our Hypothesis
$H_0$: The coin is fair, i.e. $\theta=0.5$
$H_1$: The experiment data is wrongly recorded
Here's our result
Based on the paper You Can Load a Die, But You Can't Bias a Coin, we accept hypothesis $H_0$.
Based on the experiment result that the difference is twice the standard deviation, we have roughly 95% confidence to accept hypothesis $H_1$, that the experiment study was wrongly recorded.
Since the $p$-value for hypothesis test to reject either $H_0$ or $H_1$ is roughly below 5%, we must accept them both. Or we must reject them both.
Otherwise we create double standard for hypothesis testing here.
We cannot accept both the hypothesis that the toss of the coin is fair and that the experiment data are correctly recorded.
It does not make sense to say that the coin has a probability $p$ of heads.
We have experimental results to back up this hypothesis.
If the experiment is repeated n times, is it possible that we have the prior of $p$ for the coin toss as $N(\mu=0.5,\sigma^2)$ when n is considerably large?
If that is acceptable, we can then estimate $\sigma^2$ with a 95% CI based on the method of maximum likelihood. |
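The exact two-sided $p$-value discussed above can be computed without the Normal approximation; a minimal stdlib sketch for $k=220$, $n=400$ under $H_0: p=0.5$ (symmetry of the null makes the two-sided value twice the upper tail):

```python
from math import comb

n, k = 400, 220

# Exact upper tail P(X >= k) under Binomial(n, 0.5).
upper_tail = sum(comb(n, i) for i in range(k, n + 1)) * 0.5 ** n

# p = 0.5 makes the null distribution symmetric about n/2,
# so the two-sided p-value is twice the upper tail.
p_value = 2.0 * upper_tail
print(p_value)   # roughly 0.05: borderline evidence against fairness
```

This matches the rough "two standard deviations" reasoning in the answer ($z = 20/10 = 2$).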
19,204 | How to generate predicted survivor curves from frailty models (using R coxph)? | The problem here is the same as would be obtained trying to predict outcomes from a linear mixed effects model. Since the survival curve is non-collapsible, each litter in your example has a litter-specific survival curve according to the model you fit. A frailty, as you may know, is the same as a random intercept indicating common levels of confounding and prognostic variables endemic to each litter, presumably via shared genetic traits. Therefore the linear predictor for the hazard ratio is a mix of the observed fixed effects and random litter effects. Unlike mixed models, the Cox model fits the frailty term with penalized regression; the fitted object is of class coxph-penal and there is no method for survreg.coxph-penal, so attempts to create the linear predictor fail. There are a few workarounds:
1. Just fit the marginal model with centered covariates.
2. Center the covariates, fit 1, then fit the random effects model using coxme and extract the random effects; add them to the linear predictor with an offset to calculate the stratum-specific survival curve for each litter.
3. Perform 2 and marginalize by averaging all survival curves together, a separate approach to fitting the marginal model.
4. Use fixed effects or strata in a marginal Cox model to predict different survival curves for each litter. |
19,205 | Is there a connection between empirical Bayes and random effects? | There is a really great article in JASA back in the mid 1970s on the James-Stein estimator and empirical Bayes estimation with a particular application to predicting baseball players batting averages. The insight I can give on this is the result of James and Stein who showed to the surprise of the statistical world that for a multivariate normal distribution in three dimensions or more the MLE, which is the vector of coordinate averages, is inadmissible.
The proof was achieved by showing that an estimator that shrinks the mean vector toward the origin is uniformly better based on mean square error as a loss function. Efron and Morris showed that in a multivariate regression problem using an empirical Bayes approach the estimators they arrive at are shrinkage estimators of the James-Stein type. They use this methodology to predict the final season batting averages of major league baseball players based on their early season result. The estimate moves everyone's individual average to the grand average of all the players.
I think this explains how such estimators can arise in multivariate linear models. It doesn't completely connect it to any particular mixed effects model but may be a good lead in that direction.
Some references:
B. Efron and C. Morris (1975), Data analysis using Stein's estimator and its generalizations, J. Amer. Stat. Assoc., vol. 70, no. 350, 311–319.
B. Efron and C. Morris (1973), Stein's estimation rule and its competitors–An empirical Bayes approach, J. Amer. Stat. Assoc., vol. 68, no. 341, 117–130.
B. Efron and C. Morris (1977), Stein's paradox in statistics, Scientific American, vol. 236, no. 5, 119–127.
G. Casella (1985), An introduction to empirical Bayes data analysis, Amer. Statistician, vol. 39, no. 2, 83–87. |
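A quick simulation of the James–Stein result (NumPy assumed; the dimension, true mean, and trial count are illustrative): for $p \ge 3$, shrinking each observation toward the origin beats the MLE in total squared error.

```python
import numpy as np

rng = np.random.default_rng(42)
p, trials = 10, 5000
theta = np.full(p, 0.5)          # true mean vector, p >= 3 dimensions

X = rng.normal(theta, 1.0, size=(trials, p))   # one observation vector per trial

# MLE is X itself; James-Stein shrinks each observation toward the origin.
norms_sq = np.sum(X ** 2, axis=1, keepdims=True)
js = (1.0 - (p - 2) / norms_sq) * X

mse_mle = np.mean(np.sum((X - theta) ** 2, axis=1))
mse_js = np.mean(np.sum((js - theta) ** 2, axis=1))
print(mse_mle, mse_js)   # JS risk is smaller, uniformly in theta
```

Shrinking toward the grand mean instead of the origin gives the Efron–Morris batting-average construction described above.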
19,206 | Calculate prediction interval for ridge regression? | This has been partly discussed on this related thread. The problem is that this technique introduces bias while trying to decrease the variance of parameter estimates, which works well in situations where multicollinearity does exist. However, the nice properties of the OLS estimators are lost and one has to resort to approximations in order to compute confidence intervals. While I think the bootstrap might offer a good solution to this, here are two references that might be useful:
Crivelli, A., Firinguetti, L., Montano, R., and Munoz, M. (1995). Confidence intervals in ridge regression by bootstrapping the dependent variable: a simulation study. Communications in statistics. Simulation and computation, 24(3), 631-652.
Firinguetti, L. and Bobadilla, G. (2011). Asymptotic confidence intervals in ridge regression based on the Edgeworth expansion. Statistical Papers, 52(2), 287-307. |
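A hedged sketch of the bootstrap idea (NumPy only; the data, penalty $\lambda=1$, and closed-form ridge solve are all illustrative, not a definitive procedure): resample residuals, refit the ridge estimator, and take percentile bounds on predictions at a new point.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, lam = 80, 5, 1.0
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])   # illustrative coefficients
y = X @ beta_true + rng.normal(scale=0.5, size=n)
x0 = rng.normal(size=p)        # new point where we want a prediction interval

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: (X'X + lam*I)^{-1} X'y.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

beta_hat = ridge_fit(X, y, lam)
resid = y - X @ beta_hat

# Residual bootstrap: rebuild y from resampled residuals, refit, and predict,
# adding a fresh residual draw so the interval covers a new observation.
preds = []
for _ in range(2000):
    y_star = X @ beta_hat + rng.choice(resid, size=n, replace=True)
    preds.append(x0 @ ridge_fit(X, y_star, lam) + rng.choice(resid))
lo, hi = np.percentile(preds, [2.5, 97.5])
print(lo, hi)   # approximate 95% prediction interval at x0
```

Note the interval inherits the ridge bias; the references above discuss corrections.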
19,207 | How does gentle boosting differ from AdaBoost? | The second paper you cite seems to contain the answer to your question. To recap: mathematically, the main difference is in the shape of the loss function being used, with Friedman, Hastie, and Tibshirani's loss function being easier to optimize at each iteration. |
19,208 | Cause of a high condition number in a python statsmodels regression? | I found this page in a search, because I had the same question, but I think I have figured out what's going on.
First, a demonstration of the problem:
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt
x = np.arange(1000., 1030., 1.)  # x-values far from the origin
y = 0.5 * x
X = sm.add_constant(x)  # prepend the intercept column
plt.plot(x, y, 'x')
plt.show()
mod_ols = sm.OLS(y, X)
res_ols = mod_ols.fit()
print(res_ols.summary())
Notice the very high condition number of 1.19e+05. This is because we're fitting a line to the points and then projecting the line all the way back to the origin (x=0) to find the y-intercept. That y-intercept will be very sensitive to small movements in the data points. The condition number takes into account high sensitivity in either fitted parameter to the input data, hence the high condition number when all of the data are far to one side of x=0.
To solve this, we simply center the x-values:
x -= np.average(x)  # center x about its mean
X = sm.add_constant(x)
plt.plot(x, y,'x')
plt.show()
The condition number is now greatly reduced to only 8.66. Notice that the fitted slope and calculated R**2 etc. are unchanged.
My conclusion: in the case of regression against a single variable, don't worry about the condition number UNLESS you care about the sensitivity of your y-intercept to the input data. If you do, then center the x-values first. |
First, a demonstration of the problem:
import numpy as np
import statsmodels.api as sm
i | Cause of a high condition number in a python statsmodels regression?
I found this page in a search, because I had the same question, but I think I have figured out what's going on.
First, a demonstration of the problem:
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt
x = np.arange(1000.,1030.,1.)
y = 0.5*x
X = sm.add_constant(x)
plt.plot(x, y,'x')
plt.show()
mod_ols = sm.OLS(y, X)
res_ols = mod_ols.fit()
print(res_ols.summary())
Notice the very high condition number of 1.19e+05. This is because we're fitting a line to the points and then projecting the line all the way back to the origin (x=0) to find the y-intercept. That y-intercept will be very sensitive to small movements in the data points. The condition number takes into account high sensitivity in either fitted parameter to the input data, hence the high condition number when all of the data are far to one side of x=0.
To solve this, we simply center the x-values:
x -= np.average(x)
X = sm.add_constant(x)
plt.plot(x, y,'x')
plt.show()
The condition number is now greatly reduced to only 8.66. Notice that the fitted slope and calculated R**2 etc. are unchanged.
My conclusion: in the case of regression against a single variable, don't worry about the condition number UNLESS you care about the sensitivity of your y-intercept to the input data. If you do, then center the x-values first. | Cause of a high condition number in a python statsmodels regression?
I found this page in a search, because I had the same question, but I think I have figured out what's going on.
First, a demonstration of the problem:
import numpy as np
import statsmodels.api as sm
i |
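The condition number that statsmodels reports can be reproduced directly as the 2-norm condition number of the design matrix; a quick NumPy-only check of the two cases above (np.column_stack stands in for sm.add_constant):

```python
import numpy as np

x = np.arange(1000., 1030., 1.)
X_raw = np.column_stack([np.ones_like(x), x])                  # intercept + raw x
X_centered = np.column_stack([np.ones_like(x), x - x.mean()])  # intercept + centered x

print(np.linalg.cond(X_raw))       # ~1.19e+05, matching the first summary
print(np.linalg.cond(X_centered))  # ~8.66, matching the second
```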
19,209 | When to use Exponential Smoothing vs ARIMA? | Exponential Smoothing is in fact a subset of an ARIMA model. You don't want to assume a model, but rather build a customized model for the data. The ARIMA process lets you do that, but you also need to consider other issues: in particular, you need to identify and adjust for outliers. See more on Tsay's work with outliers here |
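The "subset" claim can be made concrete for the simplest case: simple exponential smoothing with parameter $\alpha$ produces the same one-step forecasts as ARIMA(0,1,1) with MA coefficient $\theta=-(1-\alpha)$. A stdlib-only sketch of the two recursions on an illustrative series:

```python
import random

random.seed(3)
y = [100.0]
for _ in range(99):
    y.append(y[-1] + random.gauss(0, 1))   # an illustrative random-walk series

alpha = 0.3
theta = -(1 - alpha)   # the ARIMA(0,1,1) MA coefficient matching SES

# Simple exponential smoothing: s_t = alpha*y_t + (1-alpha)*s_{t-1};
# s_t is the one-step-ahead forecast of y_{t+1}.
ses = [y[0]]
for t in range(1, len(y)):
    ses.append(alpha * y[t] + (1 - alpha) * ses[-1])

# ARIMA(0,1,1) one-step forecasts: f_{t+1} = y_t + theta * e_t, e_t = y_t - f_t.
fc = [y[0]]
for t in range(1, len(y)):
    e = y[t] - fc[t - 1]
    fc.append(y[t] + theta * e)

print(max(abs(s - f) for s, f in zip(ses, fc)))  # ~0: the forecasts coincide
```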
19,210 | When to use Exponential Smoothing vs ARIMA? | I've performed a fairly extensive testing of ARIMA, Holt winters and others and tabulated the results here.
It's notable that ARIMA(3,0,0) does pretty well, as does ARIMA(2,0,1), across a pretty wide range of time series, but of course you should see what works for your problem. |
19,211 | Why don't people use deeper RBFs or RBF in combination with MLP? | The fundamental problem is that RBFs are
a) too nonlinear,
b) do not do dimension reduction.
Because of (a), RBFs were always trained by k-means rather than gradient descent.
I would claim that the main success in deep NNs is conv nets, where one of the key parts is dimension reduction: although working with, say, 128x128x3 ≈ 50,000 inputs, each neuron has a restricted receptive field, and there are far fewer neurons in each layer. (In a given layer of an MLP, each neuron represents a feature/dimension.) So you are constantly reducing dimensionality in going from layer to layer.
Although one could make the RBF covariance matrix adaptive and so do dimension reduction, this makes it even harder to train. |
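The classical k-means-then-linear-readout pipeline mentioned above can be sketched in a few lines (NumPy assumed; the 1-D toy data, center count, and width are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 3.0, 200)
y = np.sin(2.0 * x) + rng.normal(scale=0.05, size=x.size)   # noisy toy target

# Stage 1: place the RBF centers with a few plain Lloyd (k-means) steps in 1-D,
# the classical non-gradient way RBF networks were trained.
centers = np.linspace(0.2, 2.8, 10)
for _ in range(20):
    assign = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
    for j in range(centers.size):
        members = x[assign == j]
        if members.size:
            centers[j] = members.mean()

# Stage 2: fixed-width Gaussian features plus a linear least-squares readout.
width = 0.3
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))
Phi = np.column_stack([Phi, np.ones_like(x)])   # bias column
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

mse = np.mean((Phi @ w - y) ** 2)
print(mse)   # small: the shallow RBF net fits this 1-D target easily
```

Note that neither stage is gradient descent, which is part of why stacking such layers never caught on.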
19,212 | Linear transformation of a random variable by a tall rectangular matrix | For those who might run across this in the future... the source of the error actually stems from the integration. In the example above, integration takes place over the line $y = x$. It is therefore necessary to "parametrize" the line and consider the Jacobian of the parametrization when taking the integral, since each unit step in the $x$-axis corresponds to steps of length $\sqrt{2}$ on the line. The parametrization I was implicitly using was given by $x \mapsto (x, x)$, in other words specifying both identical entries of $\vec{y}$ by value. This has Jacobian $\sqrt{2}$, which neatly cancels with the $\sqrt{2}$ (coming from exactly the same Jacobian) in the denominator.
The example was artificially simple — for a general transformation $B$, one may have another parametrization for the output that is natural in the context of the problem. Since the parametrization needs to cover the same subspace $G$ as $B$, and this subspace is a hyperplane, the parameterization is itself likely to be linear. Calling the $m \times n$ matrix representation of the parametrization $L$, the requirement is simply that it have the same column space as $B$ (cover the same hyperplane). Then the final density becomes $$
f_{\vec{Z}}(\vec{z}) = \frac{\left|\det^+ L\right|}{\left|\det^+ B\right|}f_{\vec{X}}(B^+ \vec{z}).
$$
In general, this setup is kind of odd, and I think the right thing to do is to find a maximal linearly independent set of rows of $B$, and remove the rest of the rows (along with the corresponding components of the transformed variable $\vec{z}$) to get a square matrix $\hat B$. Then the problem reduces to the full-rank $n \times n$ case (assuming $B$ has full column rank). |
What is the intuition behind the expected transaction value for a customer in the gamma-gamma model?
This is a (super) late answer, but I myself was looking for some information related to gamma-gamma models for monetary value and came across this. The short answer is yes: the negative values for expected transaction values expose issues with the underlying dataset used to fit the model.
In case it is helpful for you or others with similar questions, I'll try to illustrate why it's concerning to have $q<1$. The purpose of these spend models is to understand observed spend per transaction with the goal of predicting future spend per transaction at the individual level. The use of a gamma distribution was first proposed by Colombo and Jiang (1999) and was motivated by the observation that if transactions are normally distributed, then 1) the distribution is not bounded below by $0$ for any choice of mean and variance parameters, and 2) you get symmetric spend distributions, whereas the observed data consistently appear to be right skewed.
Following the paper you refer to, a customer with $x$ transactions values $z_1,\dots,z_x$ is modeled such that $z_i \sim \text{Gamma}(p,\nu),$ and we allow for heterogeneity across customers by also having that $\nu \sim \text{Gamma}(q,\gamma)$. A key observation is that conditional on $p$ and $\nu$, a customer's mean transaction value $\delta$ is $\delta = p/\nu$. Now $\nu$ varies across customers, so you may want to know what the mean transaction value $\delta$ is across all individuals. Denote this random variable $D$. It can be shown that
$$E[D|p,q,\gamma] = \frac{p\gamma}{q-1}$$
which says that the mean transaction value across customers is $\frac{p\gamma}{q-1}$ (showing this is a bit involved, but the way to do it is to derive the distribution of $D$, show it is an inverse-gamma distribution with specific parameters, and take the expected value of that distribution). In any gamma distribution the parameters are strictly positive, so $p>0$ and $\gamma >0$; hence if you have $q<1$, the expected transaction value across individuals must be negative.
This should give pause for concern: why would the expected transaction value be negative? You could try to rationalize this by imagining that individuals are compensated for each transaction, but that is quite odd, and there are other models for that kind of situation; the fact that your model finds $q<1$ should immediately raise some serious concerns for this reason alone.
As a final point, I think it's nice to better understand
$$
\begin{align}
\mathbb{E}(M\mid p, q, \gamma, m_x, x) & = \frac{(\gamma + m_xx)p}{px+q-1}\\
& = \bigg(\frac{q-1}{px+q-1}\bigg)\frac{\gamma p}{q-1}+\bigg(\frac{px}{px+q-1}\bigg)m_x\\
\end{align}
$$
by noting that it is simply the weighted average of the population mean transaction value $E[D|p,q,\gamma] = \frac{p\gamma}{q-1}$ and the observed average transaction value $m_x = \frac{1}{x}\sum_{i=1}^x z_i$ of a given customer. The weightings can be fully understood from a Bayesian framework as having a prior (the population mean transaction value), and the weight you place on it goes down as you observe more data $x$ on a given individual!
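As a quick numerical check of the weighted-average identity above (the parameter values here are illustrative, not from any fitted model):

```python
# Hypothetical parameter values, chosen only to illustrate the algebra (q > 1).
p, q, gamma = 6.25, 3.74, 15.44
x, m_x = 4, 35.0                      # a customer with 4 transactions averaging 35

pop_mean = p * gamma / (q - 1)        # E[D | p, q, gamma]
w = (q - 1) / (p * x + q - 1)         # weight on the population mean
weighted = w * pop_mean + (1 - w) * m_x

closed_form = (gamma + m_x * x) * p / (p * x + q - 1)
assert abs(weighted - closed_form) < 1e-9  # both forms agree exactly
```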
Non-parametric bootstrap p-values vs confidence intervals
As @MichaelChernick said in response to a comment on his answer to a linked question:
There is a 1-1 correspondence in general between confidence intervals
and hypothesis tests. For example a 95% confidence interval for a
model parameter represents the non-rejection region for the
corresponding 5% level hypothesis test regarding the value of that
parameter. There is no requirement about the shape of the population
distributions. Obviously if it applies to confidence intervals in
general it will apply to bootstrap confidence intervals.
So this answer will address two associated issues: (1) why might presentations of bootstrap results seem more frequently to specify confidence intervals (CI) rather than p-values, as suggested in the question, and (2) when might both p-values and CI determined by bootstrap be suspected to be unreliable thus requiring an alternate approach.
I don't know of data that specifically support the claim in this question on the first issue. Perhaps in practice many bootstrap-derived point estimates are (or at least seem to be) so far from test decision boundaries that there is little interest in the p-value of the corresponding null hypothesis, with primary interest in the point estimate itself and in some reasonable measure of the magnitude of its likely variability.
With respect to the second issue, many practical applications involve "symmetrical distribution of test statistic, pivotal test statistic, CLT applying, no or few nuisance parameters etc" (as in a comment by @XavierBourretSicotte above), for which there is little difficulty. The question then becomes how to detect potential deviations from these conditions and how to deal with them when they arise.
These potential deviations from ideal behavior have been appreciated for decades, with several bootstrap CI approaches developed early on to deal with them. The Studentized bootstrap helps provide a pivotal statistic, and the BCa method deals with both bias and skewness in terms of obtaining more reliable CI from bootstraps. Variance-stabilizing transformation of data before determining bootstrapped CI, followed by back-transformation to the original scale, also can help.
The example in this question on sampling from 14 heads out of 20 tosses from a fair coin is nicely handled by using CI from the BCa method; in R:
> library(boot)
> dat14 <- c(rep(1,14),rep(0,6))
> datbf <- function(data,index){d <- data[index]; sum(d)}
> set.seed(1)
> dat14boot <- boot(dat14,datbf,R=999)
> boot.ci(dat14boot)
BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
Based on 999 bootstrap replicates
CALL :
boot.ci(boot.out = dat14boot)
Intervals :
Level      Normal              Basic
95%   ( 9.82, 18.22 )   (10.00, 18.00 )

Level     Percentile            BCa
95%   (10, 18 )   ( 8, 17 )
Calculations and Intervals on Original Scale
The other CI estimates pose the noted problem of being very close to or at the edge of the population value of 10 heads per 20 tosses. The BCa CI account for skewness (as introduced by binomial sampling away from even odds), so they nicely include the population value of 10.
But you have to be looking for such deviations from ideal behavior before you can take advantage of these solutions. As in so much of statistical practice, actually looking at the data rather than just plugging into an algorithm can be key. For example, this question about CI for a biased bootstrap result shows results for the first 3 CI shown in the above code, but excluded the BCa CI. When I tried to reproduce the analysis shown in that question to include BCa CI, I got the result:
> boot.ci(boot(xi,H.boot,R=1000))
Error in bca.ci(boot.out, conf, index[1L], L = L, t = t.o, t0 = t0.o, :
estimated adjustment 'w' is infinite
where 'w' is involved in the bias correction. The statistic being examined has a fixed maximum value and the plug-in estimate that was bootstrapped was also inherently biased. Getting a result like that should indicate that the usual assumptions underlying bootstrapped CI are being violated.
Analyzing a pivotal quantity avoids such problems; even though an empirical distribution can't have useful strictly pivotal statistics, coming as close as reasonable is an important goal. The last few paragraphs of this answer provide links to further aids, like pivot plots to estimate via bootstrap whether a statistic (potentially after some data transformation) is close to pivotal, and the computationally expensive but potentially decisive double bootstrap.
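To make the CI/test duality quoted from @MichaelChernick concrete, here is a plain-Python percentile-bootstrap sketch of the 14-of-20 example (illustrative only; it reproduces neither R's boot package nor the BCa correction discussed above):

```python
import random

random.seed(1)
data = [1] * 14 + [0] * 6            # 14 heads in 20 tosses
boots = sorted(sum(random.choices(data, k=len(data))) for _ in range(9999))
lo, hi = boots[int(0.025 * 9999)], boots[int(0.975 * 9999)]
# [lo, hi] is the 95% percentile interval; by the CI/test duality, any null
# value of the head count outside this interval is rejected at the 5% level.
```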
Bootstrap: estimate is outside of confidence interval
The difficulty you are facing stems from the implied mathematics. A center-of-location estimator, or an interval estimator, can be thought of as the minimization of a cost function over a distribution. The sample mean minimizes quadratic loss over the Gaussian, while the median minimizes the absolute (linear) loss function over the Gaussian. Even though in the population they are located at the same point, they are discovered using different cost functions.
We give you an algorithm and say "do this," but before the algorithm was developed someone solved an optimization problem.
You have applied four different cost functions giving you three intervals and a point estimator. Since the cost functions are different, they provide you different points and intervals. There is nothing to be done about it except to manually unify the methodology.
You need to find the underlying papers and look at the underlying code to understand which ones map to what types of problems.
Sorry to say this, but you were betrayed by the software. It did its job, and on average this works out great, but you got the sample where the software won't work. Or, rather, it is working perfectly and you need to actually work your way backward through the literature to determine what it is really doing.
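A tiny numerical illustration of the mean/median point above (the data and search grid are mine, purely for demonstration): minimizing squared loss recovers the sample mean, while minimizing absolute loss recovers the median.

```python
data = [1.0, 2.0, 2.0, 3.0, 10.0]

def argmin_center(loss):
    # brute-force search for the center c minimizing total loss over the data
    grid = [i / 100 for i in range(0, 1101)]
    return min(grid, key=lambda c: sum(loss(x - c) for x in data))

mean_like = argmin_center(lambda r: r * r)     # 3.6, the sample mean
median_like = argmin_center(lambda r: abs(r))  # 2.0, the sample median
```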
Are there non-trivial settings where the MAD statistic has a closed-form density?
For the uniform distribution, twice the MAD among $2n-1$ samples seems to have
the same distribution as the $(n-1)^{th}$-smallest of those samples.
The calculation can be done as
$$F(m)=(2n-1)! \int_{A_m} dx_1\ldots dx_{2n-1}$$
where
\begin{align}
A_m = \{(x_1,\ldots,x_{2n-1}): \ &0< x_{1}< \cdots < x_{2n-1} < 1 \\
&\ \& \, \min(g_1, \ldots, g_n)<m\}
\end{align}
and $g_i = \max(x_{(i+n-1)}-x_{(n)},x_{(n)}-x_{(i)})$
I conjecture that the result is always $F(m)=I_{2m}(n-1,n+1)$,
where $I$ is the incomplete beta function, and I have verified this for
$n=2,3,4,5,6$. If so, the interpretation of this in terms of the order statistics is here, and the corresponding pdf is
$$f(m)=\frac{2(2n-1)!}{n!(n-2)!}\, (2m)^{n-2}(1-2m)^n.$$
I hope someone will be able to find a simple argument that this conjecture is correct.
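The conjecture is easy to probe by simulation. This stdlib-only Monte Carlo sketch (sample counts are arbitrary) checks the $n=3$ case, where twice the MAD of $2n-1=5$ uniforms should follow $\text{Beta}(n-1,n+1)=\text{Beta}(2,4)$, whose mean is $1/3$:

```python
import random
import statistics

random.seed(0)
n = 3

def two_mad():
    # twice the median absolute deviation of 2n-1 = 5 uniform samples
    xs = [random.random() for _ in range(2 * n - 1)]
    med = statistics.median(xs)
    return 2 * statistics.median(abs(x - med) for x in xs)

est = sum(two_mad() for _ in range(50000)) / 50000
# est should land close to the Beta(2, 4) mean of 1/3
```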
Thesaurus for statistics and machine learning terms
In this link you can find my contribution to the matter.
It is a work in progress, so any comments will be appreciated.
Model performance in quantile modelling
A useful reference may be Haupt, Kagerer, and Schnurbus (2011), discussing the use of quantile-specific measures of predictive accuracy, based on cross-validation, for various classes of quantile regression models.
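For background (the answer itself only points to the reference), one standard quantile-specific accuracy measure is the pinball (check) loss, sketched here:

```python
def pinball_loss(y_true, y_pred, tau):
    # average check loss for quantile level tau in (0, 1); lower is better
    losses = [
        tau * (y - q) if y >= q else (tau - 1) * (y - q)
        for y, q in zip(y_true, y_pred)
    ]
    return sum(losses) / len(losses)

# under-predicting the 0.9 quantile is penalized more than over-predicting it
low = pinball_loss([10.0], [8.0], 0.9)    # 0.9 * 2 = 1.8
high = pinball_loss([10.0], [12.0], 0.9)  # 0.1 * 2 = 0.2
```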
What statistical methods are archaic and should be omitted from textbooks? [closed]
These would probably rank somewhere in a list of deprecated exercises:
looking for quantiles of the normal/F/t distribution in a table.
Tests of normality.
Tests of equality of variances before doing the two sample t-tests or anova.
Classical (e.g. non robust) univariate parametric tests and confidence intervals.
Statistics has moved into the age of computers and large multivariate datasets. I don't expect this to be rolled back. By necessity, the approaches taught in more advanced courses have in some sense been influenced by Breiman's and Tukey's critiques. The focus has, IMO, permanently shifted towards approaches that require fewer assumptions in order to work. An introductory course should reflect that.
I think some of these elements could still be taught at a later stage to students interested in the history of statistical thought.
Halton sequence vs Sobol' sequence?
Yes, Halton is easier to calculate, but it has the problems you mentioned. Halton can be improved by the leaped Halton method, but it will not really be better than Sobol. For high dimensions (like $d>10$) and moderate counts (like $N$ around 500) all methods will run into problems, e.g. some 2D projections in Sobol will look strange, showing strong patterns: not diagonal, but more like a chessboard! One way to improve is randomization, e.g. the so-called tent transformation.
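For concreteness, here is a minimal stdlib-only Halton generator via the standard radical-inverse construction (the "leaped" variant mentioned above would simply skip indices); the function names are mine:

```python
def radical_inverse(i, base):
    # reflect the base-`base` digits of i about the radix point
    f, inv = 0.0, 1.0 / base
    while i > 0:
        f += (i % base) * inv
        i //= base
        inv /= base
    return f

def halton(i, bases=(2, 3)):
    # i-th point (i >= 1) of the Halton sequence, one prime base per dimension
    return tuple(radical_inverse(i, b) for b in bases)

points = [halton(i) for i in range(1, 5)]
# first coordinates (base 2): 0.5, 0.25, 0.75, 0.125
```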
LASSO/LARS vs general to specific (GETS) method
Disclaimer: I am only remotely familiar with the work on model selection by David F. Hendry among others. I know, however, from respected colleagues that Hendry has made very interesting progress on model selection problems within econometrics. To judge whether the statistical literature is not paying enough attention to his work on model selection would require a lot more work on my part.
It is, however, interesting to try to understand why one method or idea generates much more activity than others. No doubt that there are aspects of fashion in science too. As I see it, lasso (and friends) has one major advantage of being the solution of a very easily expressed optimization problem. This is key to the detailed theoretical understanding of the solution and the efficient algorithms developed. The recent book, Statistics for High-Dimensional Data by Bühlmann and Van De Geer, illustrates how much is already known about lasso.
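For reference (the answer does not write it out), the easily expressed optimization problem in question is the $\ell_1$-penalized least squares
$$\hat\beta^{\text{lasso}} = \arg\min_{\beta\in\mathbb{R}^p}\Big\{\tfrac12\|y-X\beta\|_2^2+\lambda\|\beta\|_1\Big\},\qquad \lambda\ge 0,$$
whose convexity is precisely what makes the detailed theory and the efficient algorithms possible.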
You can do endless simulation studies and you can, of course, apply the methods you find most relevant and suitable for a particular application, but for parts of the statistical literature substantial theoretical results must also be obtained. That lasso has generated a lot of activity reflects that there are theoretical questions that can actually be approached and they have interesting solutions.
Another point is that lasso or its variations do perform well in many cases. I am simply not convinced that lasso is so easily outperformed by other methods as the OP suggests. Maybe in terms of (artificial) model selection, but not in terms of predictive performance. None of the references mentioned seem to really compare GETS and lasso either.
19,222 | LASSO/LARS vs general to specific (GETS) method | why are LASSO and LARS model selection methods so popular even though they are basically just variations of step-wise forward selection
There is a difference between LASSO and (GETS) subset selection: LASSO shrinks the coefficients towards zero in a data-dependent way while (GETS) subset selection does not. This seems to be an advantage of LASSO over (GETS) subset selection, even if occasionally it might fail (it needs parameter tuning, which is normally done via cross validation, and occasionally we might happen to tune poorly).
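The shrinkage contrast can be made concrete: under an orthonormal design the lasso solution is a soft-thresholding of the OLS coefficients, whereas subset selection acts as a hard threshold that keeps surviving coefficients unshrunken. A minimal sketch (my illustration in Python/numpy, not from the original answer):

```python
import numpy as np

def soft_threshold(beta_ols, lam):
    # Lasso under an orthonormal design: shrink every coefficient towards zero.
    return np.sign(beta_ols) * np.maximum(np.abs(beta_ols) - lam, 0.0)

def hard_threshold(beta_ols, lam):
    # Subset-selection analogue: keep or drop, but never shrink.
    return np.where(np.abs(beta_ols) > lam, beta_ols, 0.0)

beta_ols = np.array([3.0, 0.5, -2.0])
print(soft_threshold(beta_ols, 1.0))  # values: 2.0, 0.0, -1.0
print(hard_threshold(beta_ols, 1.0))  # values: 3.0, 0.0, -2.0
```

Both drop the small coefficient, but only the lasso also pulls the survivors towards zero — the data-dependent shrinkage comes from how lam is tuned.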
(GETS) methods <...> do better than LARS/LASSO
The performance of GETS seems to be of comparable quality to LASSO when done by impartial (?) researchers (although not necessarily so in the papers where a new version of GETS is proposed - but that is what you would expect); see some references in this thread.
Perhaps Sir Hendry & Co are getting good results using GETS due to the specifics of their applications (mostly macroeconomic time series modelling)? But why could that be? This is a separate question.
19,223 | What is the logic behind "rule of thumb" for meaningful differences in AIC? | I encountered the same issue, and was trying to search for an answer in related articles. The Burnham & Anderson 2002 book (Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, Second Edition) actually used three approaches to derive these empirical numbers. As stated in Chapter 4.5 of the book, one approach (which is the easiest to understand) is to let $\Delta_p = AIC_{best} - AIC_{min}$ be a random variable with a sampling distribution. They have done Monte Carlo simulation studies on this variable, and the sampling distribution of this $\Delta_p$ has substantial stability: the 95th percentile of the sampling distribution of $\Delta_p$ is generally much less than 10, and in fact generally less than 7 (often closer to 4 in simple situations), as long as observations are independent, sample sizes are large, and models are nested.
$\Delta_p > 10$ is way beyond the 95th percentile, and is thus highly unlikely to be the Kullback-Leibler best model.
In addition, they actually argued against using 2 as a rule of thumb in their Burnham, Anderson, and Huyvaert 2011 paper. They said that values of $\Delta$ in the 2-7 range have some support and should rarely be dismissed.
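The Monte Carlo argument behind these percentiles is easy to reproduce in miniature. The sketch below (my own illustration in Python, not Burnham & Anderson's code) repeatedly fits the true straight-line model and an over-parameterized rival to simulated Gaussian data, computes $\Delta_p$ for the true model, and checks its 95th percentile:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 100, 2000
x = np.linspace(-3, 3, n)

def aic(rss, k):
    # Gaussian-likelihood AIC up to an additive constant.
    return n * np.log(rss / n) + 2 * k

deltas = []
for _ in range(reps):
    y = 2.0 + 1.0 * x + rng.normal(size=n)
    X1 = np.column_stack([np.ones(n), x])           # true model
    X2 = np.column_stack([np.ones(n), x, x ** 2])   # adds an irrelevant term
    rss1 = np.sum((y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]) ** 2)
    rss2 = np.sum((y - X2 @ np.linalg.lstsq(X2, y, rcond=None)[0]) ** 2)
    a1, a2 = aic(rss1, 3), aic(rss2, 4)   # k counts sigma^2 as well
    deltas.append(a1 - min(a1, a2))       # Delta_p for the K-L best model

print(np.percentile(deltas, 95))  # typically well below 7
```

Under these nested-model conditions the 95th percentile typically lands near 2, comfortably below the 4-7 band, matching the rule of thumb.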
19,224 | What is the logic behind "rule of thumb" for meaningful differences in AIC? | I might be able to provide some justification for the cutoff for AICs less than 2 units apart. I wrote a paper analyzing Quetelet's famous analysis of 5723 Scottish chest girths, one of the first applications of what would come to be called the normal distribution. Quetelet, long before goodness of fit tests, argued that the chest data were normal. Others have disagreed. The AIC for the fit of the normal to Quetelet's Scottish chest data is 24629. I generated 10000 random data sets with n = 5732 using Matlab's pseudo-random normal generator with the same mean and sd as Quetelet's data, obtaining a mean AIC of 24630 ± 2 [± half 95% confidence interval]. I would certainly agree with the $\Delta$AIC = 2 cutoff, but I have no idea about justification for the 4-7 or >10 cutoffs.
Gallagher, E. D. (2020). Was Quetelet's Average Man Normal? The American Statistician, 74(3), 301-306. https://doi.org/10.1080/00031305.2019.1706635.
19,225 | What bagging algorithms are worthy successors to Random Forest? | xgboost, catboost and lightgbm use some features of random forest (random sampling of variables/observations), so I think they are a successor of boosting and RF together and take the best things from both. ;)
19,226 | Can the Burnham-Anderson book on multimodel inference be recommended? | The OP appears to be seeking a high-quality survey of high-quality statisticians to help assess whether one particular book is of high quality particularly with regards to the AIC versus AICc debate. This site is not particularly geared towards systematic surveys. Instead I'll try to address the underlying question directly.
The AIC and AICc both score models according to a heuristic tradeoff between model fit (in terms of the likelihood) and overfit (in terms of the number of parameters). In this tradeoff, the AICc gives slightly greater penalty on the number of parameters. Thus, the AICc always recommends in favor of models that are of complexity less-than-or-equal to the complexity of the best AIC model. In this sense the relationship between the two is very simple, despite the horribly complicated arguments underlying their derivations.
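For concreteness, the usual small-sample correction is $AICc = AIC + \frac{2k(k+1)}{n-k-1}$, so the extra penalty grows with the parameter count $k$ and vanishes as $n$ grows. A quick sketch (Python illustration, not from the book under discussion):

```python
def aicc(aic, k, n):
    # Small-sample corrected AIC; requires n > k + 1.
    return aic + 2 * k * (k + 1) / (n - k - 1)

# The extra penalty is larger for more parameters ...
print(aicc(100.0, 5, 30) - 100.0)    # 2.5
print(aicc(100.0, 2, 30) - 100.0)    # ~0.44
# ... and fades as the sample size grows.
print(aicc(100.0, 5, 3000) - 100.0)  # ~0.02
```

This is why the AICc can only steer you towards models no more complex than the best-AIC model, and why the two criteria agree for large $n$.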
The AIC and AICc are only two out of a large field of candidate information criteria, with the BIC and DIC being perhaps the leading alternatives. The BIC is far more conservative (penalizing large numbers of model parameters) than either of the AIC or AICc in most cases. The question of which criterion is the best is truly problem specific. One could legitimately prefer an extremely conservative criterion in cases where robust out-of-sample prediction is needed.
FWIW, I found the conservatism level of the AICc to be typically preferable over the AIC in extensive simulation studies on the prediction error in capture-recapture models.
19,227 | R-squared in linear model versus deviance in generalized linear model? | From what I can tell, we cannot run an ordinary least squares
regression in R when using weighted data and the survey package. Here,
we have to use svyglm(), which instead runs a generalized linear model
(which may be the same thing? I am fuzzy here in terms of what is
different).
svyglm will give you a linear model if you use family = gaussian() which seems to be the default from the survey vignette (in version 3.32-1). See the example where they find the regmodel.
It seems that the package just makes sure that you use the correct weights when it calls glm. Thus, if your outcome is continuous and you assume that it is normally iid distributed then you should use family = gaussian(). The result is a weighted linear model. This answers the question

Why can we not run OLS in the survey package, while it seems that this is possible to do with weighted data in Stata?

by stating that you indeed can do that with the survey package. As for the following question
What is the difference in interpretation between the deviance of a
generalized linear model and an r-squared value?
There is a straightforward formula to get the $R^2$ with family = gaussian() as some people have mentioned in the comments. Adding weights does not change anything either, as I show below:
> set.seed(42293888)
> x <- (-4):5
> y <- 2 + x + rnorm(length(x))
> org <- data.frame(x = x, y = y, weights = 1:10)
>
> # show data and fit model. Notice the R-squared
> head(org)
x y weights
1 -4 0.4963671 1
2 -3 -0.5675720 2
3 -2 -0.3615302 3
4 -1 0.7091697 4
5 0 0.6485203 5
6 1 3.8495979 6
> summary(lm(y ~ x, org, weights = weights))
Call:
lm(formula = y ~ x, data = org, weights = weights)
Weighted Residuals:
Min 1Q Median 3Q Max
-3.1693 -0.4463 0.2017 0.9100 2.9667
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.7368 0.3514 4.942 0.00113 **
x 0.9016 0.1111 8.113 3.95e-05 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 2.019 on 8 degrees of freedom
Multiple R-squared: 0.8916, Adjusted R-squared: 0.8781
F-statistic: 65.83 on 1 and 8 DF, p-value: 3.946e-05
>
> # make redundant data set with redundant rows
> idx <- unlist(mapply(rep, x = 1:nrow(org), times = org$weights))
> org_redundant <- org[idx, ]
> head(org_redundant)
x y weights
1 -4 0.4963671 1
2 -3 -0.5675720 2
2.1 -3 -0.5675720 2
3 -2 -0.3615302 3
3.1 -2 -0.3615302 3
3.2 -2 -0.3615302 3
>
> # fit model and notice the same R-squared
> summary(lm(y ~ x, org_redundant))
Call:
lm(formula = y ~ x, data = org_redundant)
Residuals:
Min 1Q Median 3Q Max
-1.19789 -0.29506 -0.05435 0.33131 2.36610
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.73680 0.13653 12.72 <2e-16 ***
x 0.90163 0.04318 20.88 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.7843 on 53 degrees of freedom
Multiple R-squared: 0.8916, Adjusted R-squared: 0.8896
F-statistic: 436.1 on 1 and 53 DF, p-value: < 2.2e-16
>
> # glm gives you the same with family = gaussian()
> # just compute the R^2 from the deviances. See
> # https://stats.stackexchange.com/a/46358/81865
> fit <- glm(y ~ x, family = gaussian(), org_redundant)
> fit$coefficients
(Intercept) x
1.7368017 0.9016347
> 1 - fit$deviance / fit$null.deviance
[1] 0.8916387
The deviance is just the sum of square errors when you use family = gaussian().
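The same identity can be checked without any survey machinery: for a Gaussian model the deviance is the residual sum of squares and the null deviance is the total sum of squares, so $1 - D/D_0$ reduces to the familiar $R^2$. A quick numpy check (illustrative Python mirroring the R example; variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(-4.0, 6.0)               # same x grid as the R example
y = 2 + x + rng.normal(size=x.size)

b1, b0 = np.polyfit(x, y, 1)            # slope, intercept
rss = np.sum((y - (b0 + b1 * x)) ** 2)  # deviance of the fitted Gaussian GLM
tss = np.sum((y - y.mean()) ** 2)       # null deviance (intercept-only model)

r2_from_deviance = 1 - rss / tss
r2_from_corr = np.corrcoef(x, y)[0, 1] ** 2
print(r2_from_deviance, r2_from_corr)   # identical up to rounding
```

For simple linear regression the two quantities agree exactly, which is the sense in which the glm deviances "contain" the R-squared.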
Caveats
I assume that you want a linear model from your question. Further, I have never used the survey package but quickly scanned through it and made assumptions about what it does, which I state in my answer.
19,228 | How to read the goodness of fit on nls of R? | You can simply use the F test and anova to compare them. Here is some code.
> x <- 1:10
> y <- 2*x + 3
> yeps <- y + rnorm(length(y), sd = 0.01)
>
>
> m1=nls(yeps ~ a + b*x, start = list(a = 0.12345, b = 0.54321))
> summary(m1)
Formula: yeps ~ a + b * x
Parameters:
Estimate Std. Error t value Pr(>|t|)
a 2.9965562 0.0052838 567.1 <2e-16 ***
b 2.0016282 0.0008516 2350.6 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.007735 on 8 degrees of freedom
Number of iterations to convergence: 2
Achieved convergence tolerance: 3.386e-09
>
>
> m2=nls(yeps ~ a + b*x+c*I(x^5), start = list(a = 0.12345, b = 0.54321,c=10))
> summary(m2)
Formula: yeps ~ a + b * x + c * I(x^5)
Parameters:
Estimate Std. Error t value Pr(>|t|)
a 3.003e+00 5.820e-03 516.010 <2e-16 ***
b 1.999e+00 1.364e-03 1466.004 <2e-16 ***
c 2.332e-07 1.236e-07 1.886 0.101
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.006733 on 7 degrees of freedom
Number of iterations to convergence: 2
Achieved convergence tolerance: 1.300e-06
>
> anova(m1,m2)
Analysis of Variance Table
Model 1: yeps ~ a + b * x
Model 2: yeps ~ a + b * x + c * I(x^5)
Res.Df Res.Sum Sq Df Sum Sq F value Pr(>F)
1 8 0.00047860
2 7 0.00031735 1 0.00016124 3.5567 0.1013
>
19,229 | What are the options in proportional hazard regression model when Schoenfeld residuals are not good? | The most elegant way would be to use a parametric survival model (Gompertz, Weibull, Exponential, ...) if you have some idea what the baseline hazard might look like.
If you want to stay with your Cox model you can take up an extended Cox model with time-dependent coefficients. Bear in mind that there are also extended Cox models with time-dependent covariates - these do not solve your problem!
For R see here: http://cran.r-project.org/web/packages/survival/vignettes/timedep.pdf
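To see what the parametric route commits you to, here are the Weibull hazard and survival functions — the shape parameter fixes whether the baseline hazard rises or falls over time (a sketch of the standard formulas in Python; not from the original answer):

```python
import numpy as np

def weibull_hazard(t, shape, scale):
    # h(t) = (shape/scale) * (t/scale)^(shape-1)
    return (shape / scale) * (t / scale) ** (shape - 1)

def weibull_survival(t, shape, scale):
    # S(t) = exp(-(t/scale)^shape)
    return np.exp(-(t / scale) ** shape)

t = np.linspace(0.1, 5.0, 50)
# shape > 1: hazard increases over time; shape < 1: it decreases.
h_inc = weibull_hazard(t, shape=1.5, scale=2.0)
h_dec = weibull_hazard(t, shape=0.7, scale=2.0)
print(bool(h_inc[0] < h_inc[-1]), bool(h_dec[0] > h_dec[-1]))  # True True
```

With shape = 1 the Weibull reduces to the exponential model's constant hazard, so picking a family really is picking a baseline-hazard shape.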
19,230 | What are the options in proportional hazard regression model when Schoenfeld residuals are not good? | Couple of ideas -
1) Try the Royston-Parmar modelling approach e.g. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0047804 and references therein. We've had useful results with it.
2) Centring and standardising continuous variables can be useful numerically.
3) In many models with factors with lots of levels there are a few levels where there are basically no data. Merging levels to remove these, but based on good substantive criteria, can be very helpful.
Good luck!
19,231 | What are the options in proportional hazard regression model when Schoenfeld residuals are not good? | If using an interaction with the underlying time doesn't work, you can try step functions (for more information see Therneau's 2016 vignette).
Step functions stratify specific coefficients over specific time intervals. After seeing your plotted Schoenfeld residuals for the problematic covariates (i.e. plot(cox.zph(model.coxph))) you need to visually check where the lines change angle. Try to find one or two points where the beta seems markedly different. Suppose this occurred at times 10 and 20. We then create data using survSplit() from the survival package, which will create a data frame with the data split into time groups at the aforementioned times:
step.data <- survSplit(Surv(t1, t2, event) ~
x1 + x2,
data = data, cut = c(10, 20), episode = "tgroup")
And then run the cox.ph model with the strata function as interactions with the problematic variables (as with interacting with time, do not add a main effect for time or the strata):
model.coxph2 <- coxph(Surv(t1, t2, event) ~
                    x1 + x2:strata(tgroup), data = step.data)
And that should help.
19,232 | Time series modeling of circular data | Is the von Mises distribution a good model for wind direction? It has support over $0$ to $2\pi$ (or $-\pi$ to $+\pi$): https://www.statisticshowto.datasciencecentral.com/von-mises-distribution/
If so, there are examples (https://iris.unipa.it/retrieve/handle/10447/94147/118553/basile_et_al_icrera_2013.pdf) that use a von Mises distribution with a time series. It's hooked up to a Hidden Markov Model rather than ARIMA, but I think the key thing is the von Mises (Tikhonov) distribution?
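The von Mises distribution is easy to experiment with — numpy samples it directly, and summaries for circular data are computed on unit vectors rather than raw angles (an illustrative Python sketch; not from the cited papers):

```python
import numpy as np

rng = np.random.default_rng(7)
mu, kappa = np.pi / 4, 4.0                  # mean direction (45 deg), concentration
theta = rng.vonmises(mu, kappa, size=5000)  # angles in (-pi, pi]

# Circular mean: average the unit vectors, then take the angle.
circ_mean = np.angle(np.mean(np.exp(1j * theta)))
# Mean resultant length: near 1 for concentrated data, near 0 for uniform.
r_bar = np.abs(np.mean(np.exp(1j * theta)))

print(circ_mean, r_bar)  # circ_mean close to pi/4; r_bar well above 0
```

The mean resultant length $\bar{R}$ plays the role a variance does on the line: near 1 for tightly concentrated directions, near 0 for nearly uniform ones.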
19,233 | Restricted maximum likelihood with less than full column rank of $X$ | Deriving the exponential part is no problem for any $X(\alpha)$ and it may be written in terms of the Moore-Penrose inverse as above
I doubt that this observation is correct. The generalized inverse actually puts additional linear restrictions on your estimators [Rao&Mitra]; therefore we should consider the joint likelihood as a whole instead of guessing that "the Moore-Penrose inverse will work for the exponential part". This seems formally correct, yet it probably reflects a misunderstanding of the mixed model.
$\blacksquare$ (1) How to think about mixed effect models correctly?
You have to think about the mixed effect model in a different way before you try to plug the g-inverse (or the Moore-Penrose inverse, which is a special kind of reflexive g-inverse [Rao&Mitra]) mechanically into the formula given by the RMLE (restricted maximum likelihood estimator, same below).
$$\boldsymbol{X}=\left(\begin{array}{cc}
fixed\quad effect\\
& random\quad effect
\end{array}\right)$$
A common way of thinking about mixed effects is that the random effect part of the design matrix is introduced by measurement error; this bears the alternative name "stochastic predictor" if we care more about prediction than estimation. This is also one historical motivation for the study of stochastic matrices in the setting of statistics.
My problem is that for some perfectly reasonable, and scientifically interesting, $\alpha$ the matrix $X(\alpha)$ is not of full column rank.
Given this way of thinking about the likelihood, the probability that $X(\alpha)$ is not of full rank is zero. This is because the determinant function is continuous in the entries of the matrix, and the normal distribution is a continuous distribution that assigns zero probability to any single point. The probability of a rank-deficient $X(\alpha)$ is positive only if you parameterize it in a pathological way, like $\left(\begin{array}{ccc}
\alpha & \alpha\\
\alpha & \alpha\\
& & random\quad effect
\end{array}\right)$.
So the solution to your question is also rather straightforward: you simply perturb your design matrix, $X_\epsilon(\alpha)=X(\alpha)+\epsilon\left(\begin{array}{cc}
I & 0\\
0 & 0
\end{array}\right)$ (perturbing the fixed effect part only), and use the perturbed matrix (which is of full rank) to carry out all derivations. Unless your model has complicated hierarchies or $X$ itself is near singular, I do not see a serious problem in taking $\epsilon\rightarrow 0$ in the final result, since the determinant function is continuous and we can take the limit inside it: $\lim_{\epsilon\rightarrow 0}|X_\epsilon|=|\lim_{\epsilon\rightarrow 0}X_\epsilon|$. In perturbed form, the inverse of $X_\epsilon$ can be obtained from the Sherman-Morrison-Woodbury theorem, and the determinant of a matrix $I+X$ is given in standard linear algebra books such as [Horn&Johnson]. Of course we could write the determinant in terms of each entry of the matrix, but the perturbation form is always preferred [Horn&Johnson].
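A small numeric illustration of this perturbation argument (my own example, not from the references): a rank-deficient block perturbed by $\epsilon I$ is full rank for every $\epsilon>0$, while continuity of the determinant takes it back to zero in the limit.

```python
import numpy as np

# A rank-deficient "fixed effect" block (two identical columns), as in the
# pathological parameterization above. Matrix chosen for illustration only.
X = np.array([[1.0, 1.0],
              [1.0, 1.0]])
print(np.linalg.matrix_rank(X))  # rank 1, so det(X) = 0

# Perturbing by eps * I makes the matrix full rank for every eps > 0,
# while continuity of det gives det(X_eps) -> det(X) = 0 as eps -> 0.
for eps in (1e-1, 1e-4, 1e-8):
    X_eps = X + eps * np.eye(2)
    print(eps, np.linalg.matrix_rank(X_eps), np.linalg.det(X_eps))
```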
$\blacksquare$ (2) How should we deal with nuisance parameters in a model?
As you see, to deal with the random effect part of the model, we should regard it as a sort of "nuisance parameter". The problem is: is RMLE the most appropriate way of eliminating a nuisance parameter? Even in GLMs and mixed effect models, RMLE is far from the only choice. [Basu] pointed out many other ways of eliminating nuisance parameters in the setting of estimation. Today people tend to choose between RMLE and Bayesian modeling because they correspond to two popular computer-based solutions: EM and MCMC, respectively.
In my opinion it is definitely more suitable to introduce a prior in the situation of a rank-deficient fixed effect part. Alternatively, you can reparameterize your model in order to make it full rank.
Further, in case your fixed effect part is not of full rank, you might worry about a misspecified covariance structure, because the degrees of freedom lost in the fixed effects should have gone into the error part. To see this point more clearly, you may want to consider the MLE (also the LSE) for GLS (generalized least squares), $\hat{\beta}=(X'\Sigma^{-1} X)^{-1}X'\Sigma^{-1}y$, where $\Sigma$ is the covariance structure of the error term, in the case where $X(\alpha)$ is not of full rank.
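A sketch of that GLS estimator on simulated data (my own example; the AR(1)-type error covariance and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Full-rank design: intercept plus one covariate.
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])

# An AR(1)-style error covariance Sigma_{ij} = rho^|i-j| (known here).
rho = 0.5
Sigma = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
y = X @ beta_true + np.linalg.cholesky(Sigma) @ rng.normal(size=n)

# GLS: beta_hat = (X' Sigma^{-1} X)^{-1} X' Sigma^{-1} y.
Si = np.linalg.inv(Sigma)
beta_hat = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)
print(beta_hat)  # close to beta_true

# With a rank-deficient X, X' Sigma^{-1} X is singular and the solve fails:
# the point at which a g-inverse, a prior, or a reparameterization is needed.
```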
$\blacksquare$ (3) Further comments
The problem is not how you modify the RMLE to make it work when the fixed effect part of the matrix is not of full rank; the problem is that in that case your model itself may be problematic, if the non-full-rank case has positive probability.
One relevant case I have encountered is the spatial setting, where people may want to reduce the rank of the fixed effect part out of computational considerations [Wikle].
I have not seen any "scientifically interesting" case in such a situation; can you point out some literature where the non-full-rank case is of major concern? I would like to know and discuss further, thanks.
$\blacksquare$ References
[Rao&Mitra] Rao, Calyampudi Radhakrishna, and Sujit Kumar Mitra. Generalized Inverse of Matrices and Its Applications. Vol. 7. New York: Wiley, 1971.
[Basu] Basu, Debabrata. "On the elimination of nuisance parameters." Journal of the American Statistical Association 72.358 (1977): 355-366.
[Horn&Johnson] Horn, Roger A., and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 2012.
[Wikle] Wikle, Christopher K. "Low-rank representations for spatial processes." Handbook of Spatial Statistics (2010): 107-118.
19,234 | State-of-the-art in Collaborative Filtering | You can also take a look at the Gravity Recommendation System (GRS) paper, which is also about matrix factorization. The authors competed with this algorithm in the well-known Netflix Prize.
19,235 | Dirichlet Processes for clustering: how to deal with labels? | My tentative answer would be to treat $\mathbf{c}$ as a parameter so that $p(\mathbf{c},\theta)$ is simply the posterior mode. This is what I suspect Niekum and Barto did (the paper referenced in option 3). The reason they were vague about whether they used $p(\mathbf{c}, \theta)$ or $p(\mathbf{c}|\theta)$ is that one is proportional to the other.
The reason I say this answer is "tentative" is that I'm not sure if designating a value as a "parameter" is just a matter of semantics, or if there's a more technical/theoretical definition that one of the PhD-holding users here would be able to elucidate.
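One way to spell out that proportionality (my gloss, not the paper's notation): for any fixed $\theta$,
$$p(\mathbf{c}, \theta) = p(\mathbf{c} \mid \theta)\, p(\theta) \;\propto\; p(\mathbf{c} \mid \theta) \quad \text{as a function of } \mathbf{c},$$
so, with $\theta$ held fixed, maximizing either criterion selects the same $\mathbf{c}$.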
19,236 | Dirichlet Processes for clustering: how to deal with labels? | I just wanted to share some resources on the topic, hoping that some of them could be helpful in answering this question. There are many tutorials on Dirichlet processes (DP), including some on using DP for clustering. They range from "gentle", like this presentation tutorial, to more advanced, like this presentation tutorial. The latter is an updated version of the same tutorial, presented by Yee Whye Teh at MLSS'07. You can watch the video of that talk with synchronized slides here. Speaking about videos, you can watch another interesting and relevant talk with slides by Tom Griffith here. In terms of the paper-formatted tutorials, this tutorial is a nice and pretty popular one.
Finally, I would like to share a couple of related papers. This paper on hierarchical DP seems to be important and relevant. The same applies to this paper by Radford Neal. If you are interested in topic modeling, latent Dirichlet allocation (LDA) should most likely be on your radar as well. In that case, this very recent paper presents a novel and much improved LDA approach. In regard to the topic modeling domain, I would recommend reading the research papers by David Blei and his collaborators. This paper is an introductory one; the rest you can find on his research publications page. I realize that some of the materials that I've recommended might be too basic for you, but I thought that by including everything that I ran across on the topic, I'd increase the chances for you to find the answer.
19,237 | Spatial autocorrelation versus spatial stationarity | I think you are properly answering your own set of questions.
Housing market research is normally tackled using non-parametric models.
For your second question, I agree with the use of SAR models, and I would go with the Durbin model for two reasons. First, the Durbin model produces unbiased coefficient estimates. Second, it is able to produce spillover effects that, relative to their corresponding direct effects, may differ for each explanatory variable.
Hope this helps!
Housing market research is normally tackled by using non-parametric models.
For your second question, I agree in the use of SAR models, | Spatial autocorrelation versus spatial stationarity
I think you are answering properly your own set of questions.
Housing market research is normally tackled by using non-parametric models.
For your second question, I agree in the use of SAR models, and I will go with the Durbin for two reasons: First, the Durbin model produces unbiased coefficient estimates. Second, it is able to produce spillover effects that in relation to their correspondent direct effect may be different for each explanatory variable.
Hope this helps! | Spatial autocorrelation versus spatial stationarity
I think you are answering properly your own set of questions.
Housing market research is normally tackled by using non-parametric models.
For your second question, I agree in the use of SAR models, |
19,238 | Spatial autocorrelation versus spatial stationarity | The problem is not with spatial Durbin estimation itself: it can be estimated by maximum likelihood, and you can calculate the partial effects. The problem occurs when the spatial effect is not stationary in the data-generating process, so that you cannot properly model its effect this way. GWR runs many regressions over your space and thus provides a vector of coefficients over your space. Statistical inference on those coefficients is not straightforward, but it shows well on a map as an exploratory tool.
So, for finding out the premium of an additional bedroom in a specific neighborhood, your best bet would probably be running a separate spatial regression on that neighborhood. For finding the premium of an additional bedroom globally, use spatial regression as well, but also keep in mind that the coefficients are not linear in the parameters in such regressions; for that reason, the premiums are defined at specific values such as the mean.
19,239 | Statistical measure for if an image consists of spatially connected separate regions | I was thinking that a Gaussian blur acts as a low-pass filter leaving the large-scale structure behind and removing the high wave-number components.
You could also look at the scale of wavelets required to generate the image. If all the information is living in the small scale wavelets then it is likely not the river.
You might consider some sort of auto-correlation of one line of the river with itself. If you took a row of pixels of the river, even with noise, and found the cross-correlation function with the next row, then you could find both the location and the value of the peak. This value is going to be much higher than what you would get with random noise. A column of pixels is not going to produce much of a signal unless you pick one from the region where the river is.
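A minimal numeric sketch of that row-correlation test (my own construction; the synthetic "river", the noise level, and the row indices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

def adjacent_row_xcorr_peak(img, row=50):
    """Peak of the normalized cross-correlation of two adjacent rows."""
    a, b = img[row], img[row + 1]
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.correlate(a, b, mode="full").max() / len(a)

n = 100
xx, yy = np.meshgrid(np.arange(n), np.arange(n))
# A sinusoidal band plus noise stands in for the "river" image.
band = np.abs(yy - (n / 2 + 15 * np.sin(2 * np.pi * xx / n))) < 5
river = band.astype(float) + 0.3 * rng.normal(size=(n, n))
noise = rng.normal(size=(n, n))

peak_river = adjacent_row_xcorr_peak(river)
peak_noise = adjacent_row_xcorr_peak(noise)
print(peak_river, peak_noise)  # river peak clearly higher
```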
http://en.wikipedia.org/wiki/Gaussian_blur
http://en.wikipedia.org/wiki/Cross-correlation
19,240 | Statistical measure for if an image consists of spatially connected separate regions | This is a bit late, but I cannot resist one suggestion and one observation.
First, I believe a more "image processing" approach may be better suited than histogram/variogram analysis. I would say that the "smoothing" suggestion of EngrStudent is on the right track, but the "blur" part is counter-productive. What is called for is an edge-preserving smoother, such as a Bilateral filter, or a median filter. These are more sophisticated than moving average filters, as they are by necessity nonlinear.
Here is a demonstration of what I mean. Below are two images approximating your two scenarios, along with their histograms. (The images are each 100 by 100, with normalized intensities).
Raw Images
For each of these images I then apply a 5 by 5 median filter 15 times*, which smooths the patterns while preserving the edges. The results are shown below.
Smoothed Images
(*Using a larger filter would still maintain the sharp contrast across the edges, but would smooth their position.)
Note how the "river" image still has a bimodal histogram, but it is now nicely separated into 2 components*. Meanwhile, the "white noise" image still has a single-component unimodal histogram. (*Easily thresholded via, e.g. Otsu's method, to make a mask and finalize the segmentation.)
Second, your image is certainly not a "river"! Aside from the fact that it is too anisotropic (stretched in the "x" direction), to the extent that meandering rivers can be described by a simple equation, their geometry is actually much closer to a sine-generated curve than to a sine curve (e.g. see here or here). For low amplitudes this is approximately a sine curve, but for higher amplitudes the loops become "overturned" ($x\neq f[y]$), which in nature eventually leads to cutoff.
(Sorry for the rant ... my training was as a geomorphologist, originally)
19,241 | Statistical measure for if an image consists of spatially connected separate regions | A suggestion which may be a quick win (or may not work at all, but can easily be eliminated) - have you tried looking at the ratio of mean to variance of the image intensity histograms?
Take the random noise image. Assuming it's generated by randomly emitted photons (or similar) hitting a camera, that each pixel is equally likely to be hit, and that you have the raw readings (i.e. values are not rescaled, or are rescaled in a known way you can undo), then the number of readings in each pixel ought to be Poisson distributed; you're counting the number of events (photons hitting a pixel) that occur in a fixed time period (exposure time) multiple times (over all pixels).
In the case where there's a river of two different intensity values, you have a mixture of two Poisson distributions.
A really quick way to test an image then might be to look at the ratio of mean to variance of the intensities. For a Poisson distribution the mean will approximately equal the variance. For a mixture of two Poisson distributions, the variance will be bigger than the mean. You'll end up needing to test the ratio of the two against some pre-set threshold.
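A quick numeric check of the variance-to-mean idea (the rates and the mixture weight are illustrative choices of mine):

```python
import numpy as np

rng = np.random.default_rng(4)

# "Pure noise" image: every pixel shares one Poisson rate.
flat = rng.poisson(lam=20, size=10_000)

# "River" image: a mixture of two Poisson rates (20% bright river pixels).
rates = np.where(rng.random(10_000) < 0.2, 60, 20)
mixed = rng.poisson(lam=rates)

# Single Poisson: variance ~= mean, so the ratio is near 1.
# Two-component mixture: variance exceeds the mean (overdispersion).
print(flat.var() / flat.mean())    # near 1
print(mixed.var() / mixed.mean())  # well above 1
```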
It's very crude. But if it works, you'll be able to calculate the necessary sufficient statistics with just one pass over each pixel in your image :)
Take the rando | Statistical measure for if an image consists of spatially connected separate regions
A suggestion which may be a quick win (or may not work at all, but can easily be eliminated) - have you tried looking at the ratio of mean to variance of the image intensity histograms?
Take the random noise image. Assuming it's generated by randomly emitted photons (or similar) hitting a camera, and each pixel is equally likely to be hit, and that you have the raw readings (i.e. values are not rescaled, or are rescaled in a known way you can undo), then the number of readings in each pixel ought to be poisson distributed; you're counting the number of events (photons hitting a pixel) that occur in a fixed time period (exposure time) multiple times (over all pixels).
In the case where there's a river of two different intensity values, you have a mixture of two poisson distributions.
19,242 | Why should we discuss convergence behaviors of different estimators in different topologies? | To understand Watanabe's discussion, it is important to realize what he means by "singularity". The (strict) singularity coincides with the geometric notion of a singular metric in his theory.
p.10 [Watanabe] :"A statistical model $p(x\mid w)$ is said to be regular if it
is identifiable and has a positive definite metric. If a statistical
model is not regular, then it is called strictly singular."
In practice, singularity usually arises when the Fisher information metric induced by the model is degenerate on the manifold defined by the model, as in the low-rank or sparse cases that arise in machine-learning work.
What Watanabe said about the convergence of the empirical KL divergence to its theoretical value can be understood as follows. One origin of the notion of divergence comes from robust statistics. The M-estimators, which include the MLE as a special case with contrast function $\rho(\theta,\delta(X))=-\log p(X\mid \theta)$, are usually discussed using the weak topology. It is reasonable to discuss the convergence behavior using the weak topology over the space $M(\cal{X})$ (the manifold of all possible measures defined on a Polish space $\cal{X}$) because we want to study the robustness behavior of the MLE. A classical theorem in [Huber] states that, with a well-separated divergence function $D(\theta_0,\theta)=E_{\theta_{0}}\rho(\theta,\delta)$, $$\inf_{|\theta-\theta_0|\geq\epsilon}(|D(\theta_0,\theta)-D(\theta_0,\theta_0)| )>0$$
and good empirical approximation of contrast function to divergence,
$$\sup_{\theta}\left|\frac{1}{n}\sum_{i}\rho(\theta,\delta(X_i))- D(\theta_0,\theta)\right|\rightarrow 0,n\rightarrow\infty$$
along with regularity, we obtain consistency in the sense that
$$\hat{\theta_n}:=\mathrm{arg\,min}_{\theta}\rho(\theta,\delta(X_n))$$
will converge to $\theta_0$ in probability $P_{\theta_0}$. This result requires far more precise conditions than Doob's result [Doob] on the weak consistency of Bayesian estimators.
So here Bayesian estimators and the MLE diverge. If we still use the weak topology to discuss consistency of Bayesian estimators, it is meaningless because Bayesian estimators will always (with probability one) be consistent by Doob. Therefore a more appropriate topology is the Schwartz distribution topology, which allows weak derivatives, and here von Mises' theory comes into play. Barron wrote a very nice technical report on how Schwartz's theorem can be used to obtain consistency.
In another perspective, Bayesian estimators are distributions and their topology should be something different. Then what kind of role the divergence $D$ plays in that kind of topology? The answer is that it defines KL support of priors which allows Bayesian estimator to be strongly consistent.
The "singular learning result" is affected because, as we see, Doob's consistency theorem ensures that Bayesian estimators to be weakly consistent(even in singular model) in weak topology while MLE should meet certain requirements in the same topology.
Just one word, [Watanabe] is not for beginners. It has some deep implications on real analytic sets which requires more mathematical maturity than most statisticians have, so it is probably not a good idea to read it without appropriate guidance.
$\blacksquare$ References
[Watanabe] Watanabe, Sumio. Algebraic geometry and statistical learning theory. Vol. 25. Cambridge University Press, 2009.
[Huber] Huber, Peter J. "The behavior of maximum likelihood estimates under nonstandard conditions." Proceedings of the fifth Berkeley symposium on mathematical statistics and probability. Vol. 1. No. 1. 1967.
[Doob] Doob, Joseph L. "Application of the theory of martingales." Le calcul des probabilites et ses applications (1949): 23-27.
19,243 | Does GSVD implement all linear multivariate techniques? | Section 4.1 of the article describes what the matrices, M and W, have to be for the generalized SVD to yield results comparable to correspondence analysis. The author also cites his reference #3 to explain how the generalized SVD can yield results comparable to the other multivariate methods mentioned.
19,244 | Specify correlation structure for different groups in mixed-effects model (lme4/nlme) | An alternative very flexible approach is Bayesian. You can implement it in R using JAGS (you will have to go through some steps to download beyond just a package). If you do this, you can specify any correlation structure you want.
To structure it this way, you could either 1) treat your spatially correlated outcomes as part of a multivariate normal model (now y has 2 dimensions, the outcome and the space). or 2) Add another random component for space to the model which has its own correlation structure.
For example, you could modify this code for (2) and build a random intercept and slope model in R. Each person is indexed by i (say a n x 1 vector) and you also provide another nx1 vector of their spatial index (called site_indicator). You also need to provide the total number of sites.
model_RIAS_MVN<-"
model{
#Likelihood
for(i in 1:N_tot) {# all obs
# outcome is normally distributed
bodymass[i] ~ dnorm(mu[i], sigmainverse)
# outcome model
mu[i] <- b[1] + RI[subj_num[i], 1] + b[2]*Age[i] + RI[subj_num[i], 2]*Age[i] + RandomSpace[site_indicator[i]]
}
# Prior for random intercepts and slopes
# this allows them to be correlated
for (j in 1:N_people) {
RI[j, 1:2] ~ dmnorm(meanU[1:2], G.inv[1:2, 1:2])
}
# CHANGE HERE FOR NEW SITE CORRELATION #
# change number_sites in meanspace and G.invspace to actual number bc it # could throw error
for (j in 1:number_sites) {
RandomSpace[j] ~ dmnorm(meanspace[1:number_sites], G.invspace[1:number_sites, 1:number_sites])
}
for(i in 1:2){
meanU[i] <- 0 # zero mean for random components
}
for(j in 1:number_sites){
meanspace[j] <- 0 # zero mean for the spatial effects
}
G.inv[1:2, 1:2] ~ dwish(G0[1:2, 1:2], Gdf)
G[1:2, 1:2] <- inverse(G.inv[1:2, 1:2])
# whatever structure you want for correlation
G.invspace[1:number_sites, 1:number_sites] ~ dwish(G0inv[1:number_sites, 1:number_sites], Gdf_space)
Gspace[1:number_sites, 1:number_sites] <- inverse(G.invspace[1:number_sites, 1:number_sites])
sigmainverse ~ dgamma(1,1)
# informative priors for fixed effects
b[1] ~ dnorm(20, 0.25)
b[2] ~ dnorm(1, 4)
# uncomment for uninformative priors
# b[1] ~ dnorm(0, 0.01)
# b[2] ~ dnorm(0, 0.01)
}
"
This is just working code and it will need tweaking, but hopefully you get an idea of the flexibility and of how you can specify the correlation structure this way.
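The generative structure encoded by the JAGS model can be checked in a language-agnostic way by simulating from it. The sketch below uses plain NumPy with invented hyperparameter values; the covariance G, the exponential-in-site-index spatial correlation, and the round-robin site assignment are all illustrative choices, not part of the model above.

```python
import numpy as np

rng = np.random.default_rng(2)
n_people, n_sites = 50, 4

# Correlated random intercept and slope per person (the RI[j, 1:2] block);
# G is an invented covariance matrix.
G = np.array([[1.0, 0.3],
              [0.3, 0.5]])
RI = rng.multivariate_normal(np.zeros(2), G, size=n_people)

# Spatially correlated site effects (the RandomSpace block); exponential
# decay in site index is just one possible correlation structure.
idx = np.arange(n_sites)
Gspace = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 2.0)
site_eff = rng.multivariate_normal(np.zeros(n_sites), Gspace)

# One observation per person, assigned round-robin to sites.
age = rng.uniform(10.0, 60.0, size=n_people)
site = np.arange(n_people) % n_sites
b = np.array([20.0, 1.0])  # fixed effects, cf. the informative priors above
mu = b[0] + RI[:, 0] + (b[1] + RI[:, 1]) * age + site_eff[site]
bodymass = rng.normal(mu, 1.0)

print(bodymass.shape)
```

Simulating fake data like this, then fitting it with the JAGS model, is also a useful sanity check that the sampler recovers the hyperparameters you put in.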
19,245 | References that justify use of Gaussian Mixtures | With respect to your questions:
For the very similar Bayesian problem of Dirichlet Process mixture of gaussians, I understand the answer is yes. Ghosal (2013).
When I attended some talks on this topic, it seemed progress had mainly been made using KL divergence. See Harry van Zanten's slides.
I'm not clear. However, this looks like a source separation problem ($P_N, P_S$ unknown). These are generally much more difficult than mixture modelling alone. In particular, for the simple case of $P_N = P_S = N(0,1)$ you wouldn't be able to identify the true $X$ and $Y$ due to the symmetry of the distributions about zero.
See the fourth of the slides linked above; there's a list of Bayesian models for which convergence guarantees hold.
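As a numerical illustration of the approximation question (my own sketch, not from the cited references): an equal-spacing Gaussian location mixture with weights proportional to the target density approximates a Laplace density, and the total-variation error shrinks as components are added.

```python
import numpy as np

def laplace_pdf(x):
    return 0.5 * np.exp(-np.abs(x))

def tv_to_laplace(n_components):
    # equal-spacing location mixture; component weights follow the target
    sigma = 16.0 / n_components
    centers = np.linspace(-8.0, 8.0, n_components)
    w = laplace_pdf(centers)
    w /= w.sum()
    x = np.linspace(-15.0, 15.0, 30001)
    z = (x[:, None] - centers[None, :]) / sigma
    q = (np.exp(-0.5 * z**2) / (sigma * np.sqrt(2.0 * np.pi))) @ w
    p = laplace_pdf(x)
    dx = x[1] - x[0]
    # total variation distance: half the L1 distance between the densities
    return 0.5 * np.sum(np.abs(p - q)) * dx

print(tv_to_laplace(5), tv_to_laplace(50))
```

This only demonstrates convergence in total variation for one nice target; as the answer notes, guarantees in KL are a much more delicate matter.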
19,246 | References that justify use of Gaussian Mixtures | In econometrics, where the context is mixture distributions of coefficients in logit models, the standard reference is: "Mixed MNL Models for Discrete Response",
Daniel McFadden and Kenneth Train, Journal of Applied Econometrics 15: 447-470 (2000).
19,247 | References that justify use of Gaussian Mixtures | Here is a partial answer.
Say $S_n$ is the class of all Gaussian mixtures with $n$ components. For any continuous distribution $P$ on the reals, are we guaranteed that as $n$ grows, we can approximate $P$ with a GMM with negligible loss in the sense of relative entropy? That is, does $$\lim_{n\rightarrow \infty}\inf_{\hat{P}\in S_n} D(P||\hat{P})=0?$$
No. You can only hope that a KL divergence $D(P\|Q)$ is small if you know that $Q$'s tails are eventually of the same order as $P$'s. This isn't true in general. It is not hard to see that for $P$ Cauchy, for any $n$, $$\inf_{\hat{P}\in S_n} D(P||\hat{P})=\infty$$
More conditions on $P$ are needed for that to hold.
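A quick numerical illustration of the Cauchy example, with an arbitrary $N(0,2)$ standing in for the mixture: the contribution to $D(P\|\hat{P})$ from $[-T,T]$ keeps growing with $T$, because the Gaussian tails die much faster than the Cauchy's.

```python
import numpy as np

def cauchy_pdf(x):
    return 1.0 / (np.pi * (1.0 + x**2))

def truncated_kl(T, n=200001):
    # contribution to D(P||Q) from [-T, T], P Cauchy, Q = N(0, 2);
    # work with log q directly to avoid underflow in the normal tails
    x = np.linspace(-T, T, n)
    p = cauchy_pdf(x)
    log_q = -0.125 * x**2 - np.log(2.0 * np.sqrt(2.0 * np.pi))
    dx = x[1] - x[0]
    return np.sum(p * (np.log(p) - log_q)) * dx

for T in (10, 100, 1000):
    print(T, truncated_kl(T))  # grows roughly linearly in T
```

The same blow-up happens against any finite Gaussian mixture, since its tails are still Gaussian.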
Say we have a continuous distribution $P$ and we have found an $N$-component Gaussian mixture $\hat{P}$ which is close to $P$ in total variation: $\delta(P,\hat{P})<\varepsilon$. Can we bound $D(P||\hat{P})$ in terms of $\epsilon$?
No. The same example above applies.
If we want to observe $X\sim P_X$ through independent additive noise $Y\sim P_Y$ (both real, continuous), and we have GMMs $\hat{X} \sim Q_X, \hat{Y} \sim Q_Y$ where $\delta(P,Q)<\epsilon$, then is this value small: $$\left|\mathsf{mmse}(X|X+Y)-\mathsf{mmse}(\hat{X}| \hat{X}+\hat{Y})\right|,$$
i.e. is it true that estimating $X$ through $Y$ noise is about as hard as estimating $\hat{X}$ through $\hat{Y}$ noise?
I don't know. If $X,Y,\hat{X},\hat{Y}$ have finite mean and variance then the MMSE estimators are $E[X|Y]$ and $E[\hat{X}|\hat{Y}]$ (simple derivation here). With these assumptions, the objective is to determine whether $|E_P[(E_P[X|Y]-X)^2]-E_Q[(E_Q[X|Y]-X)^2]|$ is small when $TV(P,Q)$ is small. Related.
I haven't been able to prove this, either in general or using the extra additive structure we have assumed on $P$ and $Q$, nor have I come up with any counterexamples.
Can you do it for non-additive noise models like Poisson noise?
This is ambiguous. In the context of the previous question, if the statement in that answer can be proven in general, then the answer is yes.
19,248 | When does a UMP test fail to exist? | In the example that you have provided, go on to calculate the likelihood ratio and you will find that it comes out to be a function of the order statistics, X(1) and X(2). Question 8.33 of Statistical Inference by Casella and Berger will help. The solution is provided in the link below:
http://www.ams.sunysb.edu/~zhu/ams570/Solutions-Casella-Berger.pdf
Coming back to the existence of a UMP test, the Karlin-Rubin theorem requires that a monotone likelihood ratio (MLR) exist, so that the inversion can be applied to get the test. The example at the link below will surely help.
http://web.eecs.umich.edu/~cscott/past_courses/eecs564w11/25_ump.pdf
19,249 | Predicting variance of heteroscedastic data | I think your first problem is that $N\left(0,\sigma\left(x,t\right)\right)$ is no longer a single normal distribution, and how the data needs to be transformed to be homoscedastic depends on exactly what $\sigma\left(x,t\right)$ is. For example, if $\sigma\left(x,t\right)= ax+bt$, then the error is of proportional type and the logarithm of the y data should be taken before regression, or the regression adjusted from ordinary least squares (OLS) to weighted least squares with a $1/y^2$ weight (which changes the regression to minimize proportional-type error). Similarly, if $\sigma\left(x,t\right)= e^{a x+b t}$, one would have to take the logarithm of the logarithm and regress that.
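A minimal sketch of the weighted-least-squares option just mentioned, on simulated proportional-error data (the model and noise level are invented); the $1/y^2$ weight makes the fit minimize relative rather than absolute squared error:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = rng.uniform(1.0, 10.0, size=n)
y = (2.0 + 3.0 * x) * (1.0 + 0.1 * rng.normal(size=n))  # proportional-type error

A = np.column_stack([np.ones(n), x])  # design matrix for y = b0 + b1*x

# Ordinary least squares: minimizes absolute squared error
b_ols, *_ = np.linalg.lstsq(A, y, rcond=None)

# Weighted least squares with w = 1/y^2: scale each row by sqrt(w) = 1/y,
# which turns the objective into (approximately) relative squared error
sw = 1.0 / y
b_wls, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)

print(b_ols, b_wls)
```

Both fits recover the true coefficients here, but on proportional-error data the weighted fit gives the small-y observations their proper influence.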
I think the reason why prediction of error types is poorly covered is that one first does any old regression (groan, typically ordinary least squares, OLS). And from the residual plot, i.e., $model-y$, one observes the residual shape, and one plots the frequency histogram of the data, and looks at that. Then, if the residuals are a fan beam opening to the right, one tries proportional data modeling, if the histogram looks like an exponential decay one might try reciprocation, $1/y$, and so on and so forth for square roots, squaring, exponentiation, taking exponential-y.
Now, that is only the short story. The longer version includes an awful lot more types of regression, including Theil median regression, Deming bivariate regression, and regression for minimization of ill-posed problems' error that have no particular goodness-of-curve-fit relationship to the propagated error being minimized. That last one is a whopper, but, see this as an example. So it makes a big difference what answers one is trying to obtain. Typically, if one wants to establish a relationship between variables, routine OLS is not the method of choice, and Theil regression would be a quick and dirty improvement on that. OLS only minimizes in the y-direction, so the slope is too shallow, and the intercept too large to establish what the underlying relationship between the variables is. To say this another way, OLS gives a least-error estimate of y given an x; it does not give an estimate of how x changes with y. When the r-values are very high (0.99999+) it makes little difference what regression one uses and OLS in y is approximately the same as OLS in x, but when the r-values are low, OLS in y is very different from OLS in x.
In summary, a lot depends on exactly what the reasoning is that motivated doing the regression analysis in the first place. That dictates the numerical methods needed. After that choice is made, the residuals then have a structure that is related to the purpose of the regression, and need to be analyzed in that larger context.
19,250 | Predicting variance of heteroscedastic data | The STATS BREUSCH PAGAN extension command can both test residuals for heteroscedasticity and estimate it as a function of some or all of the regressors.
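Outside SPSS, the Breusch-Pagan idea is easy to sketch by hand: regress the squared OLS residuals on the regressors and compare $LM = n R^2$ from that auxiliary regression to a $\chi^2$ distribution. The data below are simulated with variance increasing in x:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 500
x = rng.uniform(0.0, 3.0, size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n) * (0.5 + 2.0 * x)  # sd grows with x

# OLS fit and residuals
A = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
e = y - A @ beta

# Auxiliary regression of squared residuals on the regressors
g = e**2
gamma, *_ = np.linalg.lstsq(A, g, rcond=None)
r2 = 1.0 - np.sum((g - A @ gamma)**2) / np.sum((g - g.mean())**2)

# LM statistic ~ chi2 with df = number of regressors excluding the intercept
lm = n * r2
pvalue = stats.chi2.sf(lm, df=1)
print(lm, pvalue)
```

The fitted auxiliary coefficients also give a crude estimate of how the variance depends on the regressors, which is what the extension command reports.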
19,251 | Predicting variance of heteroscedastic data | The general approach to problems of this kind is to maximize the (regularized) likelihood of your data.
In your case, the log-likelihood would look like
$$
LL(y_0, a, b, \sigma_0, c, d)
= \sum_{i=1}^n \log \phi(y_i, y_0 + a x_i + b t_i, \sigma_0 + c x_i + d t_i)
$$
where
$$
\phi(x, \mu, \sigma) = \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}
$$
You can code this expression into a function in your favorite statistical package (I would prefer Python, R or Stata, for I never did programming in SPSS). Then you can feed it to a numerical optimizer, which will estimate optimal value $\hat{\theta}$ of your parameters $\theta=(y_0, a, b, \sigma_0, c, d)$.
If you need confidence intervals, this optimizer can also estimate Hessian matrix $H$ of $\theta$ (second derivatives) around the optimum. Theory of maximum likelihood estimation says that for large $n$ covariance matrix of $\hat{\theta}$ may be estimated as $H^{-1}$.
Here is an example code in Python:
import scipy.optimize
import scipy.stats
import numpy as np
# generate toy data for the problem
np.random.seed(1) # fix random seed
n = 1000 # fix problem size
x = np.random.normal(size=n)
t = np.random.normal(size=n)
mean = 1 + x * 2 + t * 3
std = 4 + x * 0.5 + t * 0.6
y = np.random.normal(size=n, loc=mean, scale=std)
# create negative log likelihood
def neg_log_lik(theta):
est_mean = theta[0] + x * theta[1] + t * theta[2]
est_std = np.maximum(theta[3] + x * theta[4] + t * theta[5], 1e-10)
return -sum(scipy.stats.norm.logpdf(y, loc=est_mean, scale=est_std))
# maximize
initial = np.array([0,0,0,1,0,0])
result = scipy.optimize.minimize(neg_log_lik, initial)
# extract point estimation
param = result.x
print(param)
# extract standard error for confidence intervals
std_error = np.sqrt(np.diag(result.hess_inv))
print(std_error)
Notice that your problem formulation can produce negative $\sigma$; I guarded against this by clamping too-small values of $\sigma$ to $10^{-10}$.
The result (parameter estimates and their standard errors) produced by the code is:
[ 0.8724218 1.75510897 2.87661843 3.88917283 0.63696726 0.5788625 ]
[ 0.15073344 0.07351353 0.09515104 0.08086239 0.08422978 0.0853192 ]
You can see that estimates are close to their true values, which confirms correctness of this simulation.
19,252 | Irregularly spaced time-series in finance/economics research | Full disclosure! I don't know about finance/economics, so sorry in advance for my ignorance. But I find this question wider than finance. Analyzing irregularly sampled data arises in many other fields, such as biology and medicine. One of the shortcomings of classical approaches such as autoregressive (AR) models is their weakness in dealing with irregularly sampled data. However, this problem can be tackled by Gaussian processes (GPs). They are used, for example, here or here.
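To make the GP suggestion concrete, here is a minimal from-scratch sketch of GP posterior-mean regression on irregularly spaced observation times (synthetic data; the RBF length-scale and noise level are assumed known here, whereas a real application would fit these hyperparameters):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 10, size=30))          # irregular sample times
y = np.sin(t) + rng.normal(scale=0.1, size=t.size)

def rbf(a, b, length=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

K = rbf(t, t) + 0.1 ** 2 * np.eye(t.size)         # kernel + noise variance
t_star = np.linspace(0, 10, 200)                  # regular prediction grid
mean = rbf(t_star, t) @ np.linalg.solve(K, y)     # GP posterior mean
```

Because the kernel is evaluated at the actual observation times, nothing in the method requires regular spacing, which is exactly why GPs are attractive here.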
19,253 | Irregularly spaced time-series in finance/economics research | Traditionally, we don't worry about non-trading days and count this as regularly spaced data. There are, however, two possible effects that you'd have to worry about.
The first is the effect of time on momentum and interaction with leading indicators. If you have a lagged variable that is a good leader - let's say it's mean temperature - then some of your data points will be lagged by one day (Friday -> Thursday) while others are lagged three days (Monday -> Friday). There are likely to be spurious results because of that.
The second issue is activity that happens when markets are closed: after-hours trading, options pricing, etc. If those are a factor, you may be better off calculating a regularly spaced time series and interpolating or accounting for non-trading days some other way.
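One way to follow that last suggestion, sketched with pandas on invented prices (`asfreq` re-expresses a business-day series on a calendar-day grid and `interpolate` fills the non-trading days):

```python
import numpy as np
import pandas as pd

# closes recorded only on business days (Mon 2023-01-02 onward)
bdays = pd.bdate_range("2023-01-02", periods=10)
prices = pd.Series(np.linspace(100.0, 109.0, 10), index=bdays)

# regular daily grid; weekends filled by time-weighted interpolation
daily = prices.asfreq("D").interpolate(method="time")
lag1 = daily.shift(1)   # a one-period lag is now always one calendar day
```

With the regular grid, every lag spans the same amount of calendar time, removing the Friday-versus-Monday asymmetry described above.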
19,254 | Testing for a significant difference between ML estimates: Likelihood ratio or Wald test? | ONe should use the wald-test when they have the maximum likelihood estimate of the observed data, and the fisher-information/variance-covariance matrix.
The likelihood ratio depends only on the parameters of the null and alternative distributions and doesn't require an ML estimate. And parameters are ghosts: no one sees them, which means they are liable to be mis-specified, which coincides with your hypotheses being wrong, as they often are.
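To make the contrast concrete, here is a sketch of both tests for $H_0: \mu = 0$ on simulated normal data (a deliberately simple model: the Wald statistic needs only the MLE and its estimated variance, while the LR statistic compares maximized log-likelihoods under the two hypotheses):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(loc=0.3, scale=1.0, size=100)
n = x.size
mu_hat, s2_hat = x.mean(), x.var()        # unrestricted MLEs

def loglik(mu, s2):
    return np.sum(stats.norm.logpdf(x, mu, np.sqrt(s2)))

# Wald: squared MLE over its estimated variance
wald = mu_hat ** 2 / (s2_hat / n)
# LR: twice the log-likelihood gap; under H0 the MLE of sigma^2 is mean(x^2)
lr = 2 * (loglik(mu_hat, s2_hat) - loglik(0.0, np.mean(x ** 2)))

p_wald = stats.chi2.sf(wald, df=1)
p_lr = stats.chi2.sf(lr, df=1)
```

Both statistics are asymptotically $\chi^2_1$ under $H_0$; in this model $\mathrm{LR} = n\log(1+\mathrm{Wald}/n) \le \mathrm{Wald}$, so they agree closely near the null and can diverge far from it.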
19,255 | Eigenfunctions of an adjacency matrix of a time series? | This looks like a variation on "Principal Component Analysis".
http://mathworld.wolfram.com/PrincipalComponentAnalysis.html
In structural analysis the eigenvalues of a system are used to look at linear deformations, places where superposition is still valid. The method is called "Modal Analysis".
http://macl.caeds.eng.uml.edu/macl-pa/modes/modal2.html
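PCA itself is just an eigendecomposition of the sample covariance matrix, which can be sketched in a few lines of numpy (synthetic correlated data for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
# correlated 2-D data via a random linear mixing of independent normals
z = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.0], [1.5, 0.5]])
zc = z - z.mean(axis=0)                    # center the data

cov = np.cov(zc, rowvar=False)             # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalues
order = np.argsort(eigvals)[::-1]          # sort descending by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
scores = zc @ eigvecs                      # uncorrelated principal components
```

The columns of `eigvecs` are the principal axes and `eigvals` the variance each one explains; the projected scores are exactly decorrelated.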
19,256 | Adjustments to (Linear Regression) Forecast | Here's a simple suggestion. I don't know whether it works for you and maybe I should have made it as a comment, but it seems you need more privileges to make a comment than to make a reply.
If I understand correctly, the figures you are using are the amounts of storage you are using each month. Probably these usually increase, and you want to predict what the amount will be at some time in the future if trends continue. Once you realise that your big change has happened (e.g. that 500 GB has been released) can you go back and change the previous months' figures (e.g. delete 500 GB from all of them)? Basically what you would be doing is to adjust the previous months' figures to what they should have been, if you knew then what you know now.
Of course I don't recommend this unless you make sure you can go back to the old figures. But the forecasting you want to do sounds like it could even be done in Excel, in which case you can have as many versions as you want.
19,257 | Adjustments to (Linear Regression) Forecast | What you're looking at are outliers. If you have reason to believe the outlier(s) do not represent your data, you may remove them. In a validated environment, you would have to investigate each one and justify them, but in your case you can probably just delete them.
If you're looking to find these data points automatically, look at Cook's Distance, which analyzes the residuals and can make a mathematical determination of reject criteria (typically 4/n, where n is 12 in your case)
Another suggestion is to widen the date range of your data: can you look at two years, or are data that old completely irrelevant? That of course would reduce the impact of an outlier or two, and also gives more power to analysis methods such as Cook's Distance.
Now the tricky thing can be an offset: if that outlier causes the entire line to jump down, it will drag the regression downward even if there's a general upward trend.
To prevent that, you can plot the change in hard drive space. Removing the outlier removes the spurious data points, and you can see the overall trend in change in hard drive space, leading to more accurate conclusions.
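Cook's Distance with the 4/n cutoff mentioned above can be sketched in plain numpy (a made-up 12-month series with one injected outlier; statsmodels' influence diagnostics would compute this for you in practice):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 12                                   # one year of monthly data
t = np.arange(n, dtype=float)
y = 100 + 5 * t + rng.normal(scale=2.0, size=n)
y[7] -= 40.0                             # month 7: a big one-off deletion

X = np.column_stack([np.ones(n), t])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta                         # residuals
p = X.shape[1]
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)   # leverages
s2 = e @ e / (n - p)                     # residual variance estimate
cooks = e ** 2 / (p * s2) * h / (1 - h) ** 2
flagged = np.where(cooks > 4 / n)[0]     # the common 4/n cutoff
```

Only the injected month exceeds the cutoff here; on real data every flagged point should still be investigated before it is removed.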
19,258 | Adjustments to (Linear Regression) Forecast | Here's what I understand of your situation: You have a forecast, a regression model that you evaluate monthly, of your storage needs for "y" months, that uses the data over the previous year from the current month. Once in a while, someone deletes a chunk of data and suddenly the slope of the line changes dramatically from the usual forecast. This change in the slope affects capital expense planning for however long it takes for the point to run its course through the model.
Your resistance to throwing out the data is appropriate. You have decisions to make a-priori. How to define and handle outliers. Outliers can be defined based on business and/or statistical definitions. I am going to make the assumption that the outliers you are concerned about are simply "obvious".
Once found, we investigate outliers to see if they are generated from a real data generating process, then we handle them according to our a-priori decided upon procedures. These procedures can be anything from 'trimming' a certain percentage of data points from one or both edges of data, to replacing the value, to controlling for the specific outliers, changing the model so it uses a different underlying distribution, or changing the model so it uses a different central tendency.
One way to handle the outliers after they are investigated, assuming the outliers are relatively infrequent, is to add them as a predictor in the model. Each outlier gets its own predictor: a separate column that takes the value one for the data point that is the outlier and zero otherwise. Once controlled for, they will not pull the slope of the model anymore. This procedure has several advantages, one being that no data are thrown out or changed to some other value.
Some disadvantages include taking extra time to model the outlier and having to re-specify the model each month as the column of the outlier moves through the modeling-year window. However, any action with outliers will require extra time in modeling, which is a necessary step in cleaning the data. Also, if I understand correctly, you're respecifying the forecast each month anyway.
Example Regression formula:
memory_capital = ... + mem_usage_nov + mem_usage_dec + outlier_column1_dec + ...
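The indicator-column idea can be sketched in numpy (invented monthly data with one known deletion event; the dummy absorbs that month so the trend slope is estimated from the remaining points):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 12
t = np.arange(n, dtype=float)
y = 100 + 5 * t + rng.normal(scale=1.0, size=n)   # true slope is 5
y[7] -= 40.0                                      # the mass-deletion month

dummy = (t == 7).astype(float)          # 1 for the outlier month, else 0
X = np.column_stack([np.ones(n), t, dummy])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[1]: trend slope with the outlier controlled for
# beta[2]: the outlier month's deviation, absorbed by the dummy
```

Without the dummy column the deleted chunk drags the fitted slope down; with it, the slope estimate stays close to the underlying trend and the dummy coefficient recovers roughly the size of the deletion.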
19,259 | Random forest on grouped data | Very late to the party as well, but I think that could be related to something I did a few years ago. That work got published here:
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0093379
and is about dealing with variable correlation in ensembles of decision trees. You should have a look at the bibliography, which points to many proposals for dealing with this type of issue (which is common in the "genetic" area).
The source code is available here (but is not really maintained anymore).
19,260 | Random forest on grouped data | Over-Fitting of the Random Forest can be caused by different reasons, and it highly depends on the RF parameters. It is not clear from your post how you tuned your RF.
Here are some tips that may help:
Increase the number of trees
Tune the maximum depth of the trees. This parameter depends heavily on the problem at hand; using shallower trees can help with the overfitting problem.
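Those two tips can be sketched with scikit-learn (synthetic data; the parameter grid here is an arbitrary illustration, not a recommendation for any particular dataset):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# plenty of trees, then cross-validate the tree depth
grid = GridSearchCV(
    RandomForestClassifier(n_estimators=300, random_state=0),
    param_grid={"max_depth": [2, 4, 8, None]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```

Comparing cross-validated scores across depths (rather than training scores) is what exposes overfitting: a deep forest that wins on training data but not in CV is memorizing rather than learning structure.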
19,261 | Practical thoughts on explanatory vs. predictive modeling | In one sentence
Predictive modelling is all about "what is likely to happen?", whereas explanatory modelling is all about "what can we do about it?"
In many sentences
I think the main difference is what is intended to be done with the analysis. I would suggest explanation is much more important for intervention than prediction. If you want to do something to alter an outcome, then you had best be looking to explain why it is the way it is. Explanatory modelling, if done well, will tell you how to intervene (which input should be adjusted). However, if you simply want to understand what the future will be like, without any intention (or ability) to intervene, then predictive modelling is more likely to be appropriate.
As an incredibly loose example, using "cancer data".
Predictive modelling using "cancer data" would be appropriate (or at least useful) if you were funding the cancer wards of different hospitals. You don't really need to explain why people get cancer, rather you only need an accurate estimate of how much services will be required. Explanatory modelling probably wouldn't help much here. For example, knowing that smoking leads to higher risk of cancer doesn't on its own tell you whether to give more funding to ward A or ward B.
Explanatory modelling of "cancer data" would be appropriate if you wanted to decrease the national cancer rate - predictive modelling would be fairly obsolete here. The ability to accurately predict cancer rates is hardly likely to help you decide how to reduce it. However, knowing that smoking leads to higher risk of cancer is valuable information - because if you decrease smoking rates (e.g. by making cigarettes more expensive), this leads to more people with less risk, which (hopefully) leads to an expected decrease in cancer rates.
Looking at the problem this way, I would think that explanatory modelling would mainly focus on variables which are in control of the user, either directly or indirectly. There may be a need to collect other variables, but if you can't change any of the variables in the analysis, then I doubt that explanatory modelling will be useful, except maybe to give you the desire to gain control or influence over those variables which are important. Predictive modelling, crudely, just looks for associations between variables, whether controlled by the user or not. You only need to know the inputs/features/independent variables/etc. to make a prediction, but you need to be able to modify or influence the inputs/features/independent variables/etc. in order to intervene and change an outcome.
19,262 | Practical thoughts on explanatory vs. predictive modeling | In my view the differences are as follows:
Explanatory/Descriptive
When seeking an explanatory/descriptive answer the primary focus is on the data we have and we seek to discover the underlying relationships between the data after noise has been accounted for.
Example: Is it true that exercising regularly (say 30 minutes per day) leads to lower blood pressure? To answer this question we may collect data from patients about their exercise regimen and their blood pressure values over time. The goal is to see if we can explain variations in blood pressure by variations in exercise regimen.
Blood pressure is impacted not only by exercise but by a wide variety of other factors as well, such as the amount of sodium a person eats. These other factors would be considered noise in the above example, as the focus is on teasing out the relationship between exercise regimen and blood pressure.
Prediction
When doing a predictive exercise, we are extrapolating into the unknown using the known relationships between the data we have at hand. The known relationship may emerge from an explanatory/descriptive analysis or some other technique.
Example: If I exercise 1 hour per day to what extent is my blood pressure likely to drop? To answer this question, we may use a previously uncovered relationship between blood pressure and exercise regimen to perform the prediction.
In the above context, the focus is not on explanation, although an explanatory model can help with the prediction process. There are also non-explanatory approaches (e.g., neural nets) which are good at predicting the unknown without necessarily adding to our knowledge as to the nature of the underlying relationship between the variables.
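The two steps in this answer can be sketched with simulated data. All numbers below (effect sizes, noise levels, variable names) are invented for illustration, not real clinical estimates: an explanatory/descriptive step estimates the exercise–blood-pressure relationship, and a predictive step reuses that estimated relationship to answer the "what if I exercise 1 hour per day?" question.

```python
import numpy as np

# Hypothetical data: BP depends on exercise and sodium intake plus noise.
rng = np.random.default_rng(0)
n = 500
exercise = rng.uniform(0, 90, size=n)        # minutes of exercise per day
sodium = rng.normal(3.5, 1.0, size=n)        # grams of sodium per day
bp = 135.0 - 0.15 * exercise + 3.0 * sodium + rng.normal(0, 5, size=n)

# Explanatory/descriptive step: estimate the exercise-BP relationship by OLS.
X = np.column_stack([np.ones(n), exercise, sodium])
beta, *_ = np.linalg.lstsq(X, bp, rcond=None)

# Predictive step: reuse the estimated relationship to answer
# "if I go from 30 to 60 minutes per day, how much should BP change?"
predicted_change = beta[1] * (60 - 30)
print(f"estimated coefficient: {beta[1]:.3f} mmHg per extra minute")
print(f"predicted BP change, 30 -> 60 min/day: {predicted_change:.1f} mmHg")
```

The prediction is only as good as the uncovered relationship; here it is valid by construction, which is rarely guaranteed with observational data.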
19,263 | Practical thoughts on explanatory vs. predictive modeling | One practical issue that arises here is variable selection in modelling. A variable can be an important explanatory variable (e.g., is statistically significant) but may not be useful for predictive purposes (i.e., its inclusion in the model leads to worse predictive accuracy). I see this mistake almost every day in published papers.
Another difference is in the distinction between principal components analysis and factor analysis. PCA is often used in prediction, but is not so useful for explanation. FA involves the additional step of rotation which is done to improve interpretation (and hence explanation). There is a nice post today on Galit Shmueli's blog about this.
Update: a third case arises in time series when a variable may be an important explanatory variable but it just isn't available for the future. For example, home loans may be strongly related to GDP but that isn't much use for predicting future home loans unless we also have good predictions of GDP.
19,264 | Practical thoughts on explanatory vs. predictive modeling | Although some people find it easiest to think of the distinction in terms of the model/algorithm used (e.g., neural nets=predictive), that is only one particular aspect of the explain/predict distinction. Here is a deck of slides that I use in my data mining course to teach linear regression from both angles. Even with linear regression alone and with this tiny example various issues emerge that lead to different models for explanatory vs. predictive goals (choice of variables, variable selection, performance measures, etc.)
Galit
19,265 | Practical thoughts on explanatory vs. predictive modeling | Example: A classic example that I have seen is in the context of predicting human performance.
Self-efficacy (i.e., the degree to which a person thinks that they can perform a task well) is often a strong predictor of task performance. Thus, if you put self-efficacy into a multiple regression along with other variables such as intelligence and degree of prior experience, you often find that self-efficacy is a strong predictor.
This has led some researchers to suggest that self-efficacy causes task performance, and that effective interventions are those which focus on increasing a person's sense of self-efficacy.
However, the alternative theoretical model sees self-efficacy largely as a consequence of task performance. I.e., If you are good, you'll know it. In this framework interventions should focus on increasing actual competence and not perceived competence.
Thus, including a variable like self-efficacy might increase prediction, but assuming you adopt the self-efficacy-as-consequence model, it should not be included as a predictor if the aim of the model is to elucidate causal processes influencing performance.
This of course raises the issue of how to develop and validate a causal theoretical model. This clearly relies on multiple studies, ideally with some experimental manipulation, and a coherent argument about dynamic processes.
Proximal versus distal: I've seen similar issues when researchers are interested in the effects of distal and proximal causes. Proximal causes tend to predict better than distal causes. However, theoretical interest may be in understanding the ways in which distal and proximal causes operate.
Variable selection issue: Finally, a huge issue in social science research is variable selection.
In any given study, there is an infinite number of variables that could have been measured but weren't. Thus, interpretation of models needs to consider the implications of this when making theoretical interpretations.
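A toy simulation of the self-efficacy-as-consequence model makes the prediction/causation gap concrete (all numbers invented): ability drives performance, and self-efficacy is merely a noisy readout of performance, yet it still predicts performance strongly.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
ability = rng.normal(size=n)
performance = ability + rng.normal(scale=0.5, size=n)
# Self-efficacy is downstream of performance ("if you are good, you'll know it").
self_efficacy = performance + rng.normal(scale=0.5, size=n)

r = np.corrcoef(self_efficacy, performance)[0, 1]
print(f"corr(self-efficacy, performance) = {r:.2f}")
# Strong association, so self-efficacy is a useful *predictor*; but in this
# data-generating process boosting self-efficacy would not cause better
# performance, so it is the wrong lever for a causal *intervention*.
```

The correlation alone cannot distinguish the two causal stories; that requires the multiple studies and experimental manipulation described above.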
19,266 | Practical thoughts on explanatory vs. predictive modeling | Statistical Modeling: Two Cultures (2001) by L. Breiman is, perhaps, the best paper on this point. His main conclusions (see also the replies from other prominent statisticians in the end of the document) are as follows:
"Higher predictive accuracy is associated with
more reliable information about the underlying data
mechanism. Weak predictive accuracy can lead to
questionable conclusions."
"Algorithmic models can give better predictive
accuracy than data models, and provide better information about the underlying mechanism." | Practical thoughts on explanatory vs. predictive modeling | Statistical Modeling: Two Cultures (2001) by L. Breiman is, perhaps, the best paper on this point. His main conclusions (see also the replies from other prominent statisticians in the end of the docum | Practical thoughts on explanatory vs. predictive modeling
Statistical Modeling: Two Cultures (2001) by L. Breiman is, perhaps, the best paper on this point. His main conclusions (see also the replies from other prominent statisticians in the end of the document) are as follows:
"Higher predictive accuracy is associated with
more reliable information about the underlying data
mechanism. Weak predictive accuracy can lead to
questionable conclusions."
"Algorithmic models can give better predictive
accuracy than data models, and provide better information about the underlying mechanism." | Practical thoughts on explanatory vs. predictive modeling
Statistical Modeling: Two Cultures (2001) by L. Breiman is, perhaps, the best paper on this point. His main conclusions (see also the replies from other prominent statisticians in the end of the docum |
19,267 | Practical thoughts on explanatory vs. predictive modeling | I haven't read her work beyond the abstract of the linked paper, but my sense is that the distinction between "explanation" and "prediction" should be thrown away and replaced with the distinction between the aims of the practitioner, which are either "causal" or "predictive". In general, I think "explanation" is such a vague word that it means nearly nothing. For example, is Hooke's Law explanatory or predictive? On the other end of the spectrum, are predictively accurate recommendation systems good causal models of explicit item ratings? I think we all share the intuition that the goal of science is explanation, while the goal of technology is prediction; and this intuition somehow gets lost in consideration of the tools we use, like supervised learning algorithms, that can be employed for both causal inference and predictive modeling, but are really purely mathematical devices that are not intrinsically linked to "prediction" or "explanation".
Having said all of that, maybe the only word that I would apply to a model is interpretable. Regressions are usually interpretable; neural nets with many layers are often not so. I think people sometimes naively assume that a model that is interpretable is providing causal information, while uninterpretable models only provide predictive information. This attitude seems simply confused to me.
19,268 | Practical thoughts on explanatory vs. predictive modeling | I am still a bit unclear as to what the question is. Having said that, to my mind the fundamental difference between predictive and explanatory models is the difference in their focus.
Explanatory Models
By definition explanatory models have as their primary focus the goal of explaining something in the real world. In most instances, we seek to offer simple and clean explanations. By simple I mean that we prefer parsimony (explain the phenomena with as few parameters as possible) and by clean I mean that we would like to make statements of the following form: "the effect of changing $x$ by one unit changes $y$ by $\beta$ holding everything else constant". Given these goals of simple and clear explanations, explanatory models seek to penalize complex models (by using appropriate criteria such as AIC) and prefer to obtain orthogonal independent variables (either via controlled experiments or via suitable data transformations).
Predictive Models
The goal of predictive models is to predict something. Thus, they tend to focus less on parsimony or simplicity but more on their ability to predict the dependent variable.
However, the above is somewhat of an artificial distinction as explanatory models can be used for prediction and sometimes predictive models can explain something.
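The parsimony criterion mentioned above can be sketched numerically (simulated data; the common Gaussian-error shortcut AIC = n·ln(SSE/n) + 2k is used, ignoring additive constants). A model stuffed with irrelevant predictors always achieves a lower in-sample SSE, but AIC's complexity penalty typically favors the parsimonious model.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x = rng.normal(size=n)
junk = rng.normal(size=(n, 30))   # 30 irrelevant predictors
y = 1.5 * x + rng.normal(size=n)

def fit_aic(X, y):
    # OLS fit plus the Gaussian-error AIC shortcut.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sse = float(np.sum((y - X @ beta) ** 2))
    k = X.shape[1]
    return len(y) * np.log(sse / len(y)) + 2 * k, sse

X_small = np.column_stack([np.ones(n), x])
X_big = np.column_stack([X_small, junk])

aic_small, sse_small = fit_aic(X_small, y)
aic_big, sse_big = fit_aic(X_big, y)

print(f"small model: AIC={aic_small:.1f}, SSE={sse_small:.1f}")
print(f"big model:   AIC={aic_big:.1f}, SSE={sse_big:.1f}")
```

Raw fit alone would always pick the big model; the AIC comparison is what encodes the "simple and clean" explanatory preference.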
19,269 | Practical thoughts on explanatory vs. predictive modeling | As others have already said, the distinction is somewhat meaningless, except in so far as the aims of the researcher are concerned.
Brad Efron, one of the commentators on The Two Cultures paper, made the following observation (as discussed in my earlier question):
Prediction by itself is only occasionally sufficient. The post office is happy with any method that predicts correct addresses from hand-written scrawls. Peter Gregory undertook his study for prediction purposes, but also to better understand the medical basis of hepatitis. Most statistical surveys have the identification of causal factors as their ultimate goal.
Certain fields (e.g., medicine) place a heavy weight on model fitting as an explanatory process (the distribution, etc.), as a means to understanding the underlying process that generates the data. Other fields are less concerned with this, and will be happy with a "black box" model that has a very high predictive success. This can work its way into the model building process as well.
19,270 | Practical thoughts on explanatory vs. predictive modeling | With respect, this question could be better focused. Have people ever used one term when the other was more appropriate? Yes, of course. Sometimes it's clear enough from context, or you don't want to be pedantic. Sometimes people are just sloppy or lazy in their terminology. This is true of many people, and I'm certainly no better.
What's of potential value here (discussing explanation vs. prediction on CV), is to clarify the distinction between the two approaches. In short, the distinction centers on the role of causality. If you want to understand some dynamic in the world, and explain why something happens the way it does, you need to identify the causal relationships amongst the relevant variables. To predict, you can ignore causality. For example, you can predict an effect from knowledge about its cause; you can predict the existence of the cause from knowledge that the effect occurred; and you can predict the approximate level of one effect by knowledge of another effect that is driven by the same cause. Why would someone want to be able to do this? To increase their knowledge of what might happen in the future, so that they can plan accordingly. For example, a parole board may want to be able to predict the probability that a convict will recidivate if paroled. However, this is not sufficient for explanation. Of course, estimating the true causal relationship between two variables can be extremely difficult. In addition, models that do capture (what are thought to be) the real causal relationships are often worse for making predictions. So why do it, then? First, most of this is done in science, where understanding is pursued for its own sake. Second, if we can reliably pick out true causes, and can develop the ability to affect them, we can exert some influence over the effects.
With regard to the statistical modeling strategy, there isn't a large difference. Primarily the difference lies in how to conduct the study. If your goal is to be able to predict, find out what information will be available to users of the model when they will need to make the prediction. Information they won't have access to is of no value. If they will most likely want to be able to predict at a certain level (or within a narrow range) of the predictors, try to center the sampled range of the predictor on that level and oversample there. For instance, if a parole board will mostly want to know about criminals with 2 major convictions, you might gather info about criminals with 1, 2, and 3 convictions. On the other hand, assessing the causal status of a variable basically requires an experiment. That is, experimental units need to be assigned at random to prespecified levels of the explanatory variables. If there is concern about whether or not the nature of the causal effect is contingent on some other variable, that variable must be included in the experiment. If it is not possible to conduct a true experiment, then you face a much more difficult situation, one that is too complex to go into here.
19,271 | Practical thoughts on explanatory vs. predictive modeling | Most of the answers have helped clarify what modeling for explanation and modeling for prediction are and why they differ. What is not clear, thus far, is how they differ. So, I thought I would offer an example that might be useful.
Suppose we are interested in modeling College GPA as a function of academic preparation. As measures of academic preparation, we have:
Aptitude Test Scores;
HS GPA; and
Number of AP Tests passed.
Strategy for Prediction
If the goal is prediction, I might use all of these variables simultaneously in a linear model and my primary concern would be predictive accuracy. Whichever of the variables prove most useful for predicting College GPA would be included in the final model.
Strategy for Explanation
If the goal is explanation, I might be more concerned about data reduction and think carefully about the correlations among the independent variables. My primary concern would be interpreting the coefficients.
Example
In a typical multivariate problem with correlated predictors, it would not be uncommon to observe regression coefficients that are "unexpected". Given the interrelationships among the independent variables, it would not be surprising to see partial coefficients for some of these variables that are not in the same direction as their zero-order relationships and which may seem counter intuitive and tough to explain.
For example, suppose the model suggests that (with Aptitude Test Scores and Number of AP Tests Successfully Completed taken into account) higher High School GPAs are associated with lower College GPAs. This is not a problem for prediction, but it does pose problems for an explanatory model where such a relationship is difficult to interpret. This model might provide the best out of sample predictions but it does little to help us understand the relationship between academic preparation and College GPA.
Instead, an explanatory strategy might seek some form of variable reduction, such as principal components, factor analysis, or SEM to:
focus on the variable that is the best measure of "academic performance" and model College GPA on that one variable; or
use factor scores/latent variables derived from the combination of the three measures of academic preparation rather than the original variables.
Strategies such as these might reduce the predictive power of the model, but they may yield a better understanding of how Academic Preparation is related to College GPA.
19,272 | Practical thoughts on explanatory vs. predictive modeling | I would like to offer a model-centered view on the matter.
Predictive modeling is what happens in most analyses. For example, a researcher sets up a regression model with a bunch of predictors. The regression coefficients then represent predictive comparisons between groups. The predictive aspect comes from the probability model: the inference is done with regard to a superpopulation model which may have produced the observed population or sample. The purpose of this model is to predict new outcomes for units emerging from this superpopulation. Often, this is a vain objective because things are always changing, especially in the social world. Or because your model is about rare units such as countries and you cannot draw a new sample. The usefulness of the model in this case is left to the appreciation of the analyst.
When you try to generalize the results to other groups or future units, this is still prediction but of a different kind. We may call it forecasting, for example. The key point is that the predictive power of estimated models is, by default, of a descriptive nature. You compare an outcome across groups and hypothesize a probability model for these comparisons, but you cannot conclude that these comparisons constitute causal effects.
The reason is that these groups may suffer from selection bias. I.e., they may naturally have a higher score in the outcome of interest, irrespective of the treatment (the hypothetical causal intervention). Or they may be subject to a different treatment effect size than other groups. This is why, especially for observational data, the estimated models are generally about predictive comparisons and not explanation. Explanation is about the identification and estimation of causal effects and requires well designed experiments or thoughtful use of instrumental variables. In this case, the predictive comparisons are cut from any selection bias and represent causal effects. The model may thus be regarded as explanatory.
I found that thinking in these terms has often clarified what I was really doing when setting up a model for some data.
19,273 | Practical thoughts on explanatory vs. predictive modeling | We can learn a lot more than we think from Black box "predictive" models. The key is in running different types of sensitivity analyses and simulations to really understand how model OUTPUT is affected by changes in the INPUT space. In this sense even a purely predictive model can provide explanatory insights. This is a point that is often overlooked or misunderstood by the research community. Just because we do not understand why an algorithm is working doesn't mean the algorithm lacks explanatory power...
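One simple version of such a sensitivity analysis is one-at-a-time perturbation of the inputs to a black-box predict function; the model and data below are made up for illustration (here the third input deliberately has no effect, so its sensitivity comes out zero).

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a fitted black-box model: we only get to call predict().
def predict(X):
    return np.tanh(2.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.0 * X[:, 2]

X = rng.normal(size=(1000, 3))

# One-at-a-time sensitivity: nudge each input, watch the output move.
base = predict(X)
sensitivity = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] += 0.1
    sensitivity.append(np.mean(np.abs(predict(Xp) - base)))

print([round(s, 3) for s in sensitivity])
```

Even without opening the box, the ranking of these sensitivities tells us which inputs the model's output actually responds to, which is a modest form of explanatory insight.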
Overall from a mainstream point of view, probabilityislogic's succinct reply is absolutely correct...
19,274 | Practical thoughts on explanatory vs. predictive modeling | There is a distinction between what she calls explanatory and predictive applications in statistics. She says that every time we use one or the other, we should know exactly which one is being used. She says we often mix them up, hence the conflation.
I agree that in social science applications, the distinction is sensible, but in natural sciences they are and should be the same. Also, I call them inference vs. forecasting, and agree that in social sciences one should not mix them up.
I'll start with the natural sciences. In physics we're focused on explaining, we're trying to understand how the world works, what causes what etc. So, the focus is on causality, inference and such. On the other hand, the predictive aspect is also a part of the scientific process. In fact, the way you prove a theory, which already explained observations well (think of in-sample), is to predict new observations and then check how the prediction worked. Any theory that lacks predictive abilities will have big trouble gaining acceptance in physics. That's why experiments such as Michelson-Morley's are so important.
In social sciences, unfortunately, the underlying phenomena are unstable, unrepeatable, unreproducible. If you watch nuclei decay you'll get the same results every time you observe them, and the same results that I or a dude one hundred years ago got. Not in economics or finance. Also, the ability to conduct experiments is very limited, almost non-existent for all practical purposes; we only observe and conduct random samples of observations. I could keep going, but the idea is that the phenomena we deal with are very unstable, hence our theories are not of the same quality as in physics. Therefore, one of the ways we deal with the situation is to focus on either inference (when you try to understand what causes or impacts what) or forecasting (just say what you think will happen to this or that, ignoring the structure).
19,275 | Practical thoughts on explanatory vs. predictive modeling | A Structural Model would give explanation and a predictive model would give prediction. A structural model would have latent variables. A structural model is a simultaneous culmination of regression and factor analysis.
The latent variables are manifested in the form of multicollinearity in predictive models (regression).
19,276 | Practical thoughts on explanatory vs. predictive modeling | Explanatory model has also been used in medicine and the health area as well, with a very different meaning. Basically what people have as internal beliefs or meanings can be quite different from accepted explanations. For example, a religious person may have an explanatory model that an illness was due to punishment or karma for a past behaviour, along with accepting the biological reasons as well.
https://thehealthcareblog.com/blog/2013/06/11/the-patient-explanatory-model/
https://pdfs.semanticscholar.org/0b69/ffd5cc4c7bb2f401be6819c946a955344880.pdf
19,277 | Why don't Bayesian methods require multiple testing corrections? | One odd way to answer the question is to note that the Bayesian method provides no way to do this because Bayesian methods are consistent with accepted rules of evidence and frequentist methods are often at odds with them. Examples:
With frequentist statistics, comparing treatment A to B must penalize for comparing treatments C and D because of family-wise type I error considerations; with Bayesian methods, the A-B comparison stands on its own.
For sequential frequentist testing, penalties are usually required for multiple looks at the data. In a group sequential setting, an early comparison for A vs B must be penalized for a later comparison that has not been made yet, and a later comparison must be penalized for an earlier comparison even if the earlier comparison did not alter the course of the study.
The problem stems from the frequentist's reversal of the flow of time and information, making frequentists have to consider what could have happened instead of what did happen. In contrast, Bayesian assessments anchor all assessment to the prior distribution, which calibrates evidence. For example, the prior distribution for the A-B difference calibrates all future assessments of A-B and does not have to consider C-D.
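A minimal sketch of this calibration idea, using a normal-normal conjugate update for the A-B difference (all numbers below are hypothetical): the posterior depends only on the prior and the observed data, so the same update applies at any interim look, with no penalty for other comparisons.

```python
# Prior on the A-B treatment difference: skeptical, centered at 0
prior_mean, prior_sd = 0.0, 1.0

# Observed difference at an interim look (hypothetical numbers)
obs_diff, obs_se = 2.0, 0.8

# Normal-normal conjugate update; valid at any look, no multiplicity penalty
w = (1 / obs_se**2) / (1 / obs_se**2 + 1 / prior_sd**2)
post_mean = w * obs_diff + (1 - w) * prior_mean
post_sd = (1 / obs_se**2 + 1 / prior_sd**2) ** -0.5

print(f"posterior mean {post_mean:.2f} (pulled back from {obs_diff}), sd {post_sd:.2f}")
```

The skeptical prior pulls the point estimate back toward zero, which is exactly the "prior pulls back" behavior described below for early-terminated experiments.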
With sequential testing, there is great confusion about how to adjust point estimates when an experiment is terminated early using frequentist inference. In the Bayesian world, the prior "pulls back" on any point estimates, and the updated posterior distribution applies to inference at any time and requires no complex sample space considerations.
19,278 | Why don't Bayesian methods require multiple testing corrections? | This type of hierarchical model does shrink the estimates and reduces the number of false claims to a reasonable extent for a small to moderate number of hypotheses. Does it guarantee some specific type I error rate? No.
This particular suggestion by Gelman (who acknowledges the issue with looking at too many different things and then too easily wrongly concluding that you see something for some of them - in fact one of his pet topics on his blog) is distinct from the extreme alternative viewpoint that holds that Bayesian methods do not need to account for multiplicity, because all that matters are your likelihood (and your prior).
19,279 | Why don't Bayesian methods require multiple testing corrections? | Very interesting question, here's my take on it.
It's all about encoding information, then turning the Bayesian crank. It seems too good to be true - but both of these are harder than they seem.
I start with asking the question
What information is being used when we worry about multiple comparisons?
I can think of some - the first is "data dredging" - test "everything" until you get enough passes/fails (I would think almost every stats trained person would be exposed to this problem). You also have less sinister, but essentially the same "I have so many tests to run - surely all can't be correct".
After thinking about this, one thing I notice is that you don't tend to hear much about specific hypotheses or specific comparisons. It's all about the "collection" - this triggers my thinking towards exchangeability - the hypotheses being compared are "similar" to each other in some way. And how do you encode exchangeability into bayesian analysis? - hyper-priors, mixed models, random effects, etc!!!
But exchangeability only gets you part of the way there. Is everything exchangeable? Or do you have "sparsity" - such as only a few non-zero regression coefficients with a large pool of candidates. Mixed models and normally distributed random effects don't work here. They get "stuck" in between squashing noise and leaving signals untouched (e.g. in your example keep locationB and locationC "true" parameters equal, and set locationA "true" parameter arbitrarily large or small, and watch the standard linear mixed model fail.). But it can be fixed - e.g. with "spike and slab" priors or "horse shoe" priors.
So it's really more about describing what sort of hypothesis you are talking about and getting as many known features reflected in the prior and likelihood. Andrew Gelman's approach is just a way to handle a broad class of multiple comparisons implicitly. Just like least squares and normal distributions tend to work well in most cases (but not all).
In terms of how it does this, you could think of a person reasoning as follows
- group A and group B might have the same mean
- I looked at the data, and the means are "close"
- Hence, to get a better estimate for both, I should pool the data, as my initial thought was they have the same mean.
- If they are not the same, the data provides evidence that they are "close", so pooling "a little bit" won't hurt me too badly if my hypothesis was wrong (a la all models are wrong, some are useful)
Note that all the above hinges on the initial premise "they might be the same". Take that away, and there is no justification for pooling. You can probably also see a "normalish distribution" way of thinking about the tests. "Zero is most likely", "if not zero, then close to zero is next most likely", "extreme values are unlikely". Consider this alternative:
group A and group B means might be equal, but they could also be drastically different
Then the argument about pooling "a little bit" is a very bad idea. You are better off choosing total pooling or zero pooling. Much more like a Cauchy, spike&slab, type of situation (lots of mass around zero, and lots of mass for extreme values)
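The "pool a little bit" reasoning above can be made concrete with a toy shrinkage estimator; the amount of pooling below is fixed by hand rather than estimated from the data, purely for illustration of what pooling does to the group means.

```python
import numpy as np

rng = np.random.default_rng(2)

# Three groups whose true means are "close" to each other
true_means = np.array([0.0, 0.2, -0.1])
data = [rng.normal(mu, 1.0, size=20) for mu in true_means]

ybar = np.array([y.mean() for y in data])
grand = ybar.mean()

# Partial pooling: shrink each group mean toward the grand mean.
# shrink=0 -> no pooling, shrink=1 -> complete pooling.
shrink = 0.5
pooled = shrink * grand + (1 - shrink) * ybar

print("raw:   ", ybar.round(2))
print("pooled:", pooled.round(2))
```

Each pooled estimate sits between its raw group mean and the grand mean; whether that is a good idea hinges entirely on the "they might be the same" premise, as argued above.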
The whole multiple comparisons issue doesn't need to be dealt with, because the Bayesian approach is incorporating the information that leads us to worry into the prior and/or likelihood. In a sense it is more a reminder to properly think about what information is available to you, and to make sure you have included it in your analysis.
19,280 | Why don't Bayesian methods require multiple testing corrections? | First, as I understand the model you presented, I think it is a bit different from Gelman's proposal, which looks more like:
A ~ Distribution(locationA)
B ~ Distribution(locationB)
C ~ Distribution(locationC)
locationA ~ Normal(commonLocation)
locationB ~ Normal(commonLocation)
locationC ~ Normal(commonLocation)
commonLocation ~ hyperPrior
In practice, by adding this commonLocation parameter, the inferences over the parameters of the 3 distributions (here locations 1, 2 and 3) are no longer independent from each other. Moreover, commonLocation tends to shrink the expected values of the parameters toward a central (generally estimated) one. In a certain sense, it works as a regularization over all the inferences, making a correction for multiple comparisons unnecessary (as in practice we perform one single multivariate estimation, accounting for the interaction between the parameters through the model).
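A crude empirical-Bayes stand-in for the hierarchical model above (a full fit would use MCMC, e.g. Stan or PyMC; the data and the between-group variance estimate below are purely illustrative): each location estimate is pulled toward an estimated commonLocation, more strongly when its own data are noisy.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data for the three locations in the model above
samples = {k: rng.normal(mu, 1.0, 25)
           for k, mu in [("A", 0.3), ("B", 0.0), ("C", -0.2)]}

means = np.array([s.mean() for s in samples.values()])
se2 = np.array([s.var(ddof=1) / len(s) for s in samples.values()])

# Empirical-Bayes stand-in for the commonLocation hyperprior:
# between-group variance tau^2 estimated from the spread of the means.
common = means.mean()
tau2 = max(means.var(ddof=1) - se2.mean(), 1e-6)

# Each location is pulled toward commonLocation, more so when noisy
B = se2 / (se2 + tau2)
shrunk = B * common + (1 - B) * means
print("raw:   ", means.round(2))
print("shrunk:", shrunk.round(2))
```

This is the regularization effect described above: the joint estimation couples the three locations, so no separate multiple-comparison correction is applied.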
As pointed out by the other answer, this correction does not offer any control on type I error but in most cases, Bayesian method does not offer any such control even at the single inference scale and correction for multiple comparison must be thought differently in the Bayesian setting. | Why don't Bayesian methods require multiple testing corrections? | First, as I understand the model you presented I think it is a bit different to Gelman proposal, that more looks like:
Is a distribution that is normal, but highly skewed, considered Gaussian?
A fraction per day is certainly not negative. This rules out the normal distribution, which has probability mass over the entire real axis - in particular over the negative half.
Power law distributions are often used to model things like income distributions, sizes of cities etc. They are nonnegative and typically highly skewed. These would be the first I would try in modeling time spent watching YouTube. (Or monitoring CrossValidated questions.)
More information on power laws can be found here or here, or in our power-law tag.
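A quick simulation sketch of the heavy right tail a power-law model produces, using numpy's Pareto sampler (the shape parameter 5 is an arbitrary illustrative choice, not a fitted value):

```python
import numpy as np

rng = np.random.default_rng(42)

# numpy's pareto draws the Lomax form; adding 1 shifts the support to [1, inf).
alpha = 5.0
x = rng.pareto(alpha, size=200_000) + 1.0

# Sample skewness: E[(x - mean)^3] / std^3, strongly positive for a
# right-skewed, heavy-tailed sample like this one.
skew = ((x - x.mean()) ** 3).mean() / x.std() ** 3
```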
Is a distribution that is normal, but highly skewed, considered Gaussian?
A distribution that is normal is not highly skewed. That is a contradiction. Normally distributed variables have skew = 0.
Is a distribution that is normal, but highly skewed, considered Gaussian?
If it has a long right tail, then it's right-skewed.
It can't be a normal distribution, since its skew != 0; it's perhaps a unimodal skew-normal distribution:
https://en.wikipedia.org/wiki/Skew_normal_distribution
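For readers who want to experiment without scipy.stats.skewnorm, skew-normal draws can be generated with the standard two-normal construction described on that page (the shape value 5 is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(9)

# Classic skew-normal construction:
# X = delta * |U0| + sqrt(1 - delta^2) * U1, with delta = a / sqrt(1 + a^2),
# where U0, U1 are independent standard normals and a is the shape parameter.
a = 5.0
delta = a / np.sqrt(1 + a**2)
u0, u1 = rng.normal(size=(2, 100_000))
x = delta * np.abs(u0) + np.sqrt(1 - delta**2) * u1

# Sample skewness; theory gives roughly 0.85 for a = 5.
skew = ((x - x.mean()) ** 3).mean() / x.std() ** 3
```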
Is a distribution that is normal, but highly skewed, considered Gaussian?
It could be a log-normal distribution. As mentioned here:
Users' dwell time on online articles (jokes, news etc.) follows a log-normal distribution.
The reference given is: Yin, Peifeng; Luo, Ping; Lee, Wang-Chien; Wang, Min (2013). Silence is also evidence: interpreting dwell time for recommendation from psychological perspective. ACM International Conference on KDD.
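A small sketch of the right skew of the log-normal (the parameters here are arbitrary, not taken from the cited paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Log-normal "dwell times": exp of a normal, so strictly positive and
# right-skewed.
t = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

# The right skew shows up as mean > median; for these parameters theory gives
# median = exp(0) = 1 and mean = exp(sigma^2 / 2), about 1.65.
sample_median = np.median(t)
sample_mean = t.mean()
```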
Is a distribution that is normal, but highly skewed, considered Gaussian?
"Is there a better word for that distribution?"
There's a worthwhile distinction here between using words to describe the properties of the distribution, versus trying to find a "name" for the distribution so that you can identify it as (approximately) an instance of a particular standard distribution: one for which a formula or statistical tables might exist for its distribution function, and for which you could estimate its parameters. In this latter case, you are likely using the named distribution, e.g. "normal/Gaussian" (the two terms are generally synonymous), as a model that captures some of the key features of your data, rather than claiming the population your data is drawn from exactly follows that theoretical distribution. To slightly misquote George Box, all models are "wrong", but some are useful. If you are thinking about the modelling approach, it is worth considering what features you want to incorporate and how complicated or parsimonious you want your model to be.
Being positively skewed is an example of describing a property that the distribution has, but doesn't come close to specifying which off-the-shelf distribution is "the" appropriate model. It does rule out some candidates, for example the Gaussian (i.e. normal) distribution has zero skew so will not be appropriate to model your data if the skew is an important feature. There may be other properties of the data that are important to you too, e.g. that it's unimodal (has just one peak) or that it is bounded between 0 and 24 hours (or between 0 and 1, if you are writing it as a fraction of the day), or that there is a probability mass concentrated at zero (since there are people who do not watch youtube at all on a given day). You may also be interested in other properties like the kurtosis. And it is worth bearing in mind that even if your distribution had a "hump" or "bell-curve" shape and had zero or near-zero skew, it doesn't automatically follow that the normal distribution is "correct" for it! On the other hand, even if the population your data is drawn from actually did follow a particular distribution precisely, due to sampling error your dataset may not quite resemble it. Small data sets are likely to be "noisy", and it may be unclear whether certain features you can see, e.g. additional small humps or asymmetric tails, are properties of the underlying population the data was drawn from (and perhaps therefore ought to be incorporated in your model) or whether they are just artefacts from your particular sample (and for modelling purposes should be ignored). If you have a small data set and the skew is close to zero, then it is even plausible the underlying distribution is actually symmetric. 
The larger your data set and the larger the skewness, the less plausible this becomes — but while you could perform a significance test to see how convincing is the evidence your data provides for skewness in the population it was drawn from, this may be missing the point as to whether a normal (or other zero skew) distribution is appropriate as a model ...
Which properties of the data really matter for the purposes you are intending to model it? Note that if the skew is reasonably small and you do not care very much about it, even if the underlying population is genuinely skewed, then you might still find the normal distribution a useful model to approximate this true distribution of watching times. But you should check that this doesn't end up making silly predictions. Because a normal distribution has no highest or lowest possible value, then although extremely high or low values become increasingly unlikely, you will always find that your model predicts there is some probability of watching for a negative number of hours per day, or more than 24 hours. This gets more problematic for you if the predicted probability of such impossible events becomes high. A symmetric distribution like the normal will predict that as many people will watch for lengths of time more than e.g. 50% above the mean, as watch for less than 50% below the mean. If watching times are very skewed, then this kind of prediction may also be so implausible as to be silly, and give you misleading results if you are taking the results of your model and using them as inputs for some other purpose (for instance, you're running a simulation of watching times in order to calculate optimal advertisement scheduling). If the skewness is so noteworthy you want to capture it as part of your model, then the skew normal distribution may be more appropriate. If you want to capture both skewness and kurtosis, then consider the skewed t. If you want to incorporate the physically possible upper and lower bounds, then consider using the truncated versions of these distributions. Many other probability distributions exist that can be skewed and unimodal (for appropriate parameter choices) such as the F or gamma distributions, and again you can truncate these so they do not predict impossibly high watching times. 
A beta distribution may be a good choice if you are modelling the fraction of the day spent watching, as this is always bounded between 0 and 1 without further truncation being necessary. If you want to incorporate the concentration of probability at exactly zero due to non-watchers, then consider building in a hurdle model.
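As a concrete sketch of the beta option, a hypothetical parameter choice stays inside [0, 1] by construction and puts most of its mass near zero:

```python
import numpy as np

rng = np.random.default_rng(3)

# Beta(0.8, 8): bounded in [0, 1], right-skewed, most mass near zero;
# a plausible-looking shape for "fraction of the day spent watching"
# (the parameters are invented, not fitted to any data).
a, b = 0.8, 8.0
frac = rng.beta(a, b, size=50_000)

mean = frac.mean()   # theory: a / (a + b), about 0.091 here
```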
But at the point you are trying to throw in every feature you can identify from your data, and build an ever more sophisticated model, perhaps you should ask yourself why you are doing this? Would there be an advantage to a simpler model, for example it being easier to work with mathematically or having fewer parameters to estimate? If you are concerned that such simplification will leave you unable to capture all of the properties of interest to you, it may well be that no "off-the-shelf" distribution does quite what you want. However, we are not restricted to working with named distributions whose mathematical properties have been elucidated previously. Instead, consider using your data to construct an empirical distribution function. This will capture all the behaviour that was present in your data, but you can no longer give it a name like "normal" or "gamma", nor can you apply mathematical properties that pertain only to a particular distribution. For instance, the "95% of the data lies within 1.96 standard deviations of the mean" rule is for normally distributed data and may not apply to your distribution; though note that some rules apply to all distributions, e.g. Chebyshev's inequality guarantees at least 75% of your data must lie within two standard deviations of the mean, regardless of the skew. Unfortunately the empirical distribution will also inherit all those properties of your data set arising purely by sampling error, not just those possessed by the underlying population, so you may find a histogram of your empirical distribution has some humps and dips that the population itself does not. You may want to investigate smoothed empirical distribution functions, or better yet, increasing your sample size.
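Both the empirical distribution function and the Chebyshev bound mentioned above are easy to check directly on a sample; a gamma sample is used here purely as a stand-in for skewed data:

```python
import numpy as np

rng = np.random.default_rng(7)
data = np.sort(rng.gamma(shape=2.0, scale=1.0, size=500))

def ecdf(x):
    """Empirical distribution function of the sample: fraction of points <= x."""
    return np.searchsorted(data, x, side="right") / data.size

# Chebyshev's inequality: for ANY distribution, at least 1 - 1/k^2 of the mass
# lies within k standard deviations of the mean; k = 2 gives at least 75%.
m, s = data.mean(), data.std()
frac_within_2sd = np.mean(np.abs(data - m) <= 2 * s)
```

Note that the Chebyshev guarantee holds for the sample itself (with its own mean and standard deviation), regardless of skew, whereas the "95% within 1.96 standard deviations" rule is specific to the normal.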
In summary: although the normal distribution has zero skew, the fact your data are skewed doesn't rule out the normal distribution as a useful model, though it does suggest some other distribution may be more appropriate. You should consider other properties of the data when choosing your model, besides the skew, and consider too the purposes you are going to use the model for. It's safe to say that your true population of watching times does not exactly follow some famous, named distribution, but this does not mean such a distribution is doomed to be useless as a model. However, for some purposes you may prefer to just use the empirical distribution itself, rather than try fitting a standard distribution to it.
Is a distribution that is normal, but highly skewed, considered Gaussian?
The gamma distribution could be a good candidate to describe this kind of distribution over nonnegative, right-skewed data. See the green line in the image here:
https://en.m.wikipedia.org/wiki/Gamma_distribution
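The gamma's skewness is controlled by its shape parameter k (skewness = 2/sqrt(k)), which is quick to verify by simulation; the shape value 2 below is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(11)

# Gamma(shape=k) has skewness 2 / sqrt(k): smaller k means stronger right skew.
k = 2.0
x = rng.gamma(k, 1.0, size=200_000)

# Sample skewness; theory gives 2 / sqrt(2), about 1.41.
skew = ((x - x.mean()) ** 3).mean() / x.std() ** 3
```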
https://en.m.wikipedia.org/wiki/Gamma_ | Is a distribution that is normal, but highly skewed, considered Gaussian?
The gamma distribution could be a good candidate to describe this kind of distribution over nonnegative, right-skewed data. See the green line in the image here:
https://en.m.wikipedia.org/wiki/Gamma_distribution | Is a distribution that is normal, but highly skewed, considered Gaussian?
The gamma distribution could be a good candidate to describe this kind of distribution over nonnegative, right-skewed data. See the green line in the image here:
https://en.m.wikipedia.org/wiki/Gamma_ |
Is a distribution that is normal, but highly skewed, considered Gaussian?
In the case at hand, since the time spent per day is bounded between $0$ and $1$ (if quantified as a fraction of the day), distributions that are unbounded above (e.g. Pareto, skew-normal, Gamma, log-normal) won't work, but Beta would.
Is a distribution that is normal, but highly skewed, considered Gaussian?
"Normal" and "Gaussian" mean exactly the same thing. As other answers explain, the distribution you're talking about is not normal/Gaussian, because that distribution assigns probabilities to every value on the real line, whereas your distribution only exists between $0$ and $24$.
Is a distribution that is normal, but highly skewed, considered Gaussian?
How about a hurdle model?
A hurdle model has two parts. The first is a Bernoulli experiment that determines whether you use YouTube at all. If you don't, then your usage time is obviously zero and you're done. If you do "pass that hurdle", then the usage time comes from some other strictly positive distribution.
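A minimal generative sketch of that two-part structure (all parameter values are invented; the positive part here is a clipped log-normal, which is just one of many possible choices):

```python
import numpy as np

rng = np.random.default_rng(1)

p_watch = 0.7          # hypothetical probability of clearing the hurdle
mu, sigma = -2.0, 0.8  # hypothetical log-normal parameters for the positive part

def sample_daily_fraction(n):
    """Hurdle model: zero with probability 1 - p_watch, otherwise a strictly
    positive log-normal draw, clipped at 1 (a full day)."""
    cleared = rng.random(n) < p_watch
    positive = np.minimum(rng.lognormal(mu, sigma, n), 1.0)
    return np.where(cleared, positive, 0.0)

samples = sample_daily_fraction(10_000)
```

About 30% of the draws are exactly zero (the non-watchers), and the rest follow the positive distribution, which is the probability mass at zero plus skewed positive part described in the question.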
A closely related concept is the zero-inflated model. These are meant to deal with a situation where we observe a bunch of zeros, but can't distinguish between always-zeros and sometimes-zeros. For example, consider the number of cigarettes that a person smokes each day. For non-smokers, that number is always zero, but some smokers may not smoke on a given day (out of cigarettes? on a long flight?). Unlike the hurdle model, the "smoker" distribution here should include zero, but these counts are 'inflated' by the non-smokers' contribution too.
19,290 | Is a distribution that is normal, but highly skewed, considered Gaussian? | If the distribution is indeed a 'subset' of the normal distribution, you should considder a truncated model. Widely used in this context is the family of TOBIT models.
They essentialy suggest a pdf with a (positive) probability mass at 0 and then a 'cut of part of the normal distribution' for positive values.
I will refrain from typing the formula here and rather refere you to the Wikipedia Article: https://en.wikipedia.org/wiki/Tobit_model | Is a distribution that is normal, but highly skewed, considered Gaussian? | If the distribution is indeed a 'subset' of the normal distribution, you should considder a truncated model. Widely used in this context is the family of TOBIT models.
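A small simulation of the censoring idea (parameters invented for illustration): a latent normal "usage propensity" is observed as zero whenever it is negative, producing exactly the mass at 0 plus cut-off normal shape described above.

```python
import numpy as np

rng = np.random.default_rng(5)

# Latent normal propensity; observed usage is censored at zero, which creates
# the probability mass at 0.
latent = rng.normal(loc=0.5, scale=1.0, size=100_000)
observed = np.maximum(latent, 0.0)

# Share of exact zeros: P(latent <= 0) = Phi(-0.5), roughly 0.31 here.
share_at_zero = (observed == 0.0).mean()
```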
The more important statistic: '90 percent of all women survived' or '90 percent of all those who survived were women'?
As they stand, neither Statement 1 nor Statement 2 is very useful. If 90% of passengers were women and 90% of people survived at random, then both statements would be true. The statements need to be considered in the context of the overall composition of the passengers, and the overall chance of surviving.
Suppose we had as many men as women, 100 each. Here are a few possible matrices of men (M) against women (W) and surviving (S) against dead (D):
| M | W
------------
S | 90 | 90
------------
D | 10 | 10
90% of women survived. As did 90% of men. Statement 1 is true, Statement 2 is false, since half of survivors were women. This is consistent with many survivors, but no difference between genders.
| M | W
------------
S | 10 | 90
------------
D | 90 | 10
90% of women survived, but only 10% of men. 90% of the survivors were women. Both statements are true. This is consistent with a difference between genders: women were more likely to survive than men.
| M | W
------------
S | 1 | 9
------------
D | 99 | 91
9% of women survived, but only 1% of men. 90% of the survivors were women. Statement 1 is false, Statement 2 is true. This is again consistent with a difference between genders: women were more likely to survive than men.
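The two statements amount to conditioning the same cell of the table on different margins. For the second matrix above:

```python
import numpy as np

# Second matrix above: rows = (Survived, Dead), columns = (Men, Women).
table = np.array([[10, 90],
                  [90, 10]])

women_survivors = table[0, 1]

# Statement 1: condition on the "Women" column margin.
p_survived_given_woman = women_survivors / table[:, 1].sum()  # 90 / 100 = 0.9

# Statement 2: condition on the "Survived" row margin.
p_woman_given_survived = women_survivors / table[0, :].sum()  # 90 / 100 = 0.9
```

Both come out to 0.9 here, yet they answer different questions and, as the other matrices show, they can diverge sharply.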
As they stand, neither one of Statement 1 or 2 is very useful. If 90% of passengers were women and 90% of people survived at random, then both statements would be true. The statements need to be considered in the context of the overall composition of the passengers. And the overall chance of surviving.
Suppose we had as many men as women, 100 each. Here are a few possible matrices of men (M) against women (W) and surviving (S) against dead (D):
| M | W
------------
S | 90 | 90
------------
D | 10 | 10
90% of women survived. As did 90% of men. Statement 1 is true, Statement 2 is false, since half of survivors were women. This is consistent with many survivors, but no difference between genders.
| M | W
------------
S | 10 | 90
------------
D | 90 | 10
90% of women survived, but only 10% of men. 90% of the survivors were women. Both statements are true. This is consistent with a difference between genders: women were more likely to survive than men.
| M | W
------------
S | 1 | 9
------------
D | 99 | 91
9% of women survived, but only 1% of men. 90% of the survivors were women. Statement 1 is false, Statement 2 is true. This is again consistent with a difference between genders: women were more likely to survive than men. | The more important statistic: '90 percent of all women survived' or '90 percent of all those who sur
As they stand, neither one of Statement 1 or 2 is very useful. If 90% of passengers were women and 90% of people survived at random, then both statements would be true. The statements need to be consi |
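The three hypothetical tables can be checked numerically. A small Python sketch follows; the counts are the answer's illustrations (100 men and 100 women per scenario), not real Titanic data:

```python
# Hypothetical 100-men/100-women scenarios; (s_m, s_w) are surviving
# men/women and (d_m, d_w) are dead men/women.
scenarios = {
    "no gender difference": {"S": (90, 90), "D": (10, 10)},
    "both statements true": {"S": (10, 90), "D": (90, 10)},
    "few survivors":        {"S": (1, 9),   "D": (99, 91)},
}

for name, counts in scenarios.items():
    s_m, s_w = counts["S"]
    d_m, d_w = counts["D"]
    p_s_given_w = s_w / (s_w + d_w)   # Statement 1: share of women who survived
    p_w_given_s = s_w / (s_w + s_m)   # Statement 2: share of survivors who are women
    print(f"{name}: P(S|W)={p_s_given_w:.2f}, P(W|S)={p_w_given_s:.2f}")
```

The two proportions disagree in the first and third scenarios, which is exactly why neither statement alone settles the question.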
19,292 | The more important statistic: '90 percent of all women survived' or '90 percent of all those who survived were women'? | At its face, the conditional probability of surviving conditional on sex is more useful, simply because of the direction of information flow. A person's sex is known before her or his survival status, and this probability can be used in a predictive sense, prospectively. Also, it is not influenced by the prevalence of females. When in doubt, think prediction.
19,293 | The more important statistic: '90 percent of all women survived' or '90 percent of all those who survived were women'? | The first indicates that saving women was probably of high priority (irrespective of whether saving men was)
The word "priority" comes from the Latin for "before". A priority is something one comes before something else (where "before" is being used in the sense of "more important"). If you say that saving women was a priority, then saving women has to come before something else. And the natural assumption is that what it comes before is saving men. If you say "irrespective of whether saving men was", then we're left wondering what it came before.
That women had a high survival rate doesn't say much, if we don't know what the general survival rate was. The last ship I was on, over 90% of the women survived, but I wouldn't characterize that as showing that saving women was a high priority.
And knowing what percentage of survivors were women doesn't say much without knowing what percentage of people overall were women.
What statistic is more useful really depends on the situation. If you want to know how dangerous something is, the death rate is more important. If you want to know what affects how dangerous something is, then percentage breakdown of casualties is important.
19,294 | The more important statistic: '90 percent of all women survived' or '90 percent of all those who survived were women'? | It is possibly useful for us to examine how these probabilities are related.
Let $W$ be the event that a person is a woman, and let $S$ be the event that a person survived.
Statement 1:
$$P(S|W) = 0.9$$
Statement 2:
$$P(W|S) = 0.9$$
Bayes Theorem illustrates how these statements of probability are related.
$$P(S|W) = P(W|S)\frac{P(S)}{P(W)}$$
In this particular case, $P(S)$ (the probability of survival) and $P(W)$ (the proportion of Women on the titanic) are quite easy to look up, and therefore the probabilities are dependent on each other. That is, knowing one fully defines the other.
Treating $P(S)$ and $P(W)$ as known, the two statements are different ways of expressing the same information (albeit with different interpretations).
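The Bayes relation above can be verified with a quick numerical sanity check; the 2x2 counts below are made up for illustration, not the actual Titanic figures:

```python
# Made-up 2x2 counts: (survived, dead) x (men, women)
s_m, s_w, d_m, d_w = 10, 90, 90, 10
n = s_m + s_w + d_m + d_w

p_s = (s_m + s_w) / n            # P(S): overall survival rate
p_w = (s_w + d_w) / n            # P(W): proportion of women
p_w_given_s = s_w / (s_m + s_w)  # P(W|S)
p_s_given_w = s_w / (s_w + d_w)  # P(S|W)

# Bayes' theorem: P(S|W) = P(W|S) * P(S) / P(W)
assert abs(p_s_given_w - p_w_given_s * p_s / p_w) < 1e-12
```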
19,295 | The more important statistic: '90 percent of all women survived' or '90 percent of all those who survived were women'? | It depends on what one considers useful.
If one is primarily interested in whether women were given higher priority than men, i.e. whether $P(S|W) > P(S|M)$, then both statements are equally useless without more information, as @StephanKolassa and @knrumsey have already said in their answers. If someone is meaning to express this kind of information, they'd need to say something more than statement 1, such as "90 percent of the women survived, but only 20 percent of the men survived".
On the other hand, if you're wondering why survivor stories are mostly from women, then statement 2 would explain that, making statement 2 useful even in the absence of other information.
I can't think of anything statement 1 is useful for out of context. It certainly doesn't say anything about the priority given to saving women, compared to anything else. The only thing statement 1 does for me is it makes me say "tell me more".
19,296 | The more important statistic: '90 percent of all women survived' or '90 percent of all those who survived were women'? | On the surface (or in isolation from reality) both statements appear to be equally useless for the stated goal. However, considering the context, the second statement is clearly more useful.
Statement 2
Let's see what we can extract from the second statement. The ratio of women $w$ among all survivors is:
$$w = p x /(p x +(1-p) z) $$
where $p$ is the ratio of women among passengers, and $x$ and $z$ are the probabilities of survival of women and men. The denominator is the total survival rate.
We are testing hypo $H_0:x>z$
Let's re-write the equation to obtain the necessary conditions for $H_0$:
$$(1-w) p x = w (1-p) z$$
$$ x = w (1-p) z/((1-w) p)$$
For $H_0$ to hold we have:
$$ x = w (1-p) z/((1-w) p)>z$$
$$ w (1-p) >(1-w) p $$
$$ 0.9 (1-p) >0.1 p $$
$$ 1-p > p/9 $$
$$p<0.9$$
So, for your hypo that women were more likely to survive, all you need is to check that there were less than 90% women among the passengers. This is consistent with your assumption 2, which seems to imply that $p\approx 1/2$. Hence, I declare that statement 2 all but asserts that women were more likely to survive, i.e. it's quite useful for your goal.
Statement 1
The first statement is truly useless in isolation, but has a limited use in the context. If we pretend we know nothing about the event, then being told that $x=0.9$ tells us nothing about $z$, nor about whether $x>z$.
However, from that little that I know about the event - I haven't seen the movie - it seems unlikely that $x\le z$. Why?
We know from Assumption 2 that $p\approx 1/2$, so the total survival rate is
$p x+(1-p) z$. If we assume that $x\approx z$ and $p\approx 1/2$ we get
$$p x+(1-p) z\approx x=0.9$$
In other words 90% of all passengers survived, which doesn't ring true to me. Would they make a movie and talk about it for 100 years if 90% of passengers survived? So, it must be that $x \gg z$ and less than half of passengers made it.
Conclusion
I'd say that both statements support your hypo that women were more likely to survive than men, but Statement 1 does so rather weakly, while Statement 2 in combination with assumptions almost surely establishes your hypo as a fact.
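The $p<0.9$ threshold derived above is easy to check with a short loop; the test values of $p$ are arbitrary:

```python
# With w = 0.9 (share of women among survivors), the derivation gives
# x/z = w(1-p) / ((1-w)p), so women out-survive men (x > z) exactly
# when the fraction of women among passengers p is below 0.9.
w = 0.9
for p in [0.3, 0.5, 0.7, 0.89, 0.91, 0.95]:
    x_over_z = w * (1 - p) / ((1 - w) * p)
    assert (x_over_z > 1) == (p < w), f"threshold fails at p={p}"
```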
19,297 | What is the best out-of-the-box 2-class classifier for your application? [closed] | Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
Random forest
easily captures complicated structure/nonlinear relationship
invariant to variables' scale
no need to create dummy variables for categorical predictors
variable selection is not much needed
relatively hard to overfit
19,298 | What is the best out-of-the-box 2-class classifier for your application? [closed] | Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
Support vector machine
19,299 | What is the best out-of-the-box 2-class classifier for your application? [closed] | Logistic Regression:
fast and performs well on most datasets
almost no parameters to tune
handles both discrete/continuous features
model is easily interpretable
(not really restricted to binary classifications)
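To illustrate how few moving parts logistic regression has, here is a hedged from-scratch sketch using plain gradient descent on a synthetic one-feature problem (toy data, not a benchmark or a library implementation):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# toy 1-feature data: label is 1 exactly when the feature is positive
X = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
y = [0, 0, 0, 1, 1, 1]

w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    gw = gb = 0.0
    for xi, yi in zip(X, y):
        err = sigmoid(w * xi + b) - yi   # gradient of log loss w.r.t. the logit
        gw += err * xi
        gb += err
    w -= lr * gw / len(X)
    b -= lr * gb / len(X)

preds = [int(sigmoid(w * xi + b) > 0.5) for xi in X]
assert preds == y   # the single fitted weight w is directly interpretable
```

Only the learning rate and iteration count were chosen here, consistent with the "almost no parameters to tune" point above.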
19,300 | What is the best out-of-the-box 2-class classifier for your application? [closed] | Regularized discriminant for supervised problems with noisy data
Computationally efficient
Robust to noise and outliers in data
Both linear discriminant (LD) and quadratic discriminant (QD) classifiers can be obtained from the same implementation by setting the regularization parameters '[lambda, r]' to '[1 0]' for the LD classifier and '[0 0]' for the QD classifier - very useful for reference purposes.
Model is easy to interpret and export
Works well for sparse and 'wide' data sets where class covariance matrices may not be well defined.
The posterior class probability can be estimated for each sample by applying the softmax function to the discriminant values for each class.
Link to the original 1989 paper by Friedman et al. here. Also, there is a very good explanation by Kuncheva in her book "Combining pattern classifiers".
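The softmax step mentioned above is small enough to sketch directly; the discriminant scores below are hypothetical, not output of an actual RDA fit:

```python
import math

def softmax(scores):
    m = max(scores)                     # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

discriminants = [2.0, 0.5]              # hypothetical per-class discriminant values
posterior = softmax(discriminants)
assert abs(sum(posterior) - 1.0) < 1e-12
assert posterior[0] > posterior[1]      # larger discriminant -> larger posterior
```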