32,501 | Selecting priors based on measurement error

Two standard methods are:
Consult the "instrument maker's specifications," as indicated in the quotation. This is usually a crude fall-back to be used when no other information is available, because (a) what the instrument maker really means by "accuracy" and "precision" is often indeterminate and (b) how the instrument responded when new in a test lab was likely much better than it performs when used in the field.
Collect replicate samples. In environmental sampling there are about a half dozen levels at which samples are routinely replicated (and many more at which they could be replicated), with each level used to control for an assignable source of variation. Such sources may include:
Identity of the person taking the sample.
Preliminary procedures, such as bailing wells, taken before obtaining a sample.
Variability in the physical sampling process.
Heterogeneity within the sample volume itself.
Changes that might occur when preserving and shipping a sample to a laboratory.
Variations in preliminary laboratory procedures, such as homogenizing a physical sample or digesting it for analysis.
The identity of the laboratory analyst(s).
Differences between laboratories.
Differences between physically distinct instruments, such as two gas chromatographs.
Drift in instrument calibration over time.
Diurnal variation. (This may be natural and systematic but can appear random when sampling times are arbitrary.)
A full quantitative assessment of components of variability can only be obtained by systematically varying each of these factors according to a suitable experimental design.
Usually only the sources believed to contribute the most variability are studied. For instance, many studies will systematically split a certain portion of the samples once they have been obtained and ship the splits to two different laboratories. A study of the differences among the results of those splits can quantify their contribution to the measurement variability. If enough such splits are obtained, the full distribution of measurement variability can be estimated as a prior in a hierarchical Bayesian spatio-temporal model. Because many models assume Gaussian distributions (for ease of calculation), obtaining a Gaussian prior eventually comes down to estimating the mean and variance of the differences between the splits. In more complicated studies, which aim to identify more than one component of variance, this is performed with the Analysis of Variance (ANOVA) apparatus.
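As a minimal sketch of the simplest case (one inter-laboratory split per sample; the data below are hypothetical), note that if each laboratory result is the true value plus an independent error with variance $\sigma^2$, then the variance of a split difference is $2\sigma^2$:

```python
import numpy as np

# Hypothetical split-sample results: each position is one field sample
# measured independently by two laboratories.
lab_a = np.array([4.1, 5.3, 3.8, 6.0, 4.7, 5.1])
lab_b = np.array([4.4, 5.0, 4.1, 5.6, 4.9, 5.4])

diffs = lab_a - lab_b

# If each result = true value + independent error with variance s^2,
# then Var(diff) = 2 * s^2, so the per-measurement error variance is:
error_var = np.var(diffs, ddof=1) / 2.0
bias = np.mean(diffs)  # systematic inter-laboratory offset

print(f"inter-lab bias estimate: {bias:.3f}")
print(f"per-measurement error variance: {error_var:.3f}")
```

These two numbers are exactly the mean and (half the) variance of the split differences that a Gaussian prior needs.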
One of the benefits of even thinking about these issues is that they help you identify ways to reduce or even eliminate some of these components of error (without ever having to quantify them), thereby getting closer to Cressie & Wikle's ideal of "reducing the uncertainty as much as the science allows."
For an extended worked example (in soil sampling), see
Van Ee, Blume, and Starks, A Rationale for the Assessment of Errors in the Sampling of Soils. US EPA, May 1990: EPA/600/4-90/013.
Consult the "instrument maker's specifications," as indicated in the quotation. This is usually a crude fall-back to be used when no other information is available, because | Selecting priors based on measurement error
Two standard methods are
Consult the "instrument maker's specifications," as indicated in the quotation. This is usually a crude fall-back to be used when no other information is available, because (a) what the instrument maker really means by "accuracy" and "precision" is often indeterminate and (b) how the instrument responded when new in a test lab was likely much better than it performs when used in the field.
Collect replicate samples. In environmental sampling there are about a half dozen levels at which samples are routinely replicated (and many more at which they could be replicated), with each level used to control for an assignable source of variation. Such sources may include:
Identity of the person taking the sample.
Preliminary procedures, such as bailing wells, taken before obtaining a sample.
Variability in the physical sampling process.
Heterogeneity within the sample volume itself.
Changes that might occur when preserving and shipping a sample to a laboratory.
Variations in preliminary laboratory procedures, such as homogenizing a physical sample or digesting it for analysis.
The identify of the laboratory analyst(s).
Differences between laboratories.
Differences between physically distinct instruments, such as two gas chromatographs.
Drift in instrument calibration over time.
Diurnal variation. (This may be natural and systematic but can appear random when sampling times are arbitrary.)
A full quantitative assessment of components of variability can only be obtained by systematically varying each of these factors according to a suitable experimental design.
Usually only the sources believed to contribute the most variability are studied. For instance, many studies will systematically split a certain portion of the samples once they have been obtained and ship them to two different laboratories. A study of the differences among the results of those splits can quantify their contribution to the measurement variability. If enough such splits are obtained, the full distribution of measurement variability can be estimated as a prior in a hierarchical Bayesian spatio-temporal model. Because many models assuming Gaussian distributions (for each of calculation), obtaining a Gaussian prior eventually comes down to estimating the mean and variance of the differences between the splits. In more complicated studies, which aim to identify more than one component of variance, this is performed with the Analysis of Variance (ANOVA) apparatus.
One of the benefits of even thinking about these issues is that they help you identify ways to reduce or even eliminate some of these components of error (without ever having to quantify them), thereby getting closer to Cressie & Wikle's ideal of "reducing the uncertainty as much as the science allows."
For an extended worked example (in soil sampling), see
Van Ee, Blume, and Starks, A Rationale for the Assessment of Errors in the Sampling of Soils. US EPA, May 1990: EPA/600/4-90/013. | Selecting priors based on measurement error
Two standard methods are
Consult the "instrument maker's specifications," as indicated in the quotation. This is usually a crude fall-back to be used when no other information is available, because |
32,502 | full batch vs online learning vs mini batch

Isn't online-learning a special case of mini-batch where each iteration contains only a single training case?
This is true, but somewhat irrelevant (since the question is specifically comparing full batch to batch size 1 to batch size 100).
(b) will be absolutely unaffected by the change (modulo memory usage and cache efficiency issues), since each step costs the same as it would have before and is identical. (Well, depending on the formulation, a regularization constant might be effectively halved as well.)
(c) is affected because when we choose a batch of size 100, we might choose some points twice, overweighting those points and removing the other useful information that may have been in their place. Thus we have a slightly worse estimate of the training data distribution, and so will probably be slightly less effective in learning the model.
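The with-replacement effect in (c) is easy to check numerically. This sketch (with illustrative sizes) draws a batch of 100 indices from 1,000 points and compares the number of distinct points to its expectation $N\,(1-(1-1/N)^B)$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, B = 1000, 100          # dataset size and batch size (illustrative)

# Sampling a minibatch *with* replacement can pick some points twice.
batch = rng.integers(0, N, size=B)
n_unique = len(np.unique(batch))

# Expected number of distinct points in the batch: N * (1 - (1 - 1/N)^B)
expected_unique = N * (1 - (1 - 1 / N) ** B)
print(n_unique, round(expected_unique, 1))
```

With these sizes roughly 5% of the batch is expected to be duplicated information, which is the (small) loss of effective sample the answer describes.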
32,503 | When is it appropriate to use PCA as a preprocessing step?

Using PCA for feature selection (removing non-predictive features) is an extremely expensive way to do it; PCA algorithms are often O(n^3). A much better and more efficient approach is to use a measure of inter-dependence between the feature and the class. Mutual Information tends to perform very well for this; furthermore, it is the only measure of dependence that (a) fully generalizes and (b) actually has a good philosophical foundation, based on the Kullback-Leibler divergence.
For example, we compute (using a maximum-likelihood probability approximation with some smoothing)
MI-above-expected = MI(F, C) - E_{X, N}[MI(X, C)]
where the second term is the 'expected mutual information given N examples'. We then take the top M features after sorting by MI-above-expected.
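As a sketch of the idea for discrete features (simulated data; the expected-MI term is approximated here by a label-permutation baseline rather than computed analytically):

```python
import numpy as np

def mutual_info(x, y):
    """Plug-in (maximum-likelihood) MI estimate for two discrete arrays, in nats."""
    joint = np.histogram2d(x, y, bins=(len(np.unique(x)), len(np.unique(y))))[0]
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)                      # class labels C
informative = (y + (rng.random(n) < 0.2)) % 2  # noisy copy of the class
noise = rng.integers(0, 2, n)                  # unrelated feature

# Permutation baseline approximating E[MI(X, C)] under independence with n samples
baseline = np.mean([mutual_info(rng.permutation(informative), y) for _ in range(20)])

score_inf = mutual_info(informative, y) - baseline
score_noise = mutual_info(noise, y) - baseline
print(score_inf > score_noise)   # the informative feature should rank higher
```

Sorting features by this corrected score and keeping the top M is the selection rule described above.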
The reason one would want to use PCA is if one expects that many of the features are in fact dependent. This would be particularly handy for Naive Bayes, where independence is assumed. Now, the datasets I've worked with have always been far too large for PCA, so we have to use more sophisticated methods. But if your dataset is small and you don't have the time to investigate more sophisticated methods, then by all means go ahead and apply an out-of-the-box PCA.
32,504 | What is the two-sample CDF of $D^{+}$ and $D^{-}$ from the one-sided Kolmogorov-Smirnov test?

Ok, I am going to have a stab at this. Critical insights welcome.
On page 192 Gibbons and Chakraborti (1992), citing Hodges, 1958, start with a small-sample (exact?) CDF for the two-sided test (I am swapping their $m,n$ and $d$ notation for $n_{1},n_{2}$ and $x$, respectively):
$$\text{P}{\left(D_{n_{1},n_{2}}\ge x\right)} = 1 - \text{P}\left(D_{n_{1},n_{2}} \leq x\right)=1-\frac{A\left(n_{1},n_{2}\right)}{\binom{n_{1}+n_{2}}{n_{1}}}$$
where $A\left(n_{1},n_{2}\right)$ is produced through an enumeration of paths (increasing monotonically in $n_{1}$ and $n_{2}$) from the origin to the point $\left(n_{1},n_{2}\right)$ through a graph whose x-axis and y-axis values are $n_{1}F_{1}\left(x\right)$ and $n_{2}F_{2}\left(x\right)$ (I write $F_{1}(x)$ where they write $S_{m}(x)$). The paths must furthermore obey the constraint of staying inside the boundaries (where $x$ is the value of the Kolmogorov-Smirnov test statistic):
$$\frac{n_{2}}{n_{1}} \pm \frac{\left(n_{1}+n_{2}\right)x}{\binom{n_{1}+n_{2}}{n_{1}}}$$
Their Figure 3.2 provides an example for $A(3,4)$, with 12 such paths.
Gibbons and Chakraborti go on to say that the one-sided $p$-value is obtained using this same graphical method, but with only the lower bound for $D^{+}_{n_{1},n_{2}}$, and only the upper bound for $D^{-}_{n_{1},n_{2}}$.
These small sample approaches entail path enumeration algorithms and/or recurrence relations, which undoubtedly make asymptotic calculations desirable. Gibbons and Chakraborti also note the limiting CDFs as $n_{1}$ and $n_{2}$ approach infinity, of $D_{n_{1},n_{2}}$:
$$\lim_{n_{1},n_{2}\to \infty}\text{P}\left(\sqrt{\frac{n_{1}n_{2}}{n_{1}+n_{2}}}D_{n_{1},n_{2}} \le x\right) = 1 - 2\sum_{i=1}^{\infty}{\left(-1\right)^{i-1}e^{-2i^{2}x^{2}}}$$
And they give the limiting CDF of $D^{+}_{n_{1},n_{2}}$ (or $D^{-}_{n_{1},n_{2}}$) as:
$$\lim_{n_{1},n_{2}\to \infty}\text{P}\left(\sqrt{\frac{n_{1}n_{2}}{n_{1}+n_{2}}}D^{+}_{n_{1},n_{2}} \le x\right) = 1 - e^{-2x^{2}}$$
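As a sketch, this limiting CDF yields a closed-form approximate one-sided $p$-value, $P\left(D^{+}_{n_{1},n_{2}} \ge d\right) \approx e^{-2x^{2}}$ with $x = \sqrt{n_{1}n_{2}/(n_{1}+n_{2})}\,d$:

```python
import numpy as np

def ks_one_sided_pvalue(d_plus, n1, n2):
    """Asymptotic p-value P(D+ >= d_plus) from the limiting CDF 1 - exp(-2 x^2),
    with x = sqrt(n1*n2/(n1+n2)) * d_plus."""
    x = np.sqrt(n1 * n2 / (n1 + n2)) * d_plus
    return float(np.exp(-2.0 * x ** 2))

# Example: observed one-sided statistic D+ = 0.25 with n1 = n2 = 50
p = ks_one_sided_pvalue(0.25, 50, 50)
print(round(p, 4))   # 0.0439
```

For small samples the exact path-counting distribution above should be used instead; this asymptotic form is only justified as $n_{1}, n_{2} \to \infty$.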
Because $D^{+}$ and $D^{-}$ are strictly non-negative, the CDF can only take non-zero values over $[0,\infty)$:
**References**
Gibbons, J. D. and Chakraborti, S. (1992). *Nonparametric Statistical Inference*. Marcel Dekker, Inc., 3rd, revised and expanded edition.
Hodges, J. L. (1958). The significance probability of the Smirnov two-sample test. Arkiv för matematik, 3(5):469–486.
32,505 | Multiple regression in directional / circular statistics?

In the book that I have, it says that only recently have some papers begun to explore multivariate regression where one or more variables are circular. I have not checked them myself, but relevant sources seem to be:
Bhattacharya, S. and SenGupta, A. (2009). Bayesian analysis of semiparametric linear-circular models. Journal of Agricultural, Biological and Environmental Statistics, 14, 33-65.
Lund, U. (1999). Least circular distance regression for directional data. Journal of Applied Statistics, 26, 723-733.
Lund, U. (2002). Tree-based regression for a circular response. Communications in Statistics - Theory and Methods, 31, 1549-1560.
Qin, X., Zhang, J.-S., and Yan, X.-D. (2011). A nonparametric circular-linear multivariate regression model with a rule of thumb bandwidth selector. Computers and Mathematics with Applications, 62, 3048-3055.
If, for a circular response, you have only a single circular regressor (which I understand is not the case for you, but perhaps separate regressions would be of interest as well), there is a way to estimate the model. [1] recommend fitting the general linear model
$$\cos(\Theta_j) = \gamma_0^c + \sum_{k=1}^m\left(\gamma_{ck}^c\cos(k\psi_j)+\gamma_{sk}^c\sin(k\psi_j)\right)+\varepsilon_{1j},$$
$$\sin(\Theta_j) = \gamma_0^s + \sum_{k=1}^m\left(\gamma_{ck}^s\cos(k\psi_j)+\gamma_{sk}^s\sin(k\psi_j)\right)+\varepsilon_{2j}.$$
The good thing is that this model can be estimated using the function lm.circular from the R library circular.
[1] Jammalamadaka, S. R. and SenGupta, A. (2001). Topics in Circular Statistics. World Scientific, Singapore.
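The cosine/sine regressions above are just two ordinary least-squares fits on the harmonics of $\psi$, so they can be sketched in plain numpy (simulated data here; this is an illustration, not a substitute for lm.circular):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 2                              # sample size, number of harmonics
psi = rng.uniform(0, 2 * np.pi, n)         # circular regressor
theta = (psi + 0.3 * np.sin(psi) + rng.normal(0, 0.1, n)) % (2 * np.pi)

# Design matrix: intercept plus cos(k*psi), sin(k*psi) for k = 1..m
X = np.column_stack([np.ones(n)] +
                    [f(k * psi) for k in range(1, m + 1) for f in (np.cos, np.sin)])

# Two ordinary least-squares fits: one for cos(theta), one for sin(theta)
gamma_c, *_ = np.linalg.lstsq(X, np.cos(theta), rcond=None)
gamma_s, *_ = np.linalg.lstsq(X, np.sin(theta), rcond=None)

# Fitted direction: atan2 of the two fitted components
theta_hat = np.arctan2(X @ gamma_s, X @ gamma_c) % (2 * np.pi)

# Mean circular (cosine) error of the fit
err = np.mean(1 - np.cos(theta - theta_hat))
print(err < 0.05)
```

Recovering the fitted angle with atan2 is what makes the pair of linear fits a circular prediction.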
32,506 | Multiple regression in directional / circular statistics?

You can take a look at these articles that deal with multiple regression when the dependent variable is circular or spherical. The approach is based on the projected normal distribution.
Hernandez-Stumpfhauser, Daniel, F. Jay Breidt, and Mark J. van der Woerd. "The general projected normal distribution of arbitrary dimension: modeling and Bayesian inference." Bayesian Analysis 12.1 (2017): 113-133.
Wang, Fangpo, and Alan E. Gelfand. "Directional data analysis under the general projected normal distribution." Statistical Methodology 10.1 (2013): 113-127.
Nuñez-Antonio, Gabriel, Eduardo Gutiérrez-Peña, and Gabriel Escarela. "A Bayesian regression model for circular data based on the projected normal distribution." Statistical Modelling 11.3 (2011): 185-201.
Presnell, Brett, Scott P. Morrison, and Ramon C. Littell. "Projected multivariate linear models for directional data." Journal of the American Statistical Association 93.443 (1998): 1068-1077.
This last one was the first to use this projected normal approach.
32,507 | Variational inference engines

Have you looked at Edward? The Inference API supports, among other things, variational inference:
Black box variational inference
Stochastic variational inference
Variational auto-encoders
Inclusive KL divergence: KL(p∥q)
32,508 | Variational inference engines

Some years have passed. Stan now implements ADVI (gradients of the ELBO with the reparameterization trick) via the vb command (which I guess stands for Variational Bayes). E.g., in R:
library("rstan") # load the rstan library
fit = vb(model, data, output_samples = 20000, adapt_iter = 10000 ,init = list('param1' = param1, ...), seed)
Pyro is a python library that implements BBVI (gradients of the ELBO with log-derivative trick + variance reduction techniques). Main code is:
import pyro
auto_guide = pyro.infer.autoguide.AutoNormal(model)
adam = pyro.optim.Adam({"lr": 0.02})
elbo = pyro.infer.Trace_ELBO()
svi = pyro.infer.SVI(model, auto_guide, adam, elbo)
for step in range(1000):
    loss = svi.step(data)
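Under the hood, both engines optimize a stochastic estimate of the ELBO. As a toy sketch of the reparameterization trick in plain numpy (a conjugate normal model, so the exact posterior is available as a check):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model with a known Gaussian posterior:
#   mu ~ N(0, 10^2),  y_i ~ N(mu, 1),  n = 50
y = rng.normal(2.0, 1.0, size=50)
prior_prec, lik_prec, n = 1 / 100.0, 1.0, len(y)
post_prec = prior_prec + n * lik_prec
post_mean = lik_prec * y.sum() / post_prec

# Variational family q(mu) = N(m, s^2) with s = exp(log_s); the
# reparameterization trick writes mu = m + s * eps, eps ~ N(0, 1),
# so the ELBO gradient flows through the sampled mu.
m, log_s, lr = 0.0, 0.0, 0.001
for _ in range(5000):
    eps = rng.normal()
    s = np.exp(log_s)
    mu = m + s * eps
    dlogp = -prior_prec * mu + lik_prec * (y - mu).sum()  # d log p(y, mu) / d mu
    m += lr * dlogp
    log_s += lr * (dlogp * s * eps + 1.0)  # +1 from the entropy of q

print(abs(m - post_mean) < 0.1, abs(np.exp(log_s) - post_prec ** -0.5) < 0.05)
```

The fitted (m, s) should land near the exact posterior mean and standard deviation; ADVI and BBVI do the same thing with autodiff over much larger models.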
library("rstan") | Variational inference engines
Some years have passed. STAN now implements ADVI (gradients of the ELBO with reparameterization trick), using the vb command (which I guess stands for Variational Bayes). E.g., in R:
library("rstan") # load the rstan library
fit = vb(model, data, output_samples = 20000, adapt_iter = 10000 ,init = list('param1' = param1, ...), seed)
Pyro is a python library that implements BBVI (gradients of the ELBO with log-derivative trick + variance reduction techniques). Main code is:
import pyro
auto_guide = pyro.infer.autoguide.AutoNormal(model)
adam = pyro.optim.Adam({"lr": 0.02})
elbo = pyro.infer.Trace_ELBO()
svi = pyro.infer.SVI(model, auto_guide, adam, elbo)
for step in range(1000):
loss = svi.step(data) | Variational inference engines
Some years have passed. STAN now implements ADVI (gradients of the ELBO with reparameterization trick), using the vb command (which I guess stands for Variational Bayes). E.g., in R:
library("rstan") |
32,509 | How to get the p-value for the full model from R's coxph?

1) Put summary(coxphobject) into a variable:
summcph <- summary(coxphobject)
2) examine it with str()
str(summcph)
Values! Values everywhere!
so we find, (proceeding line by line in your above output):
a) the Concordance values
summcph$concordance
b) the Rsquare values
summcph$rsq
c) The Likelihood ratio test values
summcph$logtest
d) The Wald test values
summcph$waldtest
e) The score test values
summcph$sctest
f) The robust values
summcph$robscore
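Each of these test components reports a chi-squared statistic, its degrees of freedom, and the p-value; given the first two, the p-value can be recomputed directly. A sketch in Python (the statistic and df below are illustrative):

```python
from scipy.stats import chi2

# Illustrative values: a likelihood-ratio statistic of 10.5 on 3 df,
# as might appear in summcph$logtest ("test", "df", "pvalue").
stat, df = 10.5, 3
p_value = chi2.sf(stat, df)   # survival function = 1 - CDF
print(round(p_value, 4))      # 0.0148
```

This is the same chi-squared tail computation R performs for the likelihood ratio, Wald, and score tests.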
It really helps if you post a reproducible example, rather than make us go find your data set in order to check we're doing all the options the same. For example, you didn't mention you had a cluster term. (It would take an extra few moments for you, and would have saved me ten minutes while I tried to figure out why I couldn't get the last couple of values. At the least you could have mentioned which example you ran in the help!)
32,510 | Assumptions of the Ordered Probit model

There are some distributional assumptions about the error, but these cannot be tested in a formal way (as far as I know).
There is also a parallel regression assumption, which is frequently violated. Long and Freese's categorical dependent variables book describes an approximate LR test and a Wald (aka Brant) test (and provides Stata code).
There is also a parallel regression assumption, which is frequently violated. | Assumptions of the Ordered Probit model
There are some distributional assumptions about the error, but these cannot be tested in a formal way (as far as I know).
There is also a parallel regression assumption, which is frequently violated. Long and Freese's categorical dependent variables book describes an approximate LR and a Wald (aka Brant) test (and provides Stata code). | Assumptions of the Ordered Probit model
There are some distributional assumptions about the error, but these cannot be tested in a formal way (as far as I know).
There is also a parallel regression assumption, which is frequently violated. |
32,511 | Assumptions of the Ordered Probit model

The error term needs to be normally distributed to justify ML estimation of the model. This assumption can be tested using the LM test developed by Johnson (1996), "A Test of the Normality Assumption in the Ordered Probit Model," Metron, LIV, 213-221. The issue is discussed further by Giles at https://davegiles.blogspot.com/2015/06/specification-testing-in-ordered-probit.html.
32,512 | What are important pure mathematics courses for a prospective statistics PhD student? | In my opinion, some options to investigate at the graduate level could be: functional analysis (a natural framework for statistical formulations), stochastic processes, stochastic control (sequential analysis is optimal stopping), and various flavors of PDE (many probabilistic problems are formulated as parabolic and nonlinear PDEs). Pretty much all of these require real analysis at an undergraduate level. If you're interested in theoretical work, then taking measure theory is also pretty important as a prerequisite for the full treatment of these topics. Complex analysis will have some use, but less than the above; there are connections to probability (e.g. harmonic functions), but it could very well not be worth it.
Commutative algebra and algebraic geometry will not be very useful (one connection I can think of is algebraic statistics, which isn't widely taught). These topics will also be very challenging without a solid background in mathematics.
32,513 | What are important pure mathematics courses for a prospective statistics PhD student? | If you want to understand measure theory you have no choice but to take real analysis and advanced analysis (i.e. point-set topology). Abstract algebra is definitely more grade-friendly than analysis; however, I think it is far less useful.
32,514 | What are important pure mathematics courses for a prospective statistics PhD student? | Get real analysis, but not in the way I see people do it. When we interview math undergrads they don't seem to master the tools of real analysis; simple things like taking integrals are out of reach for most of them. I still don't understand why. So, my advice: pay attention to applications first and foremost.
Also take ODE and PDE courses, as well as functional analysis and differential geometry. Linear algebra and tensors, of course, too. All with a focus on applications.
32,515 | What are important pure mathematics courses for a prospective statistics PhD student? | With regards to commutative algebra and algebraic geometry, the subjects which are least addressed in the other answers, my impression is that as long as you avoid algebraic statistics, you can get by entirely without them. Avoiding algebraic statistics may be more and more difficult in the future though, since it has a lot of applications and intersections with machine/statistical learning, which is very prominent in present-day research, as well as applications in other areas. Commutative algebra and algebraic geometry are the subjects you want to learn the most specifically for algebraic statistics, see for example the answers to this question: Algebraic Geometry for Statistics
In contrast, all subfields of statistics use analysis. (Not so much complex analysis though, although that may be useful for understanding characteristic functions, a point which seems not to have been raised yet.) I think undergraduate level measure theory would probably be sufficient, since I have met professional statisticians (e.g. professors at top departments) who look down on measure theory, but if you really want to understand measure theory, a graduate level course in real analysis is a great help. Undergraduate measure theory tends to focus exclusively on Lebesgue measure on the real line, which has a lot of nice properties which general measures may not necessarily have, and moreover is an infinite measure. In contrast, a graduate level real analysis course will tend to have more emphasis on abstract measures, which make probability measures in general easier to understand, and also make the relationship clearer between continuous and discrete probability measures -- in other words, you will be able to see both subjects come together within one framework in your mind for the first time. Likewise, one might prove the Kolmogorov extension theorem in such a course. And an understanding of abstract measures is really indispensable for a rigorous understanding of stochastic processes in continuous time. It is even useful for understanding stochastic processes in discrete time, although less important than in the continuous case.
32,516 | Do M-estimators and L-estimators overlap? | There are certainly M-estimators that are L-estimators.
An example: the sample median is both an M-estimator and an L-estimator. It's maximum likelihood for the location parameter of a Laplace (double exponential).
The sample midrange is arguably both an M-estimator and an L-estimator, too. I believe it's the maximum likelihood estimator for the center ($\theta$) of a uniform on $(\theta-\phi/2,\theta+\phi/2)$.
"the mean seems to me that it can be derived as an M-estimator, or as an L-estimator"
Yes, that's another example.
"Put another way, if we were to draw a Venn Diagram of all M and L estimators, how much would they overlap with each other, if at all?"
I don't know that this is an especially useful way to look at it; my guess is that the number of estimators in the intersection could well be infinite, but may well be a vanishingly-small fraction of the union. But I think it would be very difficult to actually do this calculation, and I believe most of the estimators contained in it would not be interesting for any real-world application.
What might be of more relevance is the proportion of estimators in wide use (amongst those that are M-, or perhaps L-), that are both. But this then gets into issues of personal opinion about definitions and difficulties of estimation (what's 'wide'? how would we measure this?)
I'd think there's probably quite a few that are both, but as a proportion of say all M-estimators that get used/discussed, probably not that large a fraction.
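The median example above can be checked numerically: its L-estimator form (the middle order statistic) and its M-estimator form (the minimizer of the L1 objective) coincide. A minimal sketch, with made-up data values:

```python
# Hypothetical data, purely for illustration.
data = [2.1, -0.5, 3.7, 0.9, 1.4]

# L-estimator form: a linear combination of order statistics
# (all weight on the middle value, for odd n).
l_median = sorted(data)[len(data) // 2]

# M-estimator form: argmin over theta of sum |x_i - theta|,
# found here by brute-force grid search.
def l1_loss(theta):
    return sum(abs(x - theta) for x in data)

m_median = min((i / 1000 for i in range(-1000, 5001)), key=l1_loss)

print(l_median, m_median)  # both 1.4
```

Because the L1 loss is piecewise linear with its kink at the sample median, the grid search recovers exactly the same value as the order-statistic definition.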
32,517 | Do M-estimators and L-estimators overlap? | Here is a Venn Diagram showing the overlap between M- and L-estimators.
32,518 | Coupling time series information from sources with multiple spatial resolutions/scales | Spatial Domain:
It seems more like an image processing problem to me. Clustering methods might help, but which metric (distance, variance, discontiguity...) and which algorithm (k-means, mean-shift, EM...) best fits your case is determined by your image topology and the features you are going to use. You may implement the image binning on the medium and fine rasters. Then try different clustering techniques to see which one gives you the best overall segmentation accuracy compared with your original medium/fine rasters. Some pre-processing strategies to find the scale-space hierarchy might help. One hierarchical segmentation algorithm is shown in Chapter 3 of this report, in which you
(1) Build a scale space;
(2) Find the extrema and saddles at every scale level;
(3) Link each critical point at a certain scale level to its corresponding location at the next scale level, and find the critical paths;
(4) Scale space hierarchy determination based on the iso-intensity surface searching.
For clustering methods that require random initialization, such as k-means, you can use the found hierarchy as the initial clusters and centroids for further clustering. Besides, depending on the characteristics of your image, you may also want to add more features (such as texture changes, color-space information other than RGB, etc.) in the clustering algorithms.
Temporal Domain
Now you have images with different time scales but the same resolution (hopefully). If your prediction job is to estimate the movement of some continent, storm, or precipitation, you may try motion estimation with a Kalman filter. The motion for each pixel can be weighted inside the corresponding region (cluster) based on its metric compared with the centroid of the region. You can use a neural network for short-term time-sequence forecasting (Chapter 3 in this thesis). And since the Kalman filter is simply a method for implementing Bayes' rule, maximum likelihood can be applied for state estimation. State-estimation procedures can be implemented recursively. The posterior from the previous time step is run through the dynamics model and becomes the new prior for the current time step. Then this prior can be converted into a new posterior by using the current observation. As a result, iterative parameter re-estimation procedures such as EM can be used to learn the parameters in the Kalman filter. Chapter 6 of the same thesis, and the study on Kalman smoothing, both include more details on parameter learning with EM.
32,519 | Coupling time series information from sources with multiple spatial resolutions/scales | You should look into the literature for super-resolution. This area typically solves the problem of taking in multiple coarse resolution images to create one high resolution image by borrowing strength across multiple images effectively.
I've listed some relevant literature that should be a good starting point.
My favorite approach here uses nonlocal means. This involves splitting all the images up into patches of $5\times 5$ or $7\times 7$ pixels, creating better estimates of pixels in the finer-resolution image using a weighted combination of pixels in the coarser images.
References
Elad, Michael, and Arie Feuer. "Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images." Image Processing, IEEE Transactions on 6.12 (1997): 1646-1658.
Park, Sung Cheol, Min Kyu Park, and Moon Gi Kang. "Super-resolution image reconstruction: a technical overview." Signal Processing Magazine, IEEE 20.3 (2003): 21-36.
Protter, Matan, et al. "Generalizing the nonlocal-means to super-resolution reconstruction." Image Processing, IEEE Transactions on 18.1 (2009): 36-51.
32,520 | Expected value of minimum order statistic from a normal sample | Your results do not appear correct. This is easy to see, without any calculation, because in your table, your $E[X_{(1)}]$ increases with sample size $n$; plainly, the expected value of the sample minimum must get smaller (i.e. become more negative) as the sample size $n$ gets larger.
The problem is conceptually quite easy.
In brief: if $X \sim N(0,1)$ with pdf $f(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}$:
... then the pdf of the 1st order statistic (in a sample of size $n$) is $g(x) = n \, (1 - \Phi(x))^{n-1} f(x)$, where $\Phi$ denotes the standard Normal cdf:
... obtained here using the OrderStat function in mathStatica, with domain of support $(-\infty, \infty)$.
The exact $n = 3$ case is approximately $-0.846284$, which is obviously different to your workings of -1.06 (line 1 of your Table), so it seems clear something is wrong with your workings (or perhaps my understanding of what you are seeking).
For $n \ge 4$, obtaining closed-form solutions is more tricky, but even if symbolic integration proves difficult, we can always use numerical integration (to arbitrary precision if desired). This is really very easy ... here, for instance, is $E[X_{(1)}]$, for sample size $n = 1$ to 14, using Mathematica:
sol = Table[NIntegrate[x g, {x, -Infinity, Infinity}], {n, 1, 14}]
{0., -0.56419, -0.846284, -1.02938, -1.16296, -1.26721, -1.35218, -1.4236, -1.48501,
-1.53875, -1.58644, -1.62923, -1.66799, -1.70338}
All done. These values are obviously very different to those in your table (right hand column).
To consider the more general case of a $N(\mu, \sigma^2)$ parent, proceed exactly as above, starting with the general Normal pdf.
32,521 | Demonstration of sample quantile bias | Bias in estimating $p$-quantiles is investigated in a distribution-free way in
http://www.sciencedirect.com/science/article/pii/S016771520000242X
(a pdf can be found on the same page). The authors focus on the quantile estimator based on ECDF inversion. No assumptions on the underlying distribution are made (except a finite second moment), so discrete distributions are also included.
Some highlights:
Bias is proportional to the standard deviation $\sigma$ of the underlying distribution.
Bias is smaller in central quantiles than in extreme ones. This stems from the fact that among all distributions with standard deviation $\sigma < \infty$, bias oscillates in an interval of length $\frac{\sigma}{\sqrt{p (1-p)}}$. Strikingly, this does not depend on the sample size $n$.
For $np>3$, among all standardized distributions (mean 0, standard deviation 1), the worst bias is associated with the distribution having an atom of probability $p$ at $-\sqrt{(1-p)/p}$ and an atom of probability $1-p$ at $\sqrt{p/(1-p)}$.
32,522 | Demonstration of sample quantile bias | Just to add to this old post: the ECDF is only unbiased at large sample sizes. At low values of N it is biased. Take the trivial case of N=1, where the ECDF takes the value 1 at and above the single sample value. Ask yourself what value of the underlying distribution gives a probability of 1?
The bias actually exceeds sqrt(2*pi)/(2N) * SD, i.e. about 1.25/N * SD, so for an N of 5 this is a 0.25 SD bias.
Instead of an ECDF based on k/N, try (k-0.5)/N to get an unbiased ECDF.
That might give you unbiased sample quantiles.
It also ensures that ECDF(x)=1-ECDF(-x) which is enjoyed by all other cumulative distributions.
In my very humble opinion the ECDF as defined and used is a huge misnomer.
It biases Kolmogorov Smirnov, Lilliefors and other standard tests at low N.
Check out Gilchrist "Statistical modelling with Quantile Functions"
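The low-$N$ bias is easy to see by simulation. Below is a minimal sketch of my own (not from the answer), resting only on the standard order-statistic fact that $E[F(X_{(k)})] = k/(N+1)$ for continuous $F$, so the naive ECDF height $k/N$ systematically overshoots:

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 5, 3                      # sample size and order-statistic index
reps = 200_000

# For any continuous F, F(X_(k)) is distributed like the k-th order
# statistic of a Uniform(0,1) sample, whose mean is k/(N+1), not k/N.
u = np.sort(rng.uniform(size=(reps, N)), axis=1)
mean_F_at_kth = u[:, k - 1].mean()
print(round(mean_F_at_kth, 2))   # close to k/(N+1) = 0.5; k/N would say 0.6
```

A plotting position such as $(k-0.5)/N$ or $k/(N+1)$ tracks this expectation far better than $k/N$.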
32,523 | Demonstration of sample quantile bias | There exists a unique true sample quantile definition (which is not the one usually presented). See: http://dx.doi.org/10.1155/2014/326579
32,524 | Confidence interval on R-squared | You can probably be served by the CI.Rsq function in the psychometric R package. The equation is provided in the help. An alternative would be bootstrapping the CI.
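The bootstrap route the answer mentions is straightforward. A sketch in Python with simulated (hypothetical) data and a plain percentile bootstrap, not tied to any particular package:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: one predictor, true R^2 of 0.5.
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=2.0, size=n)

def r_squared(x, y):
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Percentile bootstrap: resample (x, y) pairs and recompute R^2.
boots = np.array([
    r_squared(x[idx], y[idx])
    for idx in (rng.integers(0, n, size=n) for _ in range(2000))
])
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"R^2 = {r_squared(x, y):.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```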
32,525 | Quadratic programming when the matrix is not positive definite | There are optimization routines specifically for local or global optimization of Quadratic Programming problems, whether or not the objective function is convex.
BARON is a general purpose global optimizer which can handle and take advantage of quadratic programming problems, convex or not.
CPLEX has a quadratic programming solver which can be invoked with solutiontarget = 2 to find a local optimum or = 3 to find a global optimum. In MATLAB, that can be invoked with cplexqp.
General purpose local optimizers which can handle linear constraints can also be used to find a local optimum. An example in R is https://cran.r-project.org/web/packages/trust/trust.pdf . Optimizers for R are listed at https://cran.r-project.org/web/views/Optimization.html .
In MATLAB, the function quadprog in the Optimization Toolbox can be used to find a local optimum.
In Julia, there are a variety of optimizers available.
"Any" gradient descent algorithm might not land you anywhere useful, let alone handle the constraints. Use a package developed by someone who knows what they are doing.
The example problem provided is easily solved to provable global optimality. Perhaps after more than 2 years an answer is no longer needed, or maybe, being an example, it never was; but in any event, the global optimum is at x = 0.321429, y = 0.535714.
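As a concrete illustration of the local-optimizer route for a non-convex QP (my own sketch, not tied to any solver named above; scipy's SLSQP stands in as the local method, with multistart to guard against bad local optima):

```python
import numpy as np
from scipy.optimize import minimize

# Indefinite (hence non-convex) QP: minimize 0.5 x^T Q x
# with Q = diag(-2, 2), subject to 0 <= x <= 1 and x0 + x1 <= 1.
Q = np.diag([-2.0, 2.0])

def f(x):
    return 0.5 * x @ Q @ x

def grad(x):
    return Q @ x

cons = [{"type": "ineq", "fun": lambda x: 1.0 - x[0] - x[1]}]  # x0 + x1 <= 1
bounds = [(0.0, 1.0), (0.0, 1.0)]

# Local method (SLSQP) from several starts; keep the best local optimum found.
starts = [np.array(s) for s in [(0.9, 0.1), (0.1, 0.9), (0.5, 0.5)]]
best = min(
    (minimize(f, s, jac=grad, bounds=bounds, constraints=cons, method="SLSQP")
     for s in starts),
    key=lambda r: r.fun,
)
print(best.x, best.fun)   # global optimum here is x = (1, 0) with f = -1
```

Multistart is a heuristic; unlike a global solver such as BARON, it carries no optimality guarantee for non-convex problems.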
32,526 | Quadratic programming when the matrix is not positive definite | You can build a workaround by using nearPD from the Matrix package like so: nearPD(D)$mat.
nearPD computes the nearest positive definite matrix.
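For readers outside R, the same idea can be sketched in Python by projecting onto the positive semidefinite cone via eigenvalue clipping (a simple stand-in for what Matrix::nearPD does, not its exact algorithm; the example matrix is hypothetical):

```python
import numpy as np

def nearest_psd(a, eps=1e-8):
    """Symmetrize, then clip negative eigenvalues up to eps."""
    sym = (a + a.T) / 2
    w, v = np.linalg.eigh(sym)
    return v @ np.diag(np.clip(w, eps, None)) @ v.T

D = np.array([[ 1.0, 0.9, -0.3],
              [ 0.9, 1.0,  0.9],
              [-0.3, 0.9,  1.0]])      # indefinite: det(D) < 0
D_psd = nearest_psd(D)
print(np.linalg.eigvalsh(D_psd).min() >= 0)   # True
```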
32,527 | Is there a conventional meaning of $\bumpeq$ symbol in statistics? | As far as I can tell, there's no special intent beyond "approximately equal to", i.e. its meaning is not distinct from $\approx$ and has no special statistical connotation I'm aware of. (Though as I said in comments, you could always double check with the authors)
The symbol was once fairly commonly used for that purpose, though I haven't seen it much in more recent publications.
Edit: corey979 points out a recent example via a comment: Jones, M. C. & Pewsey, A. (2009). Sinh-arcsinh distributions. Biometrika, 96(4), 761–780; see the middle equation in Section 4.3.
32,528 | Does adjusted R-square seek to estimate fixed score or random score population r-squared? | Raju et al (1997) note that
Pedhazur (1982) and Mitchell & Klimoski (1986) have argued that results are
relatively unaffected by the model [fixed-x or random-x] selected when Ns are at
least of moderate size (approximately 50).
Nonetheless, Raju et al (1997) classify some adjusted $R^2$ formulas for estimating $\rho^2$ as "Fixed X formulas" and "Random X formulas".
Fixed X formulas:
Several formulas are mentioned, including the formula proposed by Ezekiel (1930), which is standard in most statistical software:
$$\hat{\rho}_{(E)}^2 = 1 - \frac{N-1}{N-p-1}(1-R^2)$$
Thus, the short answer to the question is the standard adjusted $R^2$ formula typically reported and built into standard statistical software is an estimate of fixed-x $\rho^2$.
Random X formulas:
Olkin and Pratt (1958) proposed a formula
$$\hat{ \rho}^2 _{(OP)} = 1 - \left[ {\frac{{N - 3}}{{N - p - 1}}} \right](1 - {R^2})F\left[ {1,1;\frac{{N - p + 1}}{2};(1 - {R^2})} \right]$$
where F is the hypergeometric function.
Raju et al (1997) explain how various other formulas, such as Pratt's and Herzberg's "are approximations to the expected hypergeometric function". E.g., Pratt's formula is
$${\hat \rho}^2_{(P)} = 1 - \frac{{(N - 3)(1 - {R^2})}}{{N - p - 1}}\left[ {1 + \frac{{2(1 - {R^2})}}{{N - p - 2.3}}} \right]$$
How do estimates differ?
Leach and Henson (2003) present a nice table showing the effect of different formulas on a sample of published datasets in psychology (see their Table 3).
The mean Ezekiel $R^2_{adj}$ was .2864 compared to Olkin and Pratt $R^2_{adj}$ of .2917 and Pratt $R^2_{adj}$ of .2910. As per Raju et al's initial quotation about the distinction between fixed and random-x formulas being most relevant to small sample sizes, Leach and Henson's table shows how the difference between Ezekiel's fixed-x formula and Olkin and Pratt's random-x formula is most prominent in small sample sizes, particularly those less than 50.
References
Leach, L. F., & Henson, R. K. (2003). The use and impact of adjusted R2 effects in published regression research. Paper presented at the annual meeting of the Southwest Educational Research Association, San Antonio, TX.
Mitchell, T. W., & Klimoski, R. J. (1986). Estimating the validity of cross-validity estimation. Journal of Applied Psychology, 71, 311-317.
Pedhazur, E. J. (1982). Multiple Regression in Behavioral Research (2nd ed.) New York: Holt, Rinehart, and Winston.
Raju, N. S., Bilgic, R., Edwards, J. E., & Fleer, P. F. (1997). Methodology review: Estimation of population validity and cross-validity, and the use of equal weights in prediction. Applied Psychological Measurement, 21(4), 291-305.
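The two estimator families above are easy to compare numerically. A sketch coding the Ezekiel and Olkin–Pratt formulas exactly as written (scipy's hyp2f1 supplies the Gauss hypergeometric function); the inputs are illustrative:

```python
from scipy.special import hyp2f1

def ezekiel(r2, n, p):
    """'Fixed-x' adjusted R^2 (Ezekiel, 1930)."""
    return 1 - (n - 1) / (n - p - 1) * (1 - r2)

def olkin_pratt(r2, n, p):
    """'Random-x' estimator using the Gauss hypergeometric function F(1,1;c;z)."""
    return (1 - (n - 3) / (n - p - 1) * (1 - r2)
              * hyp2f1(1, 1, (n - p + 1) / 2, 1 - r2))

# The gap between the two is most visible at small N (cf. Leach & Henson, 2003):
for n in (20, 50, 200):
    print(n, round(ezekiel(0.30, n, 3), 4), round(olkin_pratt(0.30, n, 3), 4))
```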
32,529 | Generate covariance matrix with fixed values in certain cells | It should be possible to sample from a Wishart distribution conditional on some of the entries being fixed. It may not be possible out-of-the-box with any of the BUGS-like languages (e.g. JAGS or STAN), but you may be able to rely on the Wishart's distribution with the Gaussian as described on page 5 of this document.
Edited to add: It looks like the STAN manual addresses this issue directly on page 40 (section 8.2, "Partially Known Parameters"). PDF here. Their covariance matrix is small, but it should be possible to do the same thing with a bigger one. Stan's Hamiltonian Monte Carlo should be quite fast, so you can ignore the brute force approach in the next paragraph.
The following advice is probably not useful, but I'll keep it below for posterity:
Alternatively, since you say you just need the values to be similar to their fixed values, you could just keep resampling with wishrnd until you get something that's close enough. See rejection sampling and Approximate Bayesian Computation. The MCMC-type methods from my first paragraph could be overkill.
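wishrnd is MATLAB's Wishart sampler; the brute-force resampling idea can be sketched in numpy using the Wishart's Gaussian representation. The target cell, value, and tolerance below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

def rwishart(df, scale, rng):
    # Gaussian representation: sum of df outer products of N(0, scale) draws.
    L = np.linalg.cholesky(scale)
    X = L @ rng.normal(size=(scale.shape[0], df))
    return X @ X.T

# Rejection step: accept a draw only if entry [0, 1] lands near the target.
df, scale = 10, np.eye(3) / 10
target, tol = 0.5, 0.1
for _ in range(20_000):
    W = rwishart(df, scale, rng)
    if abs(W[0, 1] - target) < tol:
        break
print(W[0, 1], np.linalg.eigvalsh(W).min() > 0)
```

Every accepted draw is automatically a valid (positive definite) covariance matrix, which is the appeal of this approach over patching entries by hand.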
32,530 | Confusion related to data normalization | Normalizing the target in linear regression doesn't matter. In linear regression, your fit will be of the form
$$ \hat{y}_i = a_0 + a \cdot x_i. $$
When your predictors $x_i$ are centered, the constant term $a_0$ will always be the mean of the $y_i$. So if you center the $y_i$ before running a regression, you will just get $a_0 = 0$, but all your other coefficients will remain unchanged.
(That being said, normalizing the predictors---as you are currently doing---is a good idea.)
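This is quick to verify numerically. A sketch with simulated (hypothetical) data, centering the predictors and then comparing fits with and without centering $y$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
X = rng.normal(size=(n, 2))
y = 3.0 + X @ np.array([1.5, -2.0]) + rng.normal(size=n)

def ols(X, y):
    A = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(A, y, rcond=None)[0]   # [a0, slopes...]

Xc = X - X.mean(axis=0)                # centered predictors
b_raw = ols(Xc, y)                     # intercept equals mean(y)
b_cty = ols(Xc, y - y.mean())          # intercept becomes 0
print(b_raw[0] - y.mean(), b_cty[0])   # both essentially zero
print(b_raw[1:] - b_cty[1:])           # slopes identical
```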
32,531 | Confusion related to data normalization | I think theoretically it doesn't matter, but numerically it does matter. Take a look at this answer.
https://stats.stackexchange.com/a/111997/57240
32,532 | How could I visualise the importance of different inputs to the forecast for a black-box non-linear model? | One way that you can assess predictor influence on forecasts is to estimate the gradient of the output with respect to the predictors. This can be done by estimating the partial derivatives of the non-linear prediction function with respect to each of the predictors by finite differences.
Ideally you will do this on the actually observed test inputs. For example, you may average the absolute values of the estimated gradients at all the test inputs in the previous 2 days. The magnitude of this average gradient can be used to sort the predictors' importance. (You will need to be careful with the gradient estimation to use appropriate units, by z-scoring or some such method.) You can save these estimated gradients by season for comparative analysis.
See "How to Explain Individual Classification Decisions" by David Baehrens et al. in JMLR for more on this idea. The paper deals with classification but easily generalizes to regression as well.
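The finite-difference recipe above can be sketched in a few lines. The toy "black-box" model below is my own illustration (non-linear in the first feature, weakly linear in the second, ignoring the third), and the inputs are assumed already z-scored:

```python
import numpy as np

def gradient_importance(predict, X, h=1e-4):
    """Mean absolute central-difference gradient of `predict` over the
    rows of X; one importance score per feature."""
    n, p = X.shape
    grads = np.empty((n, p))
    for j in range(p):
        step = np.zeros(p)
        step[j] = h
        grads[:, j] = (predict(X + step) - predict(X - step)) / (2 * h)
    return np.abs(grads).mean(axis=0)

# Hypothetical black box: strong in x0, weak in x1, ignores x2.
def model(X):
    return np.sin(3 * X[:, 0]) + 0.1 * X[:, 1]

rng = np.random.default_rng(4)
X_test = rng.normal(size=(500, 3))
imp = gradient_importance(model, X_test)
print(imp.argsort()[::-1])   # [0 1 2]: feature 0 first, feature 2 last
```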
32,533 | How could I visualise the importance of different inputs to the forecast for a black-box non-linear model? | Have you tried the scikit-learn module in Python? You can compute feature importances for its RandomForestClassifier.
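In current scikit-learn versions the fitted forest exposes these scores via the feature_importances_ attribute (the old compute_importances flag is gone). A minimal sketch on synthetic data where only the first two features are informative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data: 5 features, first 2 informative (shuffle=False keeps
# the informative columns in positions 0 and 1).
X, y = make_classification(n_samples=500, n_features=5, n_informative=2,
                           n_redundant=0, shuffle=False, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(rf.feature_importances_.round(3))   # first two features dominate
```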
32,534 | Collapsibility: Odds Ratios versus Risk Ratios | Yes and no. The risk ratio is collapsible, so adjusting for any variable that is not associated with either the exposure or outcome should not change the magnitude of the risk ratio. In addition, the summary risk ratio across strata should be between the values of the stratum-specific risk ratios (something that does not hold with the odds ratio).
But there are many ways in which a risk ratio can be biased after adjusting for variables that are not technically confounders. For example, adjusting for an intermediate will cause bias, despite collapsibility, since some of the association will be removed. Also, adjusting for a common effect of the exposure and outcome will cause bias in the risk ratio - this is sometimes referred to as collider bias, or selection bias. Finally, adjusting for variables associated only with the outcome can under some conditions change the magnitude of the risk ratio - in this case the risk ratio is not technically biased, it is simply estimating a different quantity (the effect of exposure on the outcome among people with a certain distribution of these third variables).
In terms of the formal definition you give for collapsibility, I am not entirely familiar with this, but I believe you are correct that g(P(x,y)) would be the risk ratio. Also, it is my understanding that the risk ratio (and more importantly the variance and confidence intervals for the risk ratio) is calculated under the assumption that the risks each follow independent Binomial distributions. | Collapsibility: Odds Ratios versus Risk Ratios | Yes and no. The risk ratio is collapsible, so adjusting for any variable that is not associated with either the exposure or outcome should not change the magnitude of the risk ratio. In addition, the | Collapsibility: Odds Ratios versus Risk Ratios
Yes and no. The risk ratio is collapsible, so adjusting for any variable that is not associated with either the exposure or outcome should not change the magnitude of the risk ratio. In addition, the summary risk ratio across strata should be between the values of the stratum-specific risk ratios (something that does not hold with the odds ratio).
But there are many ways in which a risk ratio can be biased after adjusting for variables that are not technically confounders. For example, adjusting for an intermediate will cause bias, despite collapsibility, since some of the association will be removed. Also, adjusting for a common effect of the exposure and outcome will cause bias in the risk ratio - this is sometimes referred to as collider bias, or selection bias. Finally, adjusting for variables associated only with the outcome can under some conditions change the magnitude of the risk ratio - in this case the risk ratio is not technically biased, it is simply estimating a different quantity (the effect of exposure on the outcome among people with a certain distribution of these third variables).
In terms of the formal definition you give for collapsibility, I am not entirely familiar with this, but I believe you are correct that g(P(x,y)) would be the risk ratio. Also, it is my understanding that the risk ratio (and more importantly the variance and confidence intervals for the risk ratio) is calculated under the assumption that the risks each follow independent Binomial distributions. | Collapsibility: Odds Ratios versus Risk Ratios
Yes and no. The risk ratio is collapsible, so adjusting for any variable that is not associated with either the exposure or outcome should not change the magnitude of the risk ratio. In addition, the |
32,535 | Collapsibility: Odds Ratios versus Risk Ratios | It is well known that the risk ratio (RR) is collapsible. Consequently, if RR(x|z) < 1 for all z, then the marginal RR < 1 for any distribution of z. Let us consider the initial Simpson example:
            x=0  x=1             x=0  x=1              x=0  x=1
z=0:  Y=0:   5    8    z=1: Y=0:  15   12   Total: Y=0:  20   20
      Y=1:   3    4         Y=1:   3    2          Y=1:   6    6
Then RR(Y=1|z=0) = (4/12)/(3/8) = 8/9,
RR(Y=1|z=1) = (2/14)/(3/18) = 6/7,
Marginal RR(Y=1) = (6/26)/(6/26) = 1.
How can that be if the RR is collapsible?
Also, the RR is not strictly collapsible, i.e. it may be that RR(x|z) is constant over z but differs from the marginal RR. Example:
            x=0  x=1             x=0  x=1              x=0  x=1
z=0:  Y=0:   3    2    z=1: Y=0:   2    5   Total: Y=0:   5    7
      Y=1:   7    1         Y=1:   3    2          Y=1:  10    3
Then RR(Y=1|z=0) = (1/3)/(7/10) = 10/21,
RR(Y=1|z=1) = (2/7)/(3/5) = 10/21,
Marginal RR(Y=1) = (3/10)/(10/15) = 9/20.
Ilya Novikov
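The arithmetic in the first example checks out exactly; a small sketch with exact fractions (my own verification of the counts above):

```python
from fractions import Fraction as F

# Cell counts (n_Y0, n_Y1) indexed by (z, x), from the first table above.
counts = {(0, 0): (5, 3), (0, 1): (8, 4),
          (1, 0): (15, 3), (1, 1): (12, 2)}

def risk(c):                  # P(Y=1) within a cell
    return F(c[1], c[0] + c[1])

def rr(c_x1, c_x0):           # risk ratio, x=1 versus x=0
    return risk(c_x1) / risk(c_x0)

total = {x: (counts[(0, x)][0] + counts[(1, x)][0],
             counts[(0, x)][1] + counts[(1, x)][1]) for x in (0, 1)}

print(rr(counts[(0, 1)], counts[(0, 0)]))   # 8/9 in stratum z=0
print(rr(counts[(1, 1)], counts[(1, 0)]))   # 6/7 in stratum z=1
print(rr(total[1], total[0]))               # 1 marginally
```

Both stratum RRs are below 1 while the marginal RR is 1 because z is associated with x here, so the marginal is not a simple weighted average of the strata.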
32,536 | Interval censored Cox proportional hazards model in R | As stated above, you can use the survreg function. A note though: this is not strictly a Cox PH model, but rather a location-scale model. Using the default log transformation, this is the AFT (accelerated failure time) model. In the case of the exponential distribution, the proportional hazards and AFT models are equivalent, so if the distribution is set to exponential, this is a proportional hazards model with an exponential baseline. Likewise, if a Weibull-baseline AFT model is used, the parameter estimates are just a linear transformation of those used in the proportional hazards model with a Weibull baseline distribution. But in general, survreg does not fit a Cox PH model.
If a semi-parametric model is desired, as found implemented in intcox, a word of caution: there are several issues with the current version of intcox (algorithm typically prematurely terminates significantly far from the MLE, fails outright with uncensored observations, no standard errors automatically presented).
A new alternative that you could use is the package "icenReg".
Admission of bias: this is the author of icenReg.
32,537 | Interval censored Cox proportional hazards model in R | To do interval-censored analysis in R, you must create a Surv object and then use survfit(). If you have more than one variable, the intcox package solves the problem.
32,538 | K-Fold Cross Validation for mixed-effect models: how to score them? | I've mostly seen cross-validation used in a machine-learning context where one thinks in terms of a loss function that one is trying to minimize. The natural loss function associated with linear models is mean squared error (which, on a fixed dataset, is a monotone transformation of $R^2$). Calculating this for test data is very simple.
You could also use other loss functions (mean absolute error, rank correlation, etc.). However, since the linear model is fit by minimizing squared error, it might be advisable in that case to try a model that directly optimizes whatever loss function you chose (e.g. quantile regression for mean absolute error).
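To make the scoring concrete, here is a minimal sketch (Python, with made-up data and a single illustrative train/test split; none of the names come from the original answer) of computing test-set MSE and MAE for a linear model:

```python
import numpy as np

# Illustrative data and split (not from the original answer).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

train, test = np.arange(80), np.arange(80, 100)

# Fit ordinary least squares on the training fold only.
beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
pred = X[test] @ beta

mse = np.mean((y[test] - pred) ** 2)   # natural loss for linear models
mae = np.mean(np.abs(y[test] - pred))  # an alternative loss
```

In k-fold cross-validation you would repeat this for each fold and average the losses.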
32,539 | K-Fold Cross Validation for mixed-effect models: how to score them? | The goal with cross validation is to estimate how well your model will perform on new data. So you are correct in that you'll fit the model on a subset of your data ($k-1$ folds). Then you'll use the test set (fold $k$) to make predictions using the model you just built.
You'll now have the true values and predicted values for fold $k$ (your test set), which is generally all you need to calculate different performance measures. Repeat $k$ times and average to get the average performance of your model. Chapter 5 of An Introduction to Statistical Learning provides a good overview of $k$-fold cross-validation.
Edit: If the concern is that you need people from each group/cohort in both your train and test sets, then you could do stratified sampling of each of your groups, such that you end up with members of each cohort in both your test and train sets.
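As a sketch of that idea (Python, with hypothetical cohort labels), fold numbers can be assigned within each cohort separately, so every fold keeps all cohorts in both the training and test portions:

```python
import numpy as np

rng = np.random.default_rng(0)
cohort = np.repeat([0, 1, 2], 20)  # hypothetical group labels for 60 people
k = 5

# Assign fold numbers within each cohort separately, so every fold
# contains members of every cohort.
fold = np.empty(len(cohort), dtype=int)
for g in np.unique(cohort):
    idx = np.flatnonzero(cohort == g)
    rng.shuffle(idx)
    fold[idx] = np.arange(len(idx)) % k

for f in range(k):
    test = fold == f
    assert set(cohort[test]) == {0, 1, 2}    # all cohorts in the test fold
    assert set(cohort[~test]) == {0, 1, 2}   # and in the training folds
```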
32,540 | Decision threshold for a 3-class Naive Bayes ROC curve | As I see it, the possibility to refuse classification as "too uncertain" is the whole point of choosing a threshold (as opposed to assigning the class with highest predicted probability).
Of course, you should have some justification for setting the threshold at 0.5: you may also put it at 0.9 or any other value that is reasonable.
You describe a setup with mutually exclusive classes (a closed-world problem). "No class reaches the threshold" can happen whenever the threshold is higher than $1/n_{\text{classes}}$; i.e., the same problem occurs in a 2-class problem with a threshold of, say, 0.9. For a threshold of exactly $1/n_{\text{classes}}$ it could happen in theory, but in practice it is highly unlikely.
So your problem is not specific to the 3-class setup, just more pronounced there.
To your second question: you can compute ROC curves for any kind of continuous output score; the scores don't even need to claim to be probabilities. Personally, I don't calibrate, because I don't want to waste another test set on that (I work with very restricted sample sizes). The shape of the ROC won't change anyway.
Answer to your comment:
The ROC conceptually belongs to a setup that in my field is called single-class classification: does a patient have a particular disease or not. From that point of view, you can assign a 10% probability that the patient does have the disease. But this does not imply that with 90% probability he has something defined - the complementary 90% actually belongs to a "dummy" class: not having that disease. For some diseases & tests, finding everyone may be so important that you set your working point at a threshold of 0.1. A textbook example where you would choose such an extreme working point is the HIV test in blood donations.
So for constructing the ROC for class A (you'd say: the patient is A-positive), you look at class A posterior probabilities only. For binary classification with probability(not A) = 1 - probability(A), you don't need to plot the second ROC, as it does not contain any information that is not readily accessible from the first one.
In your 3-class setup you can plot a ROC for each class. Depending on how you choose your threshold, no classification, exactly one class, or more than one class assigned can result. What is sensible depends on your problem. E.g. if the classes are "Hepatitis", "HIV", and "broken arm", then this policy is appropriate, as a patient may have none or all of these.
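As an illustration of how per-class thresholding can leave a case unclassified (toy probabilities, Python; the numbers are made up):

```python
import numpy as np

# Toy posterior probabilities for 3 mutually exclusive classes (rows sum to 1).
probs = np.array([
    [0.70, 0.20, 0.10],  # one class clears a 0.5 threshold
    [0.40, 0.35, 0.25],  # no class clears it -> classification is refused
])

threshold = 0.5
labels = [np.flatnonzero(row >= threshold) for row in probs]
# labels[0] contains class 0; labels[1] is empty ("too uncertain")
```

With a threshold below $1/n_{\text{classes}}$, more than one class could clear it at once, matching the multi-label policy described above.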
32,541 | Benjamini-Hochberg dependency assumptions justified? | The validity of the BH procedure depends on the hypothesis tests being positively dependent. If you read their 2001 paper you will see that the test statistics need not be multivariate normal; they gave weak conditions in the paper:
Rosenbaum's (1984) conditional (positive) association is enough to imply PRDS: $X$ is conditionally associated if, for any partition $(X_1, X_2)$ of $X$ and any function $h(X_1)$, $X_2$ given $h(X_1)$ is positively associated.
If this seems like a reasonable assumption to make about your data, then just declare it as an assumption and try to come up with scenarios where it is and isn't met to clarify it to yourself.
32,542 | Benjamini-Hochberg dependency assumptions justified? | PRDS is a sufficient but not necessary condition for B-H to control the FDR.
I would suggest you use it, and also use the Benjamini-Yekutieli procedure for general dependence.
If the difference in inference is large, try demonstrating that B-H controls the FDR in your particular setup using permutation or resampling-based techniques that preserve your dependence structure.
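For concreteness, here is a minimal sketch of both step-up procedures (Python/NumPy; the function name `bh_reject` is my own). The only difference between them is the extra factor $c(m)=\sum_{i=1}^m 1/i$ in the Benjamini-Yekutieli thresholds:

```python
import numpy as np

def bh_reject(p, q=0.05, dependence="positive"):
    """Step-up FDR control: BH under positive dependence (PRDS),
    Benjamini-Yekutieli under arbitrary dependence."""
    p = np.asarray(p, dtype=float)
    m = len(p)
    c = 1.0 if dependence == "positive" else np.sum(1.0 / np.arange(1, m + 1))
    order = np.argsort(p)
    thresh = np.arange(1, m + 1) * q / (m * c)
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        kmax = np.max(np.nonzero(below)[0])  # largest k with p_(k) <= k*q/(m*c)
        reject[order[:kmax + 1]] = True
    return reject

p = [0.01, 0.02, 0.03, 0.04, 0.5]
n_bh = bh_reject(p, q=0.05).sum()                        # BH rejects 4
n_by = bh_reject(p, q=0.05, dependence="general").sum()  # BY is more conservative
```

Running both on the same p-values shows how much more conservative BY is; if the two disagree strongly, that is exactly the situation where a permutation check is worthwhile.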
32,543 | Latent variables, overparameterization and MCMC convergence in bayesian models | So is the intuition correct that I shouldn't be concerned that the overparameterized latent variables are not identifiable and aren't fully converged when I take my posterior samples?
I think your intuition is correct: you shouldn't be concerned that the overparameterized latent variables are not identifiable and aren't fully converged. In fact, the latent variables likely can't converge; my understanding is that in this situation the full state-space chain is null recurrent, even though by your account there is a transformed state space of smaller dimension in which the chain is positive recurrent (and hence has a stationary distribution). For what it's worth, I have deliberately created and used such MCMC chains myself in my applied research.
Sometimes stochastic processes with these features are used to model time series data (key word: cointegration). A quick look at this plot might generate some intuition:
The upper figure shows two price time series, which one might think of as nonstationary due to inflation, even though no inflation can be seen on the time scale of the plot. Although each time series taken alone is nonstationary, there can exist a smaller-dimensional manifold within the full state space (in this case, the "spread", i.e., the difference of the time series) such that the stochastic process obtained by projecting the original process onto the manifold is stationary.
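The same drifting-but-constrained behaviour can be sketched with a toy Gibbs sampler (Python; the model $y \sim N(a+b, 1)$ with flat priors on $a$ and $b$ is my own illustration, not from the question): only the sum $a+b$ is identified, so $a$ and $b$ wander while their sum mixes.

```python
import numpy as np

# Toy non-identified model: y ~ Normal(a + b, 1), flat priors on a and b.
rng = np.random.default_rng(0)
y = rng.normal(loc=3.0, size=100)
n, ybar = len(y), y.mean()

T = 5000
a, b = 0.0, 0.0
a_draws, sum_draws = np.empty(T), np.empty(T)
for t in range(T):
    a = rng.normal(ybar - b, 1 / np.sqrt(n))  # Gibbs update of a | b, y
    b = rng.normal(ybar - a, 1 / np.sqrt(n))  # Gibbs update of b | a, y
    a_draws[t], sum_draws[t] = a, a + b

# a performs a random walk (no stationary distribution marginally),
# while a + b stays tightly concentrated around the data mean.
```

The trace of `a_draws` drifts without bound while `sum_draws` is stable, which is the behaviour described above for the full versus projected state space.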
Are there some good references which discuss the use of non-identified latent variables in this way?
I don't know of any references that discuss the use of non-identified latent variables in this exact way, but here are a technical report and a published paper on the subject by Andrew Gelman, and here is a more recent manuscript by a different author that I think might be closer to what you're doing than the previous two references.
32,544 | Manifold regularization using laplacian graph in SVM | While I did not test it, from reading the article the optimization problem, for both SVM and LapSVM, is given as:
$$\beta^*=\max_{\beta\in\mathbb R^l}
\sum_{i = 1}^{l}\beta_i - {1\over 2}\beta^TQ\beta$$
subject to:
$$\sum_{i = 1}^{l}\beta_iy_i = 0\\
0 \le \beta_i \le {1\over l}\text{, with }i=1,\dots,l$$
For SVM:
$$Q_{\text{SVM}} = Y\left(K \over 2\gamma\right)Y\\
\alpha^*_{\text{SVM}}={Y\beta^* \over 2\gamma}$$
While for LapSVM we have the following (added parentheses to make the relationship clearer):
$$Q_{\text{LapSVM}} = Y\left(
JK
\left(2\gamma_AI+2\frac{\gamma_I}{(l+u)^2}LK\right)^{-1}
J^T\right)Y\\
\alpha^*_{\text{LapSVM}}=
\left(2\gamma_AI+2\frac{\gamma_I}{(l+u)^2}LK\right)^{-1}J^TY\beta^*$$
We can define $$Q_{\text{SVM*}} \equiv Q_{\text{LapSVM}}$$ if:
$$
\left\{\begin{matrix}
\gamma_{\text{SVM*}} = 1/2
\\
K_{\text{SVM*}}=JK_{\text{LapSVM}}\left(2\gamma_AI+2\frac{\gamma_I}{(l+u)^2}LK_{\text{LapSVM}}\right)^{-1}J^T
\end{matrix}\right.
$$
Last:
$$\alpha^*_{\text{LapSVM}}=
K_{\text{LapSVM}}\left(2\gamma_AI+2\frac{\gamma_I}{(l+u)^2}LK_{\text{LapSVM}}\right)^{-1}J^T
\alpha^*_{\text{SVM*}}$$
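As a sanity check of the algebra above, one can assemble these quantities numerically (NumPy sketch with made-up data; the kernel and graph choices are illustrative assumptions) and verify that $\gamma_I = 0$ recovers the plain SVM quantity $JKJ^T/(2\gamma_A)$:

```python
import numpy as np

rng = np.random.default_rng(0)
l, u = 6, 10            # labeled / unlabeled counts (made up)
n = l + u
X = rng.normal(size=(n, 2))

# Gaussian kernel over all points, and Laplacian of a dense similarity graph
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2)
W = np.exp(-d2)
L_graph = np.diag(W.sum(axis=1)) - W

J = np.hstack([np.eye(l), np.zeros((l, u))])  # J = [I 0], selects labeled points
Y = np.diag(rng.choice([-1.0, 1.0], size=l))

gamma_A, gamma_I = 0.5, 1.0
M = 2 * gamma_A * np.eye(n) + 2 * gamma_I / (l + u) ** 2 * (L_graph @ K)
K_star = J @ K @ np.linalg.solve(M, J.T)      # deformed kernel K_SVM*
Q = Y @ K_star @ Y                            # Q_LapSVM

# With gamma_I = 0 the deformed kernel reduces to J K J^T / (2 gamma_A)
K_plain = J @ K @ np.linalg.solve(2 * gamma_A * np.eye(n), J.T)
```

The resulting `Q` can then be handed to any standard SVM dual solver, which is the point of the reduction.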
I can confirm that it works. See this example with a Gaussian kernel, and how the class virginica starts creeping into the unlabelled data when $\gamma_I = 2500$ compared to $\gamma_I = 0$, which is the standard SVM.
32,545 | Which tests can I use to analyze dependent likert-type data? | I think there are several challenges to consider.
In terms of how to visualize, the most accurate would be to use a mosaic plot or a stacked barplot (which are practically the same in this case, but it might be easier to produce a stacked barplot in Excel or SPSS than a mosaic plot).
It might also be helpful to map the Likert scale to a numerical (1-5) scale and draw a boxplot for each of the 4 categories of your second question. Since boxplots are based on percentiles, the meaning of the boxplot can be somewhat consistent (depending on how the quantiles are calculated when dealing with midpoints) with the type of data you present.
In terms of how to analyse, there are different questions you can ask. The simplest will be "is there a correlation between the two?", which can easily be answered by computing the Pearson correlation on the rankings of the numerical values of your scales. This is precisely the Spearman correlation (the correlation of the ranks). The ranking matters for cases where you have ties (for example, the vector 1,2,2,4 should become the ranks 1,2.5,2.5,4).
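A quick sketch of the mid-rank computation and the resulting Spearman correlation (Python; the `midranks` helper and the example vectors are my own, not from the original answer):

```python
import numpy as np

def midranks(x):
    """Average ranks with ties, e.g. [1, 2, 2, 4] -> [1, 2.5, 2.5, 4]."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x, kind="stable")
    ranks = np.empty(len(x))
    sx = x[order]
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and sx[j + 1] == sx[i]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1  # mean of 1-based positions i..j
        i = j + 1
    return ranks

# Spearman correlation = Pearson correlation computed on the midranks
quality = [1, 2, 2, 3, 5, 4, 4]      # made-up example responses
confidence = [1, 1, 3, 3, 5, 4, 2]
r_s = np.corrcoef(midranks(quality), midranks(confidence))[0, 1]
```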
The Wilcoxon test is relevant if you want to answer the question whether the ranks of one measure differ from those of the other measure. But from your question, it doesn't sound like an interesting question. You can also use the chi-square test for a similar question, but its power will probably be smaller.
32,546 | Which tests can I use to analyze dependent likert-type data? | In my opinion, this question has been done to death but surfaces again and again due to slightly different nomenclature. First off, it depends on what your scientific question is: are you interested in trend (averaged differential response comparing groups) or heterogeneity (piecewise comparisons of responses among all categories)? What are the outcomes of the study? What are your stimuli? Are the stimuli categorical or ordered somehow or continuous in nature? These are details that statisticians need to know.
Ordinal regression is the family of methods around coding ordinal responses by their numeric categories. The physical quantities you estimate are uselessly interpreted as "expected differences in ordinal response levels" (this is verbatim how you would report regression coefficients), which are often non-integral; however, the statistics around those quantities do test for association in the response levels. So coding "very poor" as 0 and "excellent" as 5 in the case of Likert data is sensible.
You are using the word "dependent" ambiguously here. Does your experimental design have repeated measures within individuals? Is one individual exposed to many stimuli and recorded for different responses at different times? That would be repeated measures. Otherwise, I think you are confused about dependence, and each stimulus/outcome observation is pairwise independent of the others.
32,547 | Which tests can I use to analyze dependent likert-type data? | A summary test statistic for multidimensional, non-metric (input and output) data is a tough pitch and likely of limited interpretable value.
You could run a proportional odds logistic regression model of one variable (input) on the other (target) - if you are willing to frame the analysis as comparing the target's probability distribution over the input levels. It would show whether the input variable has significantly different dependence over the input levels and target classes. For predictive probabilities it won't really matter which contrast scheme you use; however, for interpretation of the weights, you might like to use orthogonal polynomials. You will need to interpret the model via example test cases and their predictive distributions shown on barplots. This is because, along with the logit probability scale that the weights work on, there are cut-offs identified by the regression process - which makes it rather difficult to interpret logit-scale quantities.
For example with R, your code would be
# input data
# quality <- scan()
# confidence <- scan()
Q <- length(unique(quality))
C <- length(unique(confidence))
require(MASS)
# tell R that the data is ordinal
quality    <- factor(quality,    levels = paste(1:Q), ordered = TRUE)
confidence <- factor(confidence, levels = paste(1:C), ordered = TRUE)
# train the model; R uses orthogonal polynomial contrasts for ordered factors by default
polr.model <- polr(confidence ~ quality)
# plot the predictive probabilities as a pdf for each input level
lapply(
  unique(quality),
  function(z) {
    pdf(paste('Quality_predictive_probabilities-Confidence_', z, '.pdf', sep = ''))
    probs <- predict(polr.model, newdata = data.frame(quality = z), type = 'probs')
    barplot(probs, xlab = paste('Quality', z), ylab = 'Confidence')
    dev.off()
  }
)
which will save the probability predictions into your current working directory. You may want to be careful about calling your variables confidence and quality if the audience is statistically aware as these words mean something quite specific to the community. | Which tests can I use to analyze dependent likert-type data? | A summary test statistic for multidimensional, non-metric (input and output) data is a tough pitch and likely of limited interpretable value.
You could run a proportional odds logistic regression mode | Which tests can I use to analyze dependent likert-type data?
A summary test statistic for multidimensional, non-metric (input and output) data is a tough pitch and likely of limited interpretable value.
You could run a proportional odds logistic regression model of one variable (input) on the other (target) - if you are willing to do the analysis as comparing the target's probability distribution over the input levels. It would show if the input variable has significantly different dependence over the input levels and target classes. It won't really matter for predictive probabilities which contrast scheme you use, however, for interpretation of the weights, you might like to use orthogonal polynomials. You will need to interpret with examples test cases and their predictive distributions on barplots. This is because along with the logit probability scale that the weights work on, there are cut-offs identified by the regression process - which makes it rather difficult to interpret logit scale quantities.
For example with R, your code would be
#input data
#quality <- scan()
#confidence <- scan()
Q <- length(unique(quality))
C <- length(unique(confidence))
require(MASS)
# tell R that the data is ordinal
quality <- factor(quality, levels = paste(1:Q), ordered = TRUE)
confidence <- factor(confidence, levels = paste(1:C), ordered = TRUE)
# train model, R will use orthogonal polynomials by default
polr.model <- polr(confidence ~ quality)
#plot probability predictions as pdf for each input level
lapply(
  unique(quality),
  function(z) {
    pdf(paste('Quality_predictive_probabilities-Confidence_', z, '.pdf', sep = ''))
    probs <- predict(polr.model, newdata = list(quality = z), type = 'probs')
    barplot(probs, xlab = paste('Quality', z), ylab = 'Confidence')
    dev.off()
  }
)
which will save the probability predictions into your current working directory. You may want to be careful about calling your variables confidence and quality if the audience is statistically aware as these words mean something quite specific to the community.
32,548 | Variable selection with LASSO | It would be better to perform a Cox regression with an L1 regularisation term, which would give the same type of variable selection you get from the standard least-squares LASSO approach. ISTR there has been at least one paper on this in the journal "Bioinformatics". There is a good paper by Robert Tibshirani, and @miura says that this is implemented in glmnet.
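As a hedged illustration of why an L1 term performs variable selection at all: coordinate-descent lasso solvers repeatedly apply a soft-thresholding operator that sets small coefficients exactly to zero. A toy Python sketch with made-up coefficients (the plain least-squares mechanism, not the Cox-specific variant discussed here):

```python
def soft_threshold(z, lam):
    """Soft-thresholding operator: shrinks z toward zero by lam and
    sets it exactly to zero when |z| <= lam -- the source of the
    sparsity (variable selection) in lasso fits."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# toy unpenalised coefficients and penalty strength (made-up values)
coefs = [2.5, -0.4, 0.1, -3.0]
lam = 0.5
selected = [soft_threshold(b, lam) for b in coefs]
print(selected)  # [2.0, 0.0, 0.0, -2.5] -- small coefficients dropped
```

Larger penalties zero out more coefficients; glmnet traces this over a whole path of penalty values.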
32,549 | Correcting standard errors when the independent variables are autocorrelated | There are several ways to correct autocorrelation in a panel setting. The way you describe the clustering doesn't quite work this way. What you can do is:
Cluster the standard errors on the unit identifier, e.g. the individual/firm/household ID variable. This allows for arbitrary correlation within individuals which corrects for autocorrelation.
Calculate the Moulton factor and adjust your standard errors parametrically. If you have a balanced panel, the Moulton factor is $$M = 1 + (n-1)\rho_e$$ where $\rho_e$ is the within-individual correlation of the error. You then just need to multiply your standard errors with this factor in order to obtain an appropriate inflation of the naive standard errors which will correct for autocorrelation.
Block bootstrap the standard errors with individuals being "blocks". Typically 200-400 bootstrap replications should be enough in order to correct your standard errors. For very large panels this approach might take a significant amount of time.
You can find more on this topic in
- Cameron and Trivedi (2010) "Microeconometrics Using Stata", Revised Edition, Stata Press
- Wooldridge (2010) "Econometric Analysis of Cross Section and Panel Data", 2nd Edition, MIT Press
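A numerical sketch of the second option, applying the quoted formula with made-up numbers (Python for illustration; `se_naive` and `rho_e` are hypothetical values, and M is multiplied onto the standard error exactly as described above):

```python
def moulton_factor(n, rho_e):
    """Moulton factor for a balanced panel: n observations per
    individual, rho_e the within-individual error correlation."""
    return 1 + (n - 1) * rho_e

# made-up numbers: 5 periods per individual, error correlation 0.3
M = moulton_factor(n=5, rho_e=0.3)  # M ≈ 2.2
se_naive = 0.02                     # hypothetical naive standard error
se_adjusted = se_naive * M          # inflated as described in the text
print(M, se_adjusted)
```

With no within-individual correlation (rho_e = 0) the factor is 1 and the naive standard errors are left unchanged.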
32,550 | Expected best performance possible on a data set | I do not know whether this counts as an answer ...
This is the one problem which keeps you up at night: can you build a better model? PhD Comics sums it up nicely (I don't know whether I am allowed to upload the comics, so I just linked them):
Proving the negative
Conversation impossible
From my personal experience, gained by participating in Machine Learning competitions, here is a rule of thumb.
Imagine you are given a classification task. Sit down, brainstorm an hour or less how you'd approach the problem and check out the state of the art in this area. Build a model based on this research, preferably one which is known to be stable without too much parameter tweaking. The resulting performance will be roughly around 80% of the maximum achievable performance.
This rule is based on the so-called Pareto principle, which also applies to optimization. Given a problem, you can quickly create a solution which performs reasonably well, but from that point on the ratio of improvement to time effort drops rapidly.
Some final words: When I read papers about new classification algorithms, I expect the authors to compare their new breed with such "pareto-optimized" approaches, i.e. I expect them to spend a reasonable amount of time to make the state of the art work (some require more or less parameter optimization). Unfortunately, many don't do that.
32,551 | Expected best performance possible on a data set | The conventional way is to consider the ROC, and the area under it (AUC). The rationale behind this approach is that the higher the true positive rate for a particular false positive rate, the better the classifier. Integrating over all possible false positive rates gives you an overall measure.
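The area under the ROC curve also equals the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one (ties counting one half), which gives a simple way to compute it without explicit integration. A toy Python sketch with made-up scores:

```python
def auc(pos_scores, neg_scores):
    """AUC via the probabilistic interpretation: the fraction of
    (positive, negative) pairs ranked correctly, ties counting 1/2."""
    total = 0.0
    for p in pos_scores:
        for q in neg_scores:
            if p > q:
                total += 1.0
            elif p == q:
                total += 0.5
    return total / (len(pos_scores) * len(neg_scores))

print(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))  # 8/9: most pairs ranked correctly
```

A classifier that scores at random gives 0.5 by this measure; a perfect ranker gives 1.0.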
32,552 | Expected best performance possible on a data set | If there is some way for you to visualize your data, that is the best possible scenario; however, not all data can be visualized in the same way, so you may need to find your own way to project the data that can help you understand your data better.
However, in general, I usually take a small sample of the data, convert it into ARFF and try different clustering algorithms from WEKA. Then I just see which algorithm gives me a better confusion matrix. It gives me a hint as to how well the classes are separated and allows me to investigate why that particular algorithm does better on this data. I also change the number of clusters (i.e. I don't just use k = 2; I use k = 3, 4, etc.). This gives me an idea of whether there is fragmentation in the data or whether one class is more fragmented than the other. If you mix training and testing points together for clustering, you can also measure which clusters are represented by your training points. Some clusters may be over-represented and some may be under-represented; both can cause issues when learning a classifier.
Always check your training accuracy. If your training accuracy is not looking good, then mis-classified training points are also a big hint.
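The confusion-matrix check described above is just a cross-tabulation of true classes against cluster assignments; a minimal Python sketch with made-up labels (WEKA computes this for you):

```python
from collections import Counter

# hypothetical true classes and cluster ids for a small sample
truth    = ["a", "a", "a", "b", "b", "b", "b"]
clusters = [0, 0, 1, 1, 1, 1, 0]

# count (true class, cluster) pairs, then print one row per class
table = Counter(zip(truth, clusters))
for cls in sorted(set(truth)):
    row = [table[(cls, k)] for k in sorted(set(clusters))]
    print(cls, row)  # 'a' falls mostly in cluster 0, 'b' mostly in cluster 1
```

Sharply diagonal tables suggest well-separated classes; rows smeared across several clusters suggest the fragmentation mentioned above.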
32,553 | Cox proportional hazard model and non-randomly selected sample | There are proposed solutions to parametric hazard models. Take a look at these:
Prieger, James, 2000. "A Generalized Parametric Selection Model for Non-normal Data," Working Papers 00-9, University of California at Davis, Department of Economics.
Boehmke, Frederick J., Daniel Morey and Megan Shannon. 2006. "Selection Bias and Continuous-Time Duration Models: Consequences and a Proposed Solution." American Journal of Political Science 50 (1): 192-207.
There is code for the latter paper in Stata, package "dursel".
However, I am not aware of a solution for the semiparametric Cox model.
32,554 | Cox proportional hazard model and non-randomly selected sample | The simple answer is weighting. That is, you can use weights to standardize groups in the "accepted" group to the population of interest. The problem that arises from using such weights in a pooled analysis using both the first and second 2 year phases is that the estimated population weights and the parameters are now dependent. The pseudolikelihood approach is typically used (in this case, it would be some kind of pseudo-partial likelihood) where you ignore the dependence between sample weights and parameter estimates. However, in many practical circumstances (and this one is no different), accounting for this dependence is necessary. The issue of creating an efficient estimator of the hazard ratios is a difficult one, and as far as I know open ended. This is vaguely similar to the two-phase study and I think it might be enlightening to consult the following article by Lumley and Breslow, freely available through the NIH
Improved Horvitz-Thompson Estimation of Model Parameters from Two-phase Stratified Samples: Applications in Epidemiology.
The article discusses survey methods, typically applied in logistic regression, however you can weight survival data as well. Some important considerations which you neglected to mention are whether you're interested in creating a prediction which applies to the entire population, or to the "qualifying" population based on the 2-year estimates, or the "qualifying" population based on the resulting model. You also haven't mentioned exactly how such a "prediction" model is created from a Cox model, as fitted values from a Cox model cannot be interpreted as risks. I presume you estimate the hazard ratios, then obtain a smoothed estimate of the baseline hazard function.
32,555 | A reliable measure of series similarity - correlation just doesn't cut it for me | The two most common methods (in my experience) for comparing signals are the correlation and the mean squared error. Informally, if you imagine your signal as a point in some N-dimensional space (this tends to be easier if you imagine them as 3D points) then the correlation measures whether the points are in the same direction (from the "origin") and the mean squared error measures whether the points are in the same place (independent of the origin as long as both signals have the same origin). Which works better depends somewhat on the types of signal and noise in your system.
The MSE appears to be roughly equivalent to your example:
mse = 0;
for( int i=0; i<N; ++i )
    mse += (x[i]-y[i])*(x[i]-y[i]);
mse /= N;
note however that this isn't really Pearson correlation, which would be more like
xx = 0;
xy = 0;
yy = 0;
for( int i=0; i<N; ++i )
{
    xx += (x[i]-x_mean)*(x[i]-x_mean);
    xy += (x[i]-x_mean)*(y[i]-y_mean);
    yy += (y[i]-y_mean)*(y[i]-y_mean);
}
ppmcc = xy/std::sqrt(xx*yy);
given the signal means x_mean and y_mean. This is fairly close to the pure correlation:
corr = 0;
for( int i=0; i<N; ++i )
    corr += x[i]*y[i];
however, I think the Pearson correlation will be more robust when the signals have a strong DC component (because the mean is subtracted) and are normalised, so a scaling in one of the signals will not cause a proportional increase in the correlation.
Finally, if the particular example in your question is a problem then you could also consider the mean absolute error (L1 norm):
mae = 0;
for( int i=0; i<N; ++i )
    mae += std::abs(x[i]-y[i]);
mae /= N;
I'm aware of all three approaches being used in various signal and image processing applications, without knowing more about your particular application I couldn't say what would be likely to work best. I would note that the MAE and the MSE are less sensitive to exactly how the data is presented to them, but if the mean error is not really the metric you're interested in then they won't give you the results you're looking for. The correlation approaches can be better if you're more interested in the "direction" of your signal than the actual values involved, however it is more sensitive to how the data are presented and almost certainly requires some centring and normalisation to give the results you expect.
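To make the contrast concrete, the same three metrics can be written self-contained in Python (toy values chosen so that y is an exact multiple of x; the correlation is 1 while the error metrics are large, which is exactly the scaling behaviour discussed above):

```python
import math

x = [1.0, 4.0, 6.0, 9.0]
y = [2.0, 8.0, 12.0, 18.0]  # y = 2*x: perfectly correlated, but far apart
n = len(x)

# mean squared error and mean absolute error
mse = sum((a - b) ** 2 for a, b in zip(x, y)) / n
mae = sum(abs(a - b) for a, b in zip(x, y)) / n

# Pearson correlation (the ppmcc computation above)
mx, my = sum(x) / n, sum(y) / n
xx = sum((a - mx) ** 2 for a in x)
yy = sum((b - my) ** 2 for b in y)
xy = sum((a - mx) * (b - my) for a, b in zip(x, y))
ppmcc = xy / math.sqrt(xx * yy)

print(ppmcc)     # 1.0  -- the correlation ignores the scaling
print(mse, mae)  # 33.5 and 5.0 -- the error metrics do not
```

Which of these behaviours is "right" depends on whether you care about the direction of the signals or their actual values.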
You might want to look up Phase Correlation, Cross Correlation, Normalised Correlation and Matched Filters. Most of these are used to match some sub-signal in a larger signal with some unknown time lag, but in your case you could just use the value they give for zero time lag if you know there is no lag between the two signals.
32,556 | A reliable measure of series similarity - correlation just doesn't cut it for me | I am not sure if this is the right way to do it. But would scaling your data help? Try bringing the values to between 0 and 1. I suppose this should work.
32,557 | What data and statistics skills are currently in high demand and where are they in high demand? | A friend of mine has suggested that the software industry primarily
needs "big data" skills, not statistics skills per se.
While partially agreeing with your friend's comment, I would like to point out that in any industry, big data tools are adopted only if all the V's are satisfied.
I work as the head of data science at a leading customer support company. Here, I do data hacking both for the product and also for the growth of the company.
I primarily use time series analysis techniques for churn prediction and sales analysis. This also includes the behavioural analysis of the customers, competition and the industry.
On the product side, we use a range of techniques, starting from sentiment analysis using LSTMs, recommendation algorithms, etc.
But the core focus lies on time series analysis. The general workflow would be:
Cleaning and moulding the data.
The exploratory and explanatory analyses, which involve identifying seasonality, trends and cycles. One needs to explore correlations, auto-correlations and several univariate and bivariate statistics, along with extensive plotting, including scatter, ACF and PACF curves.
Now comes the forecasting part, where various models are tested against each other, taking step 2 into serious consideration.
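As a toy version of the autocorrelation exploration in step 2, the lag-k autocorrelation that an ACF plot is built from can be sketched as follows (Python for illustration, made-up series; in practice a function like R's acf does this):

```python
def acf(series, lag):
    """Lag-k autocorrelation: covariance of the series with a shifted
    copy of itself, normalised by the overall variance."""
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series)
    cov = sum((series[t] - mean) * (series[t + lag] - mean)
              for t in range(n - lag))
    return cov / var

# made-up series with a repeating up-down pattern
x = [1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0, 2.0, 3.0, 4.0]
print(acf(x, 1))  # positive: neighbouring points move together
```

Plotting these values over a range of lags gives the ACF curve; spikes at regular lags are the signature of seasonality.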
Tools used by me: R, Python and Excel sometimes.
And even the blend of data science and growth hacking has proven to do magic in the domain of marketing. So, the demand for statisticians and math nerds will remain as is, and is not going to decline anywhere in the near future, especially when customer-focused startups are blossoming across the world.
32,558 | What data and statistics skills are currently in high demand and where are they in high demand? | One unexpected place where these skills are in high demand: HR. I ended up in the HR dept for a forward-thinking tech company by chance after getting a masters in applied math. Turns out a lot of companies are just becoming interested in how statistics and data analysis can help them. Because HR analytics are in their relative infancy compared to well-explored areas like finance, this often involves relatively basic stuff like significance testing and OLS regression. Right now I'm working on a predictive employee attrition model using Cox proportional hazards. The field is on the upswing and there's a ton of opportunity to make a meaningful impact on significant problems while exercising a certain degree of creative license. HR is also a great place to learn about how companies are structured as well as how to build your career.
32,559 | Feature construction in R | It would seem to me that this would leave you highly vulnerable to problems like spurious correlation and even overfitting. I forget the name of the principle that states the more models you try, the greater your risk of stumbling upon a bad one -- if you try so many models as to actually run a genetic algorithm, you can imagine how that principle is violated. | Feature construction in R | It would seem to me that this would leave you highly vulnerable to problems like spurious correlation and even overfitting. I forget the name of the principle that states the more models you try, the | Feature construction in R
It would seem to me that this would leave you highly vulnerable to problems like spurious correlation and even overfitting. I forget the name of the principle that states the more models you try, the greater your risk of stumbling upon a bad one -- if you try so many models as to actually run a genetic algorithm, you can imagine how that principle is violated. | Feature construction in R
It would seem to me that this would leave you highly vulnerable to problems like spurious correlation and even overfitting. I forget the name of the principle that states the more models you try, the |
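The risk this answer describes (a multiple-comparisons / data-dredging effect) is easy to demonstrate numerically: screen enough pure-noise features against a pure-noise response and the best one will look convincingly correlated. A minimal sketch in Python (the thread is about R, but the point is language-independent; all numbers are made up):

```python
import math
import random

random.seed(42)
n = 30    # observations
m = 200   # candidate features, all pure noise

def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

y = [random.gauss(0, 1) for _ in range(n)]
features = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]

# The best |correlation| among many useless features looks "significant"
best = max(abs(pearson(f, y)) for f in features)
print(best)   # well above zero despite no true association
```
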
32,560 | Feature construction in R | You could go about it like this: starting from a data.frame, you add a 'reasonable' set of transformed predictors or even interactions to your data (model.matrix and similar should be able to pull this off).
Once you're there, any variable selection method could do. glmnet comes to mind, but there are many options. A disadvantage to this way of working is that it will be hard to ensure that the main effect is in the model when an interaction is. Perhaps some forms of variable selection support this, but I know of no obvious ones besides stepwise procedures (that would defeat the purpose). | Feature construction in R | You could go about it like this: starting from a data.frame, you add a 'reasonable' set of transformed predictors or even interactions to your data (model.matrix and similar should be able to pull thi | Feature construction in R
You could go about it like this: starting from a data.frame, you add a 'reasonable' set of transformed predictors or even interactions to your data (model.matrix and similar should be able to pull this off).
Once you're there, any variable selection method could do. glmnet comes to mind, but there are many options. A disadvantage to this way of working is that it will be hard to ensure that the main effect is in the model when an interaction is. Perhaps some forms of variable selection support this, but I know of no obvious ones besides stepwise procedures (that would defeat the purpose). | Feature construction in R
You could go about it like this: starting from a data.frame, you add a 'reasonable' set of transformed predictors or even interactions to your data (model.matrix and similar should be able to pull thi |
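The expansion step described above (what R's model.matrix does for interaction terms) amounts to appending product columns to the design. A rough Python sketch of just that mechanic; the helper and column names are made up for illustration:

```python
from itertools import combinations

def expand_features(rows, names):
    """Append all pairwise interaction (product) columns to numeric rows,
    mimicking what a two-way interaction formula does in R's model.matrix."""
    pairs = list(combinations(range(len(names)), 2))
    new_names = names + [f"{names[i]}:{names[j]}" for i, j in pairs]
    new_rows = [row + [row[i] * row[j] for i, j in pairs] for row in rows]
    return new_rows, new_names

rows, names = expand_features([[2.0, 3.0, 4.0]], ["x1", "x2", "x3"])
print(names)  # ['x1', 'x2', 'x3', 'x1:x2', 'x1:x3', 'x2:x3']
print(rows)   # [[2.0, 3.0, 4.0, 6.0, 8.0, 12.0]]
```

The expanded matrix can then be handed to any penalized selection method, as the answer suggests.
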
32,561 | Feature construction in R | You could start with something simple like finding principle components or independent components. You could also get a little crazy and generate all the 2-way interactions of your variables. Obviously, as you generate and test more features, you need a feature selection algorithm that is more robust against overfitting.
Some modeling algorithms, like MARS, random forests, and non-linear SVMs automatically find certain interactions among your original features. | Feature construction in R | You could start with something simple like finding principle components or independent components. You could also get a little crazy and generate all the 2-way interactions of your variables. Obviou | Feature construction in R
You could start with something simple like finding principal components or independent components. You could also get a little crazy and generate all the 2-way interactions of your variables. Obviously, as you generate and test more features, you need a feature selection algorithm that is more robust against overfitting.
Some modeling algorithms, like MARS, random forests, and non-linear SVMs automatically find certain interactions among your original features. | Feature construction in R
You could start with something simple like finding principal components or independent components. You could also get a little crazy and generate all the 2-way interactions of your variables. Obviou
32,562 | Correcting for multiple comparisons in a within subjects / repeated measures ANOVA; excessively conservative? | To the best of my knowledge, the joint distribution of linear contrasts has been derived in the simple ANOVA case (see the documentation of the multcomp R package), but there are no closed forms for the repeated measures setup. Nevertheless, you can always bootstrap the joint distribution of these linear contrasts under the null, and look at the minimal t-statistic (or maximal p-value) for setting the significance threshold with FWE control.
As you also suggested, you can use methods which only require some qualitative condition on the joint distribution of the test statistics. Bonferroni is a good option if you have few contrasts. Otherwise, have a look at Holm's.
If you are looking into many linear contrasts, you should definitely ask yourself whether you want protection from any false discovery or only a proportion of false discoveries. In the latter case, use the BH procedure for FDR control. | Correcting for multiple comparisons in a within subjects / repeated measures ANOVA; excessively cons | To the best of my knowledge, the joint distribution of linear contrasts has been derived in the simple ANOVA case (see the documentation of the multcomp R package), but there are no closed forms fo | Correcting for multiple comparisons in a within subjects / repeated measures ANOVA; excessively conservative?
To the best of my knowledge, the joint distribution of linear contrasts has been derived in the simple ANOVA case (see the documentation of the multcomp R package), but there are no closed forms for the repeated measures setup. Nevertheless, you can always bootstrap the joint distribution of these linear contrasts under the null, and look at the minimal t-statistic (or maximal p-value) for setting the significance threshold with FWE control.
As you also suggested, you can use methods which only require some qualitative condition on the joint distribution of the test statistics. Bonferroni is a good option if you have few contrasts. Otherwise, have a look at Holm's.
If you are looking into many linear contrasts, you should definitely ask yourself whether you want protection from any false discovery or only a proportion of false discoveries. In the latter case, use the BH procedure for FDR control. | Correcting for multiple comparisons in a within subjects / repeated measures ANOVA; excessively cons
To the best of my knowledge, the joint distribution of linear contrasts has been derived in the simple ANOVA case (see the documentation of the multcomp R package), but there are no closed forms fo
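The three procedures named in this answer (Bonferroni, Holm, BH) can be sketched in a few lines. The p-values below are made up purely to illustrate the ordering: Holm rejects at least as much as Bonferroni, and BH (which controls FDR rather than FWER) rejects more still. Written in Python for self-containedness; in R this is one call to p.adjust.

```python
def bonferroni(pvals, alpha=0.05):
    """Reject H_i when p_i <= alpha/m (FWER control)."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def holm(pvals, alpha=0.05):
    """Holm's step-down procedure: FWER control, uniformly more
    powerful than Bonferroni."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # step-down stops at the first failure
    return reject

def benjamini_hochberg(pvals, q=0.05):
    """BH step-up procedure: controls the false discovery rate."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0  # largest rank whose sorted p-value clears its threshold
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        reject[i] = rank <= k
    return reject

p = [0.001, 0.010, 0.012, 0.040, 0.200]
print(sum(bonferroni(p)))          # 2 rejections
print(sum(holm(p)))                # 3 rejections
print(sum(benjamini_hochberg(p)))  # 4 rejections
```
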
32,563 | Correcting for multiple comparisons in a within subjects / repeated measures ANOVA; excessively conservative? | Here's a collection of links to an SPSS forum. Hope you find it relevant to some degree: this, this, this, this. | Correcting for multiple comparisons in a within subjects / repeated measures ANOVA; excessively cons | Here's a collection of links to an SPSS forum. Hope you find it relevant to some degree: this, this, this, this. | Correcting for multiple comparisons in a within subjects / repeated measures ANOVA; excessively conservative?
Here's a collection of links to an SPSS forum. Hope you find it relevant to some degree: this, this, this, this. | Correcting for multiple comparisons in a within subjects / repeated measures ANOVA; excessively cons
Here's a collection of links to an SPSS forum. Hope you find it relevant to some degree: this, this, this, this.
32,564 | Measures of autocorrelation in categorical values of a Markov Chain? | You can always choose one or several real valued functions of the categorical variables and look at the auto-correlation for the resulting sequence(s). You can, for instance, consider indicators of some subsets of the variables.
However, if I understood your question correctly, your sequence is obtained by an MCMC algorithm on the discrete space. In that case, it may be more interesting to look directly at the convergence rate for the Markov chain. Chapter 6 in this book by Brémaud treats this in detail. The second-largest absolute value among the eigenvalues determines the convergence rate of the matrix of transition probabilities and thus the mixing of the process.
You can always choose one or several real valued functions of the categorical variables and look at the auto-correlation for the resulting sequence(s). You can, for instance, consider indicators of some subsets of the variables.
However, if I understood your question correctly, your sequence is obtained by an MCMC algorithm on the discrete space. In that case, it may be more interesting to look directly at the convergence rate for the Markov chain. Chapter 6 in this book by Brémaud treats this in detail. The second-largest absolute value among the eigenvalues determines the convergence rate of the matrix of transition probabilities and thus the mixing of the process.
You can always choose one or several real valued functions of the categorical variables and look at the auto-correlation for the resulting sequence(s). You can, for instance, consider indicators of so |
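The eigenvalue claim is easy to verify by hand for a two-state chain: a row-stochastic matrix [[p, 1-p], [q, 1-q]] has eigenvalues 1 and p - q, and the distance to the stationary distribution shrinks by exactly |lambda_2| per step. A sketch with made-up transition probabilities:

```python
p, q = 0.9, 0.2               # P = [[0.9, 0.1], [0.2, 0.8]], chosen for illustration
lam2 = p - q                  # second eigenvalue of a 2x2 stochastic matrix
pi_a = q / (1 - p + q)        # stationary probability of state A

def step(mu):
    """One step of the chain: mu -> mu P for a row vector mu."""
    a, b = mu
    return (a * p + b * q, a * (1 - p) + b * (1 - q))

mu = (1.0, 0.0)               # start deterministically in state A
gaps = []
for _ in range(5):
    gaps.append(abs(mu[0] - pi_a))   # distance to stationarity
    mu = step(mu)

ratios = [gaps[i + 1] / gaps[i] for i in range(len(gaps) - 1)]
print(lam2)    # 0.7 (up to float rounding)
print(ratios)  # each successive gap shrinks by exactly |lambda_2|
```
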
32,565 | Measures of autocorrelation in categorical values of a Markov Chain? | Instead of computing acf on your simulated time series, you could first create a time series of the number of each kind of state change per unit of time (so you will have a time series for each state). And then compute the acf on each of the time series, and compare them with the real ones. | Measures of autocorrelation in categorical values of a Markov Chain? | Instead of computing acf on your simulated time series, you could first create a time series of the number of each kind of state change per unit of time (so you will have a time series for each state). And then
It's not a direct method, but you'll still know if the rate of each kind of state change through time is respected. | Measures of autocorrelation in categorical values of a Markov Chain?
Instead of computing acf on your simulated time series, you could first create a time series of the number of each kind of state change per unit of time (so you will have a time series for each state). And then compute the acf on each of the time series, and compare them with the real ones.
It's not a direct method, but you'll still know if the rate of each kind of state change through time is respected. | Measures of autocorrelation in categorical values of a Markov Chain?
Instead of computing acf on your simulated time series, you could first create a time series of number of each kind of state change per unit of time (so you will a time serie for each state). And then |
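A minimal sketch of the indicator-series idea from this thread: recode the categorical sequence as a 0/1 series per state and compute its sample autocorrelation; the same can then be done for the real series and the two compared. The toy sequence below is made up, and in R this would just be acf() on the indicator.

```python
def sample_acf(series, lag):
    """Sample autocorrelation of a numeric series at a given lag."""
    n = len(series)
    m = sum(series) / n
    dev = [s - m for s in series]
    num = sum(dev[t] * dev[t + lag] for t in range(n - lag))
    den = sum(d * d for d in dev)
    return num / den

# A categorical sequence and its 0/1 indicator for state 'A'
states = list("AABB" * 4)
ind_a = [1 if s == "A" else 0 for s in states]

print(sample_acf(ind_a, 0))  # 1.0 by construction
print(sample_acf(ind_a, 1))  # 0.0625 for this periodic toy sequence
```
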
32,566 | How to calculate tridiagonal approximate covariance matrix, for fast decorrelation? | Merely computing the covariance matrix--which you're going to need to get started in any event--is $O((Nf)^2)$ so, asymptotically in $N$, nothing is gained by choosing a $O(Nf)$ algorithm for the whitening.
There are approximations when the variables have additional structure, such as when they form a time series or realizations of a spatial stochastic process at various locations. These effectively rely on assumptions that let us relate the covariance between one pair of variables to that between other pairs of variables, such as between pairs separated by the same time lags. This is the conventional reason for assuming a process is stationary or intrinsically stationary, for instance. Calculations can be $O(Nf\,\log(Nf))$ in such cases (e.g., using the Fast Fourier Transform as in Yao & Journel 1998). Absent such a model, I don't see how you can avoid computing all pairwise covariances.
Merely computing the covariance matrix--which you're going to need to get started in any event--is $O((Nf)^2)$ so, asymptotically in $N$, nothing is gained by choosing a $O(Nf)$ algorithm for the whitening.
There are approximations when the variables have additional structure, such as when they form a time series or realizations of a spatial stochastic process at various locations. These effectively rely on assumptions that let us relate the covariance between one pair of variables to that between other pairs of variables, such as between pairs separated by the same time lags. This is the conventional reason for assuming a process is stationary or intrinsically stationary, for instance. Calculations can be $O(Nf\,\log(Nf))$ in such cases (e.g., using the Fast Fourier Transform as in Yao & Journel 1998). Absent such a model, I don't see how you can avoid computing all pairwise covariances.
Merely computing the covariance matrix--which you're going to need to get started in any event--is $O((Nf)^2)$ so, asymptotically in $N$, nothing is gained by choosing a $O(Nf)$ algorithm for the whit |
32,567 | How to calculate tridiagonal approximate covariance matrix, for fast decorrelation? | On a whim, I decided to try computing (in R) the covariance matrix for a dataset of about the size mentioned in the OP:
z <- rnorm(1e8)
dim(z) <- c(1e6, 100)
vcv <- cov(z)
This took less than a minute in total, on a fairly generic laptop running Windows XP 32-bit. It probably took longer to generate z in the first place than to compute the matrix vcv. And R isn't particularly optimised for matrix operations out of the box.
Given this result, is speed that important? If N >> p, the time taken to compute your approximation is probably not going to be much less than to get the actual covariance matrix. | How to calculate tridiagonal approximate covariance matrix, for fast decorrelation? | On a whim, I decided to try computing (in R) the covariance matrix for a dataset of about the size mentioned in the OP:
z <- rnorm(1e8)
dim(z) <- c(1e6, 100)
vcv <- cov(z)
This took less than a minut | How to calculate tridiagonal approximate covariance matrix, for fast decorrelation?
On a whim, I decided to try computing (in R) the covariance matrix for a dataset of about the size mentioned in the OP:
z <- rnorm(1e8)
dim(z) <- c(1e6, 100)
vcv <- cov(z)
This took less than a minute in total, on a fairly generic laptop running Windows XP 32-bit. It probably took longer to generate z in the first place than to compute the matrix vcv. And R isn't particularly optimised for matrix operations out of the box.
Given this result, is speed that important? If N >> p, the time taken to compute your approximation is probably not going to be much less than to get the actual covariance matrix. | How to calculate tridiagonal approximate covariance matrix, for fast decorrelation?
On a whim, I decided to try computing (in R) the covariance matrix for a dataset of about the size mentioned in the OP:
z <- rnorm(1e8)
dim(z) <- c(1e6, 100)
vcv <- cov(z)
This took less than a minut |
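Consistent with the timing above: each entry of the covariance matrix is itself just one cheap O(N) pass over a pair of columns, so the full p x p matrix costs O(Np^2). The per-pair computation (what R's cov does for each pair), sketched in Python on made-up data:

```python
def sample_cov(x, y):
    """Sample covariance of two columns: a single O(N) pass per pair
    (after the means), so a full p x p matrix costs O(N p^2)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]   # exactly 2 * x
print(sample_cov(x, y))     # 10/3, i.e. twice the sample variance of x
```
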
32,568 | How to calculate tridiagonal approximate covariance matrix, for fast decorrelation? | Extra two cents:
Algorithmically speaking, I don't think there are any faster algorithms to do this for generic $X$. If there were, they would already have been implemented in existing programs. However, from a software-engineering perspective, speeds can differ dramatically between implementations (e.g., legacy BLAS, GotoBLAS, Intel's MKL, and OpenBLAS).
Depending on the application scenario, you can code an implementation specific to your use case, but this requires some code-juggling in a more native language such as Fortran, C, or C++. For fast implementations, one definite requirement is to take advantage of new CPU features such as AVX-512, which requires a bit of assembly knowledge.
Also, depending on how crude an approximation you can accept, one possibility is Monte Carlo integration. For example, for a 1000000 x 1 column $x$ (say, 1000000 observations of something), the literal computation of the mean is $\sum_{i=1}^{1000000}x_i/1000000$, but for an approximation you can take a subsample of only 1000 to get a crude estimate of the mean, say $\sum_{j=1}^{1000}x_{i(j)}/1000$, where $x_{i(j)}$ is a subsample of the original $x$. The same idea applies to computing your covariance. Say you have two columns $x$ and $y$; rather than computing $\sum_{i=1}^{1000000}x_i y_i$, you can approximate it with only a subsample (e.g., 1000): $\sum_{j=1}^{1000}x_{i(j)} y_{i(j)}/1000 \times 1000000$. If the ordering of your rows is random enough, you don't even need to explicitly sample these 1000 indices; you may take every 1000th row or simply the first 1000 rows.
For your back-substitution to solve $Lx=x_w$, if $L$ is re-used many times, one minor improvement is to explicitly store the diagonal elements of $L$ as their reciprocals (e.g., $1/L_{ii}$); this replaces the divisions in the substitution stage with multiplications. Nowadays this saving is of no real significance, since float division and multiplication run at roughly comparable speeds, but that was not the case on old CPUs, where division used to be much slower than multiplication. Again, this is just a minor thing and won't speed up the computation dramatically, but depending on the use case it may help a little. | How to calculate tridiagonal approximate covariance matrix, for fast decorrelation? | Extra two cents:
Algorithmically speaking, I don't think there are any faster algorithms to do this for generic $X$. If there were, they must have been already implemented in the programs so far. Howev | How to calculate tridiagonal approximate covariance matrix, for fast decorrelation?
Extra two cents:
Algorithmically speaking, I don't think there are any faster algorithms to do this for generic $X$. If there were, they would already have been implemented in existing programs. However, from a software-engineering perspective, speeds can differ dramatically between implementations (e.g., legacy BLAS, GotoBLAS, Intel's MKL, and OpenBLAS).
Depending on the application scenario, you can code an implementation specific to your use case, but this requires some code-juggling in a more native language such as Fortran, C, or C++. For fast implementations, one definite requirement is to take advantage of new CPU features such as AVX-512, which requires a bit of assembly knowledge.
Also, depending on how crude an approximation you can accept, one possibility is Monte Carlo integration. For example, for a 1000000 x 1 column $x$ (say, 1000000 observations of something), the literal computation of the mean is $\sum_{i=1}^{1000000}x_i/1000000$, but for an approximation you can take a subsample of only 1000 to get a crude estimate of the mean, say $\sum_{j=1}^{1000}x_{i(j)}/1000$, where $x_{i(j)}$ is a subsample of the original $x$. The same idea applies to computing your covariance. Say you have two columns $x$ and $y$; rather than computing $\sum_{i=1}^{1000000}x_i y_i$, you can approximate it with only a subsample (e.g., 1000): $\sum_{j=1}^{1000}x_{i(j)} y_{i(j)}/1000 \times 1000000$. If the ordering of your rows is random enough, you don't even need to explicitly sample these 1000 indices; you may take every 1000th row or simply the first 1000 rows.
For your back-substitution to solve $Lx=x_w$, if $L$ is re-used many times, one minor improvement is to explicitly store the diagonal elements of $L$ as their reciprocals (e.g., $1/L_{ii}$); this replaces the divisions in the substitution stage with multiplications. Nowadays this saving is of no real significance, since float division and multiplication run at roughly comparable speeds, but that was not the case on old CPUs, where division used to be much slower than multiplication. Again, this is just a minor thing and won't speed up the computation dramatically, but depending on the use case it may help a little. | How to calculate tridiagonal approximate covariance matrix, for fast decorrelation?
Extra two cents:
Algorithmically speaking, I don't think there are any faster algorithms to do this for generic $X$. If there were, they must have been already implemented in the programs so far. Howev
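The subsampling idea above is easy to sketch (sizes shrunk from the 1,000,000-row example, and the data are simulated with a known covariance of 1, so both estimates should land near 1):

```python
import random

random.seed(0)
n = 20000
x = [random.gauss(0, 1) for _ in range(n)]
y = [xi + random.gauss(0, 1) for xi in x]   # true Cov(x, y) = 1, means are 0

def mean_product(xs, ys):
    """The raw sum_i x_i*y_i / n term from the answer (uncentred;
    fine here because both means are zero)."""
    return sum(a * b for a, b in zip(xs, ys)) / len(xs)

full = mean_product(x, y)                   # one pass over all rows
crude = mean_product(x[:1000], y[:1000])    # subsample approximation, 20x cheaper

print(round(full, 2))   # close to 1
print(round(crude, 2))  # cruder, but in the same ballpark
```
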
32,569 | Discrete choice panel models in R | For "fixed effects" logit regression as understood in econometrics (also called Chamberlain's conditional logit), "clogit" in the "survival" package does the job. | Discrete choice panel models in R | For "fixed effects" logit regression as understood in econometrics (also called Chamberlain's conditional logit), "clogit" in the "survival" package does the job. | Discrete choice panel models in R
For "fixed effects" logit regression as understood in econometrics (also called Chamberlain's conditional logit), "clogit" in the "survival" package does the job. | Discrete choice panel models in R
For "fixed effects" logit regression as understood in econometrics (also called Chamberlain's conditional logit), "clogit" in the "survival" package does the job. |
32,570 | Discrete choice panel models in R | What is wrong with using plm or lme4 (another link)? Particularly the glmer function? | Discrete choice panel models in R | What is wrong with using plm or lme4 (another link)? Particularly the glmer function? | Discrete choice panel models in R
What is wrong with using plm or lme4 (another link)? Particularly the glmer function? | Discrete choice panel models in R
What is wrong with using plm or lme4 (another link)? Particularly the glmer function? |
32,571 | Discrete choice panel models in R | pglm is now available and for e.g. conditional logit there is a closed form estimator that should be straightforward to implement. | Discrete choice panel models in R | pglm is now available and for e.g. conditional logit there is a closed form estimator that should be straightforward to implement. | Discrete choice panel models in R
pglm is now available and for e.g. conditional logit there is a closed form estimator that should be straightforward to implement. | Discrete choice panel models in R
pglm is now available and for e.g. conditional logit there is a closed form estimator that should be straightforward to implement. |
32,572 | Discrete choice panel models in R | If your dependent variable is only binary, you could try out the package "bife".
Their package description is:
"Estimates fixed effects binary choice models (logit and probit) with potentially many individual fixed effects and computes average partial effects. Incidental parameter bias can be reduced with an asymptotic bias correction proposed by Fernandez-Val (2009)" | Discrete choice panel models in R | If your dependent variable is only binary, you could try out the package "bife".
Their package description is:
"Estimates fixed effects binary choice models (logit and probit) with potentially many in | Discrete choice panel models in R
If your dependent variable is only binary, you could try out the package "bife".
Their package description is:
"Estimates fixed effects binary choice models (logit and probit) with potentially many individual fixed effects and computes average partial effects. Incidental parameter bias can be reduced with an asymptotic bias correction proposed by Fernandez-Val (2009)" | Discrete choice panel models in R
If your dependent variable is only binary, you could try out the package "bife".
Their package description is:
"Estimates fixed effects binary choice models (logit and probit) with potentially many in |
32,573 | Is there a version of multivariate multinomial logit? | Agresti 2007 discusses them. They're in chapter 9 and 10. The 2002 edition probably discusses them too, as @suncoolsu mentioned.
Agresti refers to the group of response variables as a cluster and discusses the corresponding analysis with marginal models, conditional models, and generalized estimating equations. | Is there a version of multivariate multinomial logit? | Agresti 2007 discusses them. They're in chapter 9 and 10. The 2002 edition probably discusses them too, as @suncoolsu mentioned.
Agresti refers to the group of response variables as a cluster and dis | Is there a version of multivariate multinomial logit?
Agresti 2007 discusses them. They're in chapter 9 and 10. The 2002 edition probably discusses them too, as @suncoolsu mentioned.
Agresti refers to the group of response variables as a cluster and discusses the corresponding analysis with marginal models, conditional models, and generalized estimating equations. | Is there a version of multivariate multinomial logit?
Agresti 2007 discusses them. They're in chapter 9 and 10. The 2002 edition probably discusses them too, as @suncoolsu mentioned.
Agresti refers to the group of response variables as a cluster and dis |
32,574 | Measuring correlation of trained neural networks | The Pearson correlation coefficient measures linear association. Being based on empirical second central moments, it is influenced by extreme values. Therefore:
Evidence of nonlinearity in a scatterplot of actual-vs-predicted values would suggest using an alternative such as the rank correlation (Spearman) coefficient;
If the relationship looks monotonic on average (as in the upper row of the illustration), a rank correlation coefficient will be effective;
Otherwise, the relationship is curvilinear (as in some examples from the lower row of the illustration, such as the leftmost or the middle u-shaped one) and likely any measure of correlation will be an inadequate description; using a rank correlation coefficient won't fix this.
The presence of outlying data in the scatterplot indicates the Pearson correlation coefficient may be overstating the strength of the linear relationship. It might or might not be correct; use it with due caution. The rank correlation coefficient might or might not be better, depending on how trustworthy the outlying values are.
(Image copied from the Wikipedia article on Pearson product-moment correlation coefficient.) | Measuring correlation of trained neural networks | The Pearson correlation coefficient measures linear association. Being based on empirical second central moments, it is influenced by extreme values. Therefore:
Evidence of nonlinearity in a scatte | Measuring correlation of trained neural networks
The Pearson correlation coefficient measures linear association. Being based on empirical second central moments, it is influenced by extreme values. Therefore:
Evidence of nonlinearity in a scatterplot of actual-vs-predicted values would suggest using an alternative such as the rank correlation (Spearman) coefficient;
If the relationship looks monotonic on average (as in the upper row of the illustration), a rank correlation coefficient will be effective;
Otherwise, the relationship is curvilinear (as in some examples from the lower row of the illustration, such as the leftmost or the middle u-shaped one) and likely any measure of correlation will be an inadequate description; using a rank correlation coefficient won't fix this.
The presence of outlying data in the scatterplot indicates the Pearson correlation coefficient may be overstating the strength of the linear relationship. It might or might not be correct; use it with due caution. The rank correlation coefficient might or might not be better, depending on how trustworthy the outlying values are.
(Image copied from the Wikipedia article on Pearson product-moment correlation coefficient.) | Measuring correlation of trained neural networks
The Pearson correlation coefficient measures linear association. Being based on empirical second central moments, it is influenced by extreme values. Therefore:
Evidence of nonlinearity in a scatte |
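The Pearson-vs-rank contrast in this answer shows up immediately on a monotonic but curved relationship (data made up for illustration): Spearman's coefficient, being the Pearson correlation of the ranks, is exactly 1, while Pearson's is pulled below 1 by the curvature.

```python
import math

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def spearman(x, y):
    """Rank correlation = Pearson correlation of the ranks (no ties here)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    return pearson(ranks(x), ranks(y))

x = list(range(1, 11))
y = [xi ** 3 for xi in x]        # monotonic but strongly curved

print(spearman(x, y))            # 1.0: ranks see perfect monotonicity
print(pearson(x, y) < 1)         # True: Pearson is dragged below 1 by curvature
```
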
32,575 | Problem calculating joint and marginal distribution of two uniform distributions | In the "marginalisation" integral, the lower limit for $x_1$ is not $0$ but $x_2$ (because of the $0<x_2<x_1$ condition).
So the integral should be:
$$p(x_2)=\int p(x_1,x_2) dx_1=\int \frac{I(0\leq x_2\leq x_1\leq 1)}{x_1} dx_1=\int_{x_2}^{1} \frac{dx_1}{x_1}=\log\big(\frac{1}{x_2}\big)$$
You have stumbled across what I think is one of the hardest parts of statistical integrals - determining the limits of integration.
NOTE: This is consistent with Henry's answer, mine is the PDF, and his is the CDF. Differentiating his answer gives you mine, which shows we are both right. | Problem calculating joint and marginal distribution of two uniform distributions | In the "marginalisation" integral, the lower limit for $x_1$ is not $0$ but $x_2$ (because of the $0<x_2<x_1$ condition).
So the integral should be:
$$p(x_2)=\int p(x_1,x_2) dx_1=\int \frac{I(0\leq x_ | Problem calculating joint and marginal distribution of two uniform distributions
In the "marginalisation" integral, the lower limit for $x_1$ is not $0$ but $x_2$ (because of the $0<x_2<x_1$ condition).
So the integral should be:
$$p(x_2)=\int p(x_1,x_2) dx_1=\int \frac{I(0\leq x_2\leq x_1\leq 1)}{x_1} dx_1=\int_{x_2}^{1} \frac{dx_1}{x_1}=\log\big(\frac{1}{x_2}\big)$$
You have stumbled across what I think is one of the hardest parts of statistical integrals - determining the limits of integration.
NOTE: This is consistent with Henry's answer, mine is the PDF, and his is the CDF. Differentiating his answer gives you mine, which shows we are both right. | Problem calculating joint and marginal distribution of two uniform distributions
In the "marginalisation" integral, the lower limit for $x_1$ is not $0$ but $x_2$ (because of the $0<x_2<x_1$ condition).
So the integral should be:
$$p(x_2)=\int p(x_1,x_2) dx_1=\int \frac{I(0\leq x_ |
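A quick Monte Carlo check of these results: simulate the two-stage draw and compare the empirical CDF of $X_2$ at one point with $x_2(1-\log x_2)$, whose derivative is the density $\log(1/x_2)$ derived above.

```python
import math
import random

random.seed(1)
n = 100_000
x2_draws = []
for _ in range(n):
    x1 = random.random()                     # X1 ~ Uniform(0, 1)
    x2_draws.append(random.uniform(0, x1))   # X2 | X1 ~ Uniform(0, X1)

t = 0.5
empirical = sum(v <= t for v in x2_draws) / n
exact = t * (1 - math.log(t))   # CDF x2*(1 - log x2); its derivative is -log(x2)

print(round(exact, 4))                # 0.8466
print(abs(empirical - exact) < 0.01)  # True: simulation matches the derivation
```
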
32,576 | Problem calculating joint and marginal distribution of two uniform distributions | You should not have $X_1$ in the marginal distribution for $X_2$
I would expect you to get $P(X_2 \le x_2) = x_2 (1-\log(x_2))$ and so the derivative gives a marginal density of $-\log(x_2)$.
This comes from $P(X_2 \le x_2 |X_1=x_1) = 1$ if $x_1 \le x_2$, and $ P(X_2 \le x_2 |X_1=x_1) = \frac{x_2}{x_1}$ if $x_2 \le x_1$ so the integral is
$$P(X_2 \le x_2) = \int_{x_1=0}^{x_2} dx_1 + \int_{x_1=x_2}^{1} \frac{x_2}{x_1} dx_1$$
$$ = \left[ x_1 \right]_{x_1=0}^{x_1=x_2} + \left[x_2 \log(x_1)\right]_{x_1=x_2}^{x_1=1} $$
$$ = x_2 - 0 +x_2 \log(1) - x_2 \log(x_2) $$
$$ = x_2 (1-\log(x_2))$$ | Problem calculating joint and marginal distribution of two uniform distributions | You should not have $X_1$ in the marginal distribution for $X_2$
32,577 | Prerequisite for conversion from odds ratio to relative risk to be valid | Answering, even though this question is quite old.
The biggest caveat is that you cannot use a measurement of RR in a case-control study, because it cannot be calculated. If you have the data to compare between them, then there's no reason not to - differences between the two measures can often yield some insight.
Note however that in circumstances of highly prevalent diseases (~>10%), the OR won't well approximate the RR. Since what most studies are looking for is an RR or something that approximates it under special circumstances (OR, IDR, etc.), if the OR can't be expected to closely approximate the RR, it would be better to go with something that will.
Generally speaking, the OR is a convenience measurement to allow for the case-control study design, and because binomial regression models often have convergence issues.
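A small illustration of the prevalence caveat (toy 2x2 tables, not from the question): with a rare outcome the OR tracks the RR closely, but with a prevalent outcome it does not:

```python
def odds_ratio_and_risk_ratio(a, b, c, d):
    """2x2 table: a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    rr = (a / (a + b)) / (c / (c + d))
    orr = (a * d) / (b * c)
    return orr, rr

# Rare outcome (risks around 1-2%): OR closely approximates RR.
print(odds_ratio_and_risk_ratio(20, 980, 10, 990))   # OR ~ 2.02, RR = 2.0

# Prevalent outcome (risks 30-60%): OR = 3.5 while RR is still 2.0.
print(odds_ratio_and_risk_ratio(600, 400, 300, 700))
```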
32,578 | Prerequisite for conversion from odds ratio to relative risk to be valid | In a sense, odds ratios are more universal than risk ratios so we spend too much time on risk ratios. Risk ratios are incapable of being constant over a wide range of risks, whereas an odds ratio is capable of being constant. For example, if a risk ratio is 3, the starting risk level cannot exceed 1/3. Because of this, models stated in terms of odds ratios often contain fewer interaction terms than models for relative risk (or for risk difference).
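To make the 1/3 example concrete (an illustration added here, with made-up baseline risks): a constant risk ratio of 3 is arithmetically impossible once the baseline risk exceeds 1/3, while a constant odds ratio of 3 always yields a valid risk:

```python
def risk_under_constant_rr(p0, rr):
    # Naive constant-risk-ratio prediction; exceeds 1 once p0 > 1/rr.
    return p0 * rr

def risk_under_constant_or(p0, odds_ratio):
    # Constant-odds-ratio prediction; always stays inside (0, 1).
    odds = p0 / (1 - p0) * odds_ratio
    return odds / (1 + odds)

for p0 in (0.1, 0.3, 0.5, 0.9):
    print(p0, risk_under_constant_rr(p0, 3), round(risk_under_constant_or(p0, 3), 3))
```

For a baseline risk of 0.5, the "RR = 3" prediction is 1.5 (not a probability), whereas the "OR = 3" prediction is 0.75.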
32,579 | Sampling from bivariate distribution with known density using MCMC | I think the order is correct, but the labels assigned to p(x) and p(y|x) were wrong. The original problem states p(y|x) is log-normal and p(x) is Singh-Maddala. So, it's
Generate an X from a Singh-Maddala, and
generate a Y from a log-normal having a mean which is a fraction of the generated X.
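A direct sampling sketch of these two steps. The parameter values below are placeholders, not from the original problem; the Singh-Maddala draw uses the inverse of its CDF $F(x) = 1 - [1 + (x/b)^a]^{-q}$, and "mean which is a fraction of X" is interpreted here as the log-scale location, which is an assumption:

```python
import math
import random

random.seed(1)

# Hypothetical Singh-Maddala (Burr XII) shape/scale parameters -- placeholders.
a, b, q = 2.0, 1.0, 1.5

def singh_maddala_sample():
    # Inverse-CDF method for F(x) = 1 - (1 + (x/b)**a)**(-q).
    u = random.random()
    return b * ((1.0 - u) ** (-1.0 / q) - 1.0) ** (1.0 / a)

def conditional_lognormal(x, fraction=0.5, sigma=0.25):
    # Log-normal for Y | X = x whose log-scale location tracks fraction * x
    # (one possible reading of "mean is a fraction of X" -- an assumption).
    return random.lognormvariate(math.log(fraction * x), sigma)

pairs = []
for _ in range(10_000):
    x = singh_maddala_sample()
    pairs.append((x, conditional_lognormal(x)))
```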
32,580 | Sampling from bivariate distribution with known density using MCMC | In fact, you should not do MCMC, since your problem is so much simpler. Try this algorithm:
Step 1: Generate an X from a log-normal.
Step 2: Keeping this X fixed, generate a Y from the Singh-Maddala.
Voilà! Sample Ready!!!
32,581 | Computing the cumulative distribution of max drawdown of random walk with drift | This is an alternating sum. Each successive pair almost cancels; such pair-sums eventually decrease monotonically.
One approach, then, is to compute the sum in pairs where $n$ = {1,2}, {3,4}, {5,6}, etc. (Doing so eliminates a lot of floating point error, too.) Some more tricks can help:
(1) To solve $\tan(t) = t / \alpha$ for a positive constant $\alpha$, a good starting value to search--and an excellent approximation for the $n^\text{th}$ smallest positive root--is $t = (n + 1/2)\pi - \frac{\alpha}{(n + 1/2)\pi}$. I suspect Newton-Raphson should work really well.
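A sketch of point (1). The equation and the starting value are from the answer; the iteration count and tolerance are my own choices:

```python
import math

def tan_root(n, alpha):
    # Starting value suggested above: (n + 1/2)*pi - alpha / ((n + 1/2)*pi).
    z = (n + 0.5) * math.pi
    t = z - alpha / z
    # Newton-Raphson on g(t) = tan(t) - t/alpha.
    for _ in range(50):
        g = math.tan(t) - t / alpha
        dg = 1.0 / math.cos(t) ** 2 - 1.0 / alpha
        step = g / dg
        t -= step
        if abs(step) < 1e-13:
            break
    return t

for n in (1, 2, 10):
    t = tan_root(n, 1.0)
    print(n, t, math.tan(t) - t)   # residual should be ~0
```

Because the starting guess is already within $O(1/n)$ of the root and the derivative of $\tan$ is large there, Newton-Raphson converges in a handful of steps.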
(2) After a small number of initial terms, the sums of pairs start decreasing in size very, very consistently. The logarithms of the absolute values of exponentially-spaced pairs quickly decrease almost linearly. This means you can interpolate among a very small number of calculated pair-sums to estimate all the pair-sums you did not compute. For example, by computing the values for only pairs (2,3), (4,5), (8,9), (16,17), ..., (16384, 16385) and constructing the interpolating polynomial for these (thought of as the values of a function at 1, 2, ..., 14) and using the arguments $h = \mu = \sigma = 1$, I was able to achieve six-figure precision for the worst-case errors. (Even nicer, the errors oscillate in sign, suggesting the precision in the summed interpolated values might be quite a bit better than six figures.) You could probably estimate the limiting sum to good precision by extrapolating linearly off the end of these values (which translates to a power law) and integrating the extrapolating function out to infinity. To complete this example calculation you also need the first term. That gives six-figure precision by means of only 29 computed terms in the summation.
(3) Note that the function really depends on $h/\sigma$ and $\mu/\sigma$, not on all three of these variables independently. The dependence on $T$ is weak (as it should be); you might be content to fix its value throughout all your calculations.
(4) On top of all this, consider using some series-acceleration methods, like Aitken's method. A good accounting of this appears in Numerical Recipes.
Added
(5) You can estimate the tail of the sum with an integral. Upon writing $\theta_n = (n + 1/2)\pi - 1/t_n$, the equation $\tan(\theta_n) = \theta_n / \alpha$ (with $\alpha = \mu h / \sigma^2$) can be solved for $t_n$, which is small, and then for $\theta_n$ by substituting back. Expanding the tangent in a Taylor series in $t_n$ gives the approximate solution
$$\theta_n = z - \frac{\alpha}{ z} - \frac{\alpha^2 - \alpha^3/3 }{z^3} + O\left((\frac{\alpha}{n})^5\right)$$
where $z = (n + 1/2)\pi$.
Provided $n$ is sufficiently large, the exponential factors of the form $1 - \exp(\frac{-\sigma^2 \theta_n^2 T}{2 h^2}) \exp(\frac{-\mu^2 T}{2 \sigma^2})$ become extremely close to 1 so you can neglect them. Typically these terms can be neglected even for small $n$ because $\theta_n^2$ is $\Theta\left(n^2\right)$, making the first exponential go to zero extremely quickly. (This happens once $n$ substantially exceeds $\alpha / T^{1/2}$. Do your calculations for large $T$ if you can!)
Using this expression for $\theta_n$ to sum the terms for $n$ and $n+1$ lets us approximate them (once all the smoke clears) as
$$\frac{2}{\pi n^2}-\frac{4}{\pi n^3}+\frac{13 \pi ^2+6 (4-3 \alpha ) \alpha }{2 \pi ^3 n^4}+O\left(\frac{1}{n^5}\right) \text{.}$$
Replacing the sum starting at $n = 2N$ by an integral over $N$ starting at $N - 1/4$ approximates the tail. (The integral has to be multiplied by a common factor of $\exp(-\alpha)$.) The error in the integral is $O(1/n^4)$. Thus, to achieve three significant figures you will typically need to compute around eight or so of the terms in the sum and then add this tail approximation.
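As an aside (an illustration added here), the pair-summation idea at the top of the answer can be seen on a toy alternating series, $\sum_{n\ge1}(-1)^{n+1}/n=\log 2$, where each consecutive pair collapses to a positive, monotonically decreasing term:

```python
import math

# Pairing terms of 1 - 1/2 + 1/3 - 1/4 + ... gives 1/((2k-1)(2k)),
# a positive, decreasing sequence with no catastrophic cancellation.
total = 0.0
for k in range(1, 100_001):
    total += 1.0 / (2 * k - 1) - 1.0 / (2 * k)

print(total, math.log(2))   # agree to ~5 figures
```

The tail after $K$ pairs is about $1/(4K)$, consistent with the slow but monotone convergence exploited above.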
One approach, then, is to compute the sum in pairs where $n$ = {1,2}, {3,4}, {5,6}, | Computing the cumulative distribution of max drawdown of random walk with drift
This is an alternating sum. Each successive pair almost cancels; such pair-sums eventually decrease monotonically.
One approach, then, is to compute the sum in pairs where $n$ = {1,2}, {3,4}, {5,6}, etc. (Doing so eliminates a lot of floating point error, too.) Some more tricks can help:
(1) To solve $\tan(t) = t / \alpha$ for a positive constant $\alpha$, a good starting value to search--and an excellent approximation for the $n^\text{th}$ largest root-- is $t = (n + 1/2)\pi - \frac{\alpha}{(n + 1/2)\pi}$. I suspect Newton-Raphson should work really well.
(2) After a small number of initial terms, the sums of pairs start decreasing in size very, very consistently. The logarithms of the absolute values of exponentially-spaced pairs quickly decrease almost linearly. This means you can interpolate among a very small number of calculated pair-sums to estimate all the pair-sums you did not compute. For example, by computing the values for only pairs (2,3), (4,5), (8,9), (16,17), ..., (16384, 16385) and constructing the interpolating polynomial for these (thought of as the values of a function at 1, 2, ..., 14) and using the arguments $h = \mu = \sigma = 1$, I was able to achieve six-figure precision for the worst-case errors. (Even nicer, the errors oscillate in sign, suggesting the precision in the summed interpolated values might be quite a bit better than six figures.) You could probably estimate the limiting sum to good precision by extrapolating linearly off the end of these values (which translates to a power law) and integrating the extrapolating function out to infinity. To complete this example calculation you also need the first term. That gives six-figure precision by means of only 29 computed terms in the summation.
(3) Note that the function really depends on $h/\sigma$ and $\mu/\sigma$, not on all three of these variables independently. The dependence on $T$ is weak (as it should be); you might be content to fix its value throughout all your calculations.
(4) On top of all this, consider using some series-acceleration methods, like Aitken's method. A good accounting of this appears in Numerical Recipes.
Added
(5) You can estimate the tail of the sum with an integral. Upon writing $\theta_n = (n + 1/2)\pi - 1/t_n$, the equation $\tan(\theta_n) = \theta_n / \alpha$ (with $\alpha = \mu h / \sigma^2$) can be solved for $t_n$, which is small, and then for $\theta_n$ by substituting back. Expanding the tangent in a Taylor series in $t_n$ gives the approximate solution
$$\theta_n = z - \frac{\alpha}{ z} - \frac{\alpha^2 - \alpha^3/3 }{z^3} + O\left((\frac{\alpha}{n})^5\right)$$
where $z = (n + 1/2)\pi$.
Provided $n$ is sufficiently large, the exponential factors of the form $1 - \exp(\frac{-\sigma^2 \theta_n^2 T}{2 h^2}) \exp(\frac{-\mu^2 T}{2 \sigma^2})$ become extremely close to 1 so you can neglect them. Typically these terms can be neglected even for small $n$ because $\theta_n^2$ is $\Theta\left(n^2\right)$, making the first exponential go to zero extremely quickly. (This happens once $n$ substantially exceeds $\alpha / T^{1/2}$. Do your calculations for large $T$ if you can!)
Using this expression for $\theta_n$ to sum the terms for $n$ and $n+1$ lets us approximate them (once all the smoke clears) as
$$\frac{2}{\pi n^2}-\frac{4}{\pi n^3}+\frac{13 \pi ^2+6 (4-3 \alpha ) \alpha }{2 \pi ^3 n^4}+O\left(\frac{1}{n^5}\right) \text{.}$$
Replacing the sum starting at $n = 2N$ by an integral over $N$ starting at $N - 1/4$ approximates the tail. (The integral has to be multiplied by a common factor of $\exp(-\alpha)$.) The error in the integral is $O(1/n^4)$. Thus, to achieve three significant figures you will typically need to compute around eight or so of the terms in the sum and then add this tail approximation. | Computing the cumulative distribution of max drawdown of random walk with drift
This is an alternating sum. Each successive pair almost cancels; such pair-sums eventually decrease monotonically.
One approach, then, is to compute the sum in pairs where $n$ = {1,2}, {3,4}, {5,6}, |
32,582 | Computing the cumulative distribution of max drawdown of random walk with drift | You might start by looking at the drawdown distribution functions in fBasics. So you could easily simulate the Brownian motion with drift and apply these functions as a start.
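If R/fBasics isn't at hand, the same empirical check can be sketched in plain Python (the step count, drift, and volatility below are arbitrary illustration values, not from the question):

```python
import math
import random

random.seed(2)

def max_drawdown(mu=0.05, sigma=1.0, T=1.0, steps=1000):
    # One Euler path of Brownian motion with drift; track running peak minus level.
    dt = T / steps
    x = peak = mdd = 0.0
    for _ in range(steps):
        x += mu * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        peak = max(peak, x)
        mdd = max(mdd, peak - x)
    return mdd

draws = [max_drawdown() for _ in range(2000)]
h = 1.0
print("empirical P(MDD <= %.1f) ~ %.3f" % (h, sum(d <= h for d in draws) / len(draws)))
```

The empirical CDF of `draws` can then be compared pointwise against the series formula discussed in the other answer.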
32,583 | Bayesian rating system with multiple categories for each rating | It depends on whether you want to wind up only with a cumulative rating of each object, or category-specific ratings. Having a separate system in each category sounds more realistic, but your particular context might suggest otherwise. You could even do both a category-specific and overall rating!
32,584 | Recommendations - or best practices - for analyzing non-independent data. Specific example relating to pain perception data provided | If temperature levels X1...X5 are specific degree values, I'm not sure how temperature and stimulus (hot/cold) can be completely crossed. I presume then that "temperature" consists of 5 ordinal categories, ranging from "likely to cause minimal discomfort" to "likely to cause maximum discomfort permissible by my research ethics board", which then permits crossing with hot/cold direction.
If this is the case, you are correct that the 3 response variables might not be expected to be independent, so you might increase your power (relative to 3 separate analyses) if you do a multivariate test. Frankly, I must admit that my first inclination would be to simply do three separate analyses, one for each response variable, but that's because I'm not very knowledgeable with regards to the mechanics of multivariate tests. That said, given the likely dependence between the 3 response variables, significant results across 3 separate ANOVAs might be most reasonably considered as manifestations of this dependence rather than 3 truly independent sets of phenomena.
I am pretty sure that you shouldn't lump all 3 response variables into a single univariate analysis by adding "response variable type" as a predictor variable; this approach could run into trouble if the scales and variances of your response variables are very different. Presumably, a multivariate analysis has features that take such differences into account. (However, I wonder if a mixed effects analysis might also be able to handle such differences? I'm new to mixed effects, but suspect it might...)
32,585 | Recommendations - or best practices - for analyzing non-independent data. Specific example relating to pain perception data provided | As to your suggestion of using Multi-level models in this and the other thread, I see no benefit of approaching your analysis in this manner over repeated measures ANOVA. MLM are simply an extension of OLS regression, and offer an explicit framework to model group level (often referred to as "contextual") effects on lower level estimates. I would guess from your example you have no specific "contexts" besides that measures are repeated within individuals, and this is accounted for within the repeated ANOVA design (and you're not interested in measuring effects of specific individuals anyway, only in controlling for this non-independence). Given your description, neither group nor gender is a "context" in this sense; they can only be direct effects (or at least you can only observe whether they have direct effects).
MLM just complicates things in this example IMO. It doesn't directly solve your problem that several dependent measures are non-independent, you can't measure any group level characteristic on your outcome because you only have two groups, and your hierarchy is cross-classified and has three levels (an observation is nested within only 1 individual, but the gender and group nestings are not mutually exclusive). All of the things you listed as purposes of the project can be accomplished using repeated ANOVA simply by including group, gender, and group*gender interaction effects into the models.
It is beyond the scope of your question, but I would never agree that gender is a group that observations are nested within.
32,586 | Recommendations - or best practices - for analyzing non-independent data. Specific example relating to pain perception data provided | I don't think that repeated measures is appropriate here - as I understand your protocol, the five temperatures (either hot or cold) are not repeated measures as they occur at different temperatures.
However, it does feel like you need some data reduction across the five levels in each temperature group. You might be well-served to consider some exploratory analysis to decide what the general shape of the relationship of pain perception to ordinal temperature input is across the hot and cold groups; then reduce that to a linear slope or exponential function; then model that single variable as your dependent variable with gender and A/B as factors.
Latent growth curve modeling might also be appropriate, but that is beyond me.
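A sketch of the data-reduction step suggested above (the ratings are made up; the helper is a plain OLS slope, one number per subject, which would then become the DV in a gender x group analysis):

```python
def ols_slope(xs, ys):
    # Least-squares slope of ys on xs (e.g., pain rating on temperature level 1..5).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

levels = [1, 2, 3, 4, 5]
ratings = [2.0, 3.1, 4.2, 5.0, 6.1]   # hypothetical ratings for one subject

print(ols_slope(levels, ratings))   # about 1.01 rating units per level
```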
| Recommendations - or best practices - for analyzing non-independent data. Specific example relating to pain perception data provided
I don't think that repeated measures is appropriate here - as I understand your protocol, the five temperatures (either hot or cold) are not repeated measures as they occur at different temperatures.
However, it does feel like you need some data reduction across the five levels in each temperature group. You might be well-served to consider some exploratory analysis to decide what the general shape of the relationship of pain perception to ordinal temperature input is across the hot and cold groups; then reduce that to a linear slope or exponential function; then model that single variable as your dependent variable with gender and A/B as factors.
Latent growth curve modeling might also be appropriate, but that is beyond me. | Recommendations - or best practices - for analyzing non-independent data. Specific example relating
I don't think that repeated measures is appropriate here - as I understand your protocol, the five temperatures (either hot or cold) are not repeated measures as they occur at different temperatures.
|
32,587 | Cross validation in very high dimension (to select the number of used variables in very high dimensional classification) | You miss one important issue -- there is almost never such a thing as T[i]. Think of a simple problem in which the sum of two attributes (of a similar amplitude) is important; if you removed one of them, the importance of the other would suddenly drop. Also, a big amount of irrelevant attributes hurts the accuracy of most classifiers, and with it their ability to assess importance. Last but not least, stochastic algorithms will return stochastic results, and so even the T[i] ranking can be unstable. So in principle you should at least recalculate T[i] after each (or at least after each non-trivially redundant) attribute is removed.
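A toy illustration (entirely made up, not the OP's T[i]) of why importances must be recalculated: with two nearly duplicate attributes, each looks dispensable while the other is present, and becomes critical only after its twin is removed:

```python
import random

random.seed(3)

def line_predictions(xs, ys):
    # Least-squares line y = a + b*x, evaluated back on xs.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return [a + b * x for x in xs]

def mse(ys, preds):
    return sum((y - p) ** 2 for y, p in zip(ys, preds)) / len(ys)

n = 500
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [v + random.gauss(0, 0.05) for v in x1]      # near-duplicate of x1
y = [a + b for a, b in zip(x1, x2)]

# "Importance" of x1 while x2 is still available: tiny, since a model
# refit on x2 alone predicts y almost perfectly.
imp_x1_with_x2 = mse(y, line_predictions(x2, y))

# Importance of x2 once x1 is gone (only an intercept model remains): huge.
mean_y = sum(y) / n
imp_x2_alone = mse(y, [mean_y] * n)

print(imp_x1_with_x2, imp_x2_alone)
```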
Going back to the topic, the question of which CV to choose is mostly problem dependent; with a very small number of cases LOO may be the best choice because all the others start to reduce to it; still, "small" here means more like n=10 than n=100. So I would just recommend random subsampling (which I use most) or K-fold (then with recreating splits on each step). Still, you should also collect not only the mean but also the standard deviation of the error estimates; this can be used to (approximately) judge which changes of the mean are significant and so help you decide when to cease the process.
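A minimal random-subsampling loop collecting both the mean and standard deviation of the error estimate (the threshold "classifier" and data below are toys of my own, standing in for whatever classifier the OP uses):

```python
import random
import statistics

random.seed(4)

# Toy data: one feature; the label follows the sign of the feature plus noise.
data = [(x, int(x + random.gauss(0.0, 0.5) > 0.0))
        for x in (random.gauss(0.0, 1.0) for _ in range(300))]

def error_rate(train, test):
    # Toy classifier: threshold at the midpoint of the class means.
    m0 = statistics.mean(x for x, lab in train if lab == 0)
    m1 = statistics.mean(x for x, lab in train if lab == 1)
    cut = (m0 + m1) / 2.0
    return statistics.mean(int((x > cut) != (lab == 1)) for x, lab in test)

errors = []
for _ in range(30):                 # 30 random train/test splits
    random.shuffle(data)
    train, test = data[:200], data[200:]
    errors.append(error_rate(train, test))

print(statistics.mean(errors), statistics.stdev(errors))
```

The standard deviation across splits is exactly the quantity the answer suggests using to judge whether a change in mean error after removing attributes is meaningful.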
32,588 | Cross validation in very high dimension (to select the number of used variables in very high dimensional classification) | That's a good question, and that tends to hit more of what's referred to as ensemble learners and model averaging (I'll provide links below):
When you're in high dimensional settings, the stability of your solution (i.e., which features/variables are selected) may be lacking because individual models may choose one among many collinear, exchangeable variables that by and large carry the same signal (among other reasons). Below are a couple of strategies on how to address this.
In Bayesian model averaging, for example,
Hoeting, Jennifer A., et al. "Bayesian model averaging: a tutorial." Statistical science (1999): 382-401.
you construct many models (say 100), each of which is built with a subset of the original features. Then each individual model determines which of the variables it saw were significant, and each model is weighted by its data likelihood, giving you a nice summary of how to "judge" the effectiveness of variables in a "cross-validation" sort of way. If you know a priori that some features are highly correlated, you can induce a sampling scheme such that they are never selected together (or, if you have a block-correlation structure, choose elements from different blocks of your variance-covariance matrix).
In a machine learning type setting: look at "ensemble feature selection". This paper (one example)
Neumann, Ursula, Nikita Genze, and Dominik Heider. "EFS: an ensemble feature selection tool implemented as R-package and web-application." BioData mining 10.1 (2017): 21.
determines feature significance across a variety of "importance" metrics to make the final feature selection.
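Both strategies boil down to the same resampling trick: score features on many resamples, then aggregate. A stdlib-only sketch (the absolute-correlation score below is a hypothetical stand-in for whatever importance metric your models actually produce):

```python
import random
import statistics

def ensemble_rank(X, y, n_resamples=40, seed=0):
    """Score each feature by its mean |correlation| with y across bootstrap
    resamples; averaging over resamples stabilises the ranking."""
    rng = random.Random(seed)
    n, p = len(X), len(X[0])

    def abs_corr(a, b):
        ma, mb = statistics.mean(a), statistics.mean(b)
        cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
        va = sum((u - ma) ** 2 for u in a)
        vb = sum((v - mb) ** 2 for v in b)
        return abs(cov / ((va * vb) ** 0.5)) if va > 0 and vb > 0 else 0.0

    scores = [0.0] * p
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]      # bootstrap resample
        yb = [y[i] for i in idx]
        for j in range(p):
            scores[j] += abs_corr([X[i][j] for i in idx], yb) / n_resamples
    return sorted(range(p), key=lambda j: -scores[j])    # best feature first

# Toy data: feature 0 drives y, features 1-3 are pure noise.
rng = random.Random(1)
X = [[rng.gauss(0, 1) for _ in range(4)] for _ in range(200)]
y = [row[0] + 0.3 * rng.gauss(0, 1) for row in X]
ranking = ensemble_rank(X, y)
```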
I would say that the machine learning route might be better b/c linear models (w/ feature selection) saturate at p = n b/c of their optimization re-formulation (see this post: "If p > n, the lasso selects at most n variables"). But as long as you can define and justify a good objective criterion for how you "cross-validate" the feature selection, you're off to a good start.
Hope this helps!
32,589 | Automating statistical correlation between "texts" and "data" | My students do this as their class project. A few teams hit the 70%s for accuracy, with pretty small samples, which ain't bad.
Let's say you have some data like this:
Return  Symbol  News Text
  -4%   DELL    Centegra and Dell Services recognized with Outsourcing Center's...
   7%   MSFT    Rising Service Revenues Benefit VMWare
   1%   CSCO    Cisco Systems (CSCO) Receives 5 Star Strong Buy Rating From S&P
   4%   GOOG    Summary Box: Google eyes more government deals
   7%   AAPL    Sohu says 2nd-quarter net income rises 10 percent on higher...
You want to predict the return based on the text.
This is called Text Mining.
What you do ultimately is create an enormous matrix like this:
Return  Centegra  Rising  Services  Recognized  ...
  -4%   0.23      0       0.11      0.34
   7%   0         0.1     0.23      0
   ...
That has one column for every unique word, one row for each return, and a weighted score for each word. The score is often the TF-IDF score, or the relative frequency of the word in the doc.
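The matrix construction can be sketched in a few lines (a toy Python version; real pipelines would use something like scikit-learn's TfidfVectorizer, which applies smoothed variants of the same idea):

```python
import math
from collections import Counter

docs = [
    "rising service revenues benefit vmware",
    "google eyes more government deals",
    "rising revenues benefit google",
]

def tfidf_matrix(docs):
    """Relative term frequency times inverse document frequency."""
    tokenized = [d.split() for d in docs]
    vocab = sorted({w for toks in tokenized for w in toks})
    n = len(docs)
    # df: number of documents containing each word
    df = {w: sum(w in toks for toks in tokenized) for w in vocab}
    rows = []
    for toks in tokenized:
        tf = Counter(toks)
        rows.append([tf[w] / len(toks) * math.log(n / df[w]) for w in vocab])
    return vocab, rows

vocab, M = tfidf_matrix(docs)
```

Each row of M is then a feature vector you can regress the return on.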
Then you run a regression and see which words predict the return. You'll probably need to use PCA first to reduce the dimensionality.
Book: Fundamentals of Predictive Text Mining, Weiss
Software: RapidMiner with Text Plugin or R
You should also do a search on Google Scholar and read up on the ins and outs.
You can see my series of text mining videos here.
32,590 | Automating statistical correlation between "texts" and "data" | As per above, you need a set of articles and responses, and then you train, e.g., a neural net on them. RapidMiner will let you do this, but there are many other tools out there that will let you do regressions of this size. Ideally your response variable will be consistent (i.e., % change after exactly 1 hour, or % change after exactly 1 day, etc.).
You may also want to apply some sort of filtering or classification to your training variables, i.e., the words in the article. This could be as simple as filtering out some words (e.g., prepositions, pronouns) or more complex, like using syntax to choose which words should go into the regression. Note that any filtering you do risks biasing the result.
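The "simple filtering" option is literally a set lookup; the stopword list below is illustrative, not canonical:

```python
# Hypothetical minimal stopword filter; a real pipeline would also stem,
# strip punctuation, and use a curated stopword list.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "on", "for", "with"}

def filter_tokens(headline):
    """Lowercase, split on whitespace, and drop stopwords."""
    return [w for w in headline.lower().split() if w not in STOPWORDS]

tokens = filter_tokens("Google eyes more government deals for the cloud")
```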
Some folks at the University of Arizona have already made a system that does this -- their paper is here and you may find it interesting: http://www.computer.org/portal/web/csdl/doi/10.1109/MC.2010.2 (you'll need a subscription to access it if you're not, e.g., at a university). The references may also help point you in the right direction.
32,591 | Distribution of final piece in stick breaking process $x_{k+1} = (x_{k} -1) \cdot u_{k+1}$ | Simulation
We can simulate the distribution with computer code:
### function to perform a stick-breaking process
### (renamed from `sample` to avoid masking base R's sample())
break_stick = function(x = 10) {
  while (x > 1) {
    x = (x - 1) * runif(1)
  }
  return(x)
}

# create an empirical sample
n = 10^5
z = replicate(n, break_stick())
z = z[order(z)]
p = c(1:n)/n

# plot empirical distribution along with simple estimate
plot(z, p, type = "l",
     main = "Empirical distribution (black) \n compared to \n estimated CDF (red)",
     xlab = "x", ylab = "P(stick length < x)")
lines(z, z * (1 - 0.5 * log(z)), col = 2)
First level estimate
The distribution function can be estimated with a function $$F(x) = x(1-0.5 \log(x))$$
That function is the CDF of an equal-weight mixture of a uniform variable and the product of two uniform variables.
That mixture is based on the idea that you get
A uniform component for the cases where you end up below one, $x_{k+1}<1$, while the previous value was above two, $x_k > 2$.
You get a product-of-two-uniforms component for the cases where you end up below one while $1 < x_k \leq 2$ and also $x_{k-1}>3$. (That latter condition means that $x_k$ is uniformly distributed between one and two.)
There are some more components, if $1 < x_k \leq 2$ and also $x_{k-1}<3$, but those are more complicated to compute.
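A quick numerical cross-check of the first-level estimate (sketched in Python here rather than the R used above): the product of two independent U(0,1) variables has CDF $x - x\log x$, so the equal-weight mixture has CDF $x(1 - 0.5\log x)$:

```python
import math
import random

rng = random.Random(0)
N = 200_000

def ecdf(samples, x):
    return sum(s <= x for s in samples) / len(samples)

# Product of two independent U(0,1): the exact CDF is x - x*log(x).
prod = [rng.random() * rng.random() for _ in range(N)]

# Equal-weight mixture of U(0,1) and the product above: F(x) = x*(1 - 0.5*log(x)).
mix = [rng.random() if rng.random() < 0.5 else rng.random() * rng.random()
       for _ in range(N)]

x = 0.3
err_prod = abs(ecdf(prod, x) - (x - x * math.log(x)))
err_mix = abs(ecdf(mix, x) - x * (1 - 0.5 * math.log(x)))
```

Both empirical CDFs agree with the closed forms to Monte Carlo accuracy, consistent with the red curve in the plot above being only a first-level approximation of the true stick-length distribution.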
32,592 | Minimax estimator for geometric distribution | The weighted quadratic risk of an estimator $\delta$ is given by
$$R(p,\delta)=\sum_{x=0}^\infty (p-\delta(x))^2\times (1-p)^{x-1}$$
Since the first term of this sum is $(p-\delta(0))^2\times (1-p)^{-1}$, it diverges to infinity as $p$ goes to $1$, unless $\delta(0)=1$. In order to secure a finite minimax risk, the minimax estimator $\delta^\star$ must satisfy $$\delta(0)=1\tag{1}\,.$$ This implies that $R(1,\delta^\star)=0$.
Similarly, to achieve a finite risk at $p=0$,
$$R(0,\delta)=\underbrace{(0-\delta(0))^2(1-0)^{-1}}_{=1 \text{ by } (1)}+\sum_{x=1}^\infty \delta(x)^2<\infty\tag{2}$$
the series
$$\sum_{x=1}^\infty \delta(x)^2$$
must converge. If $\delta^\star(x)=0$ for $x>0$, this is obviously the case, with a risk function equal to
$$R(p,\delta^\star)=(1-p)+\sum_{x=1}^\infty p^2 (1-p)^{x-1}=1-p+p\underbrace{\sum_{x=0}^\infty p (1-p)^{x}}_{=1}=1$$
Therefore $\delta^\star$ has a constant risk function. And since (2) shows that $R(0,\delta)\ge 1$ for every estimator $\delta$ satisfying (1), $\delta^\star$ is minimax.
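A quick truncated-series check of the constant-risk claim (illustrative Python, not part of the proof):

```python
# Truncated-series sanity check: the estimator with delta(0) = 1 and
# delta(x) = 0 for x > 0 should have risk 1 for every p.
def risk(p, delta, terms=5000):
    return sum((p - delta(x)) ** 2 * (1 - p) ** (x - 1) for x in range(terms))

delta_star = lambda x: 1.0 if x == 0 else 0.0
risks = [risk(p, delta_star) for p in (0.1, 0.5, 0.9)]
```

The truncation error is geometric, so 5000 terms is far more than enough for these values of p.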
32,593 | Intuition about the coupon collector problem approaching a Gumbel distribution | Below is a bastardized short version of the connection made in the paper by Holst:
The connection with the Gumbel distribution is made with the following steps...
Viewing the waiting time to collect all the coupons in terms of the individual waiting times to collect each individual coupon.
The waiting time to collect all coupons is the maximum of these individual waiting times.
The individual waiting times are approximately independent and exponentially distributed (this independence is still fuzzy to me).
Then the waiting time to collect all coupons approaches an extreme value distribution. Since the distributions involved are approximately exponential distributions, the limiting distribution will be a Gumbel distribution.
An illustration relating to Holst's approach is in this post on mathematics.
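The heuristic can also be checked by simulation (a Python sketch; the coupon count and replication count are arbitrary): normalising the collection time $T_n$ as $(T_n - n\log n)/n$ should give approximately a standard Gumbel, whose mean is the Euler–Mascheroni constant $\gamma \approx 0.577$ and whose standard deviation is $\pi/\sqrt{6} \approx 1.28$:

```python
import math
import random
import statistics

rng = random.Random(0)
n, reps = 200, 2000

def collect_time(n, rng):
    """Draw coupons uniformly at random until all n distinct ones are seen."""
    seen, t = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        t += 1
    return t

# Normalised waiting times, which should be approximately Gumbel(0, 1).
zs = [(collect_time(n, rng) - n * math.log(n)) / n for _ in range(reps)]
mean_z = statistics.mean(zs)
sd_z = statistics.pstdev(zs)
```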
32,594 | an upper bound of mean absolute difference? | Theorem 3.3 from p. 86 of "Cerone, Pietro, and Sever S. Dragomir. "A survey on bounds for the Gini Mean Difference." Advances in Inequalities from Probability Theory and Statistics (2008)" states that
$$R_G(f) \le \frac{2}{(q+1)^{1/q}}\left[M_{E,p}(f)\right]^{1/p}$$
where $R_G(f)=\frac{1}2 E|X-Y|$, $p>1$, $1/p+1/q=1$, and $M_{E,p}(f)=E\left[|X-\mu|^{p}\right]$.
The proof is short and uses Hölder's inequality.
Now, Remark 3.2 says to take $p=q=2$ in the inequality to find
$$R_G(f) \le \frac{2}{\sqrt{3}}\sigma$$
The reference says this inequality is known and refers to
https://galton.uchicago.edu/~wichura/stat304/handouts/L09.means3.pdf
But, I could not access that website.
It also states the upper bound is obtained for the Unif(0,1) distribution.
It seems like there is a misprint in the reference because I think the inequality should be $R_G(f) \le \frac{1}{\sqrt{3}}\sigma$. There is a $\frac{1}{2}$ included as part of the definition of the Gini mean difference $R_G(f)$.
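A quick Monte Carlo check is consistent with the misprint reading: for $X,Y\sim U(0,1)$ we have $E|X-Y|=1/3$ and $\sigma=1/\sqrt{12}$, so $R_G = 1/6$ attains $\sigma/\sqrt{3}$ exactly, while $2\sigma/\sqrt{3} = 1/3$ is loose by a factor of two:

```python
import math
import random

rng = random.Random(0)
N = 200_000

# Monte Carlo estimate of E|X - Y| for X, Y ~ U(0,1); the exact value is 1/3.
mad = sum(abs(rng.random() - rng.random()) for _ in range(N)) / N
R_G = 0.5 * mad                    # Gini mean difference as defined above
sigma = 1 / math.sqrt(12)          # standard deviation of U(0,1)

bound_as_printed = (2 / math.sqrt(3)) * sigma   # = 1/3: holds, but not attained
bound_corrected = sigma / math.sqrt(3)          # = 1/6: attained by U(0,1)
```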
32,595 | Working with Time Series data: splitting the dataset and putting the model into production | For standard statistical methods (ARIMA, ETS, Holt-Winters, etc...)
I don't recommend any form of cross-validation (even time series cross-validation is a little tricky to use in practice). Instead, use a simple test/train split for experiments and initial proofs of concept, etc...
Then, when you go to production, don't bother with a train/test/evaluate split at all. As you pointed out correctly, you don't want to lose valuable information present in the last 90 days. Instead, in production you train multiple models on the entire data set, and then choose the one that gives you the lowest AIC or BIC.
This approach (try multiple models, then pick the one with the lowest information criterion) can be thought of intuitively as grid search with MSE/L2-style regularization.
In the large data limit, the AIC is equivalent to leave one out CV, and the BIC is equivalent to K-fold CV (if I recall correctly). See chapter 7 of Elements of Statistical Learning, for details and a discussion in general of how to train models without using a test set.
This approach is used by most production-grade demand forecasting tools, including the one my team uses. For developing your own solution, if you are using R, then the auto.arima and ETS functions from the Forecast and Fable packages will perform this AIC/BIC optimization for you automatically (and you can also tweak some of the search parameters manually as needed).
If you are using Python, then the ARIMA and Statespace APIs will return the AIC and BIC for each model you fit, but you will have to do the grid-search loop yourself. There are some packages that perform automatic time series model selection similar to auto.arima, but last I checked (a few months back) they weren't mature yet (definitely not production grade).
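The "fit several candidates, keep the lowest AIC" loop is easy to sketch without any forecasting library (simple exponential smoothing with a Gaussian AIC here; the toy series and candidate grid below are made up):

```python
import math

def ses_sse(y, alpha):
    """One-step-ahead SSE for simple exponential smoothing with parameter alpha."""
    level, sse = y[0], 0.0
    for obs in y[1:]:
        sse += (obs - level) ** 2
        level = alpha * obs + (1 - alpha) * level
    return sse

def gaussian_aic(sse, n, k):
    # AIC up to an additive constant, assuming Gaussian one-step errors.
    return n * math.log(sse / n) + 2 * k

y = [112.0, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118,
     115, 126, 141, 135, 125, 149, 170, 170, 158, 133, 114, 140]

candidates = [0.1, 0.3, 0.5, 0.7, 0.9]
aics = {a: gaussian_aic(ses_sse(y, a), len(y) - 1, k=1) for a in candidates}
best_alpha = min(aics, key=aics.get)
```

In production the grid would range over model families (ARIMA orders, ETS variants) rather than a single smoothing parameter, but the selection logic is the same.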
For LSTM based forecasting, the philosophy will be a little different.
For experiments and proof of concept, again use a simple train/test split (especially if you are going to compare against other models like ARIMA, ETS, etc...) - basically what you describe in your second option.
Then bring in your whole dataset, including the 90 days you originally left out for validation, and apply some Hyperparameter search scheme to your LSTM with the full data set. Bayesian Optimization is one of the most popular hyperparameter tuning approaches right now.
Once you've found the best Hyperparameters, then deploy your model to production, and start scoring its performance.
Here is one important difference between LSTM and Statistical models:
Usually statistical models are re-trained every time new data comes in (for the various teams I have worked for, we retrain the models every week or sometimes every night - in production we always use different flavors of exponential smoothing models).
You don't have to do this for LSTM; instead you need only retrain it every 3~6 months, or you can automatically re-trigger the retraining process whenever the performance monitoring indicates that the error has gone above a certain threshold.
BUT - and this is a very important BUT!!!! - you can do this only because your LSTM has been trained on several hundred or thousand products/time series simultaneously, i.e. it is a global model. This is why it is "safe" to not retrain an LSTM so frequently: it has already seen so many previous examples of time series that it can pick up on trends and changes in a newer product without having to adapt to the local, time-series-specific dynamics.
Note that because of this, you will have to include additional product features (product category, price, brand, etc...) in order for the LSTM to learn the similarities between the different products. LSTM only performs better than statistical methods in demand forecasting if it is trained on a large set of different products. If you train a separate LSTM for each individual time series product, then you will almost certainly end up overfitting, and a statistical method is guaranteed to work better (and is easier to tune because of the above mentioned IC trick).
To recap:
In both cases, do retrain on the entire data set, including the 90-day validation set, after doing your initial train/validation split.
For statistical methods, use a simple time series train/test split for some initial validations and proofs of concept, but don't bother with CV for Hyperparameter tuning. Instead, train multiple models in production, and use the AIC or the BIC as metric for automatic model selection. Also, perform this training and selection as frequently as possible (i.e. each time you get new demand data).
For LSTM, train a global model on as many time series and products as you can, and using additional product features so that the LSTM can learn similarities between products. This makes it safe to retrain the model every few months, instead of every day or every week. If you can't do this (because you don't have the extra features, or you only have a limited number of products, etc...), don't bother with LSTM at all, and stick with statistical methods instead.
Finally, look at hierarchical forecasting, which is another approach that is very popular for demand forecasting with multiple related products.
I don't recommend any form of cross-validation (even time series cross-validation is a little tricky to use in practice). Instead, | Working with Time Series data: splitting the dataset and putting the model into production
For standard statistical methods (ARIMA, ETS, Holt-Winters, etc...)
I don't recommend any form of cross-validation (even time series cross-validation is a little tricky to use in practice). Instead, use a simple test/train split for experiments and initial proofs of concept, etc...
Then, when you go to production, don't bother with a train/test/evaluate split at all. As you pointed out correctly, you don't want to loose valuable information present in the last 90 days. Instead, in production you train multiple models on the entire data set, and then choose the one that gives you the lowest AIC or BIC.
This approach, try multiple models then and pick the one with the lowest Information Criterion, can be thought of as intuitively using Grid Search/MSE/L2 regularization.
In the large data limit, the AIC is equivalent to leave one out CV, and the BIC is equivalent to K-fold CV (if I recall correctly). See chapter 7 of Elements of Statistical Learning, for details and a discussion in general of how to train models without using a test set.
This approach is used by most production grade demand forecasting tools, [including the one my team uses][1]. For developing your own solution, if you are using R, then auto.arima and ETS functions from the Forecast and Fable packages will perform this AIC/BIC optimization for you automatically (and you can also tweak some of the search parameters manually as needed, increase).
If you are using Python, then the ARIMA and Statespace APIs will return the AIC and BIC for each model you fit, but you will have to do the grid-search loop your self. There are some packages that perform auto-metic time series model selection similar to auto.arima, but last I checked (a few months back) they weren't mature yet (definitely not production grade).
For LSTM based forecasting, the philosophy will be a little different.
For experiments and proof of concept, again use a simple train/test split (especially if you are going to compare against other models like ARIMA, ETS, etc...) - basically what you describe in your second option.
Then bring in your whole dataset, including the 90 days you originally left out for validation, and apply some Hyperparameter search scheme to your LSTM with the full data set. Bayesian Optimization is one of the most popular hyperparameter tuning approaches right now.
Once you've found the best Hyperparameters, then deploy your model to production, and start scoring its performance.
Here is one important difference between LSTM and Statistical models:
Usually statistical models are re-trained every time new data comes in (for the various teams I have worked for, we retrain the models every week or sometimes every night - in production we always use different flavors of exponential smoothing models).
You don't have to do this for LSTM, instead you need only retrain it every 3~6 months, or maybe you can automatically re-trigger the retraining process when ever the performance monitoring indicates that the error has gone above a certain threshold.
BUT - and this is a very important BUT!!!! - you can do this only because your LSTM has been trained on several hundred or thousand products/time series simultaneously, i.e. it is a global model. This is why it is "safe" to not retrain an LSTM so frequently, it has already seen so many previous examples of time series that it can pick on trends and changes in a newer product without having to adapt the local time series specific dynamic.
Note that because of this, you will have to include additional product features (product category, price, brand, etc...) in order for the LSTM to learn the similarities between the different product. LSTM only performs better than statistical methods in demand forecasting if it is trained on a large set of different products. If you train a separate LSTM for each individual time series product, then you will almost certainly end up overfitting, and a statistical method is guaranteed to work better (and is easier to tune because of the above mentioned IC trick).
To recap:
In both cases, do retrain on the entire data set, including the 90s days validation set, after doing your initial train/validation split.
For statistical methods, use a simple time series train/test split for some initial validations and proofs of concept, but don't bother with CV for Hyperparameter tuning. Instead, train multiple models in production, and use the AIC or the BIC as metric for automatic model selection. Also, perform this training and selection as frequently as possible (i.e. each time you get new demand data).
For LSTM, train a global model on as many time series and products as you can, and using additional product features so that the LSTM can learn similarities between products. This makes it safe to retrain the model every few months, instead of every day or every week. If you can't do this (because you don't have the extra features, or you only have a limited number of products, etc...), don't bother with LSTM at all, and stick with statistical methods instead.
Finally, look at hierarchical forecasting, which is another approach that is very popular for demand forecasting with multiple related products. | Working with Time Series data: splitting the dataset and putting the model into production
For standard statistical methods (ARIMA, ETS, Holt-Winters, etc...)
I don't recommend any form of cross-validation (even time series cross-validation is a little tricky to use in practice). Instead, |
32,596 | Working with Time Series data: splitting the dataset and putting the model into production | simply select the forecast horizon based upon how often you will update your forecasts. Assume you have 200 observations and plan on reforecasting every 7 periods. Now take the first 193 values and predict the observations for periods 194-200. Then take the first 186 values and predict the observations for periods 187-193. In this way all of your history is used to obtain a model and parameters to predict the next 7 values from K origins (test points).
Now at each point in the future remodel using all of the known data to predict the next 7 values.
It is important to note that one can specify a model or allow empirical identification a la https://autobox.com/pdfs/ARIMA%20FLOW%20CHART.pdf at each of the test points in order to provide a measure of the expected inadequacy/adequacy.
In this way your model is DYNAMIC and is identified based upon all of the historical data.
Now what I suggest is that at each model-building stage you EXPLICITLY test for constancy of parameters AND constancy of the error variance in order to yield a useful model AND respond to model dynamics (changes). In this way you are effectively discarding data that is no longer relevant, as things may have changed such that older data needs to be put aside (parameter constancy) or at least modified via variance-stabilizing weights (GLS). | Working with Time Series data: splitting the dataset and putting the model into production | simply select the forecast horizon based upon how often you will update your forecasts. Assume you have 200 observations and plan on reforecasting every 7 periods. Now take the first 193 values and | Working with Time Series data: splitting the dataset and putting the model into production
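The rolling-origin scheme described above (200 observations, a 7-period horizon, origins at 193, 186, ...) can be sketched in Python. This is a minimal illustration: the mean forecaster below is a placeholder, not the answerer's method, which refits and re-identifies a full model at each origin.

```python
import numpy as np

def rolling_origin_backtest(y, horizon=7, n_origins=3, forecaster=None):
    """Evaluate a forecaster from several origins: at each origin, fit on all
    history up to that point and predict the next `horizon` values."""
    if forecaster is None:
        # Naive placeholder: forecast the mean of the training history.
        forecaster = lambda history, h: np.full(h, history.mean())
    errors = []
    for i in range(n_origins):
        origin = len(y) - horizon * (i + 1)   # 193, 186, 179, ... for 200 obs
        train, test = y[:origin], y[origin : origin + horizon]
        preds = forecaster(train, horizon)
        errors.append(np.mean(np.abs(preds - test)))
    return np.mean(errors)
```

In production, the same loop runs forward instead of backward: each time 7 new periods arrive, refit on all known data and forecast the next 7.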
simply select the forecast horizon based upon how often you will update your forecasts. Assume you have 200 observations and plan on reforecasting every 7 periods. Now take the first 193 values and predict the observations for periods 194-200. Then take the first 186 values and predict the observations for periods 187-193. In this way all of your history is used to obtain a model and parameters to predict the next 7 values from K origins (test points).
Now at each point in the future remodel using all of the known data to predict the next 7 values.
It is important to note that one can specify a model or allow empirical identification a la https://autobox.com/pdfs/ARIMA%20FLOW%20CHART.pdf at each of the test points in order to provide a measure of the expected inadequacy/adequacy.
In this way your model is DYNAMIC and is identified based upon all of the historical data.
Now what I suggest is that at each model-building stage you EXPLICITLY test for constancy of parameters AND constancy of the error variance in order to yield a useful model AND respond to model dynamics (changes). In this way you are effectively discarding data that is no longer relevant, as things may have changed such that older data needs to be put aside (parameter constancy) or at least modified via variance-stabilizing weights (GLS).
simply select the forecast horizon based upon how often you will update your forecasts. Assume you have 200 observations and plan on reforecasting every 7 periods. Now take the first 193 values and
32,597 | Is there a stronger Universal Approximation Theorem for LSTMs? | There are previous results that RNNs are Turing complete: Siegelmann 1992, Korsky 2019. (They have some slight technical differences.) I found this answer which goes into more technical detail about the proof.
Turing completeness should be a stronger statement than universal approximation, since there are perfectly computable but unbounded or discontinuous functions.
I am aware of the results that any computable real-valued total function is continuous (see here). I don't really have the expertise to comment on this, except to say that I'm fairly sure Turing completeness is strictly more powerful than universal approximation.
There are previous results that RNNs are Turing complete: Siegelmann 1992, Korsky 2019. (They have some slight technical differences.) I found this answer which goes into more technical detail about the proof.
Turing completeness should be a stronger statement than universal approximation, since there are perfectly computable but unbounded or discontinuous functions.
I am aware of the results that any computable real-valued total function is continuous (see here). I don't really have the expertise to comment on this, except to say that I'm fairly sure Turing completeness is strictly more powerful than universal approximation.
There are previous results that RNNs are turing complete: Siegelmann 1992, Korsky 2019. (They have some slight technical differences). I found this answer which goes into more technical detail about t |
32,598 | Why is the autoencoder decoder usually the reverse architecture as the encoder? | Your intuition is correct, but it's not in the right context. For starters, let's define "high-quality features" as features that can be recycled for training other models, e.g. transferable. For example, training an (unlabeled) encoder on ImageNet could help give a solid baseline for classification on ImageNet, and on other image datasets.
Most classical autoencoders are trained on some form of (regularized) L2 loss. This means that after encoding a representation, the decoder must then reproduce the original image and is penalized based on the error of every single pixel. While regularization can help here, this is why you tend to get fuzzy images. The issue is that the loss is not semantic: it doesn't care that humans have ears, but does care that skin color tends to be uniform across the face. So if you were to replace the decoder with something really simple, the representation will likely focus on getting the average color right in each region of the image (whose size will roughly be proportional to the complexity of your decoder, and inversely proportional to your hidden layer size).
On the other hand, there are numerous general self-supervised techniques that can learn higher quality semantic features. The key here is to find a better loss function. You can find a really nice set of slides by Andrew Zisserman here. A simple example is a siamese network trained to predict the relative position of pairs of random crops:
In the above, the first crop is of the cat's face, and the network needs to predict that the ear crop should occur north-east of the cat's face. Note that the crops are chosen randomly, and the trick is to balance the minimum and maximum distance between crops so that related crops occur often.
In other words, the network uses a shared encoder and a rudimentary classifier to compare embeddings of different crops. This forces the network to learn what a cat really is as opposed to a soft-set of average colors and feature shapes.
You'll find plenty more examples in the above slides, which also show that these embeddings transfer considerably better than rote autoencoders when trained to predict classes. | Why is the autoencoder decoder usually the reverse architecture as the encoder? | Your intuition is correct, but it's not in the right context. For starters, let's define "high-quality features" as features that can be recycled for training other models, e.g. transferable. For exam | Why is the autoencoder decoder usually the reverse architecture as the encoder?
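The data side of the relative-position pretext task can be sketched as follows. This is a minimal numpy illustration under stated assumptions: grayscale images, a fixed crop size and gap, and invented function names; the actual method (Doersch et al. style) also adds per-crop jitter and color tricks to prevent shortcut solutions.

```python
import numpy as np

# The 8 possible positions of the neighbour crop relative to the anchor.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1),
           ( 0, -1),          ( 0, 1),
           ( 1, -1), ( 1, 0), ( 1, 1)]

def sample_crop_pair(image, crop=32, gap=4, rng=np.random):
    """Return (anchor, neighbour, label): two crops from one image and the
    index of the neighbour's relative position, the self-supervised target."""
    h, w = image.shape[:2]
    step = crop + gap                       # the gap keeps the task non-trivial
    label = rng.randint(len(OFFSETS))
    dy, dx = OFFSETS[label]
    # Sample the anchor so every neighbour position stays inside the image.
    top = rng.randint(step, h - crop - step + 1)
    left = rng.randint(step, w - crop - step + 1)
    anchor = image[top : top + crop, left : left + crop]
    ny, nx = top + dy * step, left + dx * step
    neighbour = image[ny : ny + crop, nx : nx + crop]
    return anchor, neighbour, label
```

A siamese encoder then embeds both crops, and a small classifier head predicts `label` from the concatenated embeddings; only the encoder is kept for transfer.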
Your intuition is correct, but it's not in the right context. For starters, let's define "high-quality features" as features that can be recycled for training other models, e.g. transferable. For example, training an (unlabeled) encoder on ImageNet could help give a solid baseline for classification on ImageNet, and on other image datasets.
Most classical autoencoders are trained on some form of (regularized) L2 loss. This means that after encoding a representation, the decoder must then reproduce the original image and is penalized based on the error of every single pixel. While regularization can help here, this is why you tend to get fuzzy images. The issue is that the loss is not semantic: it doesn't care that humans have ears, but does care that skin color tends to be uniform across the face. So if you were to replace the decoder with something really simple, the representation will likely focus on getting the average color right in each region of the image (whose size will roughly be proportional to the complexity of your decoder, and inversely proportional to your hidden layer size).
On the other hand, there are numerous general self-supervised techniques that can learn higher quality semantic features. The key here is to find a better loss function. You can find a really nice set of slides by Andrew Zisserman here. A simple example is a siamese network trained to predict the relative position of pairs of random crops:
In the above, the first crop is of the cat's face, and the network needs to predict that the ear crop should occur north-east of the cat's face. Note that the crops are chosen randomly, and the trick is to balance the minimum and maximum distance between crops so that related crops occur often.
In other words, the network uses a shared encoder and a rudimentary classifier to compare embeddings of different crops. This forces the network to learn what a cat really is as opposed to a soft-set of average colors and feature shapes.
You'll find plenty more examples in the above slides, which also show that these embeddings transfer considerably better than rote autoencoders when trained to predict classes.
Your intuition is correct, but it's not in the right context. For starters, let's define "high-quality features" as features that can be recycled for training other models, e.g. transferable. For exam |
32,599 | Why is the autoencoder decoder usually the reverse architecture as the encoder? | I wonder if part of the reason might be historical (apparently Hinton's 2006 paper showed it done this way), and because (I believe) it was/is common to tie the weights. I.e. the decoder is using the same weights as the encoder, and they are effectively being learned together.
This question and answer https://stackoverflow.com/q/36889732/841830 discuss the advantages of using tied weights. And some more background here: https://amiralavi.net/blog/2018/08/25/tied-autoencoders | Why is the autoencoder decoder usually the reverse architecture as the encoder? | I wonder if part of the reason might be historical (apparently Hinton's 2006 paper showed it done this way), and because (I believe) it was/is common to tie the weights. I.e. the decoder is using the | Why is the autoencoder decoder usually the reverse architecture as the encoder?
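The tied-weights idea can be sketched as a forward pass in numpy: the decoder reuses the encoder's weight matrix transposed, so encoder and decoder are learned together through one shared parameter. This is an illustrative sketch only (names and sizes are arbitrary, and a real implementation would include the training loop).

```python
import numpy as np

def tied_autoencoder_forward(x, W, b_enc, b_dec):
    """One-hidden-layer autoencoder with tied weights: the decoder uses W.T,
    so there is a single weight matrix shared by both halves."""
    h = np.tanh(x @ W + b_enc)      # encode: (n, d) @ (d, k) -> (n, k)
    x_hat = h @ W.T + b_dec         # decode: (n, k) @ (k, d) -> (n, d)
    return h, x_hat

rng = np.random.default_rng(0)
d, k = 8, 3                          # input dim, bottleneck dim
W = rng.normal(scale=0.1, size=(d, k))
x = rng.normal(size=(5, d))
h, x_hat = tied_autoencoder_forward(x, W, np.zeros(k), np.zeros(d))
```

Tying halves the parameter count and forces the decoder's architecture to mirror the encoder's, which is one concrete reason the reversed shape became conventional.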
I wonder if part of the reason might be historical (apparently Hinton's 2006 paper showed it done this way), and because (I believe) it was/is common to tie the weights. I.e. the decoder is using the same weights as the encoder, and they are effectively being learned together.
This question and answer https://stackoverflow.com/q/36889732/841830 discuss the advantages of using tied weights. And some more background here: https://amiralavi.net/blog/2018/08/25/tied-autoencoders | Why is the autoencoder decoder usually the reverse architecture as the encoder?
I wonder if part of the reason might be historical (apparently Hinton's 2006 paper showed it done this way), and because (I believe) it was/is common to tie the weights. I.e. the decoder is using the |
32,600 | What is the relation between a loss function and an energy function? | Yes I found these quotes by Yann LeCun particularly helpful:
"A distinction should be made between the energy function, which is minimized by the inference process, and the loss functional (introduced in Section 2), which is minimized by the learning process."
"A loss functional, minimized during learning, is used to measure the quality of the available energy functions."
An analogy to psychology:
This implies that energy functions are analogous to your intuition/habits: first you train your intuition/habits through experience, & external feedback serves as the 'loss function' that your intuition/habits adapt to. If your intuition is well trained the right decisions will become the easiest & thus require the least 'mental/activation energy' from you. Therefore decision making in this ideal scenario becomes mostly a matter of 'minimizing the energy' required by your chosen decision.
Further Reading:
A blog post by OpenAI describing energy based learning can be found here. And the full document where I found these quotes is here; it is a thorough tutorial on energy based learning by scholar Yann LeCun which you may find useful to read for a deeper understanding.
P.S. By convention the energy function is minimized. I believe this comes from physics, where systems naturally seek low-energy states.
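LeCun's distinction can be illustrated with a toy scalar model (the quadratic energy and all names here are invented for illustration): inference minimizes the energy over candidate outputs with the model fixed, while the loss scores the whole energy function during learning.

```python
import numpy as np

def energy(w, x, y):
    """A toy energy: low when the candidate output y matches the score w*x."""
    return (y - w * x) ** 2

def infer(w, x, candidates):
    """Inference: pick the candidate y with minimal energy (no learning here)."""
    return min(candidates, key=lambda y: energy(w, x, y))

def loss(w, data, candidates):
    """Loss (minimized during learning): fraction of examples where the
    energy minimum lands on the wrong label, i.e. how bad this energy is."""
    return np.mean([infer(w, x, candidates) != y for x, y in data])
```

Learning would then adjust `w` to minimize `loss`, which in turn reshapes the energy landscape that inference descends, matching the two quoted roles.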
"A distinction should be made between the energy function, which is minimized by the inference process, and the loss functional (introduce | What is the relation between a loss function and an energy function?
Yes I found these quotes by Yann LeCun particularly helpful:
"A distinction should be made between the energy function, which is minimized by the inference process, and the loss functional (introduced in Section 2), which is minimized by the learning process."
"A loss functional, minimized during learning, is used to measure the quality of the available energy functions."
An analogy to psychology:
This implies that energy functions are analogous to your intuition/habits: first you train your intuition/habits through experience, & external feedback serves as the 'loss function' that your intuition/habits adapt to. If your intuition is well trained the right decisions will become the easiest & thus require the least 'mental/activation energy' from you. Therefore decision making in this ideal scenario becomes mostly a matter of 'minimizing the energy' required by your chosen decision.
Further Reading:
A blog post by OpenAI describing energy based learning can be found here. And the full document where I found these quotes is here; it is a thorough tutorial on energy based learning by scholar Yann LeCun which you may find useful to read for a deeper understanding.
P.S. By convention the energy function is minimized. I believe this comes from physics, where systems naturally seek low-energy states.
Yes I found these quotes by Yann LeCun particularly helpful:
"A distinction should be made between the energy function, which is minimized by the inference process, and the loss functional (introduce |