Is it valid to analyze signal detection data without employing metrics derived from signal detection theory?
The Positive Predictive Value (PPV) is not a good measure, not only because it confounds both mechanisms (discriminability and response bias), but also because it depends on item base-rates. It is preferable to use the posterior probabilities, like P(signal|"yes"), which account for item base-rates:
$$P(\text{signal}\mid \text{"yes"}) = \frac{P(\text{signal})\,P(\text{Hit})}{P(\text{signal})\,P(\text{Hit})+P(\text{noise})\,P(\text{False Alarm})}$$
but... what is it good for? Well, it is useful for adjusting the response criterion in order to maximize/minimize the probability of a specific outcome. So, it is complementary to the sensitivity and response bias measures in the sense that it helps to summarize the outcomes of changes in response bias.
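As a quick numerical illustration, here is a minimal R sketch of the formula above; the hit rate, false-alarm rate, and base rates are made-up values chosen only to show how strongly the posterior depends on P(signal):
# posterior probability P(signal | "yes") via Bayes' rule,
# using hypothetical hit/false-alarm rates and base rates
p.signal.given.yes <- function(p.signal, p.hit, p.fa) {
  (p.signal * p.hit) / (p.signal * p.hit + (1 - p.signal) * p.fa)
}
p.signal.given.yes(p.signal = 0.50, p.hit = 0.8, p.fa = 0.2) # 0.80
p.signal.given.yes(p.signal = 0.05, p.hit = 0.8, p.fa = 0.2) # ~0.17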
A word of advice: if you are sticking with a 2x2 outcome matrix that basically only allows you to get a sensitivity measure like d', don't even bother with SDT and just use Hits minus False Alarms. The two measures (d' and H-F) have a correlation of .96 (no matter what BS detection theorists might come up with)
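That claimed correlation is easy to check by simulation; the sketch below assumes the standard equal-variance Gaussian definition d' = qnorm(H) - qnorm(F), with hit and false-alarm rates drawn at random rather than taken from any real experiment:
# correlation between d' and H - F over random above-chance H/FA pairs
set.seed(1)
H  <- runif(1e4, 0.05, 0.95) # hit rates
FA <- runif(1e4, 0.05, 0.95) # false-alarm rates
keep <- H > FA               # restrict to above-chance performance
d.prime <- qnorm(H[keep]) - qnorm(FA[keep])
cor(d.prime, H[keep] - FA[keep]) # high, in the neighborhood of the quoted .96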
hope this helps
cheers
Is it valid to analyze signal detection data without employing metrics derived from signal detection theory?
You're comparing "What is the probability that a positive test outcome is correct given a known prevalence and test criterion?" with "What is the sensitivity and bias of an unknown system to various signals of this type?"
It seems to me that the two use some similar theory, but they really have very different purposes. With medical tests the criterion is irrelevant: it can be set to a known value in many cases, so determining the criterion of the test is pointless afterwards. Signal detection theory is best for systems where the criterion is unknown. Furthermore, prevalence, or signal, tends to be a fixed (and often very small) value. With SDT you often work out a mean d' over varying signals, modelling a very complex situation as a few simple descriptors. When both the criterion and signal are fixed known quantities, can SDT tell you anything interesting? It seems like a lot of mathematical sophistication to deal with a fundamentally simpler problem.
Is it valid to analyze signal detection data without employing metrics derived from signal detection theory?
This might be an over-simplification, but specificity and sensitivity are measures of performance, and are used when there isn't any objective knowledge of the nature of the signal. I mean your density vs. signalness plot assumes one variable that quantifies signalness. For very high dimensional, or infinite-dimensional data, and without a rigorous, provable theory of the mechanism of the generation of the signal, the selection of the variable is non-trivial. The question that then arises is why, after selecting such a variable, its statistical properties, like the mean and variance for the signal and non-signal cases, are not quantified. In many cases, the variable may not be just normal, Poisson, or exponentially distributed. Its distribution may not even fit any parametric family, in which case quantifying the separation as mean difference over variance etc. does not make much sense.
Also, a lot of literature in the biomedical field is focussed on applications, and ROC, specificity-sensitivity etc. can be used as objective criteria for comparing the approaches within the limited scope of the problem, and basically that is all that is required. Sometimes people may not be interested in describing, say, the actual discrete log-gamma distribution of the ratio of gene1 vs gene2 transcript abundance in diseased vs control subjects; rather, the only thing of importance is whether this is elevated and how much variance of the phenotype or probability of disease it explains.
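In that spirit, ROC summaries need no distributional assumptions at all; below is a minimal R sketch (with simulated scores standing in for a real biomarker) computing the empirical AUC through its equivalence with the Mann-Whitney U statistic:
# empirical AUC without any parametric assumption on the score variable:
# AUC = P(random diseased score > random control score) = U / (n1 * n2)
set.seed(2)
control <- rgamma(100, shape = 2, rate = 1.0) # simulated control scores
disease <- rgamma(100, shape = 2, rate = 0.6) # simulated diseased scores
u <- wilcox.test(disease, control)$statistic  # Mann-Whitney U
auc <- as.numeric(u) / (length(disease) * length(control))
auc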
Model for population density estimation
You might want to check the work of Mitchel Langford on dasymetric mapping.
He built rasters representing the population distribution of Wales, and some of his methodological approaches might be useful here.
Update: You might also have a look at work of Jeremy Mennis (especially these$^\dagger$ two articles).
$\dagger$ This page doesn't exist anymore.
Model for population density estimation
Interesting question. Here is a tentative stab at approaching this from a statistical angle. Suppose that we come up with a way to assign a population count $z_{ji}$ to each area $x_{ji}$. Denote this relationship as below:
$$z_{ji} = f(x_{ji},\beta)$$
Clearly, whatever functional form we impose on $f(.)$ will be at best an approximation to the real relationship and thus the need to incorporate error into the above equation. Thus, the above becomes:
$$z_{ji} = f(x_{ji},\beta) + \epsilon_{ji}$$
where,
$$\epsilon_{ji} \sim N(0,\sigma^2)$$
The distributional error assumption on the error term is for illustrative purposes. If necessary we can change it as appropriate.
However, we need an exact decomposition of $y_j$. Thus, we need to impose constraints on the error terms and the function $f(.)$ as below:
$$\sum_i{\epsilon_{ji}} = 0$$
$$\sum_i{f(x_{ji},\beta)} = y_j$$
Denote the stacked vector of ${z_{ji}}$ by $z_j$ and the stacked deterministic terms of ${f(x_{ji},\beta)}$ by $f_j$. Thus, we have:
$$z_j \sim N(f_j,\sigma^2 I) I({f_j}' e = y_j) I((z_j-f_j)' e = 0)$$
where,
$e$ is a vector of ones of appropriate dimension.
The first indicator constraint captures the idea that the sum of the deterministic terms should sum to $y_j$ and the second one captures the idea that the error residuals should sum to 0.
Model selection is trickier as we are decomposing the observed $y_j$ exactly. Perhaps, a way to approach model selection is to choose the model that yields the lowest error variance i.e., the one that yields the lowest estimate of $\sigma^2$.
Edit 1
Thinking about it some more, the above formulation can be simplified as it has more constraints than needed.
$$z_{ji} = f(x_{ji},\beta) + \epsilon_{ji}$$
where,
$$\epsilon_{ji} \sim N(0,\sigma^2)$$
Denote the stacked vector of ${z_{ji}}$ by $z_j$ and the stacked deterministic terms of ${f(x_{ji},\beta)}$ by $f_j$. Thus, we have:
$$z_j \sim N(f_j,\sigma^2 I) I({z_j}' e = y_j)$$
where,
$e$ is a vector of ones of appropriate dimension.
The constraint on $z_j$ ensures an exact decomposition.
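A minimal R sketch of one concrete way to satisfy that constraint: posit the deterministic part, then rescale it so the subarea counts add up to the observed regional total. The covariate values and coefficient are made up, and proportional rescaling is just one simple choice:
# region total y.j split over 5 subareas so that sum(z) = y.j exactly
y.j <- 1000
x <- c(2.1, 0.5, 3.0, 1.2, 4.2) # one covariate per subarea (hypothetical)
beta <- 90                      # assumed coefficient, f(x) = beta * x
f <- beta * x                   # deterministic terms f(x.ji, beta)
z <- f * y.j / sum(f)           # rescale so the decomposition is exact
all.equal(sum(z), y.j)          # TRUE
eps <- z - f                    # implied error terms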
Can I semi-automate MCMC convergence diagnostics to set the burn-in length?
Here is one approach to the automation. Feedback much appreciated. This is an attempt to replace initial visual inspection with computation, followed by subsequent visual inspection, in keeping with standard practice.
This solution actually incorporates two potential solutions: first, calculating the burn-in to remove the length of chain before some threshold is reached, and then using the autocorrelation matrix to calculate the thinning interval.
1. calculate a vector of the maximum median Gelman-Rubin convergence diagnostic shrink factor (grsf) for all variables in the chain
2. find the minimum number of samples at which the grsf across all variables goes below some threshold, e.g. 1.1 in the example, perhaps lower in practice
3. subsample the chains from this point to the end of the chain
4. thin the chain using the autocorrelation of the most autocorrelated chain
5. visually confirm convergence with trace, autocorrelation, and density plots
The mcmc object can be downloaded here: jags.out.Rdata
# jags.out is the mcmc.object with m variables
library(coda)
load('jags.out.Rdata')
# 1. calculate max.gd.vec,
# max.gd.vec is a vector of the maximum shrink factor
max.gd.vec <- apply(gelman.plot(jags.out)$shrink[, ,'median'], 1, max)
# 2. will use window() to subsample the jags.out mcmc.object
# 3. start window at the first iteration where max.gd.vec < 1.1,
#    but no earlier than iteration 100
window.start <- max(100, min(as.numeric(names(which(max.gd.vec - 1.1 < 0)))))
jags.out.trunc <- window(jags.out, start = window.start)
# 4. calculate thinning interval
# thin.int is the chain thin interval
# step is very slow
# 4.1 find the n most autocorrelated variables
acm <- autocorr.diag(jags.out.trunc)
n <- min(3, ncol(acm))
acm.subset <- colnames(acm)[rank(-colSums(acm))][1:n]
jags.out.subset <- jags.out.trunc[, acm.subset]
# 4.2 calculate the thinning interval
# ac.int is the time step interval for the autocorrelation matrix
ac.int <- 500 # set high to reduce computation time
# acm2: autocorrelations of the subset chains at multiples of ac.int
# (reconstructed step; the original code used acm2 without defining it)
acm2 <- autocorr.diag(jags.out.subset, lags = seq(ac.int, 100 * ac.int, by = ac.int))
# thin at the first lag where autocorrelation drops below zero, at least 50
thin.int <- max(apply(acm2 < 0, 2, function(x) match(TRUE, x)) * ac.int, 50)
# 4.3 thin the chain
jags.out.thin <- window(jags.out.trunc, thin = thin.int)
# 5. plots for visual diagnostics
plot(jags.out.thin)
autocorr.plot(jags.out.thin)
--update--
As implemented in R, the computation of the autocorrelation matrix is slower than would be desirable (>15 min in some cases); to a lesser extent, so is the computation of the GR shrink factor. There is a question about how to speed up step 4 on Stack Overflow here
--update part 2--
additional answers:
It is not possible to diagnose convergence, only to diagnose lack of convergence (Brooks, Giudici, and Philippe, 2003)
The function autorun.jags from the package runjags automates the calculation of run length and convergence diagnostics. It does not start monitoring the chain until the Gelman-Rubin diagnostic is below 1.05; it calculates the chain length using the Raftery and Lewis diagnostic.
Gelman et al. (Gelman 2004, Bayesian Data Analysis, p. 295; Gelman and Shirley, 2010) state that they use a conservative approach of discarding the first half of the chain. Although a relatively simple solution, in practice this is sufficient to solve the issue for my particular set of models and data.
#code for answer 3
chain.length <- summary(jags.out)$end
jags.out.trunc <- window(jags.out, start = chain.length / 2)
# thin based on autocorrelation if < 50, otherwise ignore
acm <- autocorr.diag(jags.out.trunc, lags = c(1, 5, 10, 15, 25))
# first lag (by position) at which autocorrelation drops below zero, capped at 50
thin.int <- min(apply(acm < 0, 2, function(x) match(TRUE, x)), 50, na.rm = TRUE)
# if the cap is hit, require visual inspection and check the acceptance rate
if (thin.int == 50) stop('check acceptance rate, inspect diagnostic figures')
jags.out.thin <- window(jags.out.trunc, thin = thin.int)
For model-averaging a GLM, do we average the predictions on the link or response scale?
The optimal way of combining estimators or predictors depends on the loss function that you are trying to minimize (or the utility function you are trying to maximize).
Generally speaking, if the loss function measures prediction errors on the response scale, then averaging predictors on the response scale is correct. If, for example, you are seeking to minimize the expected squared error of prediction on the response scale, then the posterior mean predictor will be optimal and, depending on your model assumptions, that may be equivalent to averaging predictions on the response scale.
Note that averaging on the linear predictor scale can perform very poorly for discrete models. Suppose that you are using a logistic regression to predict the probability of a binary response variable. If any of the models gives an estimated probability of zero, then the linear predictor for that model will be minus infinity. Taking the average of minus infinity with any number of finite values will still be minus infinity.
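A minimal R sketch of this pathology, with made-up predicted probabilities from three hypothetical models:
# three models' predicted probabilities for the same observation;
# the first model predicts exactly zero
p <- c(0, 0.6, 0.7)
mean(p)           # response-scale average: about 0.43
eta <- qlogis(p)  # logit (link) scale: first element is -Inf
plogis(mean(eta)) # link-scale average collapses to 0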
Have you consulted the references that you list? I am sure that Hoeting et al (1999), for example, discuss loss functions, although perhaps not in much detail.
Is Just-Identified 2SLS Median-Unbiased?
In simulation studies the term median bias refers to the absolute value of the deviations of an estimator from its true value (which you know in this case because it is a simulation, so you choose the true value). You can see a working paper by Young (2017) who defines median bias like this in table 15, or Andrews and Armstrong (2016) who plot median bias graphs for different estimators in figure 2.
Part of the confusion (also in the literature) seems to come from the fact that there are two separate underlying problems:
weak instruments
many (potentially) weak instruments
The problem of having a weak instrument in a just-identified setting is very different from having many instruments where some are weak, however, the two issues get thrown together sometimes.
First of all, let's consider the relationship between the estimators that we are talking about here. Theil (1953) in "Estimation and Simultaneous Correlation in Complete Equation Systems" introduced the so-called $\kappa$-class estimator:
$$
\widehat{\beta} = \left[ X'(I-\kappa M_Z)X \right]^{-1}\left[ X'(I-\kappa M_Z)y \right]
$$
with $M_Z = I-Z(Z'Z)^{-1}Z'$, for the system of equations
$$
\begin{align}
y &= X\beta + u \\
X &= Z\pi + e.
\end{align}
$$
The scalar $\kappa$ determines which estimator we have. For $\kappa = 0$ you go back to OLS, for $\kappa = 1$ you have the 2SLS estimator, and when $\kappa$ is set to the smallest root of $\det (X'X - \kappa X'M_ZX)=0$ you have the LIML estimator (see Stock and Yogo, 2005, p. 111).
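To make this concrete, here is a hedged R sketch of the $\kappa$-class formula together with the kind of median-bias simulation discussed below; the sample size, instrument strength, and error correlation are arbitrary choices for illustration:
# kappa-class estimator: kappa = 0 gives OLS, kappa = 1 gives 2SLS
kclass <- function(y, X, Z, kappa) {
  Mz <- diag(nrow(Z)) - Z %*% solve(crossprod(Z), t(Z))
  A <- diag(nrow(Z)) - kappa * Mz
  solve(t(X) %*% A %*% X, t(X) %*% A %*% y)
}
# just-identified design with a fairly weak instrument (pi = 0.15),
# true beta = 1, endogeneity through correlated errors
set.seed(3)
sims <- replicate(2000, {
  n <- 100
  z <- rnorm(n)
  u <- rnorm(n)
  e <- 0.8 * u + sqrt(1 - 0.8^2) * rnorm(n) # cor(u, e) = 0.8
  x <- 0.15 * z + e
  y <- x + u
  c(ols = kclass(y, cbind(x), cbind(z), 0),
    tsls = kclass(y, cbind(x), cbind(z), 1))
})
apply(sims, 1, median) # the 2SLS median sits much nearer 1 than the OLS median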
Asymptotically, LIML and 2SLS have the same distribution; however, in small samples they can be very different. This is especially the case when we have many instruments and some of them are weak. In this case, LIML performs better than 2SLS. LIML here has been shown to be median unbiased. This result comes out of a bunch of simulation studies. Usually papers stating this result refer to Rothenberg (1983) "Asymptotic Properties of Some Estimators in Structural Models", Sawa (1972), or Anderson et al. (1982).
Steve Pischke provides a simulation for this result in his 2016 notes on slide 17, showing the distribution of OLS, LIML and 2SLS with 20 instruments out of which only one is actually useful. The true coefficient value is 1. You see that LIML is centered at the true value whilst 2SLS is biased towards OLS.
Now the argument seems to be the following: given that LIML can be shown to be median unbiased and that in the just-identified case (one endogenous variable, one instrument) LIML and 2SLS are equivalent, 2SLS must also be median unbiased.
However, it seems that people again are mixing up the "weak instrument" and the "many weak instruments" cases, because in the just-identified setting both LIML and 2SLS are going to be biased when the instrument is weak. I have not seen any result demonstrating that LIML is unbiased in the just-identified case when the instrument is weak, and I don't think that this is true. A similar conclusion comes out of Angrist and Pischke's (2009) response to Gary Solon on page 2, where they simulate the bias of OLS, 2SLS, and LIML when changing the strength of the instrument.
For very small first-stage coefficients of <0.1 (holding the standard error fixed), i.e. low instrument strength, just-identified 2SLS (and hence just-identified LIML) is much closer to the probability limit of the OLS estimator than to the true coefficient value of 1.
Once the first-stage coefficient is between 0.1 and 0.2, they note that the first-stage F statistic is above 10 and hence there is no weak instrument problem anymore according to the F>10 rule of thumb of Stock and Yogo (2005). In this sense, I fail to see how LIML is supposed to be a fix for a weak instrument problem in the just-identified case. Also notice that i) LIML tends to be more dispersed and it requires a correction of its standard errors (see Bekker, 1994), and ii) if your instrument is actually weak, you will not find anything in the second stage with either 2SLS or LIML because the standard errors are just going to be too big.
Graphical intuition of statistics on a manifold
A family of probability distributions can be analyzed as the points on a manifold with intrinsic coordinates corresponding to the parameters $(\Theta)$ of the distribution. The idea is to avoid a representation with an incorrect metric: univariate Gaussians $\mathcal N(\mu,\sigma^2)$ can be plotted as points in the $\mathbb R^2$ Euclidean manifold as on the right side of the plot below, with the mean on the $x$-axis and the SD on the $y$-axis (its positive half in the case of plotting the variance):
However, the identity matrix (Euclidean distance) will fail to measure the degree of (dis-)similarity between individual $\mathrm{pdf}$'s: on the normal curves on the left of the plot above, given an interval in the domain, the area without overlap (in dark blue) is larger for Gaussian curves with lower variance, even if the mean is kept fixed. In fact,
the only Riemannian metric that “makes sense” for statistical manifolds is the Fisher information metric.
In Fisher information distance: a geometrical reading, Costa SI, Santos SA and Strapasson JE take advantage of the similarity between the Fisher information matrix of Gaussian distributions and the metric in the Beltrami-Poincaré disk model to derive a closed formula.
The "north" cone of the hyperboloid $x^2 + y^2 - x^2 = -1$ becomes a non-Euclidean manifold, in which each point corresponds to a mean and standard deviation (parameter space), and the shortest distance between $\mathrm {pdf's,}$ e.g. $P$ and $Q,$ in the diagram below, is a geodesic curve, projected (chart map) onto the equatorial plane as hyperparabolic straight lines, and enabling measurement of distances between $\mathrm{pdf's}$ through a metric tensor $g_{\mu\nu}\;(\Theta)\;\mathbf e^\mu\otimes \mathbf e^\nu$ - the Fisher information metric:
$$D\,\left ( P(x;\theta_1)\,,\,Q(x;\theta_2) \right)=\min_{\theta(t)\,|\,\theta(0)=\theta_1\;,\;\theta(1)=\theta_2}\;\int_0^1 \; \sqrt{\left(\frac{\mathrm d\theta}{\mathrm dt} \right)^\top\;I(\theta)\,\frac{\mathrm d \theta}{\mathrm dt}}\;\mathrm dt$$
with $$I(\theta) = \frac{1}{\sigma^2}\begin{bmatrix}1&0\\0&2 \end{bmatrix}$$
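For univariate Gaussians this metric makes the parameter half-plane, after rescaling the mean by $\sqrt 2$, isometric to the hyperbolic half-plane, which yields a closed-form distance. The R sketch below assumes that standard identification and is meant as an illustration, not a derivation:
# closed-form Fisher-Rao distance between N(mu1, s1^2) and N(mu2, s2^2),
# via the hyperbolic half-plane distance applied to (mu / sqrt(2), sigma)
fisher.rao <- function(mu1, s1, mu2, s2) {
  sqrt(2) * acosh(1 + ((mu1 - mu2)^2 / 2 + (s1 - s2)^2) / (2 * s1 * s2))
}
fisher.rao(0, 1, 1, 1) # shifting the mean of a narrow Gaussian: ~0.98
fisher.rao(0, 2, 1, 2) # the same shift matters less at larger sigma: ~0.50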
The Kullback-Leibler divergence is closely related, albeit lacking the geometry and associated metric.
And it is interesting to note that the Fisher information matrix can be interpreted as the Hessian of the Shannon entropy:
$$g_{ij}(\theta)=-E\left[ \frac{\partial^2\log p(x;\theta)}{\partial \theta_i \partial\theta_j} \right]=\frac{\partial^2 H(p)}{\partial \theta_i \partial \theta_j}$$
with
$$H(p) = -\int p(x;\theta)\,\log p(x;\theta) \mathrm dx.$$
This example is similar in concept to the more common stereographic Earth map.
The ML multidimensional embedding or manifold learning is not addressed here.
Graphical intuition of statistics on a manifold
There's more than one way to link probabilities to geometry. I'm sure you have heard of elliptical distributions (e.g. Gaussian). The term itself implies the geometry link, and it's obvious when you draw the ellipse defined by its covariance matrix. With manifolds it's just placing every possible parameter value in a coordinate system. For instance, the Gaussian manifold would be in two dimensions: $\mu,\sigma^2$. You can have any value of $\mu\in \mathbb R$ but only positive variances $\sigma^2>0$. Hence, the Gaussian manifold would be half of the entire $\mathbb R^2$ space. Not that interesting.
How to measure the goodness-of-fit of a nonlinear model? Is $R^2$ useful?
(Major edit, Aug 5, 2022)
The Minitab blog cited by the OP states why Minitab doesn't include $R^2$ with the results of nonlinear regression. They cite Spiess and Neumeyer (1) who evaluate various methods of choosing among two (or more) nonlinear models. Choosing the model that fits with the highest $R^2$ is far from the best method. Note that they compared models with different numbers of parameters, and $R^2$ doesn't "penalize" models with more parameters like AIC and BIC do.
But there are other reasons to evaluate goodness-of-fit besides choosing among models. Sometimes the goal is to compare the fit of one set of data with the fit from prior runs of the same experiment (always fitting the same model). For this purpose, I think $R^2$ is useful.
Say your lab repeats an experiment many times (with some variations of course), so you know to always expect $R^2$ values between 0.6 and 0.8. If a new experiment has $R^2=0.2$, you would be suspicious and look carefully to see if something went wrong with the methods or reagents used in that particular experiment. And if a new employee brings you results (using the same experimental system) with $R^2=0.95$, you would be suspicious (too good to be true) and look carefully at how many "outliers" were removed, whether any data was made up, whether the analysis was conducted properly, .... Or maybe this new employee was a more careful experimenter and so obtained cleaner data, and your expectations for $R^2$ need to be updated.
Bottom line. Whether or not $R^2$ is a useful way to assess goodness-of-fit in nonlinear regression depends on why you are assessing goodness-of-fit.
More detail: $R^2$ and sum-of-squares-of-residuals really do measure goodness of fit. AIC, AICc, BIC (etc) assess the tradeoff of goodness-of-fit vs. number of parameters in the model (number of degrees of freedom really). When comparing models, these values are better (more likely to lead to the correct model) because they don't just measure goodness-of-fit but also take into account number of df (which depends on number of parameters fit by the model).
Spiess, Andrej-Nikolai, and Natalie Neumeyer. An evaluation of R2 as an inadequate measure for nonlinear models in pharmacological and biochemical research: a Monte Carlo approach. BMC Pharmacology. 2010; 10: 6.
How to measure the goodness-of-fit of a nonlinear model? Is $R^2$ useful?
I disagree with the Minitab blog and many of the common criticisms of $R^2$. After all, $R^2$ is just a function of the sum of squared residuals: $SSRes = \sum_{i=1}^n\big(y_i - \hat y_i\big)^2$.
$$
R^2 = 1-\dfrac{SSRes}{\sum_{i=1}^n\big(y_i - \bar y\big)^2}
$$
This equation is equivalent to other common definitions of $R^2$, such as the squared correlation between $X$ and $Y$ in simple linear OLS regression and the squared correlation between true and predicted values in both simple and multiple OLS linear regression.
Consequently, any criticism of $R^2$ is also a criticism of the sum of squared residuals and the mean squared residual, $MSRes = \dfrac{SSRes}{n}$, which often gets called the mean squared error or $MSE$.
A major (and valid) criticism of all of these metrics is that they can be driven to be perfect by overfitting to the data. If we hit every $y_i$ point, then every residual is zero, the $SSRes$ is zero, and the $R^2$ is one.
Consequently, we want to have some way to penalize the model for overfitting. There are two main avenues for doing this: sticking to in-sample (training) data and testing on some holdout (test or validation data) data.
To penalize the model for perhaps overfitting, a common in-sample approach is to tweak $R^2$ and use adjusted $R^2$:
$$R^2_{adj}=1 - \dfrac{\dfrac{SSRes}{n-p}}{\dfrac{\sum_{i=1}^n\big(y_i - \bar y\big)^2}{n-1}}$$
Here, $p$ is the number of parameters in the regression (including the intercept).
An alternative to using an in-sample metric like $R^2_{adj}$ is to keep some data that the model has not seen and test out the model on this holdout (test or validation) data. If the model has overfitted, we expect poor performance on the holdout data. The usual metrics applied to holdout data would work well here. Beyond just considering functions of squared residuals, we might be interested in absolute residuals or percent deviation between truth and prediction.
A more advanced variant of out-of-sample validation, beyond the scope of this question, uses bootstrapping to estimate by how much you have overfit. I briefly describe the method here.
But you had asked about nonlinear models. Analogues to $R^2$ for nonlinear models are hard to determine, as the degrees of freedom used in place of the $n-p$ term are not as clear as in linear regression. Consequently, using out-of-sample checks (or bootstrap validation) might be the way to go. Many in-sample metrics can be calculated on out-of-sample data. I will list a few below along with pros and cons, with a short worked example after the list.
$SSRes$
Pros: Easy to calculate
Cons: Hard to interpret, since it can grow large just by having many observations
$MSE$
Pros: Easy to calculate, related to the variance of the error term
Cons: The relationship to variance can range from unhelpful to downright misleading if the error is not Gaussian or does not have a constant variance; the units are squared
$RMSE: \text{Root Mean Squared Error}$
(This is just the square root of the MSE.)
Pros: Related to the standard deviation of the error term; easy to calculate; in the same units of $y$
Cons: The relationship to standard deviation can range from unhelpful to downright misleading if the error is not Gaussian or does not have a constant standard deviation
$R^2$
Pros: Related to comparing your predictions to the predictions of a baseline model
Cons: Out-of-sample (and even in-sample when a regression is fit by a method other than least squares), $R^2$ lacks its usual "proportion of variance explained" interpretation; it is easy to think in terms of letter grades in school, where $R^2=0.6$ is a $\text{D}$ that makes us sad, even though such a value might be spectacular performance
(For reasons I discuss in detail here, I disagree with the exact implementation of out-of-sample $R^2$ in the common Python machine learning package sklearn. That implementation compares your performance to a model that always guesses the out-of-sample mean, which is supposed to be a model that you cannot access (since the out-of-sample data are not for training).)
$MAPE: \text{Mean Absolute Percentage Error}$
Pros: Handles data on different scales, where missing by $5$ might be a big deal when the true value is $10$ but less of a big deal when the true value is a billion
Cons: Overestimates and underestimates are not penalized equally; you have to divide by zero if a true value is zero; many others, as described on the Wikipedia article on MAPE, though the link also mentions some alternatives
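As the worked example promised above, here is a minimal R sketch computing these metrics on holdout data for a nonlinear model; the exponential-decay data and starting values are invented purely for illustration:
# simulated exponential-decay data, split into training and holdout sets
set.seed(4)
d <- data.frame(x = runif(200, 0, 10))
d$y <- 5 * exp(-0.4 * d$x) + rnorm(200, sd = 0.3)
train <- d[1:150, ]
test <- d[151:200, ]
# fit the nonlinear model on the training data only
fit <- nls(y ~ a * exp(b * x), data = train, start = list(a = 1, b = -0.1))
res <- test$y - predict(fit, newdata = test) # holdout residuals
SSRes <- sum(res^2)
MSE <- mean(res^2)
RMSE <- sqrt(MSE)
R2 <- 1 - SSRes / sum((test$y - mean(test$y))^2)
MAPE <- 100 * mean(abs(res / test$y)) # unstable when y is near 0 (a Con above)
c(SSRes = SSRes, MSE = MSE, RMSE = RMSE, R2 = R2, MAPE = MAPE)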
$$
R | How to measure the goodness-of-fit of a nonlinear model? Is $R^2$ useful?
I disagree with the Minitab blog and many of the common criticisms of $R^2$. After all, $R^2$ is just a function of the sum of squared residuals: $SSRes = \sum_{i=1}^n\big(y_i - \hat y_i\big)^2$.
$$
R^2 = 1-\dfrac{SSRes}{\sum_{i=1}^n\big(y_i - \bar y\big)^2}
$$
This equation is equivalent to other common definitions of $R^2$, such the the squared correlation between $X$ and $Y$ in simple linear OLS regression and the squared correlation between true and predicted values in both simple and multiple OLS linear regression.
Consequently, any criticism of $R^2$ is also a criticism of the sum of squared residuals and the mean squared residual, $MSRes = \dfrac{SSRes}{n}$, which often gets called the mean squared error or $MSE$.
A major (and valid) criticism of all of these metrics is that they can be driven to be perfect by overfitting to the data. If we hit every $y_i$ point, then every residual is zero, the $SSRes$ is zero, and the $R^2$ is one.
Consequently, we want to have some way to penalize the model for overfitting. There are two main avenues for doing this: sticking to in-sample (training) data and testing on some holdout (test or validation data) data.
To penalize the model for perhaps overfitting, a common in-sample approach is to tweak $R^2$ and use adjusted $R^2$. $$R^2_{adj}=1 - \dfrac{\dfrac{SSRes}{n-p}}
{\dfrac{\sum_{i=1}^n\big(y_i - \bar y\big)^2}{n-1}}$$.
Here, $p$ is the number of parameters in the regression (including the intercept).
An alternative to using an in-sample metric like $R^2_{adj}$ is to keep some data that the model has not seen and test out the model on this holdout data. If the model has overfitted, we expect poor performance on the holdout data. The usual metrics applied to holdout data would work well here. Beyond just considering functions of squared residuals, we might be interested in absolute residuals or percent deviation between truth and prediction.
A more advanced variant of out-of-sample validation, beyond the scope of this question, uses bootstrapping to estimate by how much you have overfit. I briefly describe the method here.
But you had asked about nonlinear models. Analogues to $R^2$ for nonlinear models are hard to determine, as the degrees of freedom used in place of the $n-p$ term are not as clear as in linear regression. Consequently, using out-of-sample checks (or bootstrap validation) might be the way to go. Many in-sample metrics can be calculated on out-of-sample data. I will list a few below along with pros and cons.
$SSRes$
Pros: Easy to calculate
Cons: Hard to interpret, since it can grow large just by having many observations
$MSE$
Pros: Easy to calculate, related to the variance of the error term
Cons: The relationship to variance can range from unhelpful to downright misleading if the error is not Gaussian or does not have a constant variance; the units are squared
$RMSE: \text{Root Mean Squared Error}$
(This is just the square root of the MSE.)
Pros: Related to the standard deviation of the error term; easy to calculate; in the same units of $y$
Cons: The relationship to standard deviation can range from unhelpful to downright misleading if the error is not Gaussian or does not have a constant standard deviation
$R^2$
Pros: Related to comparing your predictions to the predictions of a baseline model
Cons: Out-of-sample (and even in-sample when a regression is fit by a method other than least squares), $R^2$ lacks its usual "proportion of variance explained" interpretation; it is easy to think in term of letter grades in school where $R^2=0.6$ is a $\text{D}$ that makes us sad, even though such a value might be spectacular performance
(For reasons I discuss in detail here, I disagree with the exact implementation of out-of-sample $R^2$ in the common Python machine learning package sklearn. That implementation compares your performance to a model that always guesses the out-of-sample mean, which is supposed to be a model that you cannot access (since the out-of-sample data are not for training).)
$MAPE: \text{Mean Absolute Percentage Error}$
Pros: Handles data on different scales, where missing by $5$ might be a big deal when the true value is $10$ but less of a big deal when the true value is a billion
Cons: Overestimates and underestimates are not penalized equally; you have to divide by zero if a true value is zero; many others, as described on the Wikipedia article on MAPE, though the link also mentions some alternatives
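A minimal R sketch of these holdout metrics, assuming hypothetical vectors y (holdout truth) and yhat (predictions):
SSRes <- sum((y - yhat)^2)
MSE   <- mean((y - yhat)^2)
RMSE  <- sqrt(MSE)
R2    <- 1 - SSRes / sum((y - mean(y))^2)  # see the out-of-sample caveats above
MAPE  <- mean(abs((y - yhat) / y))         # undefined if any y is zero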
20,613 | How to measure the goodness-of-fit of a nonlinear model? Is $R^2$ useful? | Regression has to do with the whole study: the type of data, the correct statistical inference, the correct form, and the right tests, just to name a few. In other words, the R-square value can be used but is not sufficient. This is true even in linear models. What is most important is making sure the theory behind the model is logical, since you can have goodness of fit yet still be far off in your theory. Null-hypothesis tests and other metrics should be used as well as R-squared or adjusted R-squared. In other words, there is so much more to regression than testing. Much of it is understanding the statistical inference and then applying the correct metrics to the specific data you are using. For example, is it cross-sectional data? Is the data qualitative? You can spend many a semester in graduate school studying this, so what I give you is a mere taste.
20,614 | Goodness-of-fit test in Logistic regression; which 'fit' do we want to test? | "Goodness of fit" is sometimes used in one sense as the contrary of evident model mis-specification, "lack of fit"; & sometimes in another sense as a model's predictive performance—how well predictions match to observations. The Hosmer–Lemeshow test is for goodness of fit in the first sense, & although evidence of lack of fit suggests predictive performance (GoF in the second sense, measured by say Nagelkerke's $R^2$ or Brier scores) could be improved, you're none the wiser as to how or by how much until you try out specific improvements (typically by including interaction terms, or a spline or polynomial basis for representing continuous predictors to allow for a curvilinear relationship with the logit; sometimes by changing the link).
Goodness-of-fit tests are intended to have reasonable power against a variety of alternatives, rather than high power against a specific alternative; so comparisons of the power of different tests tend to take the pragmatic approach of picking a few alternatives that are thought to be of particular interest to potential users (see for example the frequently cited Stephens (1974), "EDF statistics for goodness of fit & some comparisons", JASA, 69, 347). You can't conclude that one test is more powerful than another against all possible alternatives because it's more powerful against some.
20,615 | Model Matrices for Mixed Effects Models | Creating the $J_i$ matrix entails producing 3 levels (309, 330 and 371), each one with 10 observations or measurements (nrow(sleepstudy[sleepstudy$Subject==309,]) returns 10). Following the code in the original link in the OP:
library(Matrix)  # provides the sparseMatrix class and KhatriRao()
f <- gl(3, 10)                          # grouping factor: 3 subjects, 10 observations each
Ji <- t(as(f, Class = "sparseMatrix"))  # 30 x 3 indicator matrix
Building the $X_i$ matrix can be aided by using the function getME as a reference:
library(lme4)
sleepstudy <- sleepstudy[sleepstudy$Subject %in% c(309, 330, 371), ]
rownames(sleepstudy) <- NULL
fm1<-lmer(Reaction~Days+(Days|Subject), sleepstudy)
Xi <- getME(fm1,"mmList")
Since we will need the transpose, and the object Xi is a list rather than a matrix, the transpose t_Xi can be built by hand as:
t_Xi <- rbind(rep(1, 30), rep(0:9, 3))  # row 1: intercept; row 2: Days 0-9 for each subject
$Z_i$ is calculated as $Z_i = (J_i^{T} * X_i^{T})^{T}$:
t_Ji <- t(Ji)                   # 3 x 30, the transpose of Ji
Zi <- t(KhatriRao(t_Ji, t_Xi))  # 30 x 6 random-effects model matrix
This corresponds to equation (6) in the original paper:
$$Z_i = (J_i^T*X_i^T)^T=\begin{bmatrix}J_{i1}^T\otimes X_{i1}^T\\J_{i2}^T\otimes X_{i2}^T\\\vdots\\J_{in}^T\otimes X_{in}^T\end{bmatrix}$$
And to see this we can instead play with truncated $J_i^T$ and $X_i^T$ matrices by imagining that instead of 9 measurements and a baseline (0), there is only 1 measurement (and a baseline). The resulting matrices would be:
$J_i^T=\left[\begin{smallmatrix}1&1&0&0&0&0\\0&0&1&1&0&0\\0&0&0&0&1&1\end{smallmatrix}\right]$ and $X_i^T=\left[\begin{smallmatrix}1&1&1&1&1&1\\0&1&0&1&0&1\end{smallmatrix}\right]$.
And
$J_i^T*X_i^T=\left[\begin{smallmatrix}\left(\begin{smallmatrix}1\\0\\0\end{smallmatrix}\right)\otimes\left(\begin{smallmatrix}1\\0\end{smallmatrix}\right)&&\left(\begin{smallmatrix}1\\0\\0\end{smallmatrix}\right)\otimes\left(\begin{smallmatrix}1\\1\end{smallmatrix}\right)&&\left(\begin{smallmatrix}0\\1\\0\end{smallmatrix}\right)\otimes\left(\begin{smallmatrix}1\\0\end{smallmatrix}\right)&&\left(\begin{smallmatrix}0\\1\\0\end{smallmatrix}\right)\otimes\left(\begin{smallmatrix}1\\1\end{smallmatrix}\right)&&\left(\begin{smallmatrix}0\\0\\1\end{smallmatrix}\right)\otimes\left(\begin{smallmatrix}1\\0\end{smallmatrix}\right)&&\left(\begin{smallmatrix}0\\0\\1\end{smallmatrix}\right)\otimes\left(\begin{smallmatrix}1\\1\end{smallmatrix}\right)\end{smallmatrix}\right]$
$ \small= \begin{bmatrix}J_{i1}^T\otimes X_{i1}^T && J_{i2}^T \otimes X_{i2}^T && J_{i3}^T\otimes X_{i3}^T && J_{i4}^T\otimes X_{i4}^T && J_{i5}^T\otimes X_{i5}^T && J_{i6}^T\otimes X_{i6}^T\end{bmatrix}$
$=\left[\begin{smallmatrix}1&1&0&0&0&0\\0&1&0&0&0&0\\0&0&1&1&0&0\\0&0&0&1&0&0\\0&0&0&0&1&1\\0&0&0&0&0&1\end{smallmatrix}\right]$, which, transposed and extended back to the full set of measurements, results in $Z_i=\left[\begin{smallmatrix}1&0&0&0&0&0\\1&1&0&0&0&0\\1&2&0&0&0&0\\\vdots\\0&0&1&0&0&0\\0&0&1&1&0&0\\0&0&1&2&0&0\\\vdots\\0&0&0&0&1&0\\0&0&0&0&1&1\\0&0&0&0&1&2\\\vdots\end{smallmatrix}\right]$.
Extracting the $b$ vector of random effects coefficients can be done with the function:
b <- getME(fm1,"b")
[1,] -44.1573839
[2,] -2.4118590
[3,] 32.8633489
[4,] -0.3998801
[5,] 11.2940350
[6,] 2.8117392
If we add these values to the fixed-effects of the call fm1<-lmer(Reaction~Days+(Days|Subject), sleepstudy) we get the intercepts:
205.3016 for 309; 282.3223 for 330; and 260.7530 for 371
and the slopes:
2.407141 for 309; 4.419120 for 330; and 7.630739 for 371
values consistent with:
library(lattice)
xyplot(Reaction ~ Days | Subject, groups = Subject, data = sleepstudy,
pch=19, lwd=2, type=c('p','r'))
$Zb$ can be calculated as as.matrix(Zi)%*%b.
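As a minimal consistency check (a sketch using the objects built above; for a merMod fit, fitted() returns the conditional fitted values $X\hat\beta + Z\hat b$):
beta <- fixef(fm1)
X <- model.matrix(fm1)
all.equal(as.numeric(X %*% beta + as.matrix(Zi) %*% b),
          as.numeric(fitted(fm1)))  # should be TRUE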
20,616 | Can I use ReLU in autoencoder as activation function? | Here's a discussion thread (from July 2013) indicating that there might be some issues with it, but it can be done.
Çağlar Gülçehre (from Yoshua Bengio's lab) said he successfully used the following technique in Knowledge Matters: Importance of Prior Information for Optimization:
train the first DAE as usual, but with rectifiers in the hidden layer:
a1(x) = W1 x + b1
h1 = f1(x) = rectifier(a1(x))
g1(h1) = sigmoid(V1 h1 + c1)
minimize cross-entropy or MSE loss, comparing g1(f1(corrupt(x))) and x. The sigmoid is optional depending on the data.
train the 2nd DAE with noise added before the f1 rectifier and use softplus reconstruction units with MSE loss:
h2 = f2(h1) = rectifier(W2 h1 + b2)
g2(h2) = softplus(V2 h2 + c2)
minimize $\lVert f_1(x) - g_2(f_2(\mathrm{rectifier}(\mathrm{corrupt}(a_1(x))))) \rVert^2 + \lambda_1 \lVert W \rVert_1 + \lambda_2 \lVert W \rVert_2$
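A minimal numeric sketch of the first DAE's forward pass and loss in R (hypothetical dimensions and random weights; a sketch, not the authors' code):
relu <- function(z) pmax(0, z)
sigmoid <- function(z) 1 / (1 + exp(-z))
n <- 5; d <- 4; k <- 3                          # hypothetical sizes
x  <- matrix(rnorm(n * d), n, d)
W1 <- matrix(rnorm(d * k, sd = 0.1), d, k); b1 <- rep(0, k)
V1 <- matrix(rnorm(k * d, sd = 0.1), k, d); c1 <- rep(0, d)
xc <- x + matrix(rnorm(n * d, sd = 0.3), n, d)  # corrupt(x): additive noise
h1 <- relu(sweep(xc %*% W1, 2, b1, "+"))        # h1 = rectifier(W1 x + b1)
g1 <- sigmoid(sweep(h1 %*% V1, 2, c1, "+"))     # g1 = sigmoid(V1 h1 + c1)
mean((g1 - x)^2)                                # MSE loss to minimize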
Xavier Glorot, also from the Bengio lab, said he did the same except for replacing $\lVert W \rVert_1$ with an $L_1$ penalty "on the activation values" (presumably $\lVert g_2(\dots) \rVert_1$?) in both Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach (ICML 2011) and in Deep sparse rectifier neural networks (AISTATS 2011).
20,617 | glmnet: How to make sense of multinomial parameterization? | Regarding the parameters from multinom and glmnet, I found this answer beneficial: Can I use glm algorithms to do a multinomial logistic regression? Especially this point: "Yes, with a Poisson GLM (log linear model) you can fit multinomial models. Hence multinomial logistic or log linear Poisson models are equivalent."
So I'll show the reparametrization of the glmnet coefficients to the multinom coefficients.
library(nnet)    # multinom()
library(glmnet)  # glmnet()
n.subj <- 1000
x1 <- rnorm(n.subj)
x2 <- rnorm(n.subj)
prob <- matrix(c(rep(1, n.subj), exp(3 + 2*x1 + x2), exp(-1 + x1 - 3*x2)), ncol = 3)
prob <- sweep(prob, 1, apply(prob, 1, sum), "/")
y = c()
for (i in 1:n.subj)
y[i] <- sample(3, 1, replace = T, prob = prob[i,])
multinom(y~x1+x2)
x <- cbind(x1,x2); y2 <- factor(y)
fit <- glmnet(x, y2, family="multinomial", lambda=0, type.multinomial = "grouped")
cf <- coef(fit)
cf[[2]]@x - cf[[1]]@x # for the category 2
cf[[3]]@x - cf[[1]]@x # for the category 3
Hope this helps. But I don't think I fully understand the equivalence of the Poisson generalized linear model and the multinomial logistic model inside and out. Tell me if there's a good, readable, and "easily" understandable source.
20,618 | glmnet: How to make sense of multinomial parameterization? | To make sure that the sum of the choice probabilities is 1, all parameters of the reference alternative are required to be zero. So, I think the result of glmnet() is odd.
Related Q: Why can glmnet calculate parameters for all categories?
20,619 | Confidence bands for QQ line | It has to do with the distribution of order statistics $$f_{X_{(k)}}(x) =\frac{n!}{(k-1)!(n-k)!}[F_X(x)]^{k-1}[1-F_X(x)]^{n-k} f_X(x)$$ and more particularly the asymptotic result: $$X_{(\lceil np \rceil)} \sim AN\left(F^{-1}(p),\frac{p(1-p)}{n[f(F^{-1}(p))]^2}\right)$$
As COOLSerdash mentions in comments, John Fox [1] writes on pages 35-36:
The standard error of the order statistic $X_{(i)}$ is $$\mathrm{SE}(X_{(i)})=\frac{\hat{\sigma}}{p(z_i)}\sqrt{\frac{P_i(1-P_i)}{n}}$$ where $p(z)$ is the probability density function corresponding to the CDF $P(z)$. The values along the fitted line are given by $\widehat{X}_{(i)}=\hat{\mu}+\hat{\sigma}z_{i}$. An approximate 95% confidence "envelope" around the fitted line is, therefore, $\widehat{X}_{(i)}\pm 2\times \mathrm{SE}(X_{(i)})$.
Then we just need to recognize that $f(F^{-1}(p))$ is estimated by $(p(z_i)/\hat{\sigma})$.
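A minimal R sketch of this envelope for a normal QQ plot (hypothetical sample; the plotting positions $P_i=(i-0.5)/n$ are one common convention):
x <- sort(rnorm(100))        # hypothetical sample, ordered
n <- length(x)
P <- (seq_len(n) - 0.5) / n  # plotting positions
z <- qnorm(P)                # theoretical quantiles
mu <- mean(x); sigma <- sd(x)
se  <- (sigma / dnorm(z)) * sqrt(P * (1 - P) / n)  # SE of each order statistic
fit <- mu + sigma * z                              # fitted QQ line
upper <- fit + 2 * se
lower <- fit - 2 * se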
[1] Fox, J. (2008), Applied Regression Analysis and Generalized Linear Models, 2nd Ed., Sage Publications, Inc.
20,620 | Hyperprior density for hierarchical Gamma-Poisson model | Not really answering the question, since I'm not pointing you to books or articles which have employed a hyperprior, but instead am describing, and linking to, stuff about priors on Gamma parameters.
First, note that the Poisson-Gamma model leads, when $\lambda$ is integrated out, to a Negative Binomial distribution with parameters $\alpha$ and $\beta/(1+\beta)$. The second parameter is in the range $(0,1)$. If you wish to be uninformative, a Jeffreys prior on $p = \beta/(1+\beta)$ might be appropriate. You could put the prior directly on $p$ or work through the change of variables to get:
$p(\beta) \propto \beta^{-1/2}(1+\beta)^{-1}$
Alternatively, you could note that $\beta$ is the scale parameter for the Gamma distribution, and, generically, the Jeffreys prior for a scale parameter $\beta$ is $1/\beta$. One might find it odd that the Jeffreys prior for $\beta$ is different between the two models, but the models themselves are not equivalent; one is for the distribution of $y | \alpha, \beta$ and the other is for the distribution of $\lambda | \alpha, \beta$. An argument in favor of the former is that, assuming no clustering, the data really is distributed Negative Binomial $(\alpha, p)$, so putting the priors directly on $\alpha$ and $p$ is the thing to do. OTOH, if, for example, you have clusters in the data where the observations in each cluster have the same $\lambda$, you really need to model the $\lambda$s somehow, and so treating $\beta$ as the scale parameter of a Gamma distribution would seem more appropriate. (My thoughts on a possibly contentious topic.)
The first parameter can also be addressed via Jeffreys priors. If we use the common technique of developing Jeffreys priors for each parameter independently, then forming the joint (non-Jeffreys) prior as the product of the two single-parameter priors, we get a prior for the shape parameter $\alpha$ of a Gamma distribution:
$p(\alpha) \propto \sqrt{\text{PG}(1,\alpha)}$
where the polygamma function $\text{PG}(1,\alpha) = \sum_{i=0}^{\infty}(i+\alpha)^{-2}$. Awkward, but truncatable. You could combine this with either of the Jeffreys priors above to get an uninformative joint prior distribution. Combining it with the $1/\beta$ prior for the Gamma scale parameter results in a reference prior for the Gamma parameters.
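In R, no truncation is needed at all, since trigamma() computes exactly $\text{PG}(1,\alpha)$. A small sketch, up to a normalizing constant:
alpha <- seq(0.1, 10, by = 0.1)
p_alpha <- sqrt(trigamma(alpha))  # p(alpha) evaluated on a grid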
If we wish to go the Full Jeffreys route, forming the true Jeffreys prior for the Gamma parameters, we'd get:
$p(\alpha, \beta) \propto \sqrt{\alpha \text{PG}(1,\alpha)-1}/\beta$
However, Jeffreys priors for multidimensional parameters often have poor properties as well as poor convergence characteristics (see link to lecture). I don't know whether this is the case for the Gamma, but testing would provide some useful information.
For more on priors for the Gamma, look at pages 13-14 of A Catalog of Non-Informative Priors, Yang and Berger. Lots of other distributions are in there, too. For an overview of Jeffreys and reference priors, here are some lecture notes.
20,621 | Coefficients paths – comparison of ridge, lasso and elastic net regression | In the $p < n$ case ($p$ the number of coefficients, $n$ the number of samples, which, judging by the number of coefficients you show in the plots, I guess is the case here), the only real "problem" with the Lasso model is that when multiple features are correlated it tends to select one of them somewhat randomly.
If the original features are not very correlated, I would say that it is reasonable for the Lasso to perform similarly to the Elastic Net in terms of coefficient paths. Looking at the documentation for the glmnet package, I also can't see any error in your code.
20,622 | Is centering needed when bootstrapping the sample mean? | Yes, you can approximate $\mathbb{P}\left(\bar{X}_n \leq x\right)$ by $\mathbb{P}\left(\bar{X}_n^* \leq x\right)$ but it is not optimal. This is a form of the percentile bootstrap. However, the percentile bootstrap does not perform well if you are seeking to make inferences about the population mean unless you have a large sample size. (It does perform well with many other inference problems, including when the sample size is small.) I take this conclusion from Wilcox's Modern Statistics for the Social and Behavioral Sciences, CRC Press, 2012. A theoretical proof is beyond me, I'm afraid.
A variant on the centering approach goes a step further and scales your centered bootstrap statistic by the re-sample standard deviation and sample size, calculating it the same way as a t statistic. The quantiles from the distribution of these t statistics can be used to construct a confidence interval or perform a hypothesis test. This is the bootstrap-t method, and it gives superior results when making inferences about the mean.
Let $s^*$ be the standard deviation of a bootstrap re-sample (using $n-1$ as the denominator) and $s$ be the standard deviation of the original sample. Let
$T^*=\frac{\bar{X}_n^*-\bar{X}}{s^*/\sqrt{n}}$
The 97.5th and 2.5th percentiles of the simulated distribution of $T^*$ can be used to make a confidence interval for $\mu$ by:
$\bar{X}-T^*_{0.975} \frac{s}{\sqrt{n}}, \bar{X}-T^*_{0.025} \frac{s}{\sqrt{n}}$
Consider the simulation results below, showing that with a badly skewed mixed distribution the confidence intervals from this method contain the true value more frequently than either the percentile bootstrap method or a traditional inversion of a t statistic with no bootstrapping.
compare.boots <- function(samp, reps = 599){
# "samp" is the actual original observed sample
# "s" is a re-sample for bootstrap purposes
n <- length(samp)
boot.t <- numeric(reps)
boot.p <- numeric(reps)
for(i in 1:reps){
s <- sample(samp, replace=TRUE)
boot.t[i] <- (mean(s)-mean(samp)) / (sd(s)/sqrt(n))
boot.p[i] <- mean(s)
}
conf.t <- mean(samp)-quantile(boot.t, probs=c(0.975,0.025))*sd(samp)/sqrt(n)
conf.p <- quantile(boot.p, probs=c(0.025, 0.975))
return(rbind(conf.t, conf.p, "Trad T test"=t.test(samp)$conf.int))
}
# Tests below will be for case where sample size is 15
n <- 15
# Create a population that is normally distributed
set.seed(123)
pop <- rnorm(1000,10,1)
my.sample <- sample(pop,n)
# All three methods have similar results when normally distributed
compare.boots(my.sample)
This gives the following (conf.t is the bootstrap t method; conf.p is the percentile bootstrap method).
97.5% 2.5%
conf.t 9.648824 10.98006
conf.p 9.808311 10.95964
Trad T test 9.681865 11.01644
With a single example from a skewed distribution:
# create a population that is a mixture of two normal and one gamma distribution
set.seed(123)
pop <- c(rnorm(1000,10,2),rgamma(3000,3,1)*4, rnorm(200,45,7))
my.sample <- sample(pop,n)
mean(pop)
compare.boots(my.sample)
This gives the following. Note that "conf.t" - the bootstrap t version - gives a wider confidence interval than the other two. Basically, it is better at responding to the unusual distribution of the population.
> mean(pop)
[1] 13.02341
> compare.boots(my.sample)
97.5% 2.5%
conf.t 10.432285 29.54331
conf.p 9.813542 19.67761
Trad T test 8.312949 20.24093
Finally, here are a thousand simulations to see which version gives confidence intervals that are most often correct:
# simulation study
set.seed(123)
sims <- 1000
results <- matrix(FALSE, sims,3)
colnames(results) <- c("Bootstrap T", "Bootstrap percentile", "Trad T test")
for(i in 1:sims){
pop <- c(rnorm(1000,10,2),rgamma(3000,3,1)*4, rnorm(200,45,7))
my.sample <- sample(pop,n)
mu <- mean(pop)
x <- compare.boots(my.sample)
for(j in 1:3){
results[i,j] <- x[j,1] < mu & x[j,2] > mu
}
}
apply(results,2,sum)
This gives the results below - the numbers are the times out of 1,000 that the confidence interval contains the true value of a simulated population. Notice that the true success rate of every version is considerably less than 95%.
Bootstrap T Bootstrap percentile Trad T test
        901                  854         890
20,623 | What is the intuition behind the score function? [duplicate] | The Wikipedia article gives an example of a Bernoulli process, with $A$ successes and $B$ failures and the probability of success $\theta$, where the score is $V = \frac{A}{\theta}-\frac{B}{1-\theta}$. If $\theta= \frac{A}{A+B}$, i.e. $\frac{\theta}{1-\theta}= \frac{A}{B}$, then $V=0$.
The score is more positive when there are more successes than would have been expected from the value of $\theta$, and more negative when there are fewer successes.
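A small numeric sketch in R (hypothetical counts $A=7$, $B=3$):
A <- 7; B <- 3
score <- function(theta) A / theta - B / (1 - theta)
score(A / (A + B))  # 0 at theta = 0.7, the MLE
score(0.5)          # 8: positive, more successes than theta = 0.5 predicts
score(0.9)          # about -22.2: negative, fewer successes than predicted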
The score might be seen intuitively as a sort of measure of how close the parameter actually is to what the data suggest it might be (or the other way round if you are that way inclined), signed for the direction of the difference. The variance of the score will tend to increase with more data, so the variance is intuitively an indication of the amount of information the data will give you about the parameter.
20,624 | Why do CNNs conclude with FC layers? | It's not so simple. First of all, an SVM is, in a way, a type of neural network (you can learn an SVM solution through backpropagation). See What *is* an Artificial Neural Network?. Second, you can't know beforehand which model will work better, but the thing is that with a fully neuromorphic architecture you can learn the weights end-to-end, while attaching an SVM or RF to the last hidden layer activation of a CNN is simply an ad hoc procedure. It may perform better, and it may not; we can't know without testing.
The important part is that a fully convolutional architecture is capable of representation learning, which is useful for a myriad of reasons. For one, it may reduce or eliminate feature engineering altogether in your problem.
About the FC layers, they are mathematically equivalent to 1x1 Convolutional layers. See Yann Lecun's post, which I transcribe below:
In Convolutional Nets, there is no such thing as "fully-connected
layers". There are only convolution layers with 1x1 convolution
kernels and a full connection table.
It's a too-rarely-understood fact that ConvNets don't need to have a
fixed-size input. You can train them on inputs that happen to produce
a single output vector (with no spatial extent), and then apply them
to larger images. Instead of a single output vector, you then get a
spatial map of output vectors. Each vector sees input windows at
different locations on the input.
In that scenario, the "fully connected layers" really act as 1x1
convolutions.
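A minimal numeric sketch of that equivalence in R (hypothetical sizes): a fully connected layer acting on a $C$-channel vector is the same linear map as a 1x1 convolution, and sliding that map over several spatial positions gives the spatial map of outputs LeCun describes.
C_in <- 4; C_out <- 3
W <- matrix(rnorm(C_out * C_in), C_out, C_in)  # FC weight matrix == 1x1 kernel
x <- rnorm(C_in)                               # one spatial position, C_in channels
fc_out <- W %*% x                              # fully connected output
X <- matrix(rnorm(C_in * 5), C_in, 5)          # 5 spatial positions
conv_out <- W %*% X                            # column j = the "FC layer" at position j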
20,625 | Why do CNNs conclude with FC layers? | If you knew the No-Free Lunch Theorem (Wolpert & Macready), you would not get so hung up on one classifier and ask why it's not the best. The NFL Theorem states essentially that "in the universe of all cost functions, there is no one best classifier." Second, classifier performance always "depends on the data."
The Ugly Duckling Theorem (Watanabe) states essentially that "in the universe of all sets of features, there is no one best set of features."
Cover's Theorem states that if $p>n$, i.e., the dimensionality of the data is larger than the sample size, then a binary classification problem is always linearly separable.
In light of the above, as well as Occam's Razor, there is never anything that's better than anything else, independent of the data and cost function.
I have always argued that CNNs by themselves are not ensembles of classifiers for which diversity (kappa vs error) can be assessed.
20,626 | Is MLE estimation asymptotically normal & efficient even if the model is not true? | I don't believe there is a single answer to this question.
When we consider possible distributional misspecification while applying maximum likelihood estimation, we get what is called the "Quasi-Maximum Likelihood" estimator (QMLE). In certain cases the QMLE is both consistent and asymptotically normal.
What it loses with certainty is asymptotic efficiency. This is because the asymptotic variance of $\sqrt n (\hat \theta - \theta)$ (this is the quantity that has an asymptotic distribution, not just $\hat \theta$) is, in all cases,
$$\text{Avar}[\sqrt n (\hat \theta - \theta)] = \text{plim}\Big( [\hat H]^{-1}[\hat S \hat S^T][\hat H]^{-1}\Big) \tag{1}$$
where $H$ is the Hessian matrix of the log-likelihood and $S$ is the gradient, and the hat indicates sample estimates.
Now, if we have correct specification, we get, first, that
$$\text{Avar}[\sqrt n (\hat \theta - \theta)] = (\mathbb E[H_0])^{-1}\mathbb E[S_0S_0^T](\mathbb E[H_0])^{-1} \tag{2}$$
where the "$0$" subscript denotes evaluation at the true parameters (and note that the middle term is the definition of Fisher Information), and second, that the "information matrix equality" holds and states that $-\mathbb E[H_0] = \mathbb E[S_0S_0^T]$, which means that the asymptotic variance will finally be
$$\text{Avar}[\sqrt n (\hat \theta - \theta)] = -(\mathbb E[H_0])^{-1} \tag{3}$$
which is the inverse of the Fisher information.
But if we have misspecification, expression $(1)$ does not lead to expression $(2)$ (because the first and second derivatives in $(1)$ have been derived based on the wrong likelihood). This in turn implies that the information matrix equality does not hold, that we do not end up in expression $(3)$, and that the (Q)MLE does not attain full asymptotic efficiency.
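A minimal R sketch of the contrast, assuming the sandwich package, in a classic QMLE setting: a Poisson fit to overdispersed negative binomial data, so the mean model is right but the likelihood is wrong.
library(sandwich)
set.seed(1)
x <- rnorm(500)
y <- rnbinom(500, mu = exp(0.5 + x), size = 2)  # Poisson family is misspecified
fit <- glm(y ~ x, family = poisson)
vcov(fit)      # inverse-Hessian form, valid only under expression (3)
sandwich(fit)  # robust H^{-1} S S' H^{-1} form, as in expression (1)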
20,627 | AIC of ridge regression: degrees of freedom vs. number of parameters | AIC and ridge regression can be made compatible when certain assumptions are made. However, there is no single method of choosing a shrinkage for ridge regression, thus no general method of applying AIC to it. Ridge regression is a subset of Tikhonov regularization. There are many criteria that can be applied to selecting smoothing factors for Tikhonov regularization, e.g., see this. To use AIC in that context, there is a paper that makes rather specific assumptions as to how to perform that regularization: Information complexity-based regularization parameter selection for solution of ill conditioned inverse problems. Specifically, this assumes
"In a statistical framework, ...choosing the value of the regularization parameter α, and by using the maximum penalized likelihood (MPL) method....If we consider uncorrelated Gaussian noise with variance $\sigma ^2$ and use the penalty $p(x) =$ a complicated norm, see link above, the MPL solution is the same as the Tikhonov (1963) regularized solution."
The question then becomes, should those assumptions be made?
The question of degrees of freedom needed is secondary to the question of whether or not AIC and ridge regression are used in a consistent context. I would suggest reading the link for details. I am not avoiding the question; it is just that one can use lots of things as ridge targets; for example, one could use the smoothing factor that optimizes AIC itself. So, one good question deserves another: "Why bother with AIC in a ridge context?" In some ridge regression contexts, it is difficult to see how AIC could be made relevant. For example, ridge regression has been applied in order to minimize the relative error propagation of $b$, that is, to minimize $\left[\dfrac{\text{SD}(b)}{b}\right]$ of the gamma distribution (GD) given by
$$\text{GD}(t; a,b) = \dfrac{1}{t}\;\dfrac{e^{-b \, t}(b \, t)^{\,a}}{\Gamma (a)}\;;\qquad t\geq 0\,,$$
as per this paper. In particular, this difficulty arises because in that paper, it is, in effect, the Area Under the $[0,\infty)$ time Curve (AUC) that is optimized, and not the maximum likelihood (ML) of goodness of fit between measured $[t_1,t_n]$ time-samples. To be clear, that is done because the AUC is an ill-posed integral, and, otherwise, e.g., using ML, the gamma distribution fit would lack robustness for a time series that is censored (e.g., the data stops at some maximum time, and ML does not cover that case). Thus, for that particular application, maximum-likelihood, thus AIC, is actually irrelevant. (It is said that AIC is used for prediction and BIC for goodness-of-fit. However, prediction and goodness-of-fit are both only rather indirectly related to a robust measure of AUC.)
As for the answer to the question, the first reference in the question text says that "The main point is to note that $df$ is a decreasing function of $\lambda$ [sic, the smoothing factor] with $df = p$ [sic, the effective number of parameters; see the trace of the hat matrix below] at $\lambda = 0$ and $df = 0$ at $\lambda=\infty$." This means that $df$ equals the number of parameters minus the number of quantities estimated when there is no smoothing, which is also when the regression is the same as ordinary least squares, and decreases to no $df$ as the smoothing factor increases to $\infty$. Note that for infinite smoothing the fit is a flat line irrespective of what density function is being fit. Finally, the exact number of $df$ is a function of the eigenvalues:
"One can show that
$df_{ridge}= \sum(\lambda_i / (\lambda_i + \lambda$ ),
where {$\lambda_i$} are the eigenvalues of $X^{\text{T}} X$." Interestingly, that same reference defines $df$ as the trace of the hat matrix, see def. | AIC of ridge regression: degrees of freedom vs. number of parameters | AIC and ridge regression can be made compatible when certain assumptions are made. However, there is no single method of choosing a shrinkage for ridge regression thus no general method of applying AI | AIC of ridge regression: degrees of freedom vs. number of parameters
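As a small numerical illustration (my own, with a simulated design matrix, not from the reference), the R sketch below computes that effective degrees of freedom from the eigenvalues of $X^{\text{T}} X$:
# Effective degrees of freedom of ridge regression, computed from the
# eigenvalues of t(X) %*% X (equivalently, the trace of the hat matrix).
df_ridge <- function(X, lambda) {
  ev <- eigen(crossprod(X), symmetric = TRUE, only.values = TRUE)$values
  sum(ev / (ev + lambda))
}
set.seed(1)
X <- matrix(rnorm(100 * 5), 100, 5)
df_ridge(X, 0)     # equals p = 5: no smoothing, same as ordinary least squares
df_ridge(X, 1e6)   # decreases towards 0 as lambda grows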
20,628 | Normalised score for BM25 | The score that BM25 calculates is only usable to compare search results for a specific query to each other. It's not possible to transform that score to mean something independent of the query.
But there's one way to do something that might be OK in some cases. You decide if this works in your case:
Normalize each score by dividing it by the sum of all scores (of, say, the top 10 results). Looking at the score for the first hit, it now means: "Are there lots of other hits that also match this query?". If there are, the number will be low; otherwise it will be high.
Raw BM25 query example (from an anonymized real query):
Query 1:
Result 1, score=0.5998919138986571
Result 2, score=0.5998919138986571
Result 3, score=0.5998919138986571
Result 4, score=0.4995426367770633
Result 5, score=0.4995426367770633
Result 6, score=0.0
Query 2:
Result 1, score=3.9278021306217763
Result 3, score=1.6993264645743775
Result 4, score=1.5989771874527836
Result 5, score=1.5989771874527836
Result 2, score=1.0994345506757204
Result 6, score=0.0
Normalized BM25 score:
Query 1:
Result 1, score=0.21434195725534308
Result 2, score=0.21434195725534308
Result 3, score=0.21434195725534308
Result 4, score=0.17848706411698534
Result 5, score=0.17848706411698534
Result 6, score=0.0
Query 2:
Result 1, score=0.3957675647605779
Result 3, score=0.17122509593204485
Result 4, score=0.1611138459985838
Result 5, score=0.1611138459985838
Result 2, score=0.11077964731020955
Result 6, score=0.0
As you can see, the first query has "lower confidence", since there are many hits that get similarly high scores for it.
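Here is a minimal R sketch of the normalization (the function name is mine), applied to the Query 2 scores above:
# Divide each score by the sum of the top-k scores of the same query, so the
# top hit's share indicates how concentrated the query's matches are.
normalize_bm25 <- function(scores, k = 10) {
  scores / sum(head(sort(scores, decreasing = TRUE), k))
}
q2 <- c(3.9278021, 1.6993265, 1.5989772, 1.5989772, 1.0994346, 0)
round(normalize_bm25(q2), 4)
# 0.3958 0.1712 0.1611 0.1611 0.1108 0.0000 -- matches the list above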
20,629 | Does fitting Cox-model with strata and strata-covariate interaction differ from fitting two Cox models? | With models where each parameter has to be estimated (like Ordinary Least Squares), it is possible to create a situation where two separate models yield the same estimates as a single model with an interaction term. For example, we could have: $Y_M=\alpha_M+\beta_M*age$, $Y_F=\alpha_F+\beta_F*age$, summarized by: $Y=\lambda+\lambda_F*F+\gamma*age+\gamma_F*F*age$, so that you could directly estimate the gender difference both in intercept and in slope.
In fact: $\alpha_M=\lambda, \beta_M=\gamma, \alpha_F-\alpha_M=\lambda_F,\beta_F-\beta_M=\gamma_F$. In that case, I agree with you that the unique model would allow you to get an immediate idea of the gender difference (given by the interaction parameter $\gamma_F$, since the slope difference has a clearer interpretation, and your question refers to that).
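A quick simulated R sketch of that OLS equivalence (the variable names are mine):
# Two separate per-gender OLS fits reproduce the estimates of a single fit
# that includes a gender x age interaction.
set.seed(1)
n <- 200
female <- rbinom(n, 1, 0.5)
age <- runif(n, 25, 75)
Y <- 1 + 0.5 * female + 0.10 * age + 0.05 * female * age + rnorm(n)
coef(lm(Y ~ female * age))               # lambda, lambda_F, gamma, gamma_F
coef(lm(Y ~ age, subset = female == 0))  # recovers lambda and gamma
coef(lm(Y ~ age, subset = female == 1))  # recovers lambda + lambda_F, gamma + gamma_F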
However, with the Cox model things are different. First of all, if we don't include gender in the regression there may be a reason, i.e. that it does not fulfill the proportional hazards assumption. Also, if we build a unique model with gender as an interaction term, we are assuming a common baseline hazard function (unless I misunderstood the meaning of $h_{\textrm{gender}}(t)$), while the two-separate-models approach allows for two separate baseline hazard functions; thus different models are implied.
See, for example, the Chapter "Survival Analysis" from Kleinbaum and Klein, 2012, Part of the series Statistics for Biology and Health.
20,630 | Does fitting Cox-model with strata and strata-covariate interaction differ from fitting two Cox models? | If you write it exactly as you specified the models in the post, then yes, they will be the same, i.e.
Researcher A:
coxph(Surv(time_1, event)~ age_ + age_*gender + strata(gender), data =df)
In my simulation that gave age coefficient of 0.10 and age x gender of 0.11 (vs the modelled parameters of 0.1 and 0.2)
Researcher B:
coxph(Surv(time_1, event)~ age_ , data =df[df$gender==0, ])
coxph(Surv(time_1, event)~ age_ , data =df[df$gender==1, ])
Here, it gives 0.10 for the first and 0.21 for the second
However, if researcher A just writes something like below, without "strata",
coxph(Surv(time_1, event)~ age_ + age_*gender, data =df)
then no - this is what @Federico mentioned above, that the baseline hazard function will be assumed the same for both genders in this case.
Full code with simulated example is here:
#simulating population where gender = 1 corresponds to a Cox beta of 0.2 for the age #variable and gender = 0 corresponds to a Cox beta of 0.1
N=5000
df = data.frame(age = round(runif(N, 25,75),1),
gender = rbinom(N,1,0.5))
df$age_ = (df$age - 50) /10
df$event_time = 0.01+ rexp(N, exp(0.2*df$age_*(df$gender==1) + 0.1* df$age_*(df$gender==0))) #event is an exponential variable
df$event = ifelse(df$event_time>1, 0,1) #all censored at time =1
sum(df$event)/N #proportion of events
df$time_1 = ifelse(df$event_time>1, 1, df$event_time) #time of event or 1 for censored
#this is your first equation, which allows for separate baseline function by gender
# AND it allows for a different coefficient for the second gender
# (without age_*gender the model will allow different baselines, but not different coefficients)
coxph(Surv(time_1, event)~ age_ + age_*gender + strata(gender), data =df)
#this is what researcher B will do:
coxph(Surv(time_1, event)~ age_ , data =df[df$gender==0, ])
coxph(Surv(time_1, event)~ age_ , data =df[df$gender==1, ])
#results A:
coxph(Surv(time_1, event)~ age_ + age_*gender + strata(gender), data =df)
Call:
coxph(formula = Surv(time_1, event) ~ age_ + age_ * gender +
strata(gender), data = df)
coef exp(coef) se(coef) z p
age_ 0.10 1.11 0.02 6 0.00000002
gender NA NA 0.00 NA NA
age_:gender 0.11 1.11 0.02 4 0.00001901
#results B - gender ==0
Call:
coxph(formula = Surv(time_1, event) ~ age_, data = df[df$gender ==
0, ])
coef exp(coef) se(coef) z p
age_ 0.10 1.11 0.02 6 0.00000002
Likelihood ratio test=32 on 1 df, p=0.00000002
n= 2465, number of events= 1519
#results B, gender ==1
coxph(formula = Surv(time_1, event) ~ age_, data = df[df$gender ==
1, ])
coef exp(coef) se(coef) z p
age_ 0.21 1.23 0.02 12 <0.0000000000000002
Likelihood ratio test=144 on 1 df, p=<0.0000000000000002
20,631 | Are there any contemporary uses of jackknifing? | If you take jackknifing not only to include leave-one-out but any kind of resampling-without-replacement such as $k$-fold procedures, I consider it a viable option and use it regularly, e.g. in
Beleites et al.: Raman spectroscopic grading of astrocytoma tissues: using soft reference information. Anal Bioanal Chem, 2011, 400, 2801-2816
see also: Confidence interval for cross-validated classification accuracy
I avoid LOO for several reasons and instead use an iterated/repeated $k$-fold scheme. In my field (chemistry/spectroscopy/chemometrics), cross validation is far more common than out-of-bootstrap validation. For our data/typical applications we found that $i$ times iterated $k$-fold cross validation and $i \cdot k$ iterations of out-of-bootstrap performance estimates have very similar total error [Beleites et al.: Variance reduction in estimating classification error using sparse datasets. Chem.Intell.Lab.Syst., 2005, 79, 91 - 100.].
The particular advantage I see for looking at iterated cross validation schemes over bootstrapping is that I can very easily derive stability/model uncertainty measures that can be intuitively explained, and it separates two different causes of variance uncertainty in the performance measurement which are more intertwined in out-of-bootstrap measurements.
One line of reasoning that gets me to cross validation/jackknifing is looking at the robustness of the model: cross validation corresponds rather directly to questions of the type "What happens to my model if I exchange $x$ cases for $x$ new cases?" or "How robust is my model against perturbing the training data by exchanging $x$ cases?" This is kind of applicable to bootstrapping as well, but less directly.
Note that I do not try to derive confidence intervals, because my data is inherently clustered ($n_s$ spectra of $n_p \ll n_s$ patients), so I prefer to report
a (conservative) binomial confidence interval using the average observed performance and $n_p$ as sample size and
the variance I observe between the $i$ iterations of the cross validation. After $k$ folds, each case is tested exactly once, though by different surrogate models. Thus any kind of variation observed between the $i$ runs must be caused by model instability.
Typically, i.e. if the model is well set up, 2. is needed only to show that it is much smaller than the variance in 1., and that the model is therefore reasonably stable. If 2. turns out to be non-negligible, it is time to consider aggregated models: model aggregation helps only for variance caused by model instability, it cannot reduce the variance uncertainty in the performance measurement that is due to the finite number of test cases.
Note that in order to construct performance confidence intervals for such data, I'd at least take into account that the variance observed between the $i$ runs of the cross validation is the variance of an average over $k$ surrogate models, i.e. I'd say the model instability variance is $k \cdot$ the observed variance between cross validation runs, plus the variance due to the finite case number - for classification (hit/error) performance measures this is binomial. For continuous measures, I'd try to derive the variance from the within-cross-validation-run variance, $k$, and the instability-type variance for the $k$ surrogate models estimated from the $i$ iterations.
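To make the reporting scheme above concrete, here is a minimal R sketch of $i$-times repeated $k$-fold cross validation; LDA on the iris data is just a stand-in classifier and data set of my choosing:
library(MASS)  # for lda()
# Each case is tested exactly once per iteration, so the variance of acc
# across iterations reflects model instability.
repeated_cv <- function(X, y, k = 5, i = 20) {
  acc <- numeric(i)
  for (r in seq_len(i)) {
    folds <- sample(rep(seq_len(k), length.out = nrow(X)))
    pred <- factor(rep(NA, nrow(X)), levels = levels(y))
    for (f in seq_len(k)) {
      test <- folds == f
      fit <- lda(X[!test, , drop = FALSE], y[!test])
      pred[test] <- predict(fit, X[test, , drop = FALSE])$class
    }
    acc[r] <- mean(pred == y)
  }
  c(mean_accuracy = mean(acc), instability_variance = var(acc))
}
repeated_cv(as.matrix(iris[, 1:4]), iris$Species)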
The advantage of cross validation here is that you get a clear separation between uncertainty caused by model instability and uncertainty caused by the finite number of test cases. The corresponding disadvantage is of course that if you forget to take the finite number of actual cases into account, you'll severely underestimate the true uncertainty. However, this would happen for bootstrapping as well (though to a lesser extent).
So far, the reasoning concentrates on measuring performance for the model you derive from a given data set. If you consider a data set for the given application and of the given sample size, there is a third contribution to variance that fundamentally cannot be measured by resampling validation; see e.g. Bengio & Grandvalet: No Unbiased Estimator of the Variance of K-Fold Cross-Validation, Journal of Machine Learning Research, 5, 1089-1105 (2004). We also have figures showing these three contributions in Beleites et al.: Sample size planning for classification models, Anal Chim Acta, 760, 25-33 (2013). DOI: 10.1016/j.aca.2012.11.007
I think what happens here is the result of the assumption that resampling is similar to drawing a complete new sample breaking down.
This is important if model building algorithms/strategies/heuristics are to be compared rather than constructing a particular model for the application and validating this model.
20,632 | Using standard machine learning tools on left-censored data | In short, how do I apply machine learning tools to left-censored regression data to get consistent estimates of the relationships between my dependent and independent variables?
If you can write up a likelihood and flip its sign, then you have yourself a loss function which can be used with many machine learning models. In gradient boosting this is commonly referred to as model boosting. See e.g., Boosting Algorithms: Regularization, Prediction and Model Fitting.
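As a sketch of what such a loss looks like, here is the Tobit case in R, assuming left-censoring at 0 and unit error variance (both simplifications of mine): the negative log-likelihood and its per-observation gradient with respect to the prediction f, the two ingredients a gradient boosting loop needs.
# Censored observations (y <= 0) contribute log P(latent <= 0) = log Phi(-f);
# uncensored ones contribute the Gaussian log density.
tobit_nll <- function(y, f) {
  cens <- y <= 0
  -sum(ifelse(cens, pnorm(-f, log.p = TRUE), dnorm(y - f, log = TRUE)))
}
# d(nll)/df per observation: phi(-f)/Phi(-f) if censored, -(y - f) otherwise.
tobit_grad <- function(y, f) {
  cens <- y <= 0
  ifelse(cens, dnorm(-f) / pnorm(-f), -(y - f))
}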
As an example with the Tobit model see Gradient Tree Boosted Tobit Models for Default Prediction paper. The method should be available with the scikit-learn branch mentioned in the paper.
The same idea is used for right-censored data in, e.g., the gbm and mboost packages in R.
The above idea can be applied with other methods (e.g., neural networks). However, it is particularly easy with gradient boosting since you just need to be able to compute the gradient of the loss function (the negative log likelihood). Then you can apply whatever method you prefer to fit the negative gradient with an $L2$ loss.
20,633 | What criteria must be met in order to conclude a 'ceiling effect' is occurring? | First off, I would like to say that both graphs provide clear evidence to me that there is a ceiling effect present. Rather than judging the effect only visually, I would measure it by checking whether a non-trivial portion of the observations lies near the upper limit of the instrument's range. Generally speaking, a ceiling effect will exist so long as a non-trivial portion of test takers achieves the maximum score on the test.
However, that said, the technology of test analysis has progressed a long way since we needed to directly interpret the scores on an instrument based on the number correct. We can now use Item Response Theory to estimate the parameters of individual items and use those items to identify subject ability. There can of course still be ceiling effects on a test if we make the test too easy. However, thanks to the powers of item response theory we should be able to put at least a few items of high enough difficulty in the instrument so that all but a trivial portion of the population is kept off the ceiling.
Thanks for the question. It is very interesting!
20,634 | What criteria must be met in order to conclude a 'ceiling effect' is occurring? | I guess a rough and ready way would just be to measure the variance as the scale increases. If the variance shrinks toward the top of the scale, that is evidence for a ceiling effect; if not, there is no ceiling effect. You could make a homogeneity-of-variance plot. Levene's test could be useful to determine whether the variance is significantly different at different points on the scale.
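A minimal R sketch of that idea on simulated data (the ceiling of 100 and the use of car::leveneTest are my assumptions):
library(car)  # for leveneTest()
set.seed(1)
g <- factor(rep(c("low", "high"), each = 150))
true <- rnorm(300, mean = ifelse(g == "high", 95, 60), sd = 15)
score <- pmin(true, 100)  # scores cannot exceed the instrument's ceiling
tapply(score, g, var)     # variance is compressed in the "high" group
leveneTest(score ~ g, data = data.frame(score, g))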
20,635 | What criteria must be met in order to conclude a 'ceiling effect' is occurring? | The critical problem in deciding whether clustering around the highest or lowest point is due to a ceiling/floor effect is whether the values of the cases actually "represent" the value. When ceiling/floor effects do occur, some of the cases, despite taking the maximum or minimum value, are actually higher/lower than that value (imagine an adult and a child both taking an extremely simple math test that purports to measure math capability, and both scoring 100%). Here, the data is censored.
Another scenario is also possible when we use bounded scales such as a Likert-like scale which has inherent upper and lower limits. It is entirely possible that those who scored the highest are indeed worth that score and no differences (such as the math example above) exist among all who scored the highest. In such a case, the data is truncated at the limits, not censored.
Based on the above reasoning, I reckon one should devise a procedure to fit any given dataset under both data truncation and data censoring. If the censoring model fits the data best, I think one may then conclude that a ceiling/floor effect is present.
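Here is a minimal R sketch of such a procedure under an assumed normal latent score with ceiling U (simulated data; both likelihoods are my own illustration): fit a censored-normal and a truncated-normal model by maximum likelihood and compare AICs.
U <- 100  # ceiling of the instrument
# Censored normal: observations at the ceiling contribute P(latent >= U).
nll_censored <- function(p, y) {
  mu <- p[1]; s <- exp(p[2])
  at_ceiling <- y >= U
  -sum(ifelse(at_ceiling,
              pnorm(U, mu, s, lower.tail = FALSE, log.p = TRUE),
              dnorm(y, mu, s, log = TRUE)))
}
# Truncated normal: density rescaled by P(latent <= U).
nll_truncated <- function(p, y) {
  mu <- p[1]; s <- exp(p[2])
  -sum(dnorm(y, mu, s, log = TRUE) - pnorm(U, mu, s, log.p = TRUE))
}
set.seed(1)
y <- pmin(rnorm(500, 85, 15), U)  # simulated scores piled up at the ceiling
fit_c <- optim(c(80, log(10)), nll_censored, y = y)
fit_t <- optim(c(80, log(10)), nll_truncated, y = y)
c(AIC_censored = 2 * fit_c$value + 4, AIC_truncated = 2 * fit_t$value + 4)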
20,636 | How can I predict the odds that a dodgeball team is going to win based on the winning history of its players? | Is it correct that you have not only those percentages but all the individual game outcomes as well? Then I would suggest the R package PlayerRatings. This package not only deals with problems like how to calculate player strength (using algorithms like Elo or Glicko), but offers functions that can predict future game outcomes as well.
For examples check: http://cran.r-project.org/web/packages/PlayerRatings/vignettes/AFLRatings.pdf
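A hedged sketch of the package's workflow; the column layout (period, player1, player2, score) is assumed from the vignette, so check the documentation before relying on it:
library(PlayerRatings)
# Toy game history: score = 1 means the first listed team won.
games <- data.frame(week  = c(1, 1, 2, 2),
                    team1 = c("A", "C", "A", "B"),
                    team2 = c("B", "D", "C", "D"),
                    score = c(1, 0, 1, 1))
fit <- elo(games)
fit$ratings                                          # estimated strengths
predict(fit, data.frame(week = 3, team1 = "A", team2 = "D"))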
20,637 | How can I predict the odds that a dodgeball team is going to win based on the winning history of its players? | Sounds like a job for naive Bayes. I don't quite understand the theory behind it, so unfortunately I can't give you an example, but Bayes works with known (archival) data to draw inferences.
I think Bayes is only available in Statistic Server of SPSS so if you have access to one of these you're in luck. Alternatively you can use Weka which also includes a bunch of other classifiers, so maybe you run your experiment and let us know of the results?
EDIT:
Bayes and related classifiers can also draw inferences about the players themselves, e.g. $A$ has a score of 65%, but when $A$ and $B$ play on opposite teams, $A$'s performance drops by 5%.
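For what it's worth, here is a hedged R sketch using e1071's naiveBayes on simulated archival data; the feature choice (each team's average player win percentage) is my assumption:
library(e1071)  # for naiveBayes()
set.seed(1)
hist_games <- data.frame(avg_t1 = runif(200, 0.3, 0.8),
                         avg_t2 = runif(200, 0.3, 0.8))
hist_games$t1_wins <- factor(rbinom(200, 1,
    plogis(5 * (hist_games$avg_t1 - hist_games$avg_t2))))
fit <- naiveBayes(t1_wins ~ avg_t1 + avg_t2, data = hist_games)
# Posterior win probabilities for a new matchup:
predict(fit, data.frame(avg_t1 = 0.65, avg_t2 = 0.55), type = "raw")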
20,638 | How can I predict the odds that a dodgeball team is going to win based on the winning history of its players? | Isn't it just a simple division of the averages? AvgTeam1WinP / AvgTeam2WinP? It should yield the odds that team1 will win against team2.
If I consider the following:
If player1 were to play against player2 in "1-man" teams, you would agree that the odds that player1 will win against player2 are the probability that player1 would win against a random opponent divided by the probability that player2 would win against a random opponent (this of course only holds if you consider the win percentages to be accurate, as in their asymptotic limit); simply:
OddsP1VsP2 = WinProbabilityP1 / WinProbabilityP2
If you would argue that there is no interaction effect of some players being terrible and thus influencing the score more negatively than expected*, or of some players being really good and influencing the score more positively than expected**, then it seems logical that you can just take the average probability for each player in each team.
* E.g., if a team of 60%,60%,60%,60% is considered better than a team of 70%,70%,70%,30%, where one bad player would result in worse odds for the team even though the averages are the same. Without additional hypotheses, that particular question cannot be addressed.
** Similarly, if 50,50,50,90 is not considered equal to 60,60,60,60, then the same applies.
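The heuristic in a few lines of R (illustrative only, given the caveats in the footnotes above):
# Odds that team 1 beats team 2 under the no-interaction assumption.
team_odds <- function(win_pct_team1, win_pct_team2) {
  mean(win_pct_team1) / mean(win_pct_team2)
}
team_odds(c(0.60, 0.60, 0.60, 0.60), c(0.70, 0.70, 0.70, 0.30))  # = 1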
20,639 | Ridge regression coefficients that are larger than OLS coefficients or that change sign depending on $\lambda$ | As $\lambda$ increases from zero, the contribution of the various coefficients changes to suit the optimization, allowing both value increases and sign changes. Have a look at Ryan Tibshirani's ridge regression charts (PDF) illustrating both of your questions (charts 17, 19).
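A minimal R sketch of that behaviour with two correlated predictors (simulated data; whether a sign flip actually shows up depends on the draw):
# Ridge path beta(lambda) = (X'X + lambda I)^{-1} X'y; with correlated
# predictors a coefficient can grow or change sign as lambda increases.
set.seed(2)
n <- 100
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.3)  # strongly correlated with x1
X <- cbind(x1, x2)
y <- x1 + rnorm(n)
ridge_beta <- function(lambda)
  solve(crossprod(X) + lambda * diag(2), crossprod(X, y))
sapply(c(0, 0.5, 5, 50), ridge_beta)  # coefficients along the path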
20,640 | Complications of having a very small sample in a structural equation model | One point: there is no such thing as a "basic question"; you only know what you know, and not what you don't know. Asking a question is often the only way to find out.
Whenever you see small samples, you find out who really has "faith" in their models and who doesn't. I say this because small samples is usually where models have the biggest impact.
Being a keen (psycho?) modeller myself, I say go for it! You seem to be adopting a cautious approach, and you have acknowledged potential bias, etc. due to small sample. One thing to keep in mind with fitting models to small data is that you have 12 variables. Now you should think - how well could any model with 12 variables be determined by 42 observations? If you had 42 variables, then any model could be perfectly fit to those 42 observations (loosely speaking), so your case is not too far from being too flexible. What happens when your model is too flexible? It tends to fit the noise - that is, the relationships which are determined by things other than the ones you hypothesize.
You also have the opportunity to put your ego where your model is by predicting what those future 10-20 samples will be from your model. I wonder how your critics will react to a so called "dodgy" model which gives the right predictions. Note that you would get a similar "I told you so" if your model doesn't predict the data well.
Another way you could assure yourself that your results are reliable is to try and break them. Keeping your original data intact, create a new data set, and see what you have to do to this new data set in order to make your SEM results seem ridiculous. Then look at what you had to do, and consider: is this a reasonable scenario? Does my "ridiculous" data resemble a genuine possibility? If you have to take your data to ridiculous territory in order to produce ridiculous results, it provides some assurance (heuristic, not formal) that your method is sound.
20,641 | Complications of having a very small sample in a structural equation model | The main problem that I see with this is lack of power. Confirmatory factor and SEM testing look to accept the null - you want to see a non-significant p-value - so lack of power can be a problem. The power of the test depends on the sample size (42) and the degrees of freedom. AMOS gives you the degrees of freedom. You have not quoted it, but it won't be large in this case. With 12 variables, you start with 66 DFs (the number of distinct correlations among 12 variables), and subtract 1 for each parameter that you estimate. I don't know how many that would be, but you say that you have several factors and correlations between various constructs.
I do not entirely agree with Rolando2. In SEM, you gain by having lots of variables, assuming that they are reliable indicators of the underlying constructs. So don't reduce the number of variables. For the same reason, I do not entirely agree with @probabilityislogic. In SEM, you are not trying to model 12 variables with 42 observations. You are trying to model the constructs through 12 indicators, strengthened by 42 replications. A very simple factor model - 1 factor with 12 indicators - possibly could be tested with 42 people.
The RMSEA and other goodness of fit measures will tend to improve as you near saturation of the model, so again, you run the risk of a misleading result.
That being said, I have seen small data sets reject a factor model. It probably means something that the fit appears to be good.
Note: You can also check the residuals of a SEM model. These are the differences between the sample covariance matrix and the model-implied covariance matrix. AMOS will give them to you if you request them. Examination of the residuals might indicate whether they are evenly distributed, or whether certain covariances are very badly fitted.
20,642 | What guarantees the existence of a finite representation of the Wold decomposition? Mechanics and Intuition | Actually, without some further assumptions on the form of the transfer function in the Wold representation, I don't think it is actually true that it can always be well approximated by a ratio of finite-order polynomials. There are classes of time-series models for covariance-stationary processes where this approximation is not considered adequate --- e.g., when dealing with some "long memory" processes.
Analysis via the spectral density: To gain some insight into this aspect of time-series analysis, it is useful to look at the spectral density of a covariance-stationary process. This is fairly natural, since it allows us to see the process in frequency-space. Consider a covariance-stationary process $\{ X_t | t \in \mathbb{Z} \}$, meaning that its first two moments have the form:
$$\mathbb{E}(X_t) = \mu
\quad \quad \quad
\mathbb{Cov}(X_{t+r}, X_{t}) = \gamma(r),$$
where $\gamma$ is called the autocovariance function. If the process has an absolutely continuous spectral density then we can write this as:
$$f(\delta) = \frac{1}{2 \pi} \sum_{r \in \mathbb{Z}} \gamma(r) e^{2 \pi i r \delta}.$$
This function is periodic, and we can examine it over the Nyquist range $-\tfrac{1}{2} \leqslant \delta \leqslant \tfrac{1}{2}$, which gives a full period. Now, under the standard ARMA representation $\phi(B) X_t = \theta(B) \varepsilon_t$ with $\sigma^2 = \mathbb{V}(\varepsilon_t)$ (which leads to a ratio of two finite polynomials for the $\text{MA}(\infty)$ representation) we get the spectral density:
$$f(\delta) = \frac{\sigma^2}{2 \pi} \bigg| \frac{\theta(e^{2 \pi i \delta})}{\phi(e^{2 \pi i \delta})} \bigg|^2.$$
In particular, at the zero frequency we get:
$$f(0) = \frac{\sigma^2}{2 \pi} \bigg| \frac{\theta(1)}{\phi(1)} \bigg|^2.$$
For many covariance-stationary processes, this form approximates the true spectral density fairly well. However, certain kinds of covariance-stationary time series are not well approximated by this form. Particular things that affect this are the rate of decay of the autocovariance function in the tails (e.g., exponential decay, power-law decay, etc.) and whether the time-series process is "short memory" or "long memory".
One specific case where the ARMA representation is not a particularly good approximation is when the time-series process has "long memory". This phenomenon is defined by the spectral property that $f(0)=\infty$, which means that the autocovariance function has the divergent sum $\sum_{r \in \mathbb{Z}} \gamma(r) = \infty$. This property cannot be achieved within the standard ARMA form, since $|\theta(1)| \leqslant \sum_i |\theta_i| < \infty$ and $|\phi(1)|>0$.
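As a quick numerical illustration of that last point, here is a small base-R sketch (the ARMA coefficients are arbitrary choices of mine):
# f(0) = sigma^2/(2*pi) * (theta(1)/phi(1))^2 for an ARMA model, per the formula above.
arma_f0 <- function(phi, theta, sigma2 = 1) {
  sigma2 / (2 * pi) * (sum(c(1, theta)) / (1 - sum(phi)))^2
}
arma_f0(phi = c(0.5, -0.2), theta = 0.3)  # finite whenever phi(1) != 0 -- so no long memory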
Why is it the case that the infinite Wold representation can always be approximated by the ratio of two finite order polynomials?
Unless you impose some accuracy requirements or convergence conditions on the approximation, anything can be approximated by anything. So the question really becomes, under what conditions can we approximate the Wold representation with an ARMA model and still get good approximation properties (e.g., convergence, arbitrary accuracy with a finite order model, etc.)? I will address this in your subsequent questions.
What guarantees the existence of such an approximation?
Certain general forms for the transfer function in the Wold representation can be represented as power series that can be approximated up to arbitrary accuracy by a finite rational function (i.e., a ratio of two finite polynomials). This is a broad topic in real/complex analysis, and I recommend you go back to basics and have a look at the general topic of Taylor series representations of functions, and the classes of holomorphic/analytic functions. You will see that there are certain nasty classes of functions (e.g., periodic functions) that are not well-approximated by a polynomial, and other functions that are not well-approximated by a ratio of finite polynomials.
As previously noted, without some further assumptions on the form of the transfer function, I don't think it is actually true that it can always be well-approximated by a ratio of finite-order polynomials. There are some kinds of covariance-stationary time-series processes where the transfer function is "nasty" and is not well-approximated by the ARMA form. A specific case of this is "long memory" processes.
The other answer here notes that any meromorphic function can be approximated well by a finite rational function (i.e., a ratio of two finite polynomials). This is true, but it just pushes the question back one step: under what conditions will the transfer function in the Wold representation give rise to a meromorphic power series?
How good is this approximation? Is the approximation better in some cases than in other?
Approximation by an ARMA model is certainly better in some cases than others. ARMA models can approximate most "short memory" processes quite well, but they are not great at approximating "long memory" processes. The more general question, how good is the approximation, is large enough to fill entire books --- the answer will depend on the nature of the transfer function in the Wold representation, and how you measure "goodness" of an approximation. | What guarantees the existence of a finite representation of the Wold decomposition? Mechanics and In | Actually, without some further assumptions on the form of the transfer function in the Wold representation, I don't think it is actually true that it can always be well approximated by a ratio of fini | What guarantees the existence of a finite representation of the Wold decomposition? Mechanics and Intuition
Actually, without some further assumptions on the form of the transfer function in the Wold representation, I don't think it is actually true that it can always be well approximated by a ratio of finite-order polynomials. There are classes of time-series models for covariance-stationary processes where this approximation is not considered adequate --- e.g., when dealing with some "long memory" processes.
Analysis via the spectral density: To gain some insight into this aspect of time-series analysis, it is useful to look at the spectral density of a covariance-stationary process. This is fairly natural, since it allows us to see the process in frequency-space. Consider a covariance-stationary process $\{ X_t | t \in \mathbb{Z} \}$, meaning that its first two moments have the form:
$$\mathbb{E}(X_t) = \mu
\quad \quad \quad
\mathbb{Cov}(X_{t+r}, X_{t}) = \gamma(r),$$
where $\gamma$ is called the autocovariance function. If the process has an absolutely continuous spectral density then we can write this as:
$$f(\delta) = \frac{1}{2 \pi} \sum_{r \in \mathbb{Z}} \gamma(r) e^{2 \pi i r \delta}.$$
This function is periodic, and we can examine it over the Nyquist range $-\tfrac{1}{2} \leqslant \delta \leqslant \tfrac{1}{2}$, which gives a full period. Now, under the standard ARMA representation $\phi(B) X_t = \theta(B) \varepsilon_t$ with $\sigma^2 = \mathbb{V}(\varepsilon_t)$ (which leads to a ratio of two finite polynomials for the $\text{MA}(\infty)$ representation) we get the spectral density:
$$f(\delta) = \frac{\sigma^2}{2 \pi} \bigg| \frac{\theta(e^{2 \pi i \delta})}{\phi(e^{2 \pi i \delta})} \bigg|^2.$$
In particular, at the zero frequency we get:
$$f(0) = \frac{\sigma^2}{2 \pi} \bigg| \frac{\theta(1)}{\phi(1)} \bigg|^2.$$
For many covariance-stationary processes, this form approximates the true spectral density fairly well. However, certain kinds of covariance-stationary time series are not well approximated by this form. Particular things that affect this are the rate of decay of the autocovariance function in the tails (e.g., exponential decay, power-law decay, etc.) and whether the time-series process is "short memory" or "long memory".
One specific case where the ARMA representation is not a particularly good approximation is when the time-series process has "long memory". This phenomenon is defined by the spectral property that $f(0)=\infty$, which means that the autocovariance function has the divergent sum $\sum_{r \in \mathbb{Z}} \gamma(r) = \infty$. This property cannot be achieved within the standard ARMA form, since $|\theta(1)| \leqslant \sum_i |\theta_i| < \infty$ and $|\phi(1)|>0$.
Why is it the case that the infinite Wold representation can always be approximated by the ratio of two finite order polynomials?
Unless you impose some accuracy requirements or convergence conditions on the approximation, anything can be approximated by anything. So the question really becomes, under what conditions can we approximate the Wold representation with an ARMA model and still get good approximation properties (e.g., convergence, arbitrary accuracy with a finite order model, etc.)? I will address this in your subsequent questions.
What guarantees the existence of such an approximation?
Certain general forms for the transfer function in the Wold representation can be represented as power series that can be approximated up to arbitrary accuracy by a finite rational function (i.e., a ratio of two finite polynomials). This is a broad topic in real/complex analysis, and I recommend you go back to basics and have a look at the general topic of Taylor series representations of functions, and the classes of holomorphic/analytic functions. You will see that there are certain nasty classes of functions (e.g., periodic functions) that are not well-approximated by a polynomial, and other functions that are not well-approximated by a ratio of finite polynomials.
As previously noted, without some further assumptions on the form of the transfer function, I don't think it is actually true that it can always be well-approximated by a ratio of finite-order polynomials. There are some kinds of covariance-stationary time-series processes where the transfer function is "nasty" and is not well-approximated by the ARMA form. A specific case of this is "long memory" processes.
The other answer here notes that any meromorphic function can be approximated well by a finite rational function (i.e., a ratio of two finite polynomials). This is true, but it just pushes the question back one step: under what conditions will the transfer function in the Wold representation give rise to a meromorphic power series?
How good is this approximation? Is the approximation better in some cases than in other?
Approximation by an ARMA model is certainly better in some cases than others. ARMA models can approximate most "short memory" processes quite well, but they are not great at approximating "long memory" processes. The more general question, how good is the approximation, is large enough to fill entire books --- the answer will depend on the nature of the transfer function in the Wold representation, and how you measure "goodness" of an approximation. | What guarantees the existence of a finite representation of the Wold decomposition? Mechanics and In
Actually, without some further assumptions on the form of the transfer function in the Wold representation, I don't think it is actually true that it can always be well approximated by a ratio of fini |
20,643 | What guarantees the existence of a finite representation of the Wold decomposition? Mechanics and Intuition | The Wold decomposition itself is a trivial fact. It is just the Gram-Schmidt orthogonalization procedure. In the time series context, the Hilbert space in question is the space of random variables with finite second moments.
Just to state the Wold decomposition: For any covariance stationary time series $\{X_t\}$, there exist innovations $\{\epsilon_t\}$ such that $\{X_t\}$ is a two-sided MA$(\infty)$ process with respect to $\{\epsilon_t\}$. In usual heuristic notation,
$$
X_t = f(B)\epsilon_t
$$
where $f(z) = \sum_{h \in \mathbb{Z}} \gamma_h z^h$.
The series converges in a couple of senses:
First, it converges in the $L^2$-norm uniformly in $t$. In other words, $\{X_t\}$ can be approximated, in the $L^2$-norm uniformly in $t$, by the corresponding truncated finite-order MA sum.
Second, for any given $t$, it converges almost surely. In other words, for any given $t$, the corresponding truncated MA sum converges to $X_t$ with probability $1$.
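To make the truncation idea concrete, here is a small base-R sketch (the ARMA(1,1) parameters are arbitrary):
psi <- ARMAtoMA(ar = 0.7, ma = 0.3, lag.max = 50)  # psi_1, ..., psi_50 of the MA(infinity) form
cumsum(psi^2) / sum(psi^2)  # geometric decay: a short truncation captures almost all of the squared-weight mass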
Consider the Laurent series $f(z) = \sum_{h \in \mathbb{Z}} \gamma_h z^h$, which defines a meromorphic function on some open annulus in the complex plane.
Any meromorphic function $f(z)$ can be approximated (uniformly on compact sets) by a rational function $\frac{\Theta(z)}{\Phi(z)}$. In the time series context, this means
$$
X_t = f(B)\epsilon_t
$$
can be approximated, in some sense, by the ARMA process
$$
X'_t = \frac{\Theta(B)}{\Phi(B)}\epsilon_t.
$$
A couple of caveats
First, exactly how "uniformly on compact sets" translates to approximation of random variables is not clear. It is part of standard hand-waving folklore. To make this more precise, one needs to know what "uniformly on compact sets" means in terms of series coefficients. Second, in the non-causal case, replace $z$ (resp. $B$) by $\frac{1}{z}$ (resp. $B^{-1}$, the forward shift). | What guarantees the existence of a finite representation of the Wold decomposition? Mechanics and In | The Wold decomposition itself is a trivial fact. It is just the Gram-Schmidt orthogonalization procedure. In the time series context, the Hilbert space in question is the space of random variables wit | What guarantees the existence of a finite representation of the Wold decomposition? Mechanics and Intuition
The Wold decomposition itself is a trivial fact. It is just the Gram-Schmidt orthogonalization procedure. In the time series context, the Hilbert space in question is the space of random variables with finite second moments.
Just to state the Wold decomposition: For any covariance stationary time series $\{X_t\}$, there exist innovations $\{\epsilon_t\}$ such that $\{X_t\}$ is a two-sided MA$(\infty)$ process with respect to $\{\epsilon_t\}$. In usual heuristic notation,
$$
X_t = f(B)\epsilon_t
$$
where $f(z) = \sum_{h \in \mathbb{Z}} \gamma_h z^h$.
The series converges in a couple of senses:
First, it converges in the $L^2$-norm uniformly in $t$. In other words, $\{X_t\}$ can be approximated, in the $L^2$-norm uniformly in $t$, by the corresponding truncated finite-order MA sum.
Second, for any given $t$, it converges almost surely. In other words, for any given $t$, the corresponding truncated MA sum converges to $X_t$ with probability $1$.
Consider the Laurent series $f(z) = \sum_{h \in \mathbb{Z}} \gamma_h z^h$, which defines a meromorphic function on some open annulus in the complex plane.
Any meromorphic function $f(z)$ can be approximated (uniformly on compact sets) by a rational function $\frac{\Theta(z)}{\Phi(z)}$. In the time series context, this means
$$
X_t = f(B)\epsilon_t
$$
can be approximated, in some sense, by the ARMA process
$$
X'_t = \frac{\Theta(B)}{\Phi(B)}\epsilon_t.
$$
A couple of caveats
First, exactly how "uniformly on compact sets" translates to approximation of random variables is not clear. It is part of standard hand-waving folklore. To make this more precise, one needs to know what "uniformly on compact sets" means in terms of series coefficients. Second, in the non-causal case, replace $z$ (resp. $B$) by $\frac{1}{z}$ (resp. $B^{-1}$, the forward shift).
The Wold decomposition itself is a trivial fact. It is just the Gram-Schmidt orthogonalization procedure. In the time series context, the Hilbert space in question is the space of random variables wit |
20,644 | What loss function should I use to score a seq2seq RNN model? | It seems to assume teacher forcing during training (ie, instead of
using the decoder's guess for a position as the input to the next
iteration, it uses the known token).
The term "teacher forcing" bothers me a bit, because it kind of misses the idea: There's nothing wrong or weird with feeding the next known token to the RNN model -- it's literally the only way to compute $\log P(y_1, \ldots, y_N)$. If you define a distribution over sequences autoregressively as $P(y) = \prod_i P(y_i | y_{<i})$ as is commonly done, where each conditional term is modeled with an RNN, then "teacher forcing" is the one true procedure which correctly maximizes log likelihood. (I omit writing the conditioning sequence $x$ above because it doesn't change anything.)
Given the ubiquity of MLE and the lack of good alternatives, I don't think assuming "teacher forcing" is objectionable.
Nonetheless there are admittedly issues with it -- namely, the model assigns high likelihood to all data points, but samples from the model are not necessarily likely in the true data distribution (which results in "low quality" samples). You may be interested in "Professor Forcing" (Lamb et al.) which mitigates this via an adversarial training procedure without giving up MLE.
It wouldn't penalize long sequences. Since the probability is from 1
to N of the output, if the decoder generated a longer sequence
everything after the first N would not factor into the loss.
and
If the model predicts an early End-of-String token, the loss function
still demands N steps -- which means we are generating outputs based
on an untrained "manifold" of the model. That seems sloppy.
Neither of these are problems which occur while training. Instead of thinking of an autoregressive sequence model as a procedure to output a prediction, think of it as a way to compute how probable a given sequence is. The model never predicts anything -- you can sample a sequence or a token from a distribution, or you can ask it what the most likely next token is -- but these are crucially different from a prediction (and you don't sample during training either).
If so, has there been any progress into a more advanced loss function?
There may well be objectives specifically designed on a case-by-case basis for different modeling tasks. However, I would say MLE is still dominant -- the recent GPT-2 model, which achieved state-of-the-art performance on a broad spectrum of natural language modeling and understanding tasks, was trained with it. | What loss function should I use to score a seq2seq RNN model? | It seems to assume teacher forcing during training (ie, instead of
using the decoder's guess for a position as the input to the next
iteration, it uses the known token.
The term "teacher forcing" | What loss function should I use to score a seq2seq RNN model?
It seems to assume teacher forcing during training (ie, instead of
using the decoder's guess for a position as the input to the next
iteration, it uses the known token).
The term "teacher forcing" bothers me a bit, because it kind of misses the idea: There's nothing wrong or weird with feeding the next known token to the RNN model -- it's literally the only way to compute $\log P(y_1, \ldots, y_N)$. If you define a distribution over sequences autoregressively as $P(y) = \prod_i P(y_i | y_{<i})$ as is commonly done, where each conditional term is modeled with an RNN, then "teacher forcing" is the one true procedure which correctly maximizes log likelihood. (I omit writing the conditioning sequence $x$ above because it doesn't change anything.)
Given the ubiquity of MLE and the lack of good alternatives, I don't think assuming "teacher forcing" is objectionable.
Nonetheless there are admittedly issues with it -- namely, the model assigns high likelihood to all data points, but samples from the model are not necessarily likely in the true data distribution (which results in "low quality" samples). You may be interested in "Professor Forcing" (Lamb et al.) which mitigates this via an adversarial training procedure without giving up MLE.
It wouldn't penalize long sequences. Since the probability is from 1
to N of the output, if the decoder generated a longer sequence
everything after the first N would not factor into the loss.
and
If the model predicts an early End-of-String token, the loss function
still demands N steps -- which means we are generating outputs based
on an untrained "manifold" of the model. That seems sloppy.
Neither of these are problems which occur while training. Instead of thinking of an autoregressive sequence model as a procedure to output a prediction, think of it as a way to compute how probable a given sequence is. The model never predicts anything -- you can sample a sequence or a token from a distribution, or you can ask it what the most likely next token is -- but these are crucially different from a prediction (and you don't sample during training either).
If so, has there been any progress into a more advanced loss function?
There may well be objectives specifically designed on a case-by-case basis for different modeling tasks. However, I would say MLE is still dominant -- the recent GPT-2 model, which achieved state-of-the-art performance on a broad spectrum of natural language modeling and understanding tasks, was trained with it.
It seems to assume teacher forcing during training (ie, instead of
using the decoder's guess for a position as the input to the next
iteration, it uses the known token.
The term "teacher forcing" |
20,645 | How to do cross-validation with cv.glmnet (LASSO regression in R)? | Is the cross-validation performed in cv.glmnet simply to pick the best lambda, or is it also serving as a more general cross-validation procedure?
It does almost everything needed in a cross-validation: it fits models across a grid of candidate lambda values, estimates the cross-validated error for each, chooses the best lambda, and finally trains the model with the chosen parameters.
For example, in the returned object:
cvm is the mean cross-validated error.
cvsd is the estimated standard deviation.
Like the other returned values, these are calculated on the held-out folds. Finally, the
glmnet.fit object gives the model trained on all the data (training and held-out folds combined), from which predictions at the chosen lambda can be extracted.
Do I have to do so manually, or is perhaps the caret function useful for glmnet models?
You need not do this manually. 'Caret' would be very useful, and is one of my favourite packages because it works for all the other models with the same syntax. I myself often use caret rather than cv.glmnet. However, in your scenario it is essentially the same.
Do I use two concentric "loops" of cross validation?... Do I use an "inner loop" of CV via cv.glmnet to determine the best lambda value within each of k folds of an "external loop" of k-fold cross validation processing?
You could do this; the concept is very similar to the idea of nested cross-validation (see: Nested cross validation for model selection).
If I do cross-validation of my already cross-validating cv.glmnet model, how do I isolate the "best" model (from the "best" lambda value) from each cv.glmnet model within each fold of my otherwise "external loop" of cross validation?
Just run a loop in which you generate training and test data, run cv.glmnet on the training data, and use the fitted model (glmnet.fit) to predict on the test data. | How to do cross-validation with cv.glmnet (LASSO regression in R)? | Is the cross-validation performed in cv.glmnet simply to pick the best lambda, or is it also serving as a more general cross-validation procedure?
It does almost everything needed in a cross-validatio | How to do cross-validation with cv.glmnet (LASSO regression in R)?
Is the cross-validation performed in cv.glmnet simply to pick the best lambda, or is it also serving as a more general cross-validation procedure?
It does almost everything needed in a cross-validation: it fits models across a grid of candidate lambda values, estimates the cross-validated error for each, chooses the best lambda, and finally trains the model with the chosen parameters.
For example, in the returned object:
cvm is the mean cross-validated error.
cvsd is the estimated standard deviation.
Like the other returned values, these are calculated on the held-out folds. Finally, the
glmnet.fit object gives the model trained on all the data (training and held-out folds combined), from which predictions at the chosen lambda can be extracted.
Do I have to do so manually, or is perhaps the caret function useful for glmnet models?
You need not do this manually. 'Caret' would be very useful, and is one of my favourite packages because it works for all the other models with the same syntax. I myself often use caret rather than cv.glmnet. However, in your scenario it is essentially the same.
Do I use two concentric "loops" of cross validation?... Do I use an "inner loop" of CV via cv.glmnet to determine the best lambda value within each of k folds of an "external loop" of k-fold cross validation processing?
You could do this; the concept is very similar to the idea of nested cross-validation (see: Nested cross validation for model selection).
If I do cross-validation of my already cross-validating cv.glmnet model, how do I isolate the "best" model (from the "best" lambda value) from each cv.glmnet model within each fold of my otherwise "external loop" of cross validation?
Just run a loop in which you generate training and test data, run cv.glmnet on the training data, and use the fitted model (glmnet.fit) to predict on the test data.
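As a hedged illustration of that loop (the simulated data, the fold counts, and alpha = 1 are my choices, not part of the answer):
library(glmnet)
set.seed(1)
x <- matrix(rnorm(200 * 20), 200, 20)
y <- rnorm(200)
outer_fold <- sample(rep(1:5, length.out = nrow(x)))    # outer CV assignment
err <- sapply(1:5, function(k) {
  train <- outer_fold != k
  fit  <- cv.glmnet(x[train, ], y[train], alpha = 1)    # inner CV picks lambda
  pred <- predict(fit, newx = x[!train, ], s = "lambda.min")
  mean((y[!train] - pred)^2)                            # outer-fold test error
})
mean(err)                                               # honest out-of-sample estimate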
Is the cross-validation performed in cv.glmnet simply to pick the best lambda, or is it also serving as a more general cross-validation procedure?
It does almost everything needed in a cross-validatio |
20,646 | Different results from randomForest via caret and the basic randomForest package | I think the question, while somewhat trivial and "programmatic" at first read, touches upon two main issues that are very important in modern Statistics:
reproducibility of results and
non-deterministic algorithms.
The reason for the different results is that the two procedures are trained using different random seeds. Random forests use a random subset of the full dataset's variables as candidates at each split (that's the mtry argument, which relates to the random subspace method) and also bag (bootstrap-aggregate) the original dataset to decrease the variance of the model. These two internal random sampling procedures, though, are not deterministic between different runs of the algorithm. The random order in which the sampling is done is controlled by the random seeds used.
If the same seeds were used, one would get the exact same results in both cases where the randomForest routine is called; both internally in caret::train as well as externally when fitting a random forest manually. I attach a simple code snippet to showcase this. Please note that I use a very small number of trees (argument: ntree) to keep training fast; it should generally be much larger.
library(caret)
library(randomForest)
set.seed(321)
trainData <- twoClassSim(5000, linearVars = 3, noiseVars = 9)
testData <- twoClassSim(5000, linearVars = 3, noiseVars = 9)
set.seed(432)
mySeeds <- sapply(1:26, function(u) sample(10^4, 3), simplify = FALSE)
cvCtrl = trainControl(method = "repeatedcv", number = 5, repeats = 5,
classProbs = TRUE, summaryFunction = twoClassSummary,
seeds = mySeeds)
fitRFcaret = train(Class ~ ., data = trainData, trControl = cvCtrl,
ntree = 33, method = "rf", metric="ROC")
set.seed( unlist(tail(mySeeds,1))[1])
fitRFmanual <- randomForest(Class ~ ., data=trainData,
mtry = fitRFcaret$bestTune$mtry, ntree=33)
At this point both the caret.train object fitRFcaret as well as the manually defined randomForest object fitRFmanual have been trained using the same data and, importantly, using the same random seeds when fitting their final model. As such, when we try to predict using these objects, and because we do no preprocessing of our data, we will get the exact same answers.
all.equal(current = as.vector(predict(fitRFcaret, testData)),
target = as.vector(predict(fitRFmanual, testData)))
# TRUE
Just to clarify this later point a bit further: predict(xx$finalModel, testData) and predict(xx, testData) will be different if one sets the preProcess option when using train. On the other hand, when using the finalModel directly it is equivalent to using the predict function from the fitted model (predict.randomForest here) instead of predict.train; no pre-processing takes place. Obviously in the scenario outlined in the original question where no pre-processing is done, the results will be the same when using the finalModel, the manually fitted randomForest object or the caret.train object.
all.equal(current = as.vector(predict(fitRFcaret$finalModel, testData)),
target = as.vector(predict(fitRFmanual, testData)))
# TRUE
all.equal(current = as.vector(predict(fitRFcaret$finalModel, testData)),
target = as.vector(predict(fitRFcaret, testData)))
# TRUE
I would strongly suggest that you always set the random seed used by R, MATLAB or any other program used. Otherwise, you cannot check the reproducibility of results (which OK, it might not be the end of the world) nor exclude a bug or external factor affecting the performance of a modelling procedure (which yeah, it kind of sucks). A lot of the leading ML algorithms (eg. gradient boosting, random forests, extreme learning machines) do employ certain internal resampling procedures during their training phases; setting the random seed states prior to (or sometimes even within) their training phase can be important. | Different results from randomForest via caret and the basic randomForest package | I think the question while somewhat trivial and "programmatic" at first read touches upon two main issues that very important in modern Statistics:
reproducibility of results and
non-deterministic | Different results from randomForest via caret and the basic randomForest package
I think the question, while somewhat trivial and "programmatic" at first read, touches upon two main issues that are very important in modern Statistics:
reproducibility of results and
non-deterministic algorithms.
The reason for the different results is that the two procedures are trained using different random seeds. Random forests use a random subset of the full dataset's variables as candidates at each split (that's the mtry argument, which relates to the random subspace method) and also bag (bootstrap-aggregate) the original dataset to decrease the variance of the model. These two internal random sampling procedures, though, are not deterministic between different runs of the algorithm. The random order in which the sampling is done is controlled by the random seeds used.
If the same seeds were used, one would get the exact same results in both cases where the randomForest routine is called; both internally in caret::train as well as externally when fitting a random forest manually. I attach a simple code snippet to showcase this. Please note that I use a very small number of trees (argument: ntree) to keep training fast; it should generally be much larger.
library(caret)
library(randomForest)
set.seed(321)
trainData <- twoClassSim(5000, linearVars = 3, noiseVars = 9)
testData <- twoClassSim(5000, linearVars = 3, noiseVars = 9)
set.seed(432)
mySeeds <- sapply(1:26, function(u) sample(10^4, 3), simplify = FALSE)
cvCtrl = trainControl(method = "repeatedcv", number = 5, repeats = 5,
classProbs = TRUE, summaryFunction = twoClassSummary,
seeds = mySeeds)
fitRFcaret = train(Class ~ ., data = trainData, trControl = cvCtrl,
ntree = 33, method = "rf", metric="ROC")
set.seed( unlist(tail(mySeeds,1))[1])
fitRFmanual <- randomForest(Class ~ ., data=trainData,
mtry = fitRFcaret$bestTune$mtry, ntree=33)
At this point both the caret.train object fitRFcaret as well as the manually defined randomForest object fitRFmanual have been trained using the same data and, importantly, using the same random seeds when fitting their final model. As such, when we try to predict using these objects, and because we do no preprocessing of our data, we will get the exact same answers.
all.equal(current = as.vector(predict(fitRFcaret, testData)),
target = as.vector(predict(fitRFmanual, testData)))
# TRUE
Just to clarify this later point a bit further: predict(xx$finalModel, testData) and predict(xx, testData) will be different if one sets the preProcess option when using train. On the other hand, when using the finalModel directly it is equivalent to using the predict function from the fitted model (predict.randomForest here) instead of predict.train; no pre-processing takes place. Obviously in the scenario outlined in the original question where no pre-processing is done, the results will be the same when using the finalModel, the manually fitted randomForest object or the caret.train object.
all.equal(current = as.vector(predict(fitRFcaret$finalModel, testData)),
target = as.vector(predict(fitRFmanual, testData)))
# TRUE
all.equal(current = as.vector(predict(fitRFcaret$finalModel, testData)),
target = as.vector(predict(fitRFcaret, testData)))
# TRUE
I would strongly suggest that you always set the random seed used by R, MATLAB or any other program used. Otherwise, you cannot check the reproducibility of results (which OK, it might not be the end of the world) nor exclude a bug or external factor affecting the performance of a modelling procedure (which yeah, it kind of sucks). A lot of the leading ML algorithms (eg. gradient boosting, random forests, extreme learning machines) do employ certain internal resampling procedures during their training phases; setting the random seed states prior to (or sometimes even within) their training phase can be important. | Different results from randomForest via caret and the basic randomForest package
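A minimal check of that advice, reusing trainData and testData from the snippet above (the seed value is arbitrary):
set.seed(2024); rf1 <- randomForest(Class ~ ., data = trainData, ntree = 33)
set.seed(2024); rf2 <- randomForest(Class ~ ., data = trainData, ntree = 33)
identical(predict(rf1, testData), predict(rf2, testData))  # TRUE: same seed, same forest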
I think the question while somewhat trivial and "programmatic" at first read touches upon two main issues that very important in modern Statistics:
reproducibility of results and
non-deterministic |
20,647 | Different results from randomForest via caret and the basic randomForest package | Predictions from curClassifier are not the same as predictions from curClassifier$finalModel (see link). You have reproduced the finalModel and are comparing it to the predict.train object. | Different results from randomForest via caret and the basic randomForest package | Predictions from curClassifier are not the same as predictions from curClassifier$finalModel (see link). You have reproduced the finalModel and are comparing it to the predict.train object.
Predictions from curClassifier are not the same as predictions from curClassifier$finalModel (see link). You have reproduced the finalModel and are comparing it to the predict.train object.
Predictions from curClassifier are not the same as predictions from curClassifier$finalModel (see link). You have reproduced the finalModel and are comparing it to the predict.train object.
20,648 | Converting a list of partial rankings into a global ranking | Plackett-Luce ranking models deal with this problem and are a likelihood based technique where the likelihood is maximized using a majorization-maximization routine, which is similar to Expectation Maximization, in the sense that they use an auxiliary objective function over the likelihood function which is optimized to guarantee iterative monotonic maximization of the likelihood function. (see MM algorithms for Plackett-Luce ranking models by David Hunter). He provides code as well.
From a ranking perspective, they are an extension of Bradley-Terry models which you mention in your post. Bradley-Terry models estimate a global ranking from a sample of pairwise rankings. Plackett-Luce models extend this to rankings of length $>=$ 2. They also allow for each sample being a ranking of a different length.
This fits your dataset perfectly:
Book 1 > Book 40 > Book 25
Book 40 > Book 30
Book 25 > Book 17 > Book 11 > Book 3 etc. | Converting a list of partial rankings into a global ranking | Plackett-Luce ranking models deal with this problem and are a likelihood based technique where the likelihood is maximized using a majorization-maximization routine, which is similar to Expectation Ma | Converting a list of partial rankings into a global ranking
Plackett-Luce ranking models deal with this problem and are a likelihood based technique where the likelihood is maximized using a majorization-maximization routine, which is similar to Expectation Maximization, in the sense that they use an auxiliary objective function over the likelihood function which is optimized to guarantee iterative monotonic maximization of the likelihood function. (see MM algorithms for Plackett-Luce ranking models by David Hunter). He provides code as well.
From a ranking perspective, they are an extension of Bradley-Terry models which you mention in your post. Bradley-Terry models estimate a global ranking from a sample of pairwise rankings. Plackett-Luce models extend this to rankings of length $>=$ 2. They also allow for each sample being a ranking of a different length.
This fits your dataset perfectly:
Book 1 > Book 40 > Book 25
Book 40 > Book 30
Book 25 > Book 17 > Book 11 > Book 3 etc. | Converting a list of partial rankings into a global ranking
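For an in-R sketch of this (using the PlackettLuce package; the small item set and the 0-for-unranked encoding are my illustrative assumptions):
library(PlackettLuce)
# Rows = partial rankings, columns = items; 0 means "unranked in this sample".
R <- matrix(c(1, 0, 0, 0, 3, 0, 2,    # Book1 > Book40 > Book25
              0, 0, 0, 0, 0, 2, 1,    # Book40 > Book30
              0, 4, 3, 2, 1, 0, 0),   # Book25 > Book17 > Book11 > Book3
            nrow = 3, byrow = TRUE,
            dimnames = list(NULL, c("Book1", "Book3", "Book11", "Book17",
                                    "Book25", "Book30", "Book40")))
mod <- PlackettLuce(as.rankings(R))
sort(coef(mod), decreasing = TRUE)    # estimated log-worths: their order is the global ranking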
Plackett-Luce ranking models deal with this problem and are a likelihood based technique where the likelihood is maximized using a majorization-maximization routine, which is similar to Expectation Ma |
20,649 | Converting a list of partial rankings into a global ranking | If you're interested in use (more than in development), you should give rankade, our ranking system, a try.
Rankade is free and easy to use, and it's different from the Bradley-Terry model and the Elo rating system (here's a comparison) because it can manage matches with 2+ factions (i.e. books, in your scenario). By inserting users' ordered rankings (as matches between two or more books, with detailed final standings, including ties), you'll obtain the single ordered ranking of all the books you're looking for.
In addition, rankade gives you the opportunity to check the time evolution of book rankings, stats for book match-ups, and more. | Converting a list of partial rankings into a global ranking | If you're interested in use (more than in development), you should give a try to rankade, our ranking system.
Rankade is free and easy to use, and it's different from Bradley-Terry model and Elo ranki | Converting a list of partial rankings into a global ranking
If you're interested in use (more than in development), you should give rankade, our ranking system, a try.
Rankade is free and easy to use, and it's different from the Bradley-Terry model and the Elo rating system (here's a comparison) because it can manage matches with 2+ factions (i.e. books, in your scenario). By inserting users' ordered rankings (as matches between two or more books, with detailed final standings, including ties), you'll obtain the single ordered ranking of all the books you're looking for.
In addition, rankade gives you the opportunity to check the time evolution of book rankings, stats for book match-ups, and more.
If you're interested in use (more than in development), you should give a try to rankade, our ranking system.
Rankade is free and easy to use, and it's different from Bradley-Terry model and Elo ranki |
20,650 | Why can't likelihood ratio tests be used for non-nested models? | Well, I can give a non-rigorous answer from a non-statistician. The likelihood ratio method relies on the fact that the denominator max likelihood gives a result always at least as good as the numerator max likelihood, because the numerator hypothesis corresponds to a subset of the denominator hypothesis. As a result, the ratio is always between 0 and 1.
If you had non-nested hypotheses (like testing 2 different distributions), the likelihood ratio could be > 1 => -2 * log likelihood ratio could be < 0 => it certainly does not follow a chi2 distribution. | Why can't likelihood ratio tests be used for non-nested models? | Well, I can give a non-rigorous answer from a non-statistician. The Likelihood ratio method relies on the fact that the denominator max likelihood gives a results always at least as good as the numera
Well, I can give a non-rigorous answer from a non-statistician. The likelihood ratio method relies on the fact that the denominator max likelihood gives a result always at least as good as the numerator max likelihood, because the numerator hypothesis corresponds to a subset of the denominator hypothesis. As a result, the ratio is always between 0 and 1.
If you had non-nested hypotheses (like testing 2 different distributions), the likelihood ratio could be > 1 => -2 * log likelihood ratio could be < 0 => it certainly does not follow a chi2 distribution.
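A small R sketch of the sign point (the two candidate distributions and the data-generating choice are mine, purely for illustration):
set.seed(3)
x <- rgamma(200, shape = 2, rate = 1)
ll_gamma <- sum(dgamma(x, shape = 2, rate = 1, log = TRUE))   # candidate model A
ll_lnorm <- sum(dlnorm(x, meanlog = mean(log(x)),
                       sdlog = sd(log(x)), log = TRUE))       # candidate model B
-2 * (ll_lnorm - ll_gamma)  # sign is unconstrained here: no chi-squared sampling distribution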
Well, I can give a non-rigorous answer from a non-statistician. The Likelihood ratio method relies on the fact that the denominator max likelihood gives a results always at least as good as the numera |
20,651 | Why can't likelihood ratio tests be used for non-nested models? | In order to undertake hypothesis testing you need to express your research hypothesis as a null and alternative hypothesis. The null hypothesis and alternative hypothesis are statements regarding the differences or effects that occur in the population. You will use your sample to test which statement (i.e., the null hypothesis or alternative hypothesis) is most likely (although technically, you test the evidence against the null hypothesis).
The null hypothesis is essentially the "devil's advocate" position. That is, it assumes that whatever you are trying to prove did not happen (hint: it usually states that something equals zero).
Looking here, we can find this text:
Hypothesis testing is an essential procedure in statistics. A hypothesis test evaluates two mutually exclusive statements about a population to determine which statement is best supported by the sample data. When we say that a finding is statistically significant, it’s thanks to a hypothesis test.
About accepting/rejecting Hypothesis, here, we can find an interesting answer:
Some researchers say that a hypothesis test can have one of two outcomes: you accept the null hypothesis or you reject the null hypothesis. Many statisticians, however, take issue with the notion of "accepting the null hypothesis." Instead, they say: you reject the null hypothesis or you fail to reject the null hypothesis.
Why the distinction between "acceptance" and "failure to reject?" Acceptance implies that the null hypothesis is true. Failure to reject implies that the data are not sufficiently persuasive for us to prefer the alternative hypothesis over the null hypothesis. | Why can't likelihood ratio tests be used for non-nested models? | In order to undertake hypothesis testing you need to express your research hypothesis as a null and alternative hypothesis. The null hypothesis and alternative hypothesis are statements regarding the | Why can't likelihood ratio tests be used for non-nested models?
In order to undertake hypothesis testing you need to express your research hypothesis as a null and alternative hypothesis. The null hypothesis and alternative hypothesis are statements regarding the differences or effects that occur in the population. You will use your sample to test which statement (i.e., the null hypothesis or alternative hypothesis) is most likely (although technically, you test the evidence against the null hypothesis).
The null hypothesis is essentially the "devil's advocate" position. That is, it assumes that whatever you are trying to prove did not happen (hint: it usually states that something equals zero).
Looking here, we can find this text:
Hypothesis testing is an essential procedure in statistics. A hypothesis test evaluates two mutually exclusive statements about a population to determine which statement is best supported by the sample data. When we say that a finding is statistically significant, it’s thanks to a hypothesis test.
About accepting/rejecting Hypothesis, here, we can find an interesting answer:
Some researchers say that a hypothesis test can have one of two outcomes: you accept the null hypothesis or you reject the null hypothesis. Many statisticians, however, take issue with the notion of "accepting the null hypothesis." Instead, they say: you reject the null hypothesis or you fail to reject the null hypothesis.
Why the distinction between "acceptance" and "failure to reject?" Acceptance implies that the null hypothesis is true. Failure to reject implies that the data are not sufficiently persuasive for us to prefer the alternative hypothesis over the null hypothesis. | Why can't likelihood ratio tests be used for non-nested models?
In order to undertake hypothesis testing you need to express your research hypothesis as a null and alternative hypothesis. The null hypothesis and alternative hypothesis are statements regarding the |
20,652 | Does a median-unbiased estimator minimize mean absolute deviance? | If we choose an estimator $\alpha^+$ by the criterion that it minimizes the expected absolute error from the true value $\alpha$
$E=\mathbb{E}\left[|\alpha^+-\alpha|\right] = \int_{-\infty}^{\alpha^+} (\alpha^+-\alpha)f(\alpha) \,\mathrm{d}\alpha + \int^{\infty}_{\alpha^+} (\alpha-\alpha^+)f(\alpha)\,\mathrm{d}\alpha$
we require
$\frac{dE}{d\alpha^+} = \int_{-\infty}^{\alpha^+} f(\alpha) \mathrm{d}\alpha - \int^{\infty}_{\alpha^+} f(\alpha) \mathrm{d}\alpha = 0$
which is equivalent to $P(\alpha > \alpha^+) = 1/2$. So $\alpha^+$ is shown to be the median, following Laplace (1774).
If you are having trouble with R, please ask it in another question on Stack Overflow.
$E=<|\alpha^+-\alpha|> = \int_{-\infty}^{\alpha^+} (\alpha^+-\alpha)f | Does a median-unbiased estimator minimize mean absolute deviance?
If we choose an estimator $\alpha^+$ by the criterion that it minimizes the expected absolute error from the true value $\alpha$
$E=\mathbb{E}\left[|\alpha^+-\alpha|\right] = \int_{-\infty}^{\alpha^+} (\alpha^+-\alpha)f(\alpha) \,\mathrm{d}\alpha + \int^{\infty}_{\alpha^+} (\alpha-\alpha^+)f(\alpha)\,\mathrm{d}\alpha$
we require
$\frac{dE}{d\alpha^+} = \int_{-\infty}^{\alpha^+} f(\alpha) \mathrm{d}\alpha - \int^{\infty}_{\alpha^+} f(\alpha) \mathrm{d}\alpha = 0$
which is equivalent to $P(\alpha > \alpha^+) = 1/2$. So $\alpha^+$ is shown to be the median, following Laplace (1774).
If you are having trouble with R, please ask it in another question on Stack Overflow.
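For the record, a quick numerical check of the result (the exponential sample is just an arbitrary skewed example):
set.seed(4)
a <- rexp(1e5)                       # skewed, so mean and median differ
mad_at <- function(c) mean(abs(a - c))
optimize(mad_at, range(a))$minimum   # minimizer of the mean absolute deviation...
median(a)                            # ...matches the sample median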
If we choose an estimator $\alpha^+$ by the criterion that it minimizes the expected absolute error from the true value $\alpha$
$E=<|\alpha^+-\alpha|> = \int_{-\infty}^{\alpha^+} (\alpha^+-\alpha)f |
20,653 | How to predict one time-series from another time-series, if they are related | Here's an approach which doesn't use any "contextual" information, i.e. it fails to take into account the fact that "a sub is following a ship". On the other hand, it is easy to start with:
Denote by
$x_{sub}(t), y_{sub}(t)$
$x_{ship}(t), y_{ship}(t)$
the coordinates of the submarine and the ship at time $t$, and define the "distance-series" by
$x_{dist} (t) = x_{ship} (t) - x_{sub} (t)$
$y_{dist} (t) = y_{ship} (t) - y_{sub} (t)$
My suggestion is that you predict each of these separately (you can tie them together later).
Let's take a moment to picture what these look like. Let's focus on the $x$-coordinate, and let's say that the ship is moving towards the right with the sub following behind it. Suppose the sub is around 100 meters behind the ship, with a deviation of say 10 meters.
Then
$x_{dist} (t) = 100 + 10 \cdot wiggle(t)$
You could then model the "$wiggle$" function as a Gaussian white-noise variable having zero mean and unit variance.
Now (still focussing on the $x$ coordinate, the story for $y$ is the same) if the $wiggle$ function were white noise, you would be able to compute the mean $\mu$ and the standard deviation $\sigma$ of the series $x_{dist}$ and write
$x_{dist}(t) = \mu + \sigma \cdot W_x(t)$
Since you have actual data, you can compute the time-series $W_x(t)$ and see if it follows a Gaussian (i.e. Normal) distribution. If it does, or even if it is any distribution you recognize, you could then generate values and make predictions for $x_{dist}$.
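In R this check could look like the following sketch (x_ship and x_sub are assumed numeric vectors of coordinates over time):
x_dist <- x_ship - x_sub                   # the distance series defined above
W <- (x_dist - mean(x_dist)) / sd(x_dist)  # standardized "wiggle" series
qqnorm(W); qqline(W)                       # points near the line suggest a Gaussian wiggle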
Another strategy people employ (which I think will work for you) is that they break up their series into
Polynomial base + Cyclic pattern + Bounded randomness
In the case of a submarine and a ship, the polynomial part would probably be constant and the cyclic part a sum of sines and cosines (from the waves of the ocean...). This may not be the case for eye-tracking.
There are tools which can figure this out for you. Here are two that I know of:
DTREG (30 day evaluation license)
Microsoft Time Series Algorithm which is part of their SQL Server product. I am currently using their 180-day evaluation edition, it is easy to use.
Here is a screenshot from the SQL Server tool (the dotted part is the prediction):
One algorithm they use is called ARIMA. Wanting to learn how it works, I did some Googling and found this book: First Course on Time Series (and don't worry, you don't need to have SAS to follow along. I don't.). It is very readable.
You don't have to know how ARIMA works to use these tools, but I think it is always easier if you have context, since there are "model parameters" to be set etc. | How to predict one time-series from another time-series, if they are related | Here's an approach which doesn't use any "contextual" information i.e. it fails to take into account the fact that "a sub is following a ship". On the other hand it is easy to start with:
Denote by
$ | How to predict one time-series from another time-series, if they are related
Here's an approach which doesn't use any "contextual" information, i.e. it fails to take into account the fact that "a sub is following a ship". On the other hand, it is easy to start with:
Denote by
$x_{sub}(t), y_{sub}(t)$
$x_{ship}(t), y_{ship}(t)$
the coordinates of the submarine and the ship at time $t$, and define the "distance-series" by
$x_{dist} (t) = x_{ship} (t) - x_{sub} (t)$
$y_{dist} (t) = y_{ship} (t) - y_{sub} (t)$
My suggestion is that you predict each of these separately (you can tie them together later).
Let's take a moment to picture what these look like. Let's focus on the $x$-coordinate, and let's say that the ship is moving towards the right with the sub following behind it. Suppose the sub is around 100 meters behind the ship, with a deviation of say 10 meters.
Then
$x_{dist} (t) = 100 + 10 \cdot wiggle(t)$
You could then model the "$wiggle$" function as a Gaussian white-noise variable having zero mean and unit variance.
Now (still focussing on the $x$ coordinate, the story for $y$ is the same) if the $wiggle$ function were white noise, you would be able to compute the mean $\mu$ and the standard deviation $\sigma$ of the series $x_{dist}$ and write
$x_{dist}(t) = \mu + \sigma \cdot W_x(t)$
Since you have actual data, you can compute the time-series $W_x(t)$ and see if it follows a Gaussian (i.e. Normal) distribution. If it does, or even if it is any distribution you recognize, you could then generate values and make predictions for $x_{dist}$.
Another strategy people employ (which I think will work for you) is that they break up their series into
Polynomial base + Cyclic pattern + Bounded randomness
In the case of a submarine and a ship, the polynomial part would probably be constant and the cyclic part a sum of sines and cosines (from the waves of the ocean...). This may not be the case for eye-tracking.
There are tools which can figure this out for you. Here are two that I know of:
DTREG (30 day evaluation license)
Microsoft Time Series Algorithm which is part of their SQL Server product. I am currently using their 180-day evaluation edition, it is easy to use.
Here is a screenshot from the SQL Server tool (the dotted part is the prediction):
One algorithm they use is called ARIMA. Wanting to learn how it works, I did some Googling and found this book: First Course on Time Series (and don't worry, you don't need to have SAS to follow along. I don't.). It is very readable.
You don't have to know how ARIMA works to use these tools, but I think it is always easier if you have context, since there are "model parameters" to be set etc. | How to predict one time-series from another time-series, if they are related
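If you would rather stay in R than use the tools above, a comparable hedged sketch with the forecast package (x_dist as computed in the earlier sketch):
library(forecast)
fit <- auto.arima(x_dist)     # automatic ARIMA order selection
plot(forecast(fit, h = 50))   # the shaded fan plays the role of the dotted prediction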
Here's an approach which doesn't use any "contextual" information i.e. it fails to take into account the fact that "a sub is following a ship". On the other hand it is easy to start with:
Denote by
$ |
20,654 | MCMC Geweke diagnostic | You can look through the code for the geweke.diag function in the coda package to see how the variance is computed, via the call to the spectrum0.ar function.
Here is a short motivation of the computation of the spectral density of an AR($p$) process at zero.
The spectral density of an AR($p$) process at frequency $\lambda$ is given by the expression:
$$
f(\lambda) = \dfrac{\sigma^2}{\left|1-\sum_{j=1}^p\alpha_j\exp(-2\pi\iota j\lambda)\right|^2}
$$
where $\alpha_j$ are the autoregressive parameters.
This expression simplifies considerably when computing the spectral density of an AR($p$) process at $0$:
$$
f(0) = \dfrac{\sigma^2}{(1-\sum_{j=1}^p\alpha_j)^2}
$$
The computation then would look something like this (substituting the usual estimators for parameters):
tsAR2 = arima.sim(list(ar = c(0.01, 0.03)), n = 1000) # simulate an AR(2) process
ar2 = ar(tsAR2, aic = TRUE) # estimate it with AIC complexity selection
# manual estimate of spectral density at zero
sdMan = ar2$var.pred/(1-sum(ar2$ar))^2
# coda computation of spectral density at zer0
sdCoda = coda::spectrum0.ar(tsAR2)$spec
# assert equality
all.equal(sdCoda, sdMan) | MCMC Geweke diagnostic | You can look through the code for the geweke.diag function in the coda package to see how the variance is computed, via the call to the spectrum.ar0 function.
Here is a short motivation of the compu | MCMC Geweke diagnostic
You can look through the code for the geweke.diag function in the coda package to see how the variance is computed, via the call to the spectrum0.ar function.
Here is a short motivation of the computation of the spectral density of an AR($p$) process at zero.
The spectral density of an AR($p$) process at frequency $\lambda$ is given by the expression:
$$
f(\lambda) = \dfrac{\sigma^2}{\left|1-\sum_{j=1}^p\alpha_j\exp(-2\pi\iota j\lambda)\right|^2}
$$
where $\alpha_j$ are the autoregressive parameters.
This expression simplifies considerably when computing the spectral density of an AR($p$) process at $0$:
$$
f(0) = \dfrac{\sigma^2}{(1-\sum_{j=1}^p\alpha_j)^2}
$$
The computation then would look something like this (substituting the usual estimators for parameters):
tsAR2 = arima.sim(list(ar = c(0.01, 0.03)), n = 1000) # simulate an AR(2) process
ar2 = ar(tsAR2, aic = TRUE) # estimate it with AIC complexity selection
# manual estimate of spectral density at zero
sdMan = ar2$var.pred/(1-sum(ar2$ar))^2
# coda computation of spectral density at zer0
sdCoda = coda::spectrum0.ar(tsAR2)$spec
# assert equality
all.equal(sdCoda, sdMan) | MCMC Geweke diagnostic
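For completeness, the diagnostic built on this variance estimate can then be called directly (a minimal sketch reusing tsAR2 from above; by default coda compares the first 10% and last 50% of the chain):
library(coda)
geweke.diag(mcmc(tsAR2))  # z-score; |z| well above 2 flags disagreement between the segments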
You can look through the code for the geweke.diag function in the coda package to see how the variance is computed, via the call to the spectrum.ar0 function.
Here is a short motivation of the compu |
20,655 | MCMC Geweke diagnostic | Check the wikipedia page. You'll see $S_{xx}(\omega)$, which is the spectral density. In your case, you should use $S_{xx}(0)$. | MCMC Geweke diagnostic | Check the wikipedia page. You'll see $S_{xx}(\omega)$, which is the spectral density. In your case, you should use $S_{xx}(0)$. | MCMC Geweke diagnostic
Check the wikipedia page. You'll see $S_{xx}(\omega)$, which is the spectral density. In your case, you should use $S_{xx}(0)$. | MCMC Geweke diagnostic
Check the wikipedia page. You'll see $S_{xx}(\omega)$, which is the spectral density. In your case, you should use $S_{xx}(0)$. |
20,656 | What are the different types of codings available for categorical variables (in R) and when would you use them? | Others can enlighten me if I am wrong, but here goes…
What is the effect for the level compared to the mean of the previous levels? i.e. you are interested in locating the threshold of the effect
Use Helmert contrasts. I think of this as cumulative comparisons. I have used this when interested in determining a drug's dose-response limit of exposure. Comparison to multiple levels at a time means that less information is thrown away.
What is the effect of the level relative to a baseline level? i.e. you are interested in one baseline comparison group.
Use dummy variable coding (treatment contrasts). I think of this as baseline comparisons. I have used this when there is typically one group/level established as important by other studies, and my study is demonstrating that associations also exist when this threshold is exceeded.
What is the effect of two adjacent levels of a variable?
Use forward/backward differencing. I think of this as short-interval successive comparisons. I have used this when comparing effects for different levels of socioeconomic position, where each group is compositionally different in its own right and no more of interest than any other.
20,657 | Using statistical significance test to validate cluster analysis results | It is fairly obvious that you cannot (naively) test for difference in distributions for groups that were defined using the same data.
This is known as "selective testing", "double dipping", "circular inference", etc.
An example would be performing a t-test on the heights of "tall" and "short" people in your data. The null will (almost) always be rejected.
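A minimal simulation of that example (assuming numpy and scipy; the sample size is arbitrary):
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=200)                    # pure noise: no real groups exist
tall = x[x > np.median(x)]                  # "tall" defined from the same data
short = x[x <= np.median(x)]                # "short" likewise
print(stats.ttest_ind(tall, short))         # the p-value is essentially zero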
Having said that, one may indeed account for the clustering stage at the testing stage. I am unfamiliar, however, with a particular reference that does that, but I suspect it has been done.
This is known as "selective testing", "double dipping", "circular | Using statistical significance test to validate cluster analysis results
It is fairly obvious that you cannot (naively) test for difference in distributions for groups that were defined using the same data.
This is known as "selective testing", "double dipping", "circular inference", etc.
An example would be performing a t-test on the heights of "tall" and "short" people in your data. The null will (almost) always be rejected.
Having said that- one may indeed account for the clustering stage at the testing stage. I am unfamiliar, however, with a particular reference that does that, but I suspect this should have been done. | Using statistical significance test to validate cluster analysis results
It is fairly obvious that you cannot (naively) test for difference in distributions for groups that were defined using the same data.
This is known as "selective testing", "double dipping", "circular |
20,658 | Using statistical significance test to validate cluster analysis results | Instead of hypothesis testing with a given test, I would recommend bootstrapping means or other summary estimates between clusters. For instance you could rely on percentile bootstrap with at least 1000 samples. The key point is to apply clustering independently to each bootstrap sample.
This approach would be quite robust, provide evidence for differences, and support your claim of a significant between-cluster difference. In addition, you could generate another variable (say, the between-cluster difference), and bootstrapped estimates of such a difference variable would be similar to a formal test of hypothesis.
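A minimal sketch of this procedure (assuming numpy and scikit-learn, 1-D data, and two k-means clusters; labels are aligned across bootstrap samples by sorting the cluster centers):
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(3, 1, 150)])
diffs = []
for _ in range(1000):                            # percentile bootstrap
    xb = rng.choice(x, size=x.size, replace=True)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(xb.reshape(-1, 1))
    centers = np.sort(km.cluster_centers_.ravel())
    diffs.append(centers[1] - centers[0])        # between-cluster difference
print(np.percentile(diffs, [2.5, 97.5]))         # 95% percentile interval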
20,659 | Interpreting log likelihood | Maximum likelihood estimation works by trying to maximize the likelihood. As the log function is strictly increasing, maximizing the log-likelihood will maximize the likelihood. We do this as the likelihood is a product of very small numbers and tends to underflow on computers rather quickly. The log-likelihood is the summation of negative numbers, which doesn't overflow except in pathological cases. Multiplying by -2 (and the 2 comes from Akaike and linear regression) turns the maximization problem into a minimization problem. So MLE implementations of regression can be considered to work by minimizing the negative log-likelihood (NLL).
Therefore, the lower the NLL, the better the fit. However, this leads to overfitting, since adding more parameters will provide a better fit to the observed data while making the variance of the fit much larger: it will fit more poorly on new data. Information criteria like AIC, AICc, etc. have additional terms related to the number of parameters, the amount of data, or both, which help ameliorate the tendency to use over-parametrized distributions.
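As a small illustration (assuming scipy; the data and models are made up): compare a one-parameter and a two-parameter normal fit by NLL and by AIC = 2k + 2*NLL.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=500)

for k, fixed in [(1, {"fscale": 1.0}), (2, {})]:   # fix the scale vs. fit both
    params = stats.norm.fit(x, **fixed)
    nll = -np.sum(stats.norm.logpdf(x, *params))
    print(k, "free params: NLL =", round(nll, 1), " AIC =", round(2 * k + 2 * nll, 1))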
20,660 | Interpreting log likelihood | To get at least some meaning out of the likelihood L, you could remember that for a fixed sample count N the maximum log-likelihood for a certain distribution model depends mainly on the scale. For a given variance, the normal distribution has the highest value. To get some insight I would divide logL by N, and then maybe also do a correction for scale. If your data fits better to a uniform distribution, then it would be better to use the uniform likelihood, as the maximum entropy function for a given range, as a kind of reference. Another general reference value in the continuous case might be to use a KDE fit, and to calculate the L for this. However, whatever you do, L is harder to interpret than e.g. the KS value or the RMS error. If you take another model and get a higher L, it does not mean the model is better, because maybe you are in an overfitting situation. To account for this, use the AIC value. Here lower is better, and again you may use a normal distribution as "reference".
20,661 | A question related to Borel-Cantelli Lemma | None of the assertions are true.
Let $A_n$ be the event of heads in a coin flip, occurring with probability $1/n^2$ when $n$ is odd and $1-\frac{1}{n^2}$ when $n$ is even. Then:
$$\sum_{n=1}^\infty P(A_n,A_{n+1}^c)=\sum_{odd \ n}^\infty \frac{1}{n^2}\left(1-\frac{1}{(n+1)^2}\right)+\sum_{even \ n}\frac{1}{n^2}\left(1-\frac{1}{(n+1)^2}\right)<\sum_{n=1}^\infty \frac{1}{n^2}<\infty.$$
However, $\lim_n P(A_n)$ clearly does not exist. The best you can conclude is $\lim_n P(A_n,A_{n+1}^c) = 0$.
20,662 | Python packages for working with Gaussian mixture models (GMMs) | I do not know how to determine in general which one is best, but if you know your application setting well enough, you can simulate data and try the packages on these simulations. Success metrics could be the time the estimation takes and the quality of recovery of your simulated ground truth.
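A minimal sketch of such a benchmark, with scikit-learn standing in for whichever package is under test (the mixture parameters are made up):
import time
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-2, 1, (500, 1)), rng.normal(2, 1, (500, 1))])

t0 = time.perf_counter()
gm = GaussianMixture(n_components=2, random_state=0).fit(X)
print("fit time:", time.perf_counter() - t0, "s")        # speed metric
print("recovered means:", np.sort(gm.means_.ravel()))    # ground truth: -2 and 2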
20,663 | Automatic threshold determination for anomaly detection | You might find this paper of interest. See also a more detailed presentation of similar models in West & Harrison. There are other examples of this sort of monitoring as well, many of which are more recent, but this isn't exactly my wheelhouse :). Undoubtedly there are suitable implementations of these models, but I don't know what they might be offhand...
The basic idea is that you have a switching model where some observations/sequence of observations are attributed to abnormal network states while the rest are considered normal. A mixture like this could account for the long right tail in your first plot. A dynamic model could also alert you to abnormal jumps like at 8:00 and 4:00 in real-time by assigning high probability to new observations belonging to a problem state. It could also be easily extended to include things like predictors, periodic components (perhaps your score rises/falls a bit with activity) and that sort of thing.
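As a crude static stand-in for such a switching model (not the dynamic model of the paper; this sketch assumes scikit-learn and made-up scores), a two-component mixture already separates a "problem" state:
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(1.0, 0.2, 950),     # normal state
                         rng.normal(2.5, 0.3, 50)])     # occasional problem state
gm = GaussianMixture(n_components=2, random_state=0).fit(scores.reshape(-1, 1))
problem = np.argmax(gm.means_.ravel())                  # component with higher mean
print(gm.predict_proba(np.array([[1.1], [2.4]]))[:, problem])  # low, then high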
Edit: I should also add that this kind of model is "unsupervised" in the sense that anomalies are caught either by showing a large mean shift or an increase in variance. As you gather data you can improve the model with more informative prior distributions. But perhaps once you have enough data (and hard-won training examples by dealing with network problems!) you could devise some simple monitoring rules (thresholds, etc.).
20,664 | Automatic threshold determination for anomaly detection | Do you have any 'labeled' examples of what constitutes an anomaly? i.e. values associated with a network failure, or something like that?
One idea you might consider applying is a ROC curve, which is useful for picking thresholds that meet a specific criterion, like maximizing true positives or minimizing false negatives.
Of course, to use a ROC curve, you need to label your data in some way.
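A minimal sketch of threshold picking from a ROC curve (assuming scikit-learn and some labeled scores; here the labels are simulated), using Youden's J = TPR - FPR as the criterion:
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(1.0, 0.3, 500), rng.normal(2.0, 0.4, 50)])
labels = np.concatenate([np.zeros(500), np.ones(50)])   # 1 = known anomaly

fpr, tpr, thresholds = roc_curve(labels, scores)
best = thresholds[np.argmax(tpr - fpr)]
print("threshold maximizing TPR - FPR:", best)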
20,665 | Automatic threshold determination for anomaly detection | The graph of the "original series" does not have to exhibit any pre-defined structure. What is critical is that the graph of the "residuals from a suitable model" series needs to exhibit a Gaussian structure. This "Gaussian structure" can usually be obtained by incorporating one or more of the following "transformations":
1. an ARIMA model
2. adjustments for Local Level Shifts or Local Time Trends or Seasonal Pulses or Ordinary Pulses
3. a weighted analysis exploiting proven variance heterogeneity
4. a possible power transformation (logs, etc.) to deal with a specific variance heterogeneity
5. the detection of points in time where the model/parameters may have changed.
Intervention Detection will yield a statement about the statistical significance of the most recent event, suggesting either normalcy or an anomaly.
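As a very crude stand-in for full intervention detection (a sketch assuming statsmodels; real procedures a la Tsay or Chen-Liu do considerably more): fit an ARIMA model and flag residuals beyond 3 standard deviations.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.6 * y[t - 1] + rng.normal()    # a plain AR(1) series
y[200] += 4.0                               # inject a pulse anomaly

res = ARIMA(y, order=(1, 0, 0)).fit()
z = res.resid / np.std(res.resid)
print(np.where(np.abs(z) > 3)[0])           # flags the injected pulse at 200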
The graph of the "original series" does not have to exhibit any pre-defined structure. What is critical is that the graph of the "residuals from a suitable model series" need to exhibit either a gaussian structure . This "gaussian structure" can usually obtained by incorporating one or more of the following "transformations"
1. an arima MODEL
2. Adjustments for Local Level Shifts or Local Time Trends or Seasonal Pulses or Ordinary Pulses
3. a weighted analysis exploiting proven variance heterogeneity
4. a possible power transformation ( logs etc ) to deal with a specific variance heterogenity
5. the detection of points in time where the model/parameters may have changed.
Intervention Detection will yield a statement about the statistical significance of the most recent event suggesting either normalcy or an anomaly | Automatic threshold determination for anomaly detection
The graph of the "original series" does not have to exhibit any pre-defined structure. What is critical is that the graph of the "residuals from a suitable model series" need to exhibit either a gauss |
20,666 | Automatic threshold determination for anomaly detection | In the OP's response to my prior answer he has posted his data to the web: 60 readings per hour for 24 hours for 6 days. Since this is a time series, cross-sectional tools like DBSCAN have limited relevance, as the data has temporal dependence. With data like this one normally looks for intra-hour and intra-day structure. In addition to these kinds of structure one can pursue the detection of anomalies, which can be either one-time-only (pulse) or systematic in nature (level shift), using methods that are well documented (see the literature of Tsay, Tiao, Chen et al.). These procedures yielded the following "anomalies". Note that a level shift is essentially suggestive of separate "clusters".
HOUR/MINUTE TIME [anomaly table truncated in the source]
20,667 | Automatic threshold determination for anomaly detection | After a friend of mine pointed me in the direction of clustering algorithms, I stumbled across DBSCAN, which builds clusters in n-dimensional space according to two predefined parameters. The basic idea is density-based clustering, i.e. dense regions form clusters. Outliers are returned separately by the algorithm. So, when applied to my 1-dimensional histogram, DBSCAN is able to tell me whether my anomaly scores feature any outliers. Note: In DBSCAN, an outlier is just a point which does not belong to any cluster. During normal operations, I expect the algorithm to yield only a single cluster (and no outliers).
After some experimenting, I found out that the parameter $\epsilon \approx 0.1$ works well. This means that points have to exhibit a distance of at least 0.1 from the "normal" cluster in order to be seen as outliers.
After being able to identify outliers, finding the threshold boils down to simple rules such as:
If the set exhibits outliers, set the threshold between the "normal" and "outlier" cluster so that the margin to both is maximized.
If the set does not exhibit any outliers, set the threshold one standard deviation away from the rightmost point.
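A minimal sketch of this recipe (assuming scikit-learn's DBSCAN implementation; the scores are made up): points labeled -1 are the outliers the rules above refer to.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
scores = np.append(rng.normal(1.0, 0.02, 300), [1.45, 1.50])  # two clear outliers
labels = DBSCAN(eps=0.1, min_samples=5).fit(scores.reshape(-1, 1)).labels_
print(scores[labels == -1])                                   # the flagged scores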
Anyway, thanks for all the helpful replies!
20,668 | Matrix form of backpropagation with batch normalization | Not a complete answer, but to demonstrate what I suggested in my comment: if $$b(X)=(X-e_N\mu_X^T)\Gamma\Sigma_X^{-1/2}+e_N\beta^T$$ where $\Gamma=\mathop{\mathrm{diag}}(\gamma)$, $\Sigma_X^{-1/2}=\mathop{\mathrm{diag}}(\sigma_{X_1}^{-1},\sigma_{X_2}^{-1},\dots)$ and $e_N$ is a vector of ones, then, by the chain rule, $$\nabla_\beta R=[-2\hat{\epsilon}(\Gamma_2^T\otimes I)J_X(a)(I\otimes e_N)]^T$$
Noting that $-2\hat{\epsilon}(\Gamma_2^T\otimes I)=\mathop{\mathrm{vec}}(-2\hat{\epsilon}\Gamma_2^T)^T$ and $J_X(a)=\mathop{\mathrm{diag}}(\mathop{\mathrm{vec}}(a^\prime(b(X\Gamma_1))))$, we see that $$\nabla_\beta R=(I\otimes e_N^T)\mathop{\mathrm{vec}}(a^\prime(b(X\Gamma_1))\odot-2\hat{\epsilon}\Gamma_2^T)=e_N^T(a^\prime(b(X\Gamma_1))\odot-2\hat{\epsilon}\Gamma_2^T)$$ via the identity $\mathop{\mathrm{vec}}(AXB)=(B^T\otimes A)\mathop{\mathrm{vec}}(X)$. Similarly, $$\begin{align}\nabla_\gamma R&=[-2\hat{\epsilon}(\Gamma_2^T\otimes I)J_X(a)(\Sigma_{X\Gamma_1}^{-1/2}\otimes (X\Gamma_1-e_N\mu_{X\Gamma_1}^T))K]^T\\&=K^T\mathop{\mathrm{vec}}((X\Gamma_1-e_N\mu_{X\Gamma_1}^T)^TW\Sigma^{-1/2}_{X\Gamma_1})\\&=\mathop{\mathrm{diag}}((X\Gamma_1-e_N\mu_{X\Gamma_1}^T)^TW\Sigma^{-1/2}_{X\Gamma_1})\end{align}$$ where $W=a^\prime(b(X\Gamma_1))\odot-2\hat{\epsilon}\Gamma_2^T$ (the "stub") and $K$ is an $Np\times p$ binary matrix that selects the columns of the Kronecker product corresponding to the diagonal elements of a square matrix. This follows from the fact that $d\Gamma_{i\neq j}=0$. Unlike the first gradient, this expression is not equivalent to the expression you derived. Considering that $b$ is a linear function w.r.t. $\gamma_i$, there should not be a factor of $\gamma_i$ in the gradient. I leave the gradient of $\Gamma_1$ to the OP, but I will say that the derivation with fixed $w$ creates the "explosion" the writers of the article seek to avoid. In practice, you will also need to find the Jacobians of $\Sigma_X$ and $\mu_X$ w.r.t. $X$ and use the product rule.
20,669 | Matrix form of backpropagation with batch normalization | Forward pass of batch normalization:
Suppose $\mathbf{X}$ has shape $n\times m$, where $n$ is the number of nodes in the layer and $m$ is the number of samples in a batch, and $\mathbf{J}$ is a matrix of ones (by default with shape $m\times m$). Then the centered $\mathbf{X}$, $\mathbf{X}_c$, is:
$\mathbf{X}_c=(\mathbf{X}-\frac{1}{m}\mathbf{XJ})$
The broadcasted standard deviation matrix of $\mathbf{X}$, $\mathbf{X}_s$, is:
$\mathbf{X}_s=(\frac{1}{m}\mathbf{X}_c^{\circ^{2}}\mathbf{J}+\epsilon\mathbf{J}_{n\times m})^{\circ^{\frac{1}{2}}}$
where $\circ$ denotes element-wise power. $\epsilon$ is a tiny scalar to avoid division by zero. Then the normalized $\mathbf{X}$, $\mathbf{X}_n$, is:
$\mathbf{X}_n = \mathbf{X}_c \odot \mathbf{X}_s^{\circ^{-1}}$
where $\odot$ is element-wise product.
For column vectors $\boldsymbol{\gamma}$ and $\boldsymbol{\beta}$, the transformed normalized $\mathbf{X}$, thus the output of batch normalization $\hat{\mathbf{X}}$ is:
$\hat{\mathbf{X}}=\mathbf{X}_n \odot (\boldsymbol{\gamma}\vec{\mathbf{1}}^T) + \boldsymbol{\beta}\vec{\mathbf{1}}^T$
where $\vec{\mathbf{1}}$ is column vector of ones with proper shape.
(It's better to get rid of the Kronecker product to ease the calculation.)
Backpropagation:
(You need a basic understanding of the Fréchet derivative. Fréchet derivatives are written in differential form. Several (trace) tricks and typical differential forms are used.)
The gradient with regard to the final cost $j$ is denoted by $\nabla(\cdot)$. The matrix inner product is denoted by "$:$".
Since the relationships between $\hat{\mathbf{X}}$ and $\boldsymbol{\gamma}$, $\boldsymbol{\beta}$, $\mathbf{X}_n$ are all linear, the Fréchet derivatives of $\hat{\mathbf{X}}$ can be directly derived.
$dj=\nabla(\hat{\mathbf{X}}):d\hat{\mathbf{X}}$
$dj=tr(\nabla(\hat{\mathbf{X}})^T(\mathbf{X}_n \odot ((d\boldsymbol{\gamma})\vec{\mathbf{1}}^T))) = tr((\nabla(\hat{\mathbf{X}})^T\odot\mathbf{X}_n^T) (d\boldsymbol{\gamma})\vec{\mathbf{1}}^T) =
tr(\vec{\mathbf{1}}^T(\nabla(\hat{\mathbf{X}})^T\odot\mathbf{X}_n^T) d\boldsymbol{\gamma})$
$dj=tr(\nabla(\hat{\mathbf{X}})^T((d\mathbf{X}_n) \odot (\boldsymbol{\gamma}\vec{\mathbf{1}}^T))) =
tr((\nabla(\hat{\mathbf{X}})^T \odot (\boldsymbol{\gamma}\vec{\mathbf{1}}^T)^T)d\mathbf{X}_n)$
$dj=tr(\nabla(\hat{\mathbf{X}})^T(d\boldsymbol{\beta})\vec{\mathbf{1}}^T) =
tr(\vec{\mathbf{1}}^T\nabla(\hat{\mathbf{X}})^Td\boldsymbol{\beta})$
$\nabla(\boldsymbol{\gamma})=(\mathbf{X}_n\odot\nabla(\hat{\mathbf{X}}))\vec{\mathbf{1}}$
$\nabla(\boldsymbol{\beta})=\nabla(\hat{\mathbf{X}})\vec{\mathbf{1}}$
$\nabla(\mathbf{X}_n)=(\boldsymbol{\gamma}\vec{\mathbf{1}}^T) \odot \nabla(\hat{\mathbf{X}}) $
Now calculate the Fréchet derivative $d\mathbf{X}_n$ with regard to $d\mathbf{X}_c$. First, by the differential form formula:
$d\mathbf{X}_n = (d\mathbf{X}_c \odot \mathbf{X}_s - \mathbf{X}_c \odot d\mathbf{X}_s) \odot \mathbf{X}_s^{\circ^{-2}}$
And
$d\mathbf{X}_s = \frac{1}{2}\mathbf{X}_s^{\circ^{-1}}\odot(\frac{1}{m}(d(\mathbf{X}_c^{\circ^2}))\mathbf{J}) = \frac{1}{m}\mathbf{X}_s^{\circ^{-1}}\odot((\mathbf{X}_c \odot d\mathbf{X}_c)\mathbf{J}) = \frac{1}{m}(\mathbf{X}_s^{\circ^{-1}}\odot \mathbf{X}_c \odot d\mathbf{X}_c)\mathbf{J}=\frac{1}{m}(\mathbf{X}_n \odot d\mathbf{X}_c)\mathbf{J}$
using trick (j). So
$d\mathbf{X}_n = d\mathbf{X}_c \odot \mathbf{X}_s^{\circ^{-1}} - \frac{1}{m}\mathbf{X}_n \odot ((\mathbf{X}_n \odot d\mathbf{X}_c)\mathbf{J}) \odot \mathbf{X}_s^{\circ^{-1}} \\
= \mathbf{X}_s^{\circ^{-1}} \odot (d\mathbf{X}_c - \frac{1}{m}\mathbf{X}_n \odot ((\mathbf{X}_n \odot d\mathbf{X}_c)\mathbf{J}))
$
Now calculate $\nabla(\mathbf{X}_c)$:
$dj = \nabla(\mathbf{X}_n):d\mathbf{X}_n = tr(\nabla(\mathbf{X}_n)^T d\mathbf{X}_n) \\
= tr(\nabla(\mathbf{X}_n)^T (\mathbf{X}_s^{\circ^{-1}} \odot (d\mathbf{X}_c - \frac{1}{m}\mathbf{X}_n \odot ((\mathbf{X}_n \odot d\mathbf{X}_c)\mathbf{J})))) \\
= tr((\nabla(\mathbf{X}_n) \odot \mathbf{X}_s^{\circ^{-1}})^T d\mathbf{X}_c) -
\frac{1}{m}tr(\nabla(\mathbf{X}_n)^T (\mathbf{X}_s^{\circ^{-1}} \odot
\mathbf{X}_n \odot ((\mathbf{X}_n \odot d\mathbf{X}_c)\mathbf{J})))\\
= tr((\nabla(\mathbf{X}_n) \odot \mathbf{X}_s^{\circ^{-1}})^T d\mathbf{X}_c) -
\frac{1}{m}tr((\nabla(\mathbf{X}_n) \odot \mathbf{X}_s^{\circ^{-1}} \odot
\mathbf{X}_n)^T (\mathbf{X}_n \odot d\mathbf{X}_c)\mathbf{J})\\
= tr((\nabla(\mathbf{X}_n) \odot \mathbf{X}_s^{\circ^{-1}})^T d\mathbf{X}_c) -
\frac{1}{m}tr(\mathbf{J}(\nabla(\mathbf{X}_n) \odot \mathbf{X}_s^{\circ^{-1}} \odot
\mathbf{X}_n)^T (\mathbf{X}_n \odot d\mathbf{X}_c))\\
=tr((\nabla(\mathbf{X}_n) \odot \mathbf{X}_s^{\circ^{-1}})^T d\mathbf{X}_c) -
\frac{1}{m}tr(((\nabla(\mathbf{X}_n) \odot \mathbf{X}_s^{\circ^{-1}} \odot
\mathbf{X}_n)\mathbf{J})^T (\mathbf{X}_n \odot d\mathbf{X}_c))\\
=tr((\nabla(\mathbf{X}_n) \odot \mathbf{X}_s^{\circ^{-1}})^T d\mathbf{X}_c) -
\frac{1}{m}tr((\mathbf{X}_s^{\circ^{-1}} \odot ((\nabla(\mathbf{X}_n) \odot
\mathbf{X}_n)\mathbf{J}))^T (\mathbf{X}_n \odot d\mathbf{X}_c))\\
=tr((\nabla(\mathbf{X}_n) \odot \mathbf{X}_s^{\circ^{-1}})^T d\mathbf{X}_c) -
\frac{1}{m}tr((\mathbf{X}_s^{\circ^{-1}} \odot \mathbf{X}_n \odot ((\nabla(\mathbf{X}_n) \odot
\mathbf{X}_n)\mathbf{J}))^T d\mathbf{X}_c) \\
=tr((\mathbf{X}_s^{\circ^{-1}} \odot (\nabla(\mathbf{X}_n) - \frac{1}{m}\mathbf{X}_n \odot ((\nabla(\mathbf{X}_n) \odot
\mathbf{X}_n)\mathbf{J})))^T d\mathbf{X}_c)$
$\nabla(\mathbf{X}_c)= \mathbf{X}_s^{\circ^{-1}} \odot (\nabla(\mathbf{X}_n) - \frac{1}{m}\mathbf{X}_n \odot ((\mathbf{X}_n \odot
\nabla(\mathbf{X}_n))\mathbf{J}))$
Actually $\nabla(\mathbf{X}_n) \rightarrow \nabla(\mathbf{X}_c)$ is the same as $d\mathbf{X}_c \rightarrow d\mathbf{X}_n$. That's because $\nabla(\mathbf{A}) \rightarrow \nabla(\mathbf{B})$ is always the transpose map of $d\mathbf{B} \rightarrow d\mathbf{A}$ and the underlying Jacobian matrix $\frac{d\mathbf{X}_n}{d\mathbf{X}_c}$ is symmetric.
Last,
$dj = \nabla(\mathbf{X}_c):d\mathbf{X}_c = tr(\nabla(\mathbf{X}_c)^Td\mathbf{X}_c) = tr(\nabla(\mathbf{X}_c)^T(d\mathbf{X})(\mathbf{I} - \frac{1}{m}\mathbf{J}))=
tr((\mathbf{I} - \frac{1}{m}\mathbf{J})\nabla(\mathbf{X}_c)^Td\mathbf{X})$
$\nabla(\mathbf{X}) = \nabla(\mathbf{X}_c)(\mathbf{I} - \frac{1}{m}\mathbf{J}) \\ = \mathbf{X}_s^{\circ^{-1}} \odot ((\nabla(\mathbf{X}_n) - \frac{1}{m}\mathbf{X}_n \odot ((\mathbf{X}_n \odot
\nabla(\mathbf{X}_n))\mathbf{J}))(\mathbf{I} - \frac{1}{m}\mathbf{J}))
$
Notice that:
$(\mathbf{X}_n \odot((\mathbf{X}_n \odot \nabla(\mathbf{X}_n))\mathbf{J}))\mathbf{J} = ((\mathbf{X}_n \odot \nabla(\mathbf{X}_n))\mathbf{J}) \odot (\mathbf{X}_n \mathbf{J})$ and $ \mathbf{X}_n \mathbf{J} = \mathbf{0}$
So
$\nabla(\mathbf{X}) = \mathbf{X}_s^{\circ^{-1}} \odot (\nabla(\mathbf{X}_n)(\mathbf{I} - \frac{1}{m}\mathbf{J}) - \frac{1}{m}\mathbf{X}_n \odot ((\mathbf{X}_n \odot
\nabla(\mathbf{X}_n))\mathbf{J})) \\
= \mathbf{X}_s^{\circ^{-1}} \odot (((\boldsymbol{\gamma}\vec{\mathbf{1}}^T) \odot \nabla(\hat{\mathbf{X}}))(\mathbf{I} - \frac{1}{m}\mathbf{J}) - \frac{1}{m}\mathbf{X}_n \odot ((\mathbf{X}_n \odot
((\boldsymbol{\gamma}\vec{\mathbf{1}}^T) \odot \nabla(\hat{\mathbf{X}})))\mathbf{J})) \\
= \mathbf{X}_s^{\circ^{-1}} \odot (\boldsymbol{\gamma}\vec{\mathbf{1}}^T) \odot (\nabla(\hat{\mathbf{X}})(\mathbf{I} - \frac{1}{m}\mathbf{J}) - \frac{1}{m}\mathbf{X}_n \odot ((\mathbf{X}_n \odot
\nabla(\hat{\mathbf{X}}))\mathbf{J}))
$
Python implementation:
#Matrix X is m x n where m is number of features and n is batch size
import numpy as np

#Helper functions
def center(X):
    m = X.shape[1]
    return X - np.sum(X, axis=1, keepdims=True)/m

def std(Xc, eps=1e-7):
    m = Xc.shape[1]
    return np.sqrt(np.sum(Xc ** 2, axis=1, keepdims=True)/m + eps)

#Forward
def BN_forward(X, gamma, beta):
    Xc = center(X)
    Xs = std(Xc)
    Xn = Xc/Xs
    Xh = gamma * Xn + beta
    cache = (Xn, Xs, gamma)
    return Xh, cache

#Backward
def BN_backward(dXh, cache):
    m = dXh.shape[1]
    Xn, Xs, gamma = cache
    dgamma = np.sum(dXh * Xn, axis=1, keepdims=True)
    dbeta = np.sum(dXh, axis=1, keepdims=True)
    dX = (center(dXh) - Xn * dgamma/m) * gamma / Xs
    return dX, dgamma, dbeta
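One way to sanity-check the closed form (a sketch under the same shapes as above, with a made-up linear loss j = sum(Xh * W)): compare BN_backward against a centered finite difference.
np.random.seed(0)
n, m = 4, 6
X = np.random.randn(n, m)
gamma = np.random.randn(n, 1)
beta = np.random.randn(n, 1)
W = np.random.randn(n, m)                     # dj/dXh = W for this loss

Xh, cache = BN_forward(X, gamma, beta)
dX, dgamma, dbeta = BN_backward(W, cache)

eps, i, k = 1e-6, 2, 3
Xp, Xm = X.copy(), X.copy()
Xp[i, k] += eps
Xm[i, k] -= eps
num = (np.sum(BN_forward(Xp, gamma, beta)[0] * W)
       - np.sum(BN_forward(Xm, gamma, beta)[0] * W)) / (2 * eps)
print(num, dX[i, k])                          # the two should agree closely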
20,670 | Matrix form of backpropagation with batch normalization | In Python, as explained in "Understanding the backward pass through Batch Normalization Layer".
cs231n 2020 lecture 7 slide pdf
cs231n 2020 assignment 2 Batch Normalization
Forward
import numpy as np

def batchnorm_forward(x, gamma, beta, eps):
    N, D = x.shape

    #step1: calculate mean
    mu = 1./N * np.sum(x, axis = 0)

    #step2: subtract the mean vector from every training example
    xmu = x - mu

    #step3: following the lower branch - calculate the denominator
    sq = xmu ** 2

    #step4: calculate variance
    var = 1./N * np.sum(sq, axis = 0)

    #step5: add eps for numerical stability, then sqrt
    sqrtvar = np.sqrt(var + eps)

    #step6: invert sqrtvar
    ivar = 1./sqrtvar

    #step7: execute normalization
    xhat = xmu * ivar

    #step8: now the two transformation steps
    gammax = gamma * xhat

    #step9
    out = gammax + beta

    #store intermediates
    cache = (xhat,gamma,xmu,ivar,sqrtvar,var,eps)

    return out, cache
Backward
def batchnorm_backward(dout, cache):
    #unfold the variables stored in cache
    xhat,gamma,xmu,ivar,sqrtvar,var,eps = cache

    #get the dimensions of the input/output
    N,D = dout.shape

    #step9
    dbeta = np.sum(dout, axis=0)
    dgammax = dout #not necessary, but more understandable

    #step8
    dgamma = np.sum(dgammax*xhat, axis=0)
    dxhat = dgammax * gamma

    #step7
    divar = np.sum(dxhat*xmu, axis=0)
    dxmu1 = dxhat * ivar

    #step6
    dsqrtvar = -1. /(sqrtvar**2) * divar

    #step5
    dvar = 0.5 * 1. /np.sqrt(var+eps) * dsqrtvar

    #step4
    dsq = 1. /N * np.ones((N,D)) * dvar

    #step3
    dxmu2 = 2 * xmu * dsq

    #step2
    dx1 = (dxmu1 + dxmu2)
    dmu = -1 * np.sum(dxmu1+dxmu2, axis=0)

    #step1
    dx2 = 1. /N * np.ones((N,D)) * dmu

    #step0
    dx = dx1 + dx2

    return dx, dgamma, dbeta
cs231n assignment
def batchnorm_forward(x, gamma, beta, bn_param):
    """
    Forward pass for batch normalization.

    During training the sample mean and (uncorrected) sample variance are
    computed from minibatch statistics and used to normalize the incoming data.
    During training we also keep an exponentially decaying running mean of the
    mean and variance of each feature, and these averages are used to normalize
    data at test-time.

    At each timestep we update the running averages for mean and variance using
    an exponential decay based on the momentum parameter:

    running_mean = momentum * running_mean + (1 - momentum) * sample_mean
    running_var = momentum * running_var + (1 - momentum) * sample_var

    Note that the batch normalization paper suggests a different test-time
    behavior: they compute sample mean and variance for each feature using a
    large number of training images rather than using a running average. For
    this implementation we have chosen to use running averages instead since
    they do not require an additional estimation step; the torch7
    implementation of batch normalization also uses running averages.

    Input:
    - x: Data of shape (N, D)
    - gamma: Scale parameter of shape (D,)
    - beta: Shift parameter of shape (D,)
    - bn_param: Dictionary with the following keys:
      - mode: 'train' or 'test'; required
      - eps: Constant for numeric stability
      - momentum: Constant for running mean / variance.
      - running_mean: Array of shape (D,) giving running mean of features
      - running_var: Array of shape (D,) giving running variance of features

    Returns a tuple of:
    - out: of shape (N, D)
    - cache: A tuple of values needed in the backward pass
    """
    mode = bn_param["mode"]
    eps = bn_param.get("eps", 1e-5)
    momentum = bn_param.get("momentum", 0.9)

    N, D = x.shape
    running_mean = bn_param.get("running_mean", np.zeros(D, dtype=x.dtype))
    running_var = bn_param.get("running_var", np.zeros(D, dtype=x.dtype))

    out, cache = None, None
    if mode == "train":
        # Use minibatch statistics to normalize the incoming data, then scale
        # and shift the normalized data using gamma and beta. Normalize with
        # the standard deviation (square root of variance), not the variance.
        x_means = np.sum(x, axis=0) / N                            # shape (D,)
        x_centered = x - x_means                                   # shape (N, D)
        x_variances = np.sum(np.square(x_centered), axis=0) / N    # shape (D,)
        x_normalized = x_centered / np.sqrt(x_variances + eps)     # shape (N, D)

        running_mean = momentum * running_mean + (1 - momentum) * x_means
        running_var = momentum * running_var + (1 - momentum) * x_variances

        out = gamma * x_normalized + beta
        # One workable choice of cache for the backward pass below.
        cache = (x_normalized, x_centered, x_variances, gamma, eps)
    elif mode == "test":
        # Use the running mean and variance to normalize the incoming data,
        # then scale and shift the normalized data using gamma and beta.
        x_normalized = (x - running_mean) / np.sqrt(running_var + eps)
        out = gamma * x_normalized + beta
    else:
        raise ValueError('Invalid forward batchnorm mode "%s"' % mode)

    # Store the updated running means back into bn_param
    bn_param["running_mean"] = running_mean
    bn_param["running_var"] = running_var

    return out, cache
def batchnorm_backward(dout, cache):
    """
    Backward pass for batch normalization.

    For this implementation, you should write out a computation graph for
    batch normalization on paper and propagate gradients backward through
    intermediate nodes.

    Inputs:
    - dout: Upstream derivatives, of shape (N, D)
    - cache: Variable of intermediates from batchnorm_forward.

    Returns a tuple of:
    - dx: Gradient with respect to inputs x, of shape (N, D)
    - dgamma: Gradient with respect to scale parameter gamma, of shape (D,)
    - dbeta: Gradient with respect to shift parameter beta, of shape (D,)
    """
    # One possible implementation, using the simplified closed form derived
    # in the step-by-step version above (cache layout as set in the train
    # branch of batchnorm_forward).
    x_normalized, x_centered, x_variances, gamma, eps = cache
    N, D = dout.shape

    dbeta = np.sum(dout, axis=0)
    dgamma = np.sum(dout * x_normalized, axis=0)

    dxhat = dout * gamma
    std = np.sqrt(x_variances + eps)
    dx = (dxhat - np.mean(dxhat, axis=0)
          - x_normalized * np.mean(dxhat * x_normalized, axis=0)) / std

    return dx, dgamma, dbeta
cs231n 2020 lecture 7 slide pdf
cs231n 2020 assignment 2 Batch Normalization
Forward
def batchnorm_for | Matrix form of backpropagation with batch normalization
In Python as explained in Understanding the backward pass through Batch Normalization Layer.
cs231n 2020 lecture 7 slide pdf
cs231n 2020 assignment 2 Batch Normalization
Forward
def batchnorm_forward(x, gamma, beta, eps):
N, D = x.shape
#step1: calculate mean
mu = 1./N * np.sum(x, axis = 0)
#step2: subtract mean vector of every trainings example
xmu = x - mu
#step3: following the lower branch - calculation denominator
sq = xmu ** 2
#step4: calculate variance
var = 1./N * np.sum(sq, axis = 0)
#step5: add eps for numerical stability, then sqrt
sqrtvar = np.sqrt(var + eps)
#step6: invert sqrtwar
ivar = 1./sqrtvar
#step7: execute normalization
xhat = xmu * ivar
#step8: Nor the two transformation steps
gammax = gamma * xhat
#step9
out = gammax + beta
#store intermediate
cache = (xhat,gamma,xmu,ivar,sqrtvar,var,eps)
return out, cache
Backward
def batchnorm_backward(dout, cache):
#unfold the variables stored in cache
xhat,gamma,xmu,ivar,sqrtvar,var,eps = cache
#get the dimensions of the input/output
N,D = dout.shape
#step9
dbeta = np.sum(dout, axis=0)
dgammax = dout #not necessary, but more understandable
#step8
dgamma = np.sum(dgammax*xhat, axis=0)
dxhat = dgammax * gamma
#step7
divar = np.sum(dxhat*xmu, axis=0)
dxmu1 = dxhat * ivar
#step6
dsqrtvar = -1. /(sqrtvar**2) * divar
#step5
dvar = 0.5 * 1. /np.sqrt(var+eps) * dsqrtvar
#step4
dsq = 1. /N * np.ones((N,D)) * dvar
#step3
dxmu2 = 2 * xmu * dsq
#step2
dx1 = (dxmu1 + dxmu2)
dmu = -1 * np.sum(dxmu1+dxmu2, axis=0)
#step1
dx2 = 1. /N * np.ones((N,D)) * dmu
#step0
dx = dx1 + dx2
return dx, dgamma, dbeta
cs231n assignment
def batchnorm_forward(x, gamma, beta, bn_param):
"""
Forward pass for batch normalization.
During training the sample mean and (uncorrected) sample variance are
computed from minibatch statistics and used to normalize the incoming data.
During training we also keep an exponentially decaying running mean of the
mean and variance of each feature, and these averages are used to normalize
data at test-time.
At each timestep we update the running averages for mean and variance using
an exponential decay based on the momentum parameter:
running_mean = momentum * running_mean + (1 - momentum) * sample_mean
running_var = momentum * running_var + (1 - momentum) * sample_var
Note that the batch normalization paper suggests a different test-time
behavior: they compute sample mean and variance for each feature using a
large number of training images rather than using a running average. For
this implementation we have chosen to use running averages instead since
they do not require an additional estimation step; the torch7
implementation of batch normalization also uses running averages.
Input:
- x: Data of shape (N, D)
- gamma: Scale parameter of shape (D,)
- beta: Shift parameter of shape (D,)
- bn_param: Dictionary with the following keys:
- mode: 'train' or 'test'; required
- eps: Constant for numeric stability
- momentum: Constant for running mean / variance.
- running_mean: Array of shape (D,) giving running mean of features
- running_var: Array of shape (D,) giving running variance of features
Returns a tuple of:
- out: of shape (N, D)
- cache: A tuple of values needed in the backward pass
"""
mode = bn_param["mode"]
eps = bn_param.get("eps", 1e-5)
momentum = bn_param.get("momentum", 0.9)
N, D = x.shape
running_mean = bn_param.get("running_mean", np.zeros(D, dtype=x.dtype))
running_var = bn_param.get("running_var", np.zeros(D, dtype=x.dtype))
out, cache = None, None
if mode == "train":
#######################################################################
# TODO: Implement the training-time forward pass for batch norm. #
# Use minibatch statistics to compute the mean and variance, use #
# these statistics to normalize the incoming data, and scale and #
# shift the normalized data using gamma and beta. #
# #
# You should store the output in the variable out. Any intermediates #
# that you need for the backward pass should be stored in the cache #
# variable. #
# #
# You should also use your computed sample mean and variance together #
# with the momentum variable to update the running mean and running #
# variance, storing your result in the running_mean and running_var #
# variables. #
# #
# Note that though you should be keeping track of the running #
# variance, you should normalize the data based on the standard #
# deviation (square root of variance) instead! #
# Referencing the original paper (https://arxiv.org/abs/1502.03167) #
# might prove to be helpful. #
#######################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
x_means = np.sum(x, axis=0) / N                          # shape (D,)
x_centered = x - x_means                                 # shape (N, D)
x_variances = np.sum(np.square(x_centered), axis=0) / N  # shape (D,)
x_std = np.sqrt(x_variances + eps)                       # shape (D,)
x_normalized = x_centered / x_std                        # shape (N, D)
running_mean = momentum * running_mean + (1 - momentum) * x_means
running_var = momentum * running_var + (1 - momentum) * x_variances
out = gamma * x_normalized + beta
# store the intermediates needed by the backward pass
cache = (x_normalized, x_centered, x_std, gamma)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
#######################################################################
# END OF YOUR CODE #
#######################################################################
elif mode == "test":
#######################################################################
# TODO: Implement the test-time forward pass for batch normalization. #
# Use the running mean and variance to normalize the incoming data, #
# then scale and shift the normalized data using gamma and beta. #
# Store the result in the out variable. #
#######################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# normalize with the running statistics, then scale and shift
x_normalized = (x - running_mean) / np.sqrt(running_var + eps)
out = gamma * x_normalized + beta
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
#######################################################################
# END OF YOUR CODE #
#######################################################################
else:
raise ValueError('Invalid forward batchnorm mode "%s"' % mode)
# Store the updated running means back into bn_param
bn_param["running_mean"] = running_mean
bn_param["running_var"] = running_var
return out, cache
def batchnorm_backward(dout, cache):
"""
Backward pass for batch normalization.
For this implementation, you should write out a computation graph for
batch normalization on paper and propagate gradients backward through
intermediate nodes.
Inputs:
- dout: Upstream derivatives, of shape (N, D)
- cache: Variable of intermediates from batchnorm_forward.
Returns a tuple of:
- dx: Gradient with respect to inputs x, of shape (N, D)
- dgamma: Gradient with respect to scale parameter gamma, of shape (D,)
- dbeta: Gradient with respect to shift parameter beta, of shape (D,)
"""
dx, dgamma, dbeta = None, None, None
###########################################################################
# TODO: Implement the backward pass for batch normalization. Store the #
# results in the dx, dgamma, and dbeta variables. #
# Referencing the original paper (https://arxiv.org/abs/1502.03167) #
# might prove to be helpful. #
###########################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# a compact, vectorized equivalent of the step-by-step backward above;
# assumes the forward pass cached (x_normalized, x_centered, x_std, gamma)
x_normalized, x_centered, x_std, gamma = cache
dbeta = np.sum(dout, axis=0)
dgamma = np.sum(dout * x_normalized, axis=0)
dxhat = dout * gamma
dx = (dxhat - np.mean(dxhat, axis=0)
      - x_normalized * np.mean(dxhat * x_normalized, axis=0)) / x_std
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
###########################################################################
# END OF YOUR CODE #
###########################################################################
return dx, dgamma, dbeta | Matrix form of backpropagation with batch normalization
In Python as explained in Understanding the backward pass through Batch Normalization Layer.
cs231n 2020 lecture 7 slide pdf
cs231n 2020 assignment 2 Batch Normalization
Forward
def batchnorm_for |
20,671 | How to choose bootstrap confidence interval type from boot.ci in R? | If your replicates are not normally distributed, do not choose $normal$.
$Basic$ can give you intervals that are out of the range of your replicated data; e.g. your bootstrapped replicates range between 2 and 200 but your lower confidence limit is -5.
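To see how a basic interval can land outside the range of the replicates, here is a small simulation sketch (numpy rather than R's boot, purely to illustrate, with a bounded toy statistic of my own choosing): the basic limits 2*theta_hat - percentile can escape the support even though the percentile limits cannot.
import numpy as np
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 50)          # data supported on [0, 1]
theta_hat = x.max()                # a bounded statistic
boot = np.array([rng.choice(x, x.size).max() for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print("percentile CI:", (lo, hi))                         # stays inside the replicate range
print("basic CI:", (2*theta_hat - hi, 2*theta_hat - lo))  # upper limit can exceed 1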
For $student's$ CI, you need to pass a variance alongside whatever statistic (e.g. mean, median) you are dealing with. I would prefer this over $bca$ if you cannot generate a large enough number of replicates to satisfy $bca$. If the number of replicates is small, the $bca$ intervals become unstable. One way of checking the stability is to generate many sets of replicates and identify the corresponding confidence limits - most likely you will notice that the range of confidence limits based on $bca$ is wider than the rest.
I don't even know why they have the $percentile$ method, the most confusing of all five. | How to choose bootstrap confidence interval type from boot.ci in R? | If your replicates are not normally distributed, do not choose $normal$.
$Basic$ can give you intervals that are out of the range of your replicated data; e.g. your bootstrapped replicates range betwe | How to choose bootstrap confidence interval type from boot.ci in R?
If your replicates are not normally distributed, do not choose $normal$.
$Basic$ can give you intervals that are out of the range of your replicated data; e.g. your bootstrapped replicates range between 2 and 200 but your lower confidence limit is -5.
For $student's$ CI, you need to pass a variance alongside whatever statistic (e.g. mean, median) you are dealing with. I would prefer this over $bca$ if you cannot generate a large enough number of replicates to satisfy $bca$. If the number of replicates is small, the $bca$ intervals become unstable. One way of checking the stability is to generate many sets of replicates and identify the corresponding confidence limits - most likely you will notice that the range of confidence limits based on $bca$ is wider than the rest.
I don't even know why they have the $percentile$ method, the most confusing of all five. | How to choose bootstrap confidence interval type from boot.ci in R?
If your replicates are not normally distributed, do not choose $normal$.
$Basic$ can give you intervals that are out of the range of your replicated data; e.g. your bootstrapped replicates range betwe |
20,672 | If $X_t^2$ is stationary, is $X_t$ necessarily stationary? | From the section given I understand how you might see that stationarity of $X^2_t$ implies stationarity of $X_t$ but actually it only implies a constant variance of $X_t$.
The authors of that proof were using stationarity of $X^2_t$ to complete an argument they had started earlier by looking at unconditional moments of $X_t$
Recall the $2^{nd}$ order stationarity conditions:
$E(X_t)<\infty$ $ \forall_{t\in Z}$
$Var(X_t) = m$ $\forall_{t\in Z}$
$Cov(X_t,X_{t+h})= \gamma_x(h)$ $\forall_{h\in Z}$
Condition 1 was proved by $E(X_t)=E(E(X_t|F_{t-1}))=0$
Condition 3 was proved by $E(X_tX_{t-1})=E(\sigma_t\epsilon_t\sigma_{t-1}\epsilon_{t-1})=E(E(\sigma_t\epsilon_t\sigma_{t-1}\epsilon_{t-1}|F_{t-1}))=E(\sigma_{t}\sigma_{t-1}\epsilon_{t-1}E(\epsilon_{t}|F_{t-1}))=0$
But to prove the second condition they needed to prove a constant unconditional variance of $X_t$
$Var(X_{t})=Var(X_{t-1})=Var(X_{t-2})=...=m$
This is what leads to an assumption of stationarity of $X^2_{t}$ which you have mentioned uses its $AR(p)$ form. In brief:
\begin{align*}
Var(X_{t})=&E(Var(X_{t}|F_{t-1})) + Var(E(X_t|F_{t-1}))\\
=&E(Var(X_t|F_{t-1}))\quad \text{because the last term is } 0\\ =&E(b_0 + b_1X_{t-1}^2 + \dots + b_pX_{t-p}^2)\\ =& b_0 + b_1E(X_{t-1}^2) + \dots + b_pE(X_{t-p}^2)\\ =&b_0 + b_1var(X_{t-1}) + \dots + b_pvar(X_{t-p})
\end{align*}
If $X^2_t$ is stationary then the roots of the polynomial lie outside the unit circle and $\Sigma b_i<1$. This makes it possible to write:
$$var(X_{t-1})=\dots = var(X_{t-p})= \frac{b_0}{1-b_1-\dots-b_p},$$ which is, at last, constant! | If $X_t^2$ is stationary, is $X_t$ necessarily stationary? | From the section given I understand how you might see that stationarity of $X^2_t$ implies stationarity of $X_t$ but actually it only implies a constant variance of $X_t$.
The authors of that proof w | If $X_t^2$ is stationary, is $X_t$ necessarily stationary?
From the section given I understand how you might see that stationarity of $X^2_t$ implies stationarity of $X_t$ but actually it only implies a constant variance of $X_t$.
The authors of that proof were using stationarity of $X^2_t$ to complete an argument they had started earlier by looking at unconditional moments of $X_t$
Recall the $2^{nd}$ order stationarity conditions:
$E(X_t)<\infty$ $ \forall_{t\in Z}$
$Var(X_t) = m$ $\forall_{t\in Z}$
$Cov(X_t,X_{t+h})= \gamma_x(h)$ $\forall_{h\in Z}$
Condition 1 was proved by $E(X_t)=E(E(X_t|F_{t-1}))=0$
Condition 3 was proved by $E(X_tX_{t-1})=E(\sigma_t\epsilon_t\sigma_{t-1}\epsilon_{t-1})=E(E(\sigma_t\epsilon_t\sigma_{t-1}\epsilon_{t-1}|F_{t-1}))=E(\sigma_{t}\sigma_{t-1}\epsilon_{t-1}E(\epsilon_{t}|F_{t-1}))=0$
But to prove the second condition they needed to prove a constant unconditional variance of $X_t$
$Var(X_{t})=Var(X_{t-1})=Var(X_{t-2})=...=m$
This is what leads to an assumption of stationarity of $X^2_{t}$ which you have mentioned uses its $AR(p)$ form. In brief:
\begin{align*}
Var(X_{t})=&E(Var(X_{t}|F_{t-1})) + Var(E(X_t|F_{t-1}))\\
=&E(Var(X_t|F_{t-1}))\quad \text{because the last term is } 0\\ =&E(b_0 + b_1X_{t-1}^2 + \dots + b_pX_{t-p}^2)\\ =& b_0 + b_1E(X_{t-1}^2) + \dots + b_pE(X_{t-p}^2)\\ =&b_0 + b_1var(X_{t-1}) + \dots + b_pvar(X_{t-p})
\end{align*}
If $X^2_t$ is stationary then the roots of the polynomial lie outside the unit circle and $\Sigma b_i<1$. This makes it possible to write:
$$var(X_{t-1})=\dots = var(X_{t-p})= \frac{b_0}{1-b_1-\dots-b_p},$$ which is, at last, constant! | If $X_t^2$ is stationary, is $X_t$ necessarily stationary?
From the section given I understand how you might see that stationarity of $X^2_t$ implies stationarity of $X_t$ but actually it only implies a constant variance of $X_t$.
The authors of that proof w |
20,673 | Is frequentist conditional inference still being used in practice? | It appears that, indeed, likelihood-based inference is conditional, when such an ancillary statistic exists. I got this from p.197 of Yudi Pawitan's "In All Likelihood":
This means that the shape of the likelihood function $L(\theta)$ is determined by the conditional likelihood. Therefore, by performing likelihood inference on $L(\theta)$, we are effectively performing inference on $L(\theta|a)$, even if we don't know $a$!
Bottom line: **Likelihood of the data $\propto$ likelihood based on the conditional model** | Is frequentist conditional inference still being used in practice? | It appears that, indeed, likelihood-based inference is conditional, when such an ancillary statistic exists. I got this from p.197 of Yudi Pawitan's "In All Likelihood":
This means that the shape of | Is frequentist conditional inference still being used in practice?
It appears that, indeed, likelihood-based inference is conditional, when such an ancillary statistic exists. I got this from p.197 of Yudi Pawitan's "In All Likelihood":
This means that the shape of the likelihood function $L(\theta)$ is determined by the conditional likelihood. Therefore, by performing likelihood inference on $L(\theta)$, we are effectively performing inference on $L(\theta|a)$, even if we don't know $a$!
Bottom line: **Likelihood of the data $\propto$ likelihood based on the conditional model** | Is frequentist conditional inference still being used in practice?
It appears that, indeed, likelihood-based inference is conditional, when such an ancillary statistic exists. I got this from p.197 of Yudi Pawitan's "In All Likelihood":
This means that the shape of |
20,674 | Simple linear regression, p-values and the AIC | Did you try using both predictors without the interaction? So it would be:
y ~ x + Loc
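A sketch of fitting both candidate models via the formula interface of Python's statsmodels (the data file and column names here are hypothetical):
import pandas as pd
import statsmodels.formula.api as smf
df = pd.read_csv("data.csv")  # hypothetical file with columns y, x, Loc
additive = smf.ols("y ~ x + Loc", data=df).fit()   # main effects only
interact = smf.ols("y ~ x * Loc", data=df).fit()   # adds the x:Loc interaction
print(additive.summary())
print("AIC additive:", additive.aic, "AIC interaction:", interact.aic)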
The AIC might be better in the first model because location is important. But the interaction is not important, which is why the P-values are not significant. You would then interpret it as the effect of x after controlling for Loc. | Simple linear regression, p-values and the AIC | Did you try using both predictors without the interaction? So it would be:
y ~ x + Loc
The AIC might be better in the first model because location is important. But the interaction is not important, w | Simple linear regression, p-values and the AIC
Did you try using both predictors without the interaction? So it would be:
y ~ x + Loc
The AIC might be better in the first model because location is important. But the interaction is not important, which is why the P-values are not significant. You would then interpret it as the effect of x after controlling for Loc. | Simple linear regression, p-values and the AIC
Did you try using both predictors without the interaction? So it would be:
y ~ x + Loc
The AIC might be better in the first model because location is important. But the interaction is not important, w |
20,675 | Simple linear regression, p-values and the AIC | I think you did well to challenge the notion that p-values and AIC values alone can determine the viability of a model. I'm also glad you chose to share it here.
As you've demonstrated, there are various trade-offs being made as you consider various terms and possibly their interaction. So one question to have in mind is the purpose of the model. If you're commissioned to determine the effect of location on y, then you should keep location in the model regardless of how weak the p-value is. A null result is itself significant information in that case.
At first glance, it seems clear that the D location implies a larger y. But there is only a narrow range of x for which you have both D and N values for location. Regenerating your model coefficients for this small interval will likely yield a much larger standard error.
But maybe you don't care about location beyond its capacity for predicting y. It was data you just happened to have and color coding it on your plot revealed an interesting pattern. In this case you may be more interested in the predictability of the model than the interpretability of your favorite coefficient. I suspect AIC values are more useful in this case. I'm not familiar with AIC yet, but I suspect it may be penalizing the mixed term because there is only a small range in which you can change location for fixed x. There is very little that location explains that x doesn't already explain. | Simple linear regression, p-values and the AIC | I think you did well to challenge the notion that p-values and AIC values alone can determine the viability of a model. I'm also glad you chose to share it here.
As you've demonstrated, there are var | Simple linear regression, p-values and the AIC
I think you did well to challenge the notion that p-values and AIC values alone can determine the viability of a model. I'm also glad you chose to share it here.
As you've demonstrated, there are various trade-offs being made as you consider various terms and possibly their interaction. So one question to have in mind is the purpose of the model. If you're commissioned to determine the effect of location on y, then you should keep location in the model regardless of how weak the p-value is. A null result is itself significant information in that case.
At first glance, it seems clear that the D location implies a larger y. But there is only a narrow range of x for which you have both D and N values for location. Regenerating your model coefficients for this small interval will likely yield a much larger standard error.
But maybe you don't care about location beyond its capacity for predicting y. It was data you just happened to have and color coding it on your plot revealed an interesting pattern. In this case you may be more interested in the predictability of the model than the interpretability of your favorite coefficient. I suspect AIC values are more useful in this case. I'm not familiar with AIC yet, but I suspect it may be penalizing the mixed term because there is only a small range in which you can change location for fixed x. There is very little that location explains that x doesn't already explain. | Simple linear regression, p-values and the AIC
I think you did well to challenge the notion that p-values and AIC values alone can determine the viability of a model. I'm also glad you chose to share it here.
As you've demonstrated, there are var |
20,676 | Simple linear regression, p-values and the AIC | You must report both groups separately (or perhaps consider multi-level modelling). To simply combine the groups violates one of the basic assumptions of regression (and most other inferential statistical techniques), independence of observations. Or to put it another way, the grouping variable (location) is a hidden variable unless it is taken into account in your analysis.
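A toy numerical illustration (numbers of my own invention) of how pooling groups can reverse a within-group trend, which is exactly the extreme case described next:
import numpy as np
rng = np.random.default_rng(1)
# within each group y rises with x; group B sits at higher x but lower y
x_a = rng.uniform(0, 1, 50); y_a = 2 + x_a + rng.normal(0, 0.1, 50)
x_b = rng.uniform(2, 3, 50); y_b = (x_b - 2) + rng.normal(0, 0.1, 50)
slope = lambda x, y: np.polyfit(x, y, 1)[0]
print(slope(x_a, y_a), slope(x_b, y_b))          # both close to +1
print(slope(np.r_[x_a, x_b], np.r_[y_a, y_b]))   # pooled slope is negative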
In an extreme case, ignoring a grouping variable can lead to Simpson's paradox. In this paradox, you can have two groups in both of which there is a positive correlation, but if you combine them you have a (false, incorrect) negative correlation. (Or vice versa, of course.) See http://www.theregister.co.uk/2014/05/28/theorums_3_simpson/. | Simple linear regression, p-values and the AIC | You must report both groups separately (or perhaps consider multi-level modelling). To simply combine the groups violates one of the basic assumptions of regression (and most other inferential statis | Simple linear regression, p-values and the AIC
You must report both groups separately (or perhaps consider multi-level modelling). To simply combine the groups violates one of the basic assumptions of regression (and most other inferential statistical techniques), independence of observations. Or to put it another way, the grouping variable (location) is a hidden variable unless it is taken into account in your analysis.
In an extreme case, ignoring a grouping variable can lead to Simpson's paradox. In this paradox, you can have two groups in both of which there is a positive correlation, but if you combine them you have a (false, incorrect) negative correlation. (Or vice versa, of course.) See http://www.theregister.co.uk/2014/05/28/theorums_3_simpson/. | Simple linear regression, p-values and the AIC
You must report both groups separately (or perhaps consider multi-level modelling). To simply combine the groups violates one of the basic assumptions of regression (and most other inferential statis |
20,677 | Jeffreys prior for multiple parameters | What is optimal? There is no general and generic "optimality" result for the Jeffreys prior. It all depends on the purpose of the statistical analysis and of the loss function adopted to evaluate and compare procedures. Otherwise, $\pi(\theta,\sigma)\propto \dfrac{1}{\sigma}$ cannot be compared with $\pi(\theta,\sigma)\propto \dfrac{1}{\sigma^2}$. As I wrote in my most popular answer on X validated, there is no such thing as a best non-informative prior. | Jeffreys prior for multiple parameters | What is optimal? There is no general and generic "optimality" result for the Jeffreys prior. It all depends on the purpose of the statistical analysis and of the loss function adopted to evaluate and | Jeffreys prior for multiple parameters
What is optimal? There is no general and generic "optimality" result for the Jeffreys prior. It all depends on the purpose of the statistical analysis and of the loss function adopted to evaluate and compare procedures. Otherwise, $\pi(\theta,\sigma)\propto \dfrac{1}{\sigma}$ cannot be compared with $\pi(\theta,\sigma)\propto \dfrac{1}{\sigma^2}$. As I wrote in my most popular answer on X validated, there is no such thing as a best non-informative prior. | Jeffreys prior for multiple parameters
What is optimal? There is no general and generic "optimality" result for the Jeffreys prior. It all depends on the purpose of the statistical analysis and of the loss function adopted to evaluate and |
20,678 | Specifying prior for effect size in meta-analysis | As very rarely are effects this large, is this prior justifiable?
I think your priors are OK, as long as you can defend them with extra-statistical arguments (e.g. by looking at established works in the psychological scholarly literature).
However, make sure you also perform a sensitivity analysis using less informative priors, to check whether your posterior distribution relies too heavily on your assumptions. If the sensitivity analysis yields similar findings in terms of direction and magnitude of effect, then your results will appear much more robust and valid. | Specifying prior for effect size in meta-analysis | As very rarely are effects this large, is this prior justifiable?
I think your priors are OK, as long as you can defend them with extra-statistical arguments (e.g by looking at established works in t | Specifying prior for effect size in meta-analysis
As very rarely are effects this large, is this prior justifiable?
I think your priors are OK, as long as you can defend them with extra-statistical arguments (e.g. by looking at established works in the psychological scholarly literature).
However, make sure you also perform a sensitivity analysis using less informative priors, to check whether your posterior distribution relies too heavily on your assumptions. If the sensitivity analysis yields similar findings in terms of direction and magnitude of effect, then your results will appear much more robust and valid. | Specifying prior for effect size in meta-analysis
As very rarely are effects this large, is this prior justifiable?
I think your priors are OK, as long as you can defend them with extra-statistical arguments (e.g by looking at established works in t |
20,679 | Biased estimator for regression achieving better results than unbiased one in Error In Variables Model | Summary: the corrected parameters are for predicting as a function of the true predictor $x$. If $\tilde{x}$ is used in prediction, the original parameters perform better.
Note that there are two different linear prediction models lurking around. First, $y$ given $x$,
\begin{equation}
\hat{y}_x = \beta\,x + \alpha,
\end{equation}
second, $y$ given $\tilde{x}$,
\begin{equation}
\hat{y}_{\tilde{x}} = \tilde{\beta}\,\tilde{x} + \tilde{\alpha}.
\end{equation}
Even if we had access to the true parameters, the optimal linear prediction as a function of $x$ would be different from the optimal linear prediction as a function of $\tilde{x}$. The code in the question does the following:
Estimate the parameters $\hat{\tilde{\beta}},\hat{\tilde{\alpha}}$
Compute estimates $\hat{\beta},\hat{\alpha}$
Compare performance of $\hat{y}_1 = \hat{\beta}\,\tilde{x} + \hat{\alpha}$ and $\hat{y}_2 = \hat{\tilde{\beta}}\,\tilde{x} + \hat{\tilde{\alpha}}$
Since in step 3 we are predicting as a function of $\tilde{x}$, not as a function of $x$, using (estimated) coefficients of the second model works better.
Indeed, if we had access to $\alpha$, $\beta$ and $\tilde{x}$ but not $x$, we might substitute a linear estimator of $x$ in the first model,
\begin{equation}
\hat{\hat{y}}_x = \beta\,\hat{x}(\tilde{x}) + \alpha = \beta\, \Big(\mu_x + (\tilde{x}-\mu_x)\,\frac{\sigma^2_{x}}{\sigma^2_{\tilde{x}}}\Big) + \alpha = \frac{\sigma_x^2}{\sigma^2_{\tilde{x}}}\,\beta\,\tilde{x} + \alpha + \beta\Big(1-\frac{\sigma_x^2}{\sigma^2_{\tilde{x}}}\Big)\mu_x.
\end{equation}
If we first perform the transformation from $\tilde{\alpha},\tilde{\beta}$ to $\alpha,\beta$ and then do the computation in the latest equation, we get back the coefficients $\tilde{\alpha},\tilde{\beta}$. So, if the goal is to do linear prediction given the noisy version of the predictor, we should just fit a linear model to the noisy data. The corrected coefficients $\alpha,\beta$ are of interest if we are interested in the true phenomenon for reasons other than prediction.
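Spelling out that verification (using the attenuation relation $\tilde{\beta}=\frac{\sigma_x^2}{\sigma^2_{\tilde{x}}}\,\beta$, and the fact that both fitted lines pass through $(\mu_x,\mu_y)$ because the noise has zero mean, so $\alpha=\mu_y-\beta\mu_x$ and $\tilde{\alpha}=\mu_y-\tilde{\beta}\mu_x$):
\begin{align*}
\frac{\sigma_x^2}{\sigma^2_{\tilde{x}}}\,\beta &= \tilde{\beta},\\
\alpha + \beta\Big(1-\frac{\sigma_x^2}{\sigma^2_{\tilde{x}}}\Big)\mu_x &= (\mu_y-\beta\mu_x) + \beta\mu_x - \tilde{\beta}\mu_x = \mu_y - \tilde{\beta}\mu_x = \tilde{\alpha},
\end{align*}
so pushing the corrected model through the linear estimator of $x$ reproduces the naive fit exactly.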
Testing
I edited the code in the OP to also evaluate MSEs for predictions using the non-noisy version of the predictor (code at the end of the answer). The results are:
Reg parameters, noisy predictor
1.3387 1.6696 2.1265 2.4806 2.5679 2.5062 2.5160 2.8684
Fixed parameters, noisy predictor
1.3981 2.0626 3.2971 5.0220 7.6490 10.2568 14.1139 20.7604
Reg parameters, true predictor
1.3354 1.6657 2.1329 2.4885 2.5688 2.5198 2.5085 2.8676
Fixed parameters, true predictor
1.1139 1.0078 1.0499 1.0212 1.0492 0.9925 1.0217 1.2528
That is, when using $x$ instead of $\tilde{x}$, the corrected parameters indeed beat the uncorrected parameters, as expected. Furthermore, the prediction with ($\alpha,\beta,x$), that is, fixed parameters and true predictor, is better than ($\tilde{\alpha},\tilde{\beta},\tilde{x}$), that is, reg parameters and noisy predictor, since obviously the noise harms the prediction accuracy somewhat. The other two cases correspond to using the parameters of a wrong model and thus produce weaker results.
Caveat about nonlinearity
Actually, even if the relationship between $y,x$ is linear, the relationship between $y$ and $\tilde{x}$ might not be. This depends on the distribution of $x$. For example, in the present code, $x$ is drawn from the uniform distribution, thus no matter how high $\tilde{x}$ is, we know an upper bound for $x$ and thus the predicted $y$ as a function of $\tilde{x}$ should saturate. A possible Bayesian-style solution would be to posit a probability distribution for $x$ and then plug in $\mathbb{E}(x \mid \tilde{x})$ when deriving $\hat{\hat{y}}_x$ - instead of the linear prediction I used previously. However, if one is willing to posit a probability distribution for $x$, I suppose one should go for a full Bayesian solution instead of an approach based on correcting OLS estimates in the first place.
MATLAB code for replicating the test result
Note that this also contains my own implementations for evaluate and OLS_solver since they were not given in the question.
rng(1)
OLS_solver = @(X,Y) [X ones(size(X))]'*[X ones(size(X))] \ ([X ones(size(X))]' * Y);
evaluate = @(b,x,y) mean(([x ones(size(x))]*b - y).^2);
reg_mse_agg = [];
fixed_mse_agg = [];
reg_mse_orig_agg = [];
fixed_mse_orig_agg = [];
varMult = 1;
numTests = 60;
for dataNumber=1:8
reg_mses = [];
fixed_mses = [];
reg_mses_orig = [];
fixed_mses_orig = [];
X = rand(1000,1);
X(:,1) = X(:,1) * 10;
X(:,1) = X(:,1) + 5;
varX = var(X);
y = 0.5 * X(:,1) -10;
y = y + normrnd(0,1,size(y));
origX = X;
X = X + normrnd(0,dataNumber * varMult ,size(X));
train_size = floor(0.5 * length(y));
for t=1:numTests,
idx = randperm(length(y));
train_idx = idx(1:train_size);
test_idx = idx(train_size+1:end);
Xtrain = X(train_idx,:);
ytrain = y(train_idx);
Xtest = X(test_idx,:);
origXtest = origX(test_idx,:);
ytest = y(test_idx);
b = OLS_solver(Xtrain, ytrain);
% evaluate returns the MSE of the linear predictor [x 1]*b against y
reg_mse = evaluate( b,Xtest,ytest);
reg_mses = [reg_mses ; reg_mse];
varInd = var(Xtrain);
varNoise = varInd - varX;
bFixed = [0 0]';
bFixed(1) = b(1) * varInd / varX;
bFixed(2) = mean(ytrain - bFixed(1)*Xtrain);
fixed_mse = evaluate( bFixed,Xtest,ytest);
fixed_mses = [fixed_mses ; fixed_mse];
reg_mse_orig = evaluate(b, origXtest, ytest);
reg_mses_orig = [reg_mses_orig ; reg_mse_orig];
fixed_mse_orig = evaluate(bFixed, origXtest, ytest);
fixed_mses_orig = [fixed_mses_orig; fixed_mse_orig];
end
reg_mse_agg = [reg_mse_agg , reg_mses];
fixed_mse_agg = [fixed_mse_agg , fixed_mses];
reg_mse_orig_agg = [reg_mse_orig_agg , reg_mses_orig];
fixed_mse_orig_agg = [fixed_mse_orig_agg , fixed_mses_orig];
end
disp('Reg parameters, noisy predictor')
disp(mean(reg_mse_agg))
disp('Fixed parameters, noisy predictor')
disp(mean(fixed_mse_agg))
disp('Reg parameters, true predictor')
disp(mean(reg_mse_orig_agg))
disp('Fixed parameters, true predictor')
disp(mean(fixed_mse_orig_agg)) | Biased estimator for regression achieving better results than unbiased one in Error In Variables Mod | Summary: the corrected parameters are for predicting as a function of the true predictor $x$. If $\tilde{x}$ is used in prediction, the original parameters perform better.
Note that there are two diff | Biased estimator for regression achieving better results than unbiased one in Error In Variables Model
Summary: the corrected parameters are for predicting as a function of the true predictor $x$. If $\tilde{x}$ is used in prediction, the original parameters perform better.
Note that there are two different linear prediction models lurking around. First, $y$ given $x$,
\begin{equation}
\hat{y}_x = \beta\,x + \alpha,
\end{equation}
second, $y$ given $\tilde{x}$,
\begin{equation}
\hat{y}_{\tilde{x}} = \tilde{\beta}\,\tilde{x} + \tilde{\alpha}.
\end{equation}
Even if we had access to the true parameters, the optimal linear prediction as a function of $x$ would be different from the optimal linear prediction as a function of $\tilde{x}$. The code in the question does the following:
Estimate the parameters $\hat{\tilde{\beta}},\hat{\tilde{\alpha}}$
Compute estimates $\hat{\beta},\hat{\alpha}$
Compare performance of $\hat{y}_1 = \hat{\beta}\,\tilde{x} + \hat{\alpha}$ and $\hat{y}_2 = \hat{\tilde{\beta}}\,\tilde{x} + \hat{\tilde{\alpha}}$
Since in step 3 we are predicting as a function of $\tilde{x}$, not as a function of $x$, using (estimated) coefficients of the second model works better.
Indeed, if we had access to $\alpha$, $\beta$ and $\tilde{x}$ but not $x$, we might substitute a linear estimator of $x$ in the first model,
\begin{equation}
\hat{\hat{y}}_x = \beta\,\hat{x}(\tilde{x}) + \alpha = \beta\, \Big(\mu_x + (\tilde{x}-\mu_x)\,\frac{\sigma^2_{x}}{\sigma^2_{\tilde{x}}}\Big) + \alpha = \frac{\sigma_x^2}{\sigma^2_{\tilde{x}}}\,\beta\,\tilde{x} + \alpha + \beta\Big(1-\frac{\sigma_x^2}{\sigma^2_{\tilde{x}}}\Big)\mu_x.
\end{equation}
If we first perform the transformation from $\tilde{\alpha},\tilde{\beta}$ to $\alpha,\beta$ and then do the computation in the latest equation, we get back the coefficients $\tilde{\alpha},\tilde{\beta}$. So, if the goal is to do linear prediction given the noisy version of the predictor, we should just fit a linear model to the noisy data. The corrected coefficients $\alpha,\beta$ are of interest if we are interested in the true phenomenon for reasons other than prediction.
Testing
I edited the code in the OP to also evaluate MSEs for predictions using the non-noisy version of the predictor (code at the end of the answer). The results are:
Reg parameters, noisy predictor
1.3387 1.6696 2.1265 2.4806 2.5679 2.5062 2.5160 2.8684
Fixed parameters, noisy predictor
1.3981 2.0626 3.2971 5.0220 7.6490 10.2568 14.1139 20.7604
Reg parameters, true predictor
1.3354 1.6657 2.1329 2.4885 2.5688 2.5198 2.5085 2.8676
Fixed parameters, true predictor
1.1139 1.0078 1.0499 1.0212 1.0492 0.9925 1.0217 1.2528
That is, when using $x$ instead of $\tilde{x}$, the corrected parameters indeed beat the uncorrected parameters, as expected. Furthermore, the prediction with ($\alpha,\beta,x$), that is, fixed parameters and true predictor, is better than ($\tilde{\alpha},\tilde{\beta},\tilde{x}$), that is, reg parameters and noisy predictor, since obviously the noise harms the prediction accuracy somewhat. The other two cases correspond to using the parameters of a wrong model and thus produce weaker results.
Caveat about nonlinearity
Actually, even if the relationship between $y,x$ is linear, the relationship between $y$ and $\tilde{x}$ might not be. This depends on the distribution of $x$. For example, in the present code, $x$ is drawn from the uniform distribution, thus no matter how high $\tilde{x}$ is, we know an upper bound for $x$ and thus the predicted $y$ as a function of $\tilde{x}$ should saturate. A possible Bayesian-style solution would be to posit a probability distribution for $x$ and then plug in $\mathbb{E}(x \mid \tilde{x})$ when deriving $\hat{\hat{y}}_x$ - instead of the linear prediction I used previously. However, if one is willing to posit a probability distribution for $x$, I suppose one should go for a full Bayesian solution instead of an approach based on correcting OLS estimates in the first place.
MATLAB code for replicating the test result
Note that this also contains my own implementations for evaluate and OLS_solver since they were not given in the question.
rng(1)
OLS_solver = @(X,Y) [X ones(size(X))]'*[X ones(size(X))] \ ([X ones(size(X))]' * Y);
evaluate = @(b,x,y) mean(([x ones(size(x))]*b - y).^2);
reg_mse_agg = [];
fixed_mse_agg = [];
reg_mse_orig_agg = [];
fixed_mse_orig_agg = [];
varMult = 1;
numTests = 60;
for dataNumber=1:8
reg_mses = [];
fixed_mses = [];
reg_mses_orig = [];
fixed_mses_orig = [];
X = rand(1000,1);
X(:,1) = X(:,1) * 10;
X(:,1) = X(:,1) + 5;
varX = var(X);
y = 0.5 * X(:,1) -10;
y = y + normrnd(0,1,size(y));
origX = X;
X = X + normrnd(0,dataNumber * varMult ,size(X));
train_size = floor(0.5 * length(y));
for t=1:numTests,
idx = randperm(length(y));
train_idx = idx(1:train_size);
test_idx = idx(train_size+1:end);
Xtrain = X(train_idx,:);
ytrain = y(train_idx);
Xtest = X(test_idx,:);
origXtest = origX(test_idx,:);
ytest = y(test_idx);
b = OLS_solver(Xtrain, ytrain);
% evaluate returns the MSE of the linear predictor [x 1]*b against y
reg_mse = evaluate( b,Xtest,ytest);
reg_mses = [reg_mses ; reg_mse];
varInd = var(Xtrain);
varNoise = varInd - varX;
bFixed = [0 0]';
bFixed(1) = b(1) * varInd / varX;
bFixed(2) = mean(ytrain - bFixed(1)*Xtrain);
fixed_mse = evaluate( bFixed,Xtest,ytest);
fixed_mses = [fixed_mses ; fixed_mse];
reg_mse_orig = evaluate(b, origXtest, ytest);
reg_mses_orig = [reg_mses_orig ; reg_mse_orig];
fixed_mse_orig = evaluate(bFixed, origXtest, ytest);
fixed_mses_orig = [fixed_mses_orig; fixed_mse_orig];
end
reg_mse_agg = [reg_mse_agg , reg_mses];
fixed_mse_agg = [fixed_mse_agg , fixed_mses];
reg_mse_orig_agg = [reg_mse_orig_agg , reg_mses_orig];
fixed_mse_orig_agg = [fixed_mse_orig_agg , fixed_mses_orig];
end
disp('Reg parameters, noisy predictor')
disp(mean(reg_mse_agg))
disp('Fixed parameters, noisy predictor')
disp(mean(fixed_mse_agg))
disp('Reg parameters, true predictor')
disp(mean(reg_mse_orig_agg))
disp('Fixed parameters, true predictor')
disp(mean(fixed_mse_orig_agg)) | Biased estimator for regression achieving better results than unbiased one in Error In Variables Mod
Summary: the corrected parameters are for predicting as a function of the true predictor $x$. If $\tilde{x}$ is used in prediction, the original parameters perform better.
Note that there are two diff |
20,680 | Distribution of the convolution of squared normal and chi-squared variables? | In case it helps, the variable $Y^2$ is a generalised gamma random variable (see e.g., Stacy 1962). Your question is asking for the distribution of the sum of a chi-squared random variable and a generalised gamma random variable. To my knowledge, the density of the resultant variable has no closed form expression. Hence, the convolution you have obtained is an integral with no closed form solution. I think you're going to be stuck with a numerical solution for this one.
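A minimal Monte Carlo sketch of that numerical route (the chi-squared degrees of freedom and normal parameters below are illustrative choices of my own):
import numpy as np
from scipy import stats
rng = np.random.default_rng(0)
k, mu, sigma, n = 3, 1.0, 2.0, 10**6     # chi-squared df and normal parameters
w = rng.chisquare(k, n) + rng.normal(mu, sigma, n)**2  # draws of the sum
kde = stats.gaussian_kde(w)              # numerical estimate of the density
print(kde(np.array([1.0, 5.0, 10.0])))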
Stacy, E.W. (1962). A Generalization of the Gamma Distribution. Annals of Mathematical Statistics 33(3), pp. 1187-1192. | Distribution of the convolution of squared normal and chi-squared variables? | In case it helps, the variable $Y^2$ is a generalised gamma random variable (see e.g., Stacy 1962). Your question is asking for the distribution of the sum of a chi-squared random variable and a gene | Distribution of the convolution of squared normal and chi-squared variables?
In case it helps, the variable $Y^2$ is a generalised gamma random variable (see e.g., Stacy 1962). Your question is asking for the distribution of the sum of a chi-squared random variable and a generalised gamma random variable. To my knowledge, the density of the resultant variable has no closed form expression. Hence, the convolution you have obtained is an integral with no closed form solution. I think you're going to be stuck with a numerical solution for this one.
Stacy, E.W. (1962). A Generalization of the Gamma Distribution. Annals of Mathematical Statistics 33(3), pp. 1187-1192. | Distribution of the convolution of squared normal and chi-squared variables?
In case it helps, the variable $Y^2$ is a generalised gamma random variable (see e.g., Stacy 1962). Your question is asking for the distribution of the sum of a chi-squared random variable and a gene |
20,681 | Distribution of the convolution of squared normal and chi-squared variables? | This is a hint only. Pearson type III can be Chi-squared. Sometimes a convolution can be found by convolving something with itself. I managed to do this for convolving ND and GD, for which I convolved a Pearson III with itself. How this works with ND$^2$ and Chi-Squared, I am not sure. But, you asked for hints, and this is a general hint. That should be enough to get you started, I hope. | Distribution of the convolution of squared normal and chi-squared variables? | This is a hint only. Pearson type III can be Chi-squared. Sometimes a convolution can be found by convolving something with itself. I managed to do this for convolving ND and GD, for which I convolved | Distribution of the convolution of squared normal and chi-squared variables?
This is a hint only. Pearson type III can be Chi-squared. Sometimes a convolution can be found by convolving something with itself. I managed to do this for convolving ND and GD, for which I convolved a Pearson III with itself. How this works with ND$^2$ and Chi-Squared, I am not sure. But, you asked for hints, and this is a general hint. That should be enough to get you started, I hope. | Distribution of the convolution of squared normal and chi-squared variables?
This is a hint only. Pearson type III can be Chi-squared. Sometimes a convolution can be found by convolving something with itself. I managed to do this for convolving ND and GD, for which I convolved |
20,682 | Dealing with missing data in an exponential smoothing model | Your approach makes sense. A commercial piece of software I was associated with for a couple of years did exactly this.
Your outline applies to Single Exponential Smoothing (SES), but of course you could apply the same treatment to trend or seasonal components. For the seasonal ones, you would need to go back a full seasonal cycle, just as for updating.
Another alternative would of course be to simply interpolate missing values. This is an option in newer versions of ets(..., na.action="na.interp").
From what little I know of state space models, it should not be overly hard to simply treat the missing data as unobserved. I am not sure why this is not implemented in the forecast package. A quick search through Rob Hyndman's blog didn't really yield anything useful. | Dealing with missing data in an exponential smoothing model | Your approach makes sense. A commercial piece of software I was associated with for a couple of years did exactly this.
Your outline applies to Single Exponential Smoothing (SES), but of course you co | Dealing with missing data in an exponential smoothing model
Your approach makes sense. A commercial piece of software I was associated with for a couple of years did exactly this.
Your outline applies to Single Exponential Smoothing (SES), but of course you could apply the same treatment to trend or seasonal components. For the seasonal ones, you would need to go back a full seasonal cycle, just as for updating.
Another alternative would of course be to simply interpolate missing values. This is an option in newer versions of ets(..., na.action="na.interp").
From what little I know of state space models, it should not be overly hard to simply treat the missing data as unobserved. I am not sure why this is not implemented in the forecast package. A quick search through Rob Hyndman's blog didn't really yield anything useful. | Dealing with missing data in an exponential smoothing model
Your approach makes sense. A commercial piece of software I was associated with for a couple of years did exactly this.
Your outline applies to Single Exponential Smoothing (SES), but of course you co |
20,683 | AIC/BIC: how many parameters does a permutation count for? | Intuitively, I suspect that the set of all permutations on $p$ elements is equivalent to $p^2-2p+1$ parameters.
This is because the permutation matrices are the extremal points of the convex space of doubly-stochastic real matrices of order $p$ (i.e. $p\times p$), and in general the doubly-stochastic matrices have $p^2-2p+1$ parameters (you get $2p$ constraints because all row sums have to be 1 and all column sums have to be 1, but one of these is redundant, so you have $2p-1$ constraints on $p^2$ entries).
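The constraint-counting step is easy to check numerically (a small sketch of my own; it verifies only the dimension count, not the equivalence claim itself):
import numpy as np
p = 5
A = np.zeros((2 * p, p * p))
for i in range(p):
    A[i, i * p:(i + 1) * p] = 1   # row-sum constraints
    A[p + i, i::p] = 1            # column-sum constraints
rank = np.linalg.matrix_rank(A)
print(rank, p * p - rank)         # 2p - 1 = 9 and (p - 1)^2 = 16 for p = 5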
I have no proof, but it seems right. Maybe it's worth trying it numerically? | AIC/BIC: how many parameters does a permutation count for? | Intuitively, I suspect that the set of all permutations on $p$ elements is equivalent to $p^2-2p+1$ parameters.
This is because the permutation matrices are the extremal points of the convex space of | AIC/BIC: how many parameters does a permutation count for?
Intuitively, I suspect that the set of all permutations on $p$ elements is equivalent to $p^2-2p+1$ parameters.
This is because the permutation matrices are the extremal points of the convex space of doubly-stochastic real matrices of order $p$ (i.e. $p\times p$), and in general the doubly-stochastic matrices have $p^2-2p+1$ parameters (you get $2p$ constraints because all row sums have to be 1 and all column sums have to be 1, but one of these is redundant, so you have $2p-1$ constraints on $p^2$ entries).
I have no proof, but it seems right. Maybe it's worth trying it numerically? | AIC/BIC: how many parameters does a permutation count for?
Intuitively, I suspect that the set of all permutations on $p$ elements is equivalent to $p^2-2p+1$ parameters.
This is because the permutation matrices are the extremal points of the convex space of |
20,684 | Tail inequality on sum of product of normal variables | The answer to this question is given in Bernstein's inequality,
as presented in Chapter 2 in Vershynin's book.
For $z_1,...,z_n$ independent, zero-mean, sub-exponential RVs, $$P\left(\left|\sum_i{z_i}\right|\ge nt\right)\le2\exp\left( -cn\cdot\min\left\{ \frac{t^2}{K^2},\frac{t}{K}\right\} \right)$$ where $c$ is a universal constant, $K=\max_i\left\|z_i\right\|_{\psi_1}$ and $\left\|z_i\right\|_{\psi_1}$ is the sub-exponential norm of $z_i$.
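As an empirical sanity check of the specialization derived below (with $K=1$, $c=\frac{1}{2}$; the simulation sizes and $t$ are my own choices):
import numpy as np
rng = np.random.default_rng(0)
n, reps, t = 50, 100_000, 0.5
s = np.abs((rng.standard_normal((reps, n)) * rng.standard_normal((reps, n))).sum(axis=1))
emp = np.mean(s >= n * t)                 # empirical tail probability
bound = 2 * np.exp(-n / 2 * min(t**2, t))
print(emp, bound)                         # the empirical tail should sit below the bound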
Denote $z_i=x_iy_i$. As $x_i,y_i$ are standardized normal RVs, each is a sub-gaussian RV with the sub-gaussian norm $\left\|x_i\right\|_{\psi_2}=\left\|y_i\right\|_{\psi_2}=1$ so $\left\|z_i\right\|_{\psi_1}\le\left\|x_i\right\|_{\psi_2}\cdot\left\|y_i\right\|_{\psi_2}=1$ (see Lemma 2.7.7 in Vershynin's). For convenience, we'll take $\left\|z_i\right\|_{\psi_1}=1$ (lower values would yield tighter bounds). Taking $c=\frac{1}{2}$ (as done in Section 1.3 here), we eventually get:
$$P\left(\left|\sum_i{z_i}\right|\ge nt\right)\le2\exp\left( -\frac{n}{2}\cdot\min\left\{ t^2,t\right\} \right)$$ | Tail inequality on sum of product of normal variables | The answer to this question is given in Bernstein's inequality,
as presented in Chapter 2 in Vershynin's book.
For $z_1,...,z_n$ independent, zero-mean, sub-exponential RVs, $$P\left(\left|\sum_i{z_i}\ | Tail inequality on sum of product of normal variables
The answer to this question is given in Bernstein's inequality,
as presented in Chapter 2 in Vershynin's book.
For $z_1,...,z_n$ independent, zero-mean, sub-exponential RVs, $$P\left(\left|\sum_i{z_i}\right|\ge nt\right)\le2\exp\left( -cn\cdot\min\left\{ \frac{t^2}{K^2},\frac{t}{K}\right\} \right)$$ where $c$ is a universal constant, $K=\max_i\left\|z_i\right\|_{\psi_1}$ and $\left\|z_i\right\|_{\psi_1}$ is the sub-exponential norm of $z_i$.
Denote $z_i=x_iy_i$. As $x_i,y_i$ are standardized normal RVs, each is a sub-gaussian RV with the sub-gaussian norm $\left\|x_i\right\|_{\psi_2}=\left\|y_i\right\|_{\psi_2}=1$ so $\left\|z_i\right\|_{\psi_1}\le\left\|x_i\right\|_{\psi_2}\cdot\left\|y_i\right\|_{\psi_2}=1$ (see Lemma 2.7.7 in Vershynin's). For convenience, we'll take $\left\|z_i\right\|_{\psi_1}=1$ (lower values would yield tighter bounds). Taking $c=\frac{1}{2}$ (as done in Section 1.3 here), we eventually get:
$$P\left(\left|\sum_i{z_i}\right|\ge nt\right)\le2\exp\left( -\frac{n}{2}\cdot\min\left\{ t^2,t\right\} \right)$$ | Tail inequality on sum of product of normal variables
The answer to this question is given in Bernstein's inequality,
as presented in Chapter 2 in Vershynin's book.
For $z_1,...,z_n$ independent, zero-mean, sub-exponential RVs, $$P\left(\left|\sum_i{z_i}\ |
20,685 | Zero-inflated Poisson regression | In the zero-inflated Poisson case, if $\mathbf{B}=\mathbf{G}$, then $\beta$ and $\lambda$ both have the same length, which is the number of columns of $\mathbf{B}$ or $\mathbf{G}$. So the number of parameters is twice the number of columns of the design matrix ie twice the number of explanatory variables including the intercept (and whatever dummy coding was needed).
In a straight Poisson regression, there is no $\mathbf{p}$ vector to worry about, no need to estimate $\lambda$. So the number of parameters is just the length of $\beta$ ie half the number of parameters in the zero-inflated case.
Now, there's no particular reason why $\mathbf{B}$ has to equal $\mathbf{G}$, but generally it makes sense. However, one could imagine a data generating process where the chance of having any events at all is created by one process $\mathbf{G\lambda}$ and a completely different process $\mathbf{B\beta}$ drives how many events there are, given non-zero events. As a contrived example, I pick classrooms based on their History exam scores to play some unrelated game, and then observe the number of goals they score. In this case $\mathbf{B}$ might be quite different to $\mathbf{G}$ (if the things driving History exam scores are different to those driving performance in the game) and $\beta$ and $\lambda$ could have different lengths. $\mathbf{G}$ might have more columns than $\mathbf{B}$ or less. So the zero-inflated Poisson model in that case will have more parameters than a simple Poisson model.
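This split shows up directly in software, where the count part and the zero-inflation part take separate design matrices; a sketch with statsmodels (toy data of my own, assuming a recent statsmodels version):
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson
rng = np.random.default_rng(0)
n = 500
B = sm.add_constant(rng.normal(size=(n, 2)))  # design for the Poisson mean
G = sm.add_constant(rng.normal(size=(n, 1)))  # design for the inflation part
y = rng.poisson(1.5, n) * (rng.uniform(size=n) > 0.3)  # toy zero-inflated counts
fit = ZeroInflatedPoisson(y, B, exog_infl=G).fit(disp=0)
print(len(fit.params))  # len(beta) + len(lambda) = 3 + 2 = 5 here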
In common practice I think $\mathbf{G} = \mathbf{B}$ most of the time. | Zero-inflated Poisson regression | In the zero-inflated Poisson case, if $\mathbf{B}=\mathbf{G}$, then $\beta$ and $\lambda$ both have the same length, which is the number of columns of $\mathbf{B}$ or $\mathbf{G}$. So the number of p | Zero-inflated Poisson regression
In the zero-inflated Poisson case, if $\mathbf{B}=\mathbf{G}$, then $\beta$ and $\lambda$ both have the same length, which is the number of columns of $\mathbf{B}$ or $\mathbf{G}$. So the number of parameters is twice the number of columns of the design matrix ie twice the number of explanatory variables including the intercept (and whatever dummy coding was needed).
In a straight Poisson regression, there is no $\mathbf{p}$ vector to worry about, no need to estimate $\lambda$. So the number of parameters is just the length of $\beta$ ie half the number of parameters in the zero-inflated case.
Now, there's no particular reason why $\mathbf{B}$ has to equal $\mathbf{G}$, but generally it makes sense. However, one could imagine a data generating process where the chance of having any events at all is created by one process $\mathbf{G\lambda}$ and a completely different process $\mathbf{B\beta}$ drives how many events there are, given non-zero events. As a contrived example, I pick classrooms based on their History exam scores to play some unrelated game, and then observe the number of goals they score. In this case $\mathbf{B}$ might be quite different to $\mathbf{G}$ (if the things driving History exam scores are different to those driving performance in the game) and $\beta$ and $\lambda$ could have different lengths. $\mathbf{G}$ might have more columns than $\mathbf{B}$ or less. So the zero-inflated Poisson model in that case will have more parameters than a simple Poisson model.
In common practice I think $\mathbf{G} = \mathbf{B}$ most of the time. | Zero-inflated Poisson regression
In the zero-inflated Poisson case, if $\mathbf{B}=\mathbf{G}$, then $\beta$ and $\lambda$ both have the same length, which is the number of columns of $\mathbf{B}$ or $\mathbf{G}$. So the number of p |
20,686 | What can you do when you have predictor variables that are based on group averages with different sample sizes? | The paper "A heteroscedastic structural errors-in-variables model with equation error" can be downloaded at the author's page:
http://www.ime.usp.br/~patriota/curriculo-eng.html#Published_papers
Basically, you must take into account the variability of both variables to avoid inconsistent estimators, unreliable hypothesis tests and confidence intervals. | What can you do when you have predictor variables that are based on group averages with different sa | The paper "A heteroscedastic structural errors-in-variables model with equation error" can be downloaded at the author's page:
http://www.ime.usp.br/~patriota/curriculo-eng.html#Published_papers
basic | What can you do when you have predictor variables that are based on group averages with different sample sizes?
The paper "A heteroscedastic structural errors-in-variables model with equation error" can be downloaded at the author's page:
http://www.ime.usp.br/~patriota/curriculo-eng.html#Published_papers
Basically, you must take into account the variability of both variables to avoid inconsistent estimators, unreliable hypothesis tests and confidence intervals. | What can you do when you have predictor variables that are based on group averages with different sa
The paper "A heteroscedastic structural errors-in-variables model with equation error" can be downloaded at the author's page:
http://www.ime.usp.br/~patriota/curriculo-eng.html#Published_papers
basic |
20,687 | What can you do when you have predictor variables that are based on group averages with different sample sizes? | One way to deal with this would be to suppose that every city has a distribution with the same variance $σ^2$ for the individual responses. Then each city's average measurement $X_i$ for the predictor would have variance $σ^2/n_i$, where $n_i$ is the number of individuals in the average for city $i$. That would be a simple way to deal with the heteroskedasticity. I don't know any special name for this form of the regression problem. | What can you do when you have predictor variables that are based on group averages with different sa | One way to deal with this would be to suppose that every city has a distribution with the same variance $σ^2$ for the individual responses. Then each city's average measurement $X_i$ for the predictor | What can you do when you have predictor variables that are based on group averages with different sample sizes?
One way to deal with this would be to suppose that every city has a distribution with the same variance $σ^2$ for the individual responses. Then each city's average measurement $X_i$ for the predictor would have variance $σ^2/n_i$, where $n_i$ is the number of individuals in the average for city $i$. That would be a simple way to deal with the heteroskedasticity. I don't know any special name for this form of the regression problem. | What can you do when you have predictor variables that are based on group averages with different sa
One way to deal with this would be to suppose that every city has a distribution with the same variance $σ^2$ for the individual responses. Then each city's average measurement $X_i$ for the predictor |
20,688 | Interrater reliability for events in a time series with uncertainty about event time | Here are a couple of ways to think about this.
1)
A) You could treat each full sequence of codings as an ordered set of events (i.e. ["head nod", "head shake", "head nod", "eyebrow raised"] and ["head nod", "head shake", "eyebrow raised"]), then align the sequences using an algorithm that makes sense to you ( http://en.wikipedia.org/wiki/Sequence_alignment ). You could then compute inter-coder reliability for the entire sequence.
B) Then, again using the aligned sequences, you could compare when they said an event happened, given that they both observed the event.
2)
Alternatively, you could model this as a Hidden Markov Model, and use something like the Baum-Welch algorithm to impute the probabilities that, given some actual event, each coder actually coded the data correctly. http://en.wikipedia.org/wiki/Baum-Welch_algorithm | Interrater reliability for events in a time series with uncertainty about event time | Here are a couple of ways to think about this.
1
A) You could treat each full sequence of codings as a ordered set of events (i.e. ["head nod", "head shake", "head nod", "eyebrow raised"] and ["head nod", "h | Interrater reliability for events in a time series with uncertainty about event time
Here are a couple of ways to think about this.
1)
A) You could treat each full sequence of codings as an ordered set of events (i.e. ["head nod", "head shake", "head nod", "eyebrow raised"] and ["head nod", "head shake", "eyebrow raised"]), then align the sequences using an algorithm that makes sense to you ( http://en.wikipedia.org/wiki/Sequence_alignment ). You could then compute inter-coder reliability for the entire sequence.
B) Then, again using the aligned sequences, you could compare when they said an event happened, given that they both observed the event.
2)
Alternatively, you could model this as a Hidden Markov Model, and use something like the Baum-Welch algorithm to impute the probabilities that, given some actual event, each coder actually coded the data correctly. http://en.wikipedia.org/wiki/Baum-Welch_algorithm | Interrater reliability for events in a time series with uncertainty about event time
Here's a couple of ways to think about.
1
A) You could treat each full sequence of codings as a ordered set of events (i.e. ["head nod", "head shake", "head nod", "eyebrow raised"] and ["head nod", "h |
20,689 | Interrater reliability for events in a time series with uncertainty about event time | Rather than slicing the data up into arbitrary pieces, you could consider the actual time differences.
Coder 1 reports time and action:
049 D
113 C
513 C
724 G
A simple way to see which coder is the most reliable according to other coders is by giving him a score like so:
Add a point for each other coder that reported a D between (049-025) and (049+025)
Add a point for each other coder that reported a C between (113-025) and (113+025)
Add a point for each other coder that reported a C between (513-025) and (513+025)
Add a point for each other coder that reported a G between (724-025) and (724+025)
Subtract a point for each reported action.
If closeness is important for you, consider alternatives like these:
Add 25/(Time_Thiscoder-Time_Othercoder)^2 points for each other coder that reported a matching observation.
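A minimal Python sketch of this windowed scoring (the coder data here is made up):
coders = {
    1: [(49, "D"), (113, "C"), (513, "C"), (724, "G")],
    2: [(55, "D"), (120, "C"), (700, "G")],
}

def score(coder, window=25):
    s = -len(coders[coder])              # subtract a point per reported action
    for t, a in coders[coder]:
        for other, events in coders.items():
            if other == coder:
                continue
            # add a point for each matching action within the time window
            s += sum(1 for t2, a2 in events if a2 == a and abs(t2 - t) <= window)
    return s

print({c: score(c) for c in coders})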
With all problem information available it should not be hard to implement this idea in a practical way.
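A minimal sketch of the windowed scoring rule; the second coder's data is invented for illustration:

```python
def score_coder(coder, others, window=25):
    """Score one coder's (time, action) list against the other coders."""
    score = 0
    for t, action in coder:
        for other in others:
            # +1 for each other coder reporting the same action in the window
            if any(abs(t - t2) <= window and action == a2 for t2, a2 in other):
                score += 1
        score -= 1  # -1 per reported action, penalising over-reporting
    return score

coder1 = [(49, "D"), (113, "C"), (513, "C"), (724, "G")]
coder2 = [(51, "D"), (120, "C"), (700, "G")]  # hypothetical second coder
print(score_coder(coder1, [coder2]))          # 3 matches - 4 actions = -1
```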
20,690 | Should I use unpenalized logistic regression, lasso or ridge for explanatory modelling? | From your post, and from your reference to the statement “explanatory modelling aims to minimize bias,” I suspect you have the impression that the key differentiator between explanatory and predictive approaches is the choice of statistical procedure.
Granted, in conducting explanatory analysis you will probably be best served by avoiding variable-selection algorithms, since they are all blind to the difference between, colloquially, correlation and causation. That probably means choosing “vanilla” regression over LASSO and ridge, not to mention neural networks, random forests, SVM, CART, etc. To paraphrase Davis (below), algorithms have no way to tell whether one variable preceded another; whether it is more objectively or more subjectively measured; or whether it is typically more generative (like socioeconomic status) or less (like one's choice of breakfast cereal).
But sound, effective, replicable explanatory modeling differs from predictive modeling in other ways. The former needs to be informed by several activities designed to uncover as much as possible about the variables that matter to the outcome – and that give one leverage over that outcome – as well as about the functional forms of those relationships. These activities might include --
A deep literature review.
Consultation with knowledgeable experts and colleagues.
Consultation with less-knowledgeable people. Fresh perspectives from non-experts often yield ideas useful to the analyst.
(In many cases) more intensive, deliberate, and resourceful data collection than would be required for purely predictive analysis. You will not be satisfied merely to connect Y with some proxy, some indicator, of a cause; you will want to capture the cause itself as closely as you can.
Pinning down cause-and-effect relationships is usually much more difficult than succeeding at prediction. There are some very useful hands-on guides to causal analysis (e.g., by James A. Davis and by Joshua D. Angrist & Jorn-Steffen Pischke), and they should be prized, because sources like these are far less common than those that skip causal considerations in favor of telling how to conduct a given statistical procedure, or how to write the applicable code. Not that there aren’t some tremendous sources in this category, too.
(Secondarily, when you talk about choosing a predictive model that “gives the best predictive performance on test data,” I hope you mean on multiple iterations, i.e., across many, many instances of building a model using training data and then testing it on fresh data.)
20,691 | What are some useful data augmentation techniques for deep convolutional neural networks? | Sec. 1: Data Augmentation Since deep networks need to be trained on a
huge number of training images to achieve satisfactory performance, if
the original image data set contains limited training images, it is
better to do data augmentation to boost the performance. Also, data
augmentation becomes the thing must to do when training a deep
network.
There are many ways to do data augmentation, such as the popular horizontally flipping, random crops and color jittering. Moreover,
you could try combinations of multiple different processing, e.g.,
doing the rotation and random scaling at the same time. In addition,
you can try to raise saturation and value (S and V components of the
HSV color space) of all pixels to a power between 0.25 and 4 (same
for all pixels within a patch), multiply these values by a factor
between 0.7 and 1.4, and add to them a value between -0.1 and 0.1.
Also, you could add a value between [-0.1, 0.1] to the hue (H
component of HSV) of all pixels in the image/patch.
Krizhevsky et al. 1 proposed fancy PCA when training the famous Alex-Net in 2012. Fancy PCA alters the intensities of the RGB
channels in training images. In practice, you can firstly perform PCA
on the set of RGB pixel values throughout your training images. And
then, for each training image, just add the following quantity to
each RGB image pixel (i.e., I_{xy}=[I_{xy}^R,I_{xy}^G,I_{xy}^B]^T):
[bf{p}_1,bf{p}_2,bf{p}_3][alpha_1 lambda_1,alpha_2 lambda_2,alpha_3
lambda_3]^T where, bf{p}_i and lambda_i are the i-th eigenvector and
eigenvalue of the 3times 3 covariance matrix of RGB pixel values,
respectively, and alpha_i is a random variable drawn from a Gaussian
with mean zero and standard deviation 0.1. Please note that, each
alpha_i is drawn only once for all the pixels of a particular
training image until that image is used for training again. That is
to say, when the model meets the same training image again, it will
randomly produce another alpha_i for data augmentation. In 1, they
claimed that “fancy PCA could approximately capture an important
property of natural images, namely, that object identity is invariant
to changes in the intensity and color of the illumination”. To the
classification performance, this scheme reduced the top-1 error rate
by over 1% in the competition of ImageNet 2012.
(Source: Must Know Tips/Tricks in Deep Neural Networks (by Xiu-Shen Wei)) | What are some useful data augmentation techniques for deep convolutional neural networks? | Sec. 1: Data Augmentation Since deep networks need to be trained on a
20,692 | Eigenfunctions and eigenvalues of the exponential kernel | First of all, your question is not quite well-posed. The reason is that Mercer's theorem only applies to a kernel defined on a finite measure space. Practically, this means that in order to apply the theorem, the eigenfunctions $\phi_i$ are in fact taken with respect to the operator $$K_{\mu}(f)= \left( x\mapsto \int_{\mathbb{R}} K(x,y)f(y)\mu(dy)\right)$$ where $\mu(dy)=p(y)dy$ is a probability measure. The $\phi_i$ are then orthonormal with respect to the inner product defined by $\langle f,g\rangle=\int f(x)g(x)\mu(dx)$.
It is simple to see that the condition $\mu(\mathbb{R})<\infty$ is necessary for Mercer's theorem to hold. Consider the identity:
$$\int_{\mathbb{R}} e^{-(x-y)^2}xdx=\sqrt{\pi}y$$
This shows that the function $f(x)=x$ is an eigenfunction of the operator $\int K(x,y)f(y)dy$. But evidently $\int_{\mathbb{R}} f(x)^2dx=\infty$, which shows that it is not possible to construct an orthonormal basis for $K$ without introducing a weighting function $p(y)$.
Secondly, I am assuming there should be a minus sign in the definition of the kernel, i.e. $e^{-|x-y|}$; otherwise the resulting kernel fails to be positive definite.
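A quick numerical check of the identity above (purely illustrative):

```python
import numpy as np
from scipy.integrate import quad

y = 0.7
val, _ = quad(lambda x: np.exp(-(x - y) ** 2) * x, -np.inf, np.inf)
print(val, np.sqrt(np.pi) * y)  # both ~ 1.2407: f(x) = x is an eigenfunction
```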
20,693 | Eigenfunctions and eigenvalues of the exponential kernel | Assuming the Hamiltonian of your system is the |x> operator, what you are really trying to find is the reciprocal space of x. The easiest way to do this is to take the Fourier transform of k(x,x'), which by definition is a linear combination of the k(x,x') states. You can read more here: https://en.wikipedia.org/wiki/Fourier_transform
I also recommend you switch variables to r = x-x' to simplify the math.
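Assuming the kernel really is $e^{-|x-x'|}$, its Fourier transform in $r=x-x'$ is $2/(1+\omega^2)$, so each frequency $\omega$ carries that eigenvalue; a quick numerical check:

```python
import numpy as np
from scipy.integrate import quad

w = 1.3  # an arbitrary frequency
val, _ = quad(lambda r: np.exp(-abs(r)) * np.cos(w * r), -50, 50)
print(val, 2 / (1 + w ** 2))  # both ~ 0.7435
```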
20,694 | How can I get feature importance for Gaussian Naive Bayes classifier? | The discriminative value of a feature is based on its statistical distance between classes.
I have calculated the mean and variance for each feature and each class
Using your feature $i$ class $j$ estimated mean $\hat{\mu}_{i,j}$ and estimated variance $\hat{\sigma}_{i,j}^2$, one approach would be to compute the symmetric KL divergence of each feature for the two classes you compare. The feature with the largest distance between its class distributions is the most discriminative one for that pair.
KL divergence for two normal distributions is easy to compute.
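A sketch using the univariate-Gaussian closed form for KL divergence; the example means and variances are made up:

```python
import numpy as np

def kl_normal(mu_p, var_p, mu_q, var_q):
    """KL( N(mu_p, var_p) || N(mu_q, var_q) ), univariate Gaussians."""
    return 0.5 * (np.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def symmetric_kl(mu_p, var_p, mu_q, var_q):
    return (kl_normal(mu_p, var_p, mu_q, var_q)
            + kl_normal(mu_q, var_q, mu_p, var_p))

# Hypothetical class-conditional estimates for one feature:
print(symmetric_kl(0.0, 1.0, 1.5, 2.0))
```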
20,695 | How can I get feature importance for Gaussian Naive Bayes classifier? | I would determine the ROC-AUC for all possible class comparisons using each feature as a separate input. (The original answer included a plot of the ROC curves for all possible class comparisons in a 4-class problem, though that run used multiple features.)
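A sketch of that per-feature, one-vs-one screening, assuming scikit-learn and a feature matrix X with labels y:

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import roc_auc_score

def per_feature_pairwise_auc(X, y):
    """AUC of each single feature for every pair of classes."""
    out = {}
    for a, b in combinations(np.unique(y), 2):
        mask = np.isin(y, [a, b])
        for j in range(X.shape[1]):
            auc = roc_auc_score(y[mask] == b, X[mask, j])
            out[(a, b, j)] = max(auc, 1.0 - auc)  # orientation-free
    return out
```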
20,696 | How can I get feature importance for Gaussian Naive Bayes classifier? | I wrote my own feature importance method using Naive Bayes, which I will present here. The proposal is a bit expensive computationally because you basically have 6 binomial models.
Split the data into 80/20 training/test sets. Repeat the following steps 25-50 times.
For each iteration and each pair of classes (binomial), create a binomial model:
A. Train a classifier on the 80%, test on the 20%. This is your base-level performance.
B. (Repeat this step 200 times, once for each feature.) Permute/shuffle the values of one feature at a time in the training data. Re-train the classifier and score the test set. Record the drop in performance. Shuffling the values of a specific feature breaks the correlation between feature and label. The more "important" features will have a larger drop in performance across the 25-50 different iterations.
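A sketch of the permutation scheme for one class pair (binary y assumed), using scikit-learn's GaussianNB; the iteration counts are scaled down from the description above:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def permutation_importance_nb(X, y, n_iter=25, seed=0):
    """Mean accuracy drop per feature when its values are shuffled."""
    rng = np.random.default_rng(seed)
    drops = np.zeros((n_iter, X.shape[1]))
    for it in range(n_iter):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.2, random_state=it)
        base = accuracy_score(y_te, GaussianNB().fit(X_tr, y_tr).predict(X_te))
        for j in range(X.shape[1]):
            X_perm = X_tr.copy()
            rng.shuffle(X_perm[:, j])  # break this feature's link to the label
            model = GaussianNB().fit(X_perm, y_tr)
            drops[it, j] = base - accuracy_score(y_te, model.predict(X_te))
    return drops.mean(axis=0)
```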
20,697 | Why does adding a lag effect increase mean deviance in a Bayesian hierarchical model? | Here are my thoughts:
Instead of DIC, BIC, AIC, I suggest working directly with the marginal likelihood (also known as the evidence) if you can afford it. The larger the evidence, the more likely your model class. It may not make a large difference, but DIC, BIC, AIC are, after all, only approximations.
In order to check if a lag-effect leads to a larger marginal likelihood, I suggest performing the following initial check: Take the model that includes the lag-parameter. (a) Fix the lag-parameter to $0.18$. (b) Set the lag-parameter to zero. Compute the marginal likelihood of both model classes. Model class (a) should have the larger marginal likelihood.
Let's go a step further: Take the model that does not consider the lag-effect (c) and compute its marginal likelihood. Next, take your model class (d) that incorporates the lag-effect and has a prior on the lag-parameter; compute the marginal likelihood of (d). You would expect that (d) has a larger marginal likelihood. So what if you don't find that?
(1) The marginal likelihood considers the model class as a whole. This includes the lag-effect, the number of parameters, the likelihood, the prior.
(2) Comparing models that have a different number of parameters is always delicate, if there is considerable uncertainty in the prior of the additional parameters.
(3) If you specify the uncertainty in the prior of your lag-parameter unreasonably large, you penalize the entire model class.
(4) What is the information that supports equal probabilities for negative lags and for a positive lag? I believe that it is very unlikely to observe a negative lag, and this should be incorporated in the prior.
(5) The prior that you chose on your lag-parameter is uniform. This is rarely a good choice: Are you absolutely sure that your parameters must really lie inside the specified bounds? Does each lag-value inside the bounds really have equal likelihood? My suggestion: go with a beta distribution if you are sure that the lag is bounded, or with a log-normal if you can exclude values smaller than zero.
(6) This is a particular example where the use of non-informative priors is not good (looking at the marginal likelihood): You will always favor the model that has a smaller number of uncertain parameters, no matter how well or badly the model with more parameters could do.
I hope these thoughts give you some new ideas and hints!
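As a toy illustration of comparing model classes by their evidence, here is a one-parameter model with made-up data, computing the marginal likelihood by quadrature under two different priors on the lag:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

y = np.array([0.25, 0.12, 0.31, 0.18])  # invented observations, y_i ~ N(lag, 1)

def evidence(prior_pdf, lo=0.0, hi=1.0):
    """Marginal likelihood: integrate likelihood * prior over the lag."""
    integrand = lambda lag: np.prod(stats.norm.pdf(y, lag, 1.0)) * prior_pdf(lag)
    val, _ = quad(integrand, lo, hi)
    return val

print(evidence(lambda lag: stats.beta.pdf(lag, 2, 5)))     # informative prior
print(evidence(lambda lag: stats.uniform.pdf(lag, 0, 1)))  # flat prior
```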
20,698 | How to compare forecasting methods? | Model:
$y_{t+1}=f(y_{0},\ldots,y_{t}, \overrightarrow{a})+\epsilon_{t}$
$\overrightarrow{a}$ is a vector of model parameters
What you are currently proposing is essentially:
For each model $f(y_{0},\ldots,y_{t}, \overrightarrow{a})$ use some sort of Least Squares / NLS to fit a model (find $\overrightarrow{a}$)
Choose an $\alpha$ value
Use some function $g(f(y_{0},\ldots,y_{t}, \overrightarrow{a}),\alpha)$ to evaluate the models.
If the optimal model depends importantly on $\alpha$, then what?
Can you do endogenous sampling? If so, how about directly estimating the optimal (i.e., maximizing) functions $g(f(y_{0},\ldots,y_{t}, \overrightarrow{a}),\alpha)$ for multiple values of $\alpha$? You could take the models and run them in parallel, making a family of predictions. You could then increase the sampling likelihood when the models disagreed particularly in their predictions. This would increase the informativeness of the limited sampling in distinguishing between models.
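A minimal sketch of step 3: evaluate a family of fitted models under an $\alpha$-dependent loss $g$ and check whether the ranking is stable across $\alpha$ (the asymmetric loss and all numbers are invented):

```python
import numpy as np

def rank_models(y_true, preds, alphas):
    """Rank models under an alpha-weighted asymmetric squared loss."""
    rankings = {}
    for alpha in alphas:
        def loss(e):
            return np.mean(np.where(e > 0, alpha * e ** 2, e ** 2))
        scores = {name: loss(p - y_true) for name, p in preds.items()}
        rankings[alpha] = sorted(scores, key=scores.get)
    return rankings

y = np.array([1.0, 1.2, 0.9, 1.1])
preds = {"model_A": np.array([1.1, 1.1, 1.0, 1.0]),
         "model_B": np.array([0.9, 1.3, 0.8, 1.2])}
print(rank_models(y, preds, alphas=[0.5, 1.0, 2.0]))
```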
20,699 | Cost functions for contextual bandits | One should probably consult here, for initial guidance: https://arxiv.org/pdf/1802.04064.pdf
It's an empirical evaluation.
20,700 | Predictive performance depends more on expertise of data analyst than on method? | Actually, I have heard a rumor that decent learning machines are usually better than experts, because the human inclination is to minimize variance at the expense of bias (oversmoothing), leading to poor predictive performance in a new dataset. The machine is calibrated to minimize MSE, and thus tends to do better in terms of prediction in a new dataset.