47,501

Train an SVM-based classifier while taking into account the weight information

Try this package: https://CRAN.R-project.org/package=WeightSVM
It uses a modified version of 'libsvm' and is able to deal with instance weighting. You can assign lower weights to some subjects.
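A minimal sketch of how the call might look. The wsvm() function and its weight argument are assumptions about the package's interface here; check ?wsvm after installing before relying on the exact signature.

```r
# Hypothetical usage sketch: WeightSVM::wsvm() is assumed to mirror
# e1071::svm() with an extra per-observation 'weight' vector.
set.seed(1)
x <- matrix(rnorm(100 * 2), ncol = 2)
y <- factor(ifelse(x[, 1] + x[, 2] > 0, "pos", "neg"))
w <- ifelse(y == "pos", 1, 0.5)  # assign lower weights to some subjects

if (requireNamespace("WeightSVM", quietly = TRUE)) {
  fit <- WeightSVM::wsvm(x, y, weight = w, kernel = "linear")
  table(predict(fit, x), y)
}
```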
47,502

Are Cohen's d (effect size) and d prime from signal detection theory measuring the same thing?

They are essentially the same thing: differences between means measured in units of standard deviations, as you say. There are some theoretical differences in the substance from which they arise. Cohen's d (and the closely related Hedges' g) are calculated on real observations, whereas the distributions underlying observed responses--and used to compute d'--are latent. In the signal detection world, there has been a good deal of work on the possibility that these latent distributions may not be Gaussian in some cases, with most researchers arguing that in that case d' is not an appropriate metric. Other measures, such as the area under the ROC curve, are advocated in that case. As far as I'm aware, in meta-analysis people are fine with using a scaled mean difference even if the distributions are not Gaussian. Nonetheless, they are fundamentally the same idea. You should realize that statistics is loaded with things that are the same but have different names, having historically developed in isolation from each other.
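To make the parallel concrete, here is a small base-R sketch (with simulated data, so the numbers are purely illustrative) computing Cohen's d as a mean difference in pooled-SD units, alongside the textbook equal-variance form of d':

```r
# Cohen's d: observed mean difference in units of the pooled SD
set.seed(42)
g1 <- rnorm(50, mean = 0.5)  # "signal" group
g2 <- rnorm(50, mean = 0.0)  # "noise" group
pooled_sd <- sqrt(((length(g1) - 1) * var(g1) + (length(g2) - 1) * var(g2)) /
                  (length(g1) + length(g2) - 2))
d <- (mean(g1) - mean(g2)) / pooled_sd

# d': the same "means in SD units" idea, but on latent equal-variance
# Gaussians, recovered from hit and false-alarm rates
dprime <- qnorm(0.84) - qnorm(0.16)  # roughly 2
```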
47,503

Is additive logistic regression equivalent to boosted decision stumps?

Boosted decision stumps are just a special case of generalized additive models (i.e. if the logistic loss function is used then, technically, one could call boosted decision stumps an additive logistic model). Having said that, people typically use specialized names for boosted models - for example the Gradient Boosting Machine, which also belongs to the class of generalized additive models (with a decision tree as the base learner) and supports multiple loss functions, including the logistic one.
47,504

Logistic regression performs better on validation data

No, this isn't necessarily a problem, especially if the sample size is small. It could easily be that purely by chance more of the "easy" patterns are in the validation set and more of the "difficult" ones are in the training set. If you were to repeatedly re-sample the data to form randomly partitioned training and validation sets, you would expect the average error on the training set to be lower than on the validation set, but that does not mean that it will be lower on every run of the experiment.
If your sample size is small, this variability means that the validation-set performance estimate has high variance and isn't a reliable indicator of performance, so you should probably use some sort of (repeated) cross-validation or perhaps bootstrapping instead.
I have seen this sort of thing before, as I have been working on the problems in model selection caused by the variance of the model selection criterion. It doesn't necessarily indicate a problem with the model, but it does suggest that the sample of data is too small.
If the relative class frequencies are very disparate, then it may be that the validation set happens to have fewer minority-class examples than the training set, which might also affect the performance estimate. In that case, use a stratified bootstrap or cross-validation, which maintains the same proportion of positive and negative patterns in the training set and validation set.
47,505

Logistic regression performs better on validation data

The sample size is too small for single-split validation. To obtain a sufficiently precise estimate, all steps of 10-fold cross-validation should be repeated 100 times (or at least 50). Or use the bootstrap with perhaps 300 resamples. The problem can be uncovered by doing another 70-30 split and noting differences in the model fitted and in the validation statistics.
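A base-R sketch of the repeated 10-fold scheme (the data, model formula, and accuracy metric here are placeholders; substitute your own):

```r
set.seed(1)
n   <- 200
dat <- data.frame(x = rnorm(n))
dat$y <- rbinom(n, 1, plogis(dat$x))

# average accuracy over 'reps' complete runs of k-fold cross-validation
repeated_cv_acc <- function(data, reps = 50, k = 10) {
  accs <- replicate(reps, {
    folds <- sample(rep(1:k, length.out = nrow(data)))
    mean(sapply(1:k, function(i) {
      fit  <- glm(y ~ x, family = binomial, data = data[folds != i, ])
      pred <- predict(fit, data[folds == i, ], type = "response") > 0.5
      mean(pred == (data$y[folds == i] == 1))
    }))
  })
  mean(accs)
}

repeated_cv_acc(dat)
```

Averaging over many random fold assignments reduces the variance contributed by any single partition of a small data set.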
47,506

More details on bootstrap procedure to estimate confidence interval of sample SD

Yes, you missed something, or rather added something. You're doing a parametric bootstrap, which is only appropriate if you know something about the kind of distribution you expect. Furthermore, you'd estimate that parametric distribution using maximum likelihood. In your case, where you have no idea of the distribution, leave out the assumption of normality and use simple case resampling. Get your sample from the data, not from the theoretical distribution. So just resample 25 from your sample of 25 WITH replacement. Do that a large number of times and you can generate a confidence interval. If I assume your data is y, then in R the code might typically be:
library(boot)
sdb <- function(y, i) sd(y[i])  # boot needs a function of y that can index y
b <- boot(y, sdb, R = 1000)
boot.ci(b)
But you might prefer to do such a simple example by more explicit means and thereby see what's really going on and play with the guts of it. Note that b below is not the same thing as b above.
b <- sapply(1:1000, function(x) {
    s <- sample(y, 25, replace = TRUE)
    sd(s)
})
b <- sort(b)
b[25]   # low end of the 95% CI (2.5th percentile)
b[975]  # high end of the 95% CI (97.5th percentile)
47,507

Are HAC estimators used for estimation of regression coefficients?

HAC procedures are just about providing consistent estimates of the standard errors. They do not change the estimation of the coefficients. If you have strict exogeneity with serial correlation, your coefficients are unbiased, but the standard errors are incorrect. HAC standard errors address the latter point.
As you allude to, this does not give efficient coefficient estimates. To achieve efficiency, in economics at least, we typically use a Cochrane-Orcutt/Prais-Winsten procedure. This requires much stronger modeling assumptions to estimate the structure of the serial correlation, however.
They are analogous to Eicker-White heteroskedasticity-robust standard errors. That procedure does not alter estimation either; it only changes the estimates of the standard errors to ensure that they are consistent in the presence of heteroskedasticity. The efficient fix would be weighted least squares, but this requires modeling the form of the heteroskedasticity.
47,508

How do I propagate error values through a matrix diagonalization?

The propagation will depend on the diagonalization algorithm--which might be a black box--as well as the multivariate distribution of the errors. Pursuing an analytical solution therefore looks unpromising. Why not just compute an empirical distribution? That is, draw a large number of variants of the original matrix from the hypothesized error distribution and diagonalize them. Study the output distribution of the eigenvectors and eigenvalues.
There are some subtleties, because there will not be a definite matching among the lists of eigenvalues. For instance, in one iteration the sorted eigenvalues might be $(1.0, 0.99, 0.17)$ and in the next they might be $(1.01, 0.98, 0.17)$. Is the $1.01$ in the latter a slight variation of the $1.0$ in the former, or perhaps has the $0.99$ been perturbed into $1.01$ and the $1.0$ into $0.98$? It is impossible to know. Thus, you need to characterize the multivariate distribution of multisets of eigenvalues rather than $n$-tuples of eigenvalues.
The same problem attaches to the eigenvectors, but it gets worse, because there is no unique normalization of the eigenvectors. (They are determined only up to sign.) However, these problems are no different in nature than the ambiguities present in other geometric problems, such as characterizing the directions of linear features in a plane (which can be given only up to a multiple of 180 degrees), and so should not present any additional conceptual challenge; they are just going to be a nuisance.
Here is an example of the empirical distributions of the sorted eigenvalues of a 4 by 4 matrix, using 2500 draws from the error distribution. [Scatterplot matrix figure not included here.] The scatterplot matrix also shows the lines y = x on each plot to emphasize the constraints imposed by sorting the four eigenvalues.
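The simulation itself is straightforward in R. Here is a sketch for a 4 by 4 symmetric matrix with i.i.d. Gaussian entry errors (both the matrix and the error SD are made up for illustration):

```r
set.seed(7)
A <- matrix(c(4, 1, 0, 0,
              1, 3, 1, 0,
              0, 1, 2, 1,
              0, 0, 1, 1), 4, 4)   # illustrative "measured" matrix
sigma <- 0.1                        # assumed SD of the entry errors
ndraw <- 2500

evals <- replicate(ndraw, {
  E <- matrix(rnorm(16, sd = sigma), 4, 4)
  E <- (E + t(E)) / 2               # keep the perturbed matrix symmetric
  sort(eigen(A + E, symmetric = TRUE, only.values = TRUE)$values,
       decreasing = TRUE)
})
# each column of the 4 x 2500 result is one draw's sorted eigenvalues;
# summarize, e.g., with percentile intervals per sorted position
apply(evals, 1, quantile, probs = c(0.025, 0.975))
```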
47,509

Vertical line graphs in R

You can use the plot function with type = "h" to get the vertical lines and col to specify the colors, using rep to create the vector of colors that you want, as follows:
# simulate some data
x <- runif(15000)
x[sample(15000, 50)] <- runif(50, 0, 5)
# make the plot
plot(x, type = "h", col = rep(c("red", "blue", "green"), each = 5000))
This produces the following plot (yellow looked terrible): [figure not included here]
47,510

Vertical line graphs in R

Use a barplot in combination with the grDevices package to create a color palette.
require(grDevices)
# data
dat <- sample(1:10, 15000, prob = runif(10), replace = TRUE)
dat <- sort(dat)
plotdat <- as.data.frame(table(dat))
plotdat[, 2] <- plotdat[, 2] / sum(plotdat[, 2])
# generate colors
colors <- heat.colors(10)
# and sort them according to frequency
colors <- colors[order(order(plotdat[, 2], decreasing = TRUE))]
barplot(plotdat[, 2], names.arg = as.character(1:10), col = colors)
This creates a plot with the property that the higher the color heat, the higher the frequency.
47,511

What is a reasonable sample size for correlation analysis for both overall and sub-group analyses?

When it comes to sample size, bigger is better, but we often have to take what we get. With the smaller sample sizes, your estimates of the correlation are going to become extremely noisy, and comparisons between different estimates (which I expect is your primary goal in the subset analyses) are going to be particularly noisy.
This online tutorial on standard errors (as a pdf) contains formulas for the SE of the correlation coefficient (as well as of the Fisher transformation of the correlation, which is a better scale on which to measure the SE). You'll see that the SE scales approximately as $1/\sqrt{n}$.
For a correlation of about 0.5, the SE with a sample size of 200 will be about 0.06; with a sample size of 50 it will be about double that.
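For reference, one common large-sample approximation is SE(r) ~ (1 - r^2)/sqrt(n - 1), while the Fisher-transformed correlation has SE = 1/sqrt(n - 3); the quoted figures can be reproduced directly:

```r
se_r <- function(r, n) (1 - r^2) / sqrt(n - 1)  # approximate SE of r
se_z <- function(n) 1 / sqrt(n - 3)             # SE on Fisher's z scale

se_r(0.5, 200)  # about 0.053
se_r(0.5, 50)   # about 0.107 -- roughly double
se_z(200)       # about 0.071
```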
47,512

Performing contrasts among treatment levels in survival analysis

The methods description does not match up with anything I see in Crawley's chapter on survival analysis. His discussion of the example he used with three levels seems pretty rudimentary (one might even say naive, but I am not a big fan of his book). There is no surv function, and the closest function, Surv, is not a regression function at all, but rather a method to construct an object suitable for use on the LHS of formulae in a regression model.
Likewise for the "contrasts procedure". There is no such thing. Factors in R generate treatment contrasts unless otherwise specified to the regression function, but there is no "contrasts procedure". If there were three levels under consideration, then there would be a reference level whose "effect" would be included in the intercept, and there would be two coefficients, one for each of the other levels. Those coefficients would be the difference from the reference level measured on the log-hazard scale. Exponentiated, they would become relative hazards.
I do not see that the multiple comparisons problem is specific to survival analysis. You are estimating parameters, and if they are normally distributed under asymptotic assumptions, then you should be able to apply reasonable methods. There are a variety of such adjustments supported in the p.adjust function in the stats package.
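For example, given the unadjusted p-values from a set of pairwise contrasts (made-up numbers here):

```r
p <- c(0.01, 0.04, 0.03)            # p-values from three pairwise contrasts
p.adjust(p, method = "bonferroni")  # 0.03 0.12 0.09
p.adjust(p, method = "holm")        # 0.03 0.06 0.06 -- never more conservative
```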
47,513

Performing contrasts among treatment levels in survival analysis

In the R rms package there are wrapper functions for the survival package's coxph and survreg functions. When you use one of these two functions you can use contrast.rms to easily obtain single-d.f. or multiple-d.f. contrasts. Type ?contrast.rms for guidance. You need to substitute cph for coxph when using rms.
47,514

What if a numerator term is zero in Naive Bayes?

One method to deal with this is to increment all counts by 1. This is known as Laplace smoothing. If you Google "Laplace smoothing" and "Naive Bayes" you will find many references.
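A toy illustration in R of why the increment matters (the vocabulary and counts are made up):

```r
counts <- c(the = 3, ball = 0, game = 2)  # word counts for one class
V <- length(counts)                       # vocabulary size

p_mle     <- counts / sum(counts)              # P(ball | class) = 0: any document
                                               # containing "ball" scores zero
p_laplace <- (counts + 1) / (sum(counts) + V)  # add-one smoothing: all > 0
p_laplace  # 0.500 0.125 0.375
```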
47,515

What if a numerator term is zero in Naive Bayes?

I start all counts at 1; in pseudo-code: Count = max(1, Count).
47,516

Representing the anova's interaction in R

Be very careful with ':' - it means a bunch of different things depending on the context in which you use it in R.
See ?interaction, ?formula, ?lm, and ?':'.
Here's an example of the interaction of two factors:
df <- data.frame(X = sample(letters[1:10], 200, replace = TRUE),
                 Y = sample(letters[1:10], 200, replace = TRUE))
> df$X:df$Y
[1] a:a i:i c:e g:g e:c e:h i:i j:f h:f a:j i:e c:h h:c h:a j:f i:g g:e a:c d:g f:j c:i h:g g:h g:d e:b a:a c:a
[28] e:e c:d b:e i:h i:j g:g d:b h:d j:d a:j e:i d:g i:e e:c e:e h:h j:b f:b a:g h:g b:j h:e j:d b:f d:i j:i b:c
[55] a:i c:b b:d g:h g:f h:i e:a h:e d:e d:f i:j a:a d:e i:b g:c d:g j:h c:g j:b i:d b:g e:c h:b e:g b:b h:g d:j
[82] j:i i:b d:a a:h h:f j:c c:j f:j e:g h:i g:f j:a b:e j:i a:j d:c g:j a:h h:c b:a c:f b:e f:d c:d j:d i:f d:j
[109] g:b j:i c:c h:b b:a f:c c:g j:i h:b j:e c:j c:b i:e f:i c:j g:i i:e h:i b:e i:d c:i j:i h:g g:j d:j a:h d:b
[136] c:f j:b a:e f:i c:j j:h g:i b:d i:j h:i g:i g:i g:j h:d g:g g:c f:g e:b j:a b:b f:e i:i g:h c:f i:f f:c a:f
[163] h:g e:f b:b b:j b:a i:g b:i h:j f:j a:f h:j b:a f:d h:g f:f d:a d:j d:g d:g g:b d:e e:b h:b g:a h:a h:g e:j
[190] d:d d:b e:h h:j f:g a:g f:i j:b d:h a:g j:e
100 Levels: a:a a:b a:c a:d a:e a:f a:g a:h a:i a:j b:a b:b b:c b:d b:e b:f b:g b:h b:i b:j c:a c:b c:c ... j:j
Essentially, this returns the interaction factor: every combination of a level of X with a level of Y.
In aov, lm, or any function that takes a formula as an argument, it means the "interaction effect" of those two variables.
df <- data.frame(y=runif(100),x=rnorm(100),z=rchisq(100,20))
lm(y ~ x + z, df)
lm(y ~ x:z, df)
lm(y ~ x*z, df) ## essentially y ~ x + z + x:z
It can also be shorthand when using numbers, or objects that store numbers:
1:4 ## 1, 2, 3, 4
a <- 5
b <- 10
a:b ## 5, 6, 7, 8, 9, 10
|
Representing the anova's interaction in R
|
Be very careful with the : operator; it means a bunch of different things depending on the context in which you use it in R.
See ?interaction, ?formula, ?lm, and ?':'
Here's an example of interaction:
df <- dat
|
Representing the anova's interaction in R
Be very careful with the : operator; it means a bunch of different things depending on the context in which you use it in R.
See ?interaction, ?formula, ?lm, and ?':'
Here's an example of interaction:
df <- data.frame(X=sample(letters[1:10],200, replace=T),Y=sample(letters[1:10],200, replace=T))
> df$X:df$Y
[1] a:a i:i c:e g:g e:c e:h i:i j:f h:f a:j i:e c:h h:c h:a j:f i:g g:e a:c d:g f:j c:i h:g g:h g:d e:b a:a c:a
[28] e:e c:d b:e i:h i:j g:g d:b h:d j:d a:j e:i d:g i:e e:c e:e h:h j:b f:b a:g h:g b:j h:e j:d b:f d:i j:i b:c
[55] a:i c:b b:d g:h g:f h:i e:a h:e d:e d:f i:j a:a d:e i:b g:c d:g j:h c:g j:b i:d b:g e:c h:b e:g b:b h:g d:j
[82] j:i i:b d:a a:h h:f j:c c:j f:j e:g h:i g:f j:a b:e j:i a:j d:c g:j a:h h:c b:a c:f b:e f:d c:d j:d i:f d:j
[109] g:b j:i c:c h:b b:a f:c c:g j:i h:b j:e c:j c:b i:e f:i c:j g:i i:e h:i b:e i:d c:i j:i h:g g:j d:j a:h d:b
[136] c:f j:b a:e f:i c:j j:h g:i b:d i:j h:i g:i g:i g:j h:d g:g g:c f:g e:b j:a b:b f:e i:i g:h c:f i:f f:c a:f
[163] h:g e:f b:b b:j b:a i:g b:i h:j f:j a:f h:j b:a f:d h:g f:f d:a d:j d:g d:g g:b d:e e:b h:b g:a h:a h:g e:j
[190] d:d d:b e:h h:j f:g a:g f:i j:b d:h a:g j:e
100 Levels: a:a a:b a:c a:d a:e a:f a:g a:h a:i a:j b:a b:b b:c b:d b:e b:f b:g b:h b:i b:j c:a c:b c:c ... j:j
Essentially, this returns the interaction factor: every combination of a level of X with a level of Y.
In aov, lm, or any function that takes a formula as an argument, it means the "interaction effect" of those two variables.
df <- data.frame(y=runif(100),x=rnorm(100),z=rchisq(100,20))
lm(y ~ x + z, df)
lm(y ~ x:z, df)
lm(y ~ x*z, df) ## essentially y ~ x + z + x:z
It can also be shorthand when using numbers, or objects that store numbers:
1:4 ## 1, 2, 3, 4
a <- 5
b <- 10
a:b ## 5, 6, 7, 8, 9, 10
|
Representing the anova's interaction in R
Be very careful with the : operator; it means a bunch of different things depending on the context in which you use it in R.
See ?interaction, ?formula, ?lm, and ?':'
Here's an example of interaction:
df <- dat
|
47,517
|
Assumptions and pitfalls in competing risks model
|
Pintilie's book is an excellent book for understanding competing risks, but if you want to study the theoretical side of competing risks, take a look at Martin J. Crowder's book, Classical Competing Risks.
I wrote my master's thesis on competing risks and, from what I remember, there are some drawbacks/disadvantages when a competing risks analysis is conducted. The most well-known problem is the issue of identifiability. There were many papers and journals on this issue when I was preparing my thesis, and I believe many scholars are still writing papers about it.
The issue of identifiability arises from the nature of competing risks modelling: once death or failure has been observed from an identifiable cause of failure, it cannot occur again later from another cause.
Another issue arises when the cause of failure for a subject or unit is NOT precisely identified but can only be narrowed down to a subset of the potential risks. This phenomenon is called masking.
|
Assumptions and pitfalls in competing risks model
|
Pintilie's book is an excellent book for understanding competing risks, but if you want to study the theoretical side of competing risks, take a look at Martin J. Crowder's book, Classical Competing Ris
|
Assumptions and pitfalls in competing risks model
Pintilie's book is an excellent book for understanding competing risks, but if you want to study the theoretical side of competing risks, take a look at Martin J. Crowder's book, Classical Competing Risks.
I wrote my master's thesis on competing risks and, from what I remember, there are some drawbacks/disadvantages when a competing risks analysis is conducted. The most well-known problem is the issue of identifiability. There were many papers and journals on this issue when I was preparing my thesis, and I believe many scholars are still writing papers about it.
The issue of identifiability arises from the nature of competing risks modelling: once death or failure has been observed from an identifiable cause of failure, it cannot occur again later from another cause.
Another issue arises when the cause of failure for a subject or unit is NOT precisely identified but can only be narrowed down to a subset of the potential risks. This phenomenon is called masking.
|
Assumptions and pitfalls in competing risks model
Pintilie's book is an excellent book for understanding competing risks, but if you want to study the theoretical side of competing risks, take a look at Martin J. Crowder's book, Classical Competing Ris
|
47,518
|
How do I visualize changes in proportions compared to another period?
|
What is more important for you - between group comparison or the intra-group composition? For the former, a parallel coordinates plot seems to be a natural choice: http://charliepark.org/slopegraphs/
For the latter, a time series of percent stacked charts might look fine - you do not have to use 7 colors, just alternate them to emphasize the pattern.
|
How do I visualize changes in proportions compared to another period?
|
What is more important for you - between group comparison or the intra-group composition? For the former, a parallel coordinates plot seems to be a natural choice: http://charliepark.org/slopegraphs/
|
How do I visualize changes in proportions compared to another period?
What is more important for you - between group comparison or the intra-group composition? For the former, a parallel coordinates plot seems to be a natural choice: http://charliepark.org/slopegraphs/
For the latter, a time series of percent stacked charts might look fine - you do not have to use 7 colors, just alternate them to emphasize the pattern.
|
How do I visualize changes in proportions compared to another period?
What is more important for you - between group comparison or the intra-group composition? For the former, a parallel coordinates plot seems to be a natural choice: http://charliepark.org/slopegraphs/
|
47,519
|
How do I visualize changes in proportions compared to another period?
|
To me the slope graph looks really messy and I think I'd have trouble looking at it, especially across eight time series.
I am not an expert in graph design, so this may also be a no-go, but have you considered four colors with three types of plot type?
Though, I think there is an even better approach. I know you say that 4 colors is a no-go, I'm about to ignore that. It is probably canonically true... but you are describing fruits. These have canonical colors as well as shapes associated with them. If you use those colors and shapes I think it would be hard to go wrong.
Using colors alone there might be some confusion, e.g. green apple vs honeydew, red apple vs watermelon, etc. But using colors poses an additional problem: color blind individuals. You can test the extent to which this would be a problem by creating an image of your different colors and looking here: http://www.vischeck.com/vischeck/. Protanopia and Deuteranopia, forms of red-green color blindness, are by far the most common (occurring almost always in males). Even so, color blindness is a misnomer, and if you select your palette carefully the differences in shades may be sufficient.
In conjunction with a color approach, you want to use fruit shaped points. These are unlikely to be a default in a plotting program and you may have to spend some time in photoshop to make it look right. Even if you can't take that time, differing geometric plot points AND color should make things reasonable.
Moreover, you could use the approach I suggest with a slope graph.
As a side note, if you have a technically/mathematically astute audience, a Y-axis in log odds might mean more to them than percentages alone.
|
How do I visualize changes in proportions compared to another period?
|
To me the slope graph looks really messy and I think I'd have trouble looking at it, especially across eight time series.
I am not an expert in graph design, so this may also be a no-go, but have yo
|
How do I visualize changes in proportions compared to another period?
To me the slope graph looks really messy and I think I'd have trouble looking at it, especially across eight time series.
I am not an expert in graph design, so this may also be a no-go, but have you considered four colors with three types of plot type?
Though, I think there is an even better approach. I know you say that 4 colors is a no-go, I'm about to ignore that. It is probably canonically true... but you are describing fruits. These have canonical colors as well as shapes associated with them. If you use those colors and shapes I think it would be hard to go wrong.
Using colors alone there might be some confusion, e.g. green apple vs honeydew, red apple vs watermelon, etc. But using colors poses an additional problem: color blind individuals. You can test the extent to which this would be a problem by creating an image of your different colors and looking here: http://www.vischeck.com/vischeck/. Protanopia and Deuteranopia, forms of red-green color blindness, are by far the most common (occurring almost always in males). Even so, color blindness is a misnomer, and if you select your palette carefully the differences in shades may be sufficient.
In conjunction with a color approach, you want to use fruit shaped points. These are unlikely to be a default in a plotting program and you may have to spend some time in photoshop to make it look right. Even if you can't take that time, differing geometric plot points AND color should make things reasonable.
Moreover, you could use the approach I suggest with a slope graph.
As a side note, if you have a technically/mathematically astute audience, a Y-axis in log odds might mean more to them than percentages alone.
|
How do I visualize changes in proportions compared to another period?
To me the slope graph looks really messy and I think I'd have trouble looking at it, especially across eight time series.
I am not an expert in graph design, so this may also be a no-go, but have yo
|
47,520
|
Estimating PDF of continuous distribution from (few) data points
|
What you are looking for is kernel density estimation. You should find numerous hits in an internet search for these terms, and it is even on Wikipedia, so that should get you started. If you have R at your disposal, the function density provides what you need:
histAndDensity<-function(x, ...)
{
retval<-hist(x, freq=FALSE, ...)
lines(density(x, na.rm=TRUE), col="red")
invisible(retval)
}
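As a language-agnostic illustration (not part of the original answer), the Gaussian kernel density estimate is just an average of normal "bumps" centered on the data points; a minimal Python sketch:

```python
import math

def gaussian_kde(data, bandwidth):
    """Return f_hat, the average of Gaussian bumps centered on the data."""
    n = len(data)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    def f_hat(x):
        return norm * sum(math.exp(-0.5 * ((x - xi) / bandwidth) ** 2)
                          for xi in data)
    return f_hat

# Three observations; the bandwidth controls how smooth the estimate is.
f = gaussian_kde([0.0, 1.0, 2.0], bandwidth=0.5)
```

R's density() does the same thing, but with careful automatic bandwidth selection (the hard part in practice), which is why the answer's histAndDensity wrapper is the better tool for real data.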
|
Estimating PDF of continuous distribution from (few) data points
|
What you are looking for is kernel density estimation. You should find numerous hits on an internet search for these terms, and it is even on Wikipedia so that should get you started. If you have R at
|
Estimating PDF of continuous distribution from (few) data points
What you are looking for is kernel density estimation. You should find numerous hits in an internet search for these terms, and it is even on Wikipedia, so that should get you started. If you have R at your disposal, the function density provides what you need:
histAndDensity<-function(x, ...)
{
retval<-hist(x, freq=FALSE, ...)
lines(density(x, na.rm=TRUE), col="red")
invisible(retval)
}
|
Estimating PDF of continuous distribution from (few) data points
What you are looking for is kernel density estimation. You should find numerous hits on an internet search for these terms, and it is even on Wikipedia so that should get you started. If you have R at
|
47,521
|
Interpretation of MDS factor plot
|
I'm answering my own question for 2 reasons: 1) I want to be clear whether what I've understood is correct or not. 2) If somebody is looking for the same thing, he/she should find it here. I hardly found a book that gives a clear explanation of the interpretation of MDS biplots. I'll also give a few references where people can read more about interpreting MDS plots, to better understand them.
This answer is divided into a few parts:
Part 1: The axes of the biplot are the principal components. The x-axis has PC 1, which reflects the maximum variance in the dataset. The y-axis has PC 2, which reflects the 2nd-most variance. E.g. in my example the x-axis represents 72% of the variance, while the y-axis represents 16% of the variance in the data.
PC1 PC2 PC3 PC4
0.727891 0.166721 0.070320 0.003048
Part 2: The arrows reflect how the variables are loaded on each PC. E.g. in my example "uncluttered" & "visualization" are highly negatively loaded on PC 2, hence the y-axis. Similarly, "no water", "fast relief" & "convenient" are highly positively loaded on PC 1, hence the x-axis. This gives us a visualization of how the variables are loaded on the different PCs.
NMDS1 NMDS2
Safe 0.616967 -0.786989
Highly.efficacious -0.135565 0.990768
Same.side.effect.profile 0.822707 -0.568466
Fast.Relief 0.988621 -0.150428
No.Water 0.990893 0.134648
Convenient 0.989206 0.146534
Convincing 0.763225 -0.646133
Visually.appealing 0.154414 -0.988006
Very.novel 0.900984 0.433853
Noticeable 0.691596 0.722284
Likely.to.be.read 0.887028 -0.461715
Uncluttered 0.031498 -0.999504
Interesting 0.872584 -0.488465
Credible 0.620556 -0.784162
Prescribe.Recommend 0.809955 -0.586492
Part 3: The concept points tell us how dissimilar the concepts are from each other. So, in my example, Concept 1 & Concept 2 are very different from the rest of them. Concept 2 is bad in terms of both visual appeal and convenience, whereas Concepts 3 & 4 are more alike: they are also good in terms of both visualization and convenience.
Reference: 1) Greenacre, M. (2010). Biplots in Practice
2) Everitt & Hothorn: An Introduction to Multivariate Analysis with R(Chapter 4).
3) Hair: Multivariate Data Analysis
|
Interpretation of MDS factor plot
|
I'm answering my own question for 2 reason:1) I want to be clear what I've understood is correct or not. 2) If somebody is looking for the same reason he/she should find it here.I hardly found book th
|
Interpretation of MDS factor plot
I'm answering my own question for 2 reasons: 1) I want to be clear whether what I've understood is correct or not. 2) If somebody is looking for the same thing, he/she should find it here. I hardly found a book that gives a clear explanation of the interpretation of MDS biplots. I'll also give a few references where people can read more about interpreting MDS plots, to better understand them.
This answer is divided into a few parts:
Part 1: The axes of the biplot are the principal components. The x-axis has PC 1, which reflects the maximum variance in the dataset. The y-axis has PC 2, which reflects the 2nd-most variance. E.g. in my example the x-axis represents 72% of the variance, while the y-axis represents 16% of the variance in the data.
PC1 PC2 PC3 PC4
0.727891 0.166721 0.070320 0.003048
Part 2: The arrows reflect how the variables are loaded on each PC. E.g. in my example "uncluttered" & "visualization" are highly negatively loaded on PC 2, hence the y-axis. Similarly, "no water", "fast relief" & "convenient" are highly positively loaded on PC 1, hence the x-axis. This gives us a visualization of how the variables are loaded on the different PCs.
NMDS1 NMDS2
Safe 0.616967 -0.786989
Highly.efficacious -0.135565 0.990768
Same.side.effect.profile 0.822707 -0.568466
Fast.Relief 0.988621 -0.150428
No.Water 0.990893 0.134648
Convenient 0.989206 0.146534
Convincing 0.763225 -0.646133
Visually.appealing 0.154414 -0.988006
Very.novel 0.900984 0.433853
Noticeable 0.691596 0.722284
Likely.to.be.read 0.887028 -0.461715
Uncluttered 0.031498 -0.999504
Interesting 0.872584 -0.488465
Credible 0.620556 -0.784162
Prescribe.Recommend 0.809955 -0.586492
Part 3: The concept points tell us how dissimilar the concepts are from each other. So, in my example, Concept 1 & Concept 2 are very different from the rest of them. Concept 2 is bad in terms of both visual appeal and convenience, whereas Concepts 3 & 4 are more alike: they are also good in terms of both visualization and convenience.
Reference: 1) Greenacre, M. (2010). Biplots in Practice
2) Everitt & Hothorn: An Introduction to Multivariate Analysis with R(Chapter 4).
3) Hair: Multivariate Data Analysis
|
Interpretation of MDS factor plot
I'm answering my own question for 2 reason:1) I want to be clear what I've understood is correct or not. 2) If somebody is looking for the same reason he/she should find it here.I hardly found book th
|
47,522
|
How do I determine how well a dataset approximates a distribution?
|
For visualization purposes, try a Q-Q plot, which is a plot of the quantiles of your data against the quantiles of the expected distribution.
If you want a statistical test, the Kolmogorov-Smirnov statistic provides a non-parametric test for whether the data come from $p(x)$, using the maximum difference in the empirical and analytic cdf.
Of course, you could also evaluate the log-probability of your data under the two distributions: $L_1 = \sum_i p_1(X_i)$ vs. $L_2 = \sum_i p_2(X_i)$, and take whichever is larger. This is equivalent to maximum likelihood density comparison. (However, this may not be valid if $p_1$ and $p_2$ are distributions fit to your data, especially if they have different numbers of fitted parameters; in that case you want to do "model comparison", and there are a variety of tools for this— AIC, BIC, Bayes Factors, Likelihood-ratio test, Cross-validation, etc.)
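To make the KS idea concrete, here is a small self-contained Python sketch (an illustration, not part of the original answer) that computes the empirical KS statistic against a standard normal CDF:

```python
import math
import random

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ks_statistic(data, cdf):
    """Maximum distance between the empirical CDF and an analytic CDF."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        # The empirical CDF jumps from i/n to (i+1)/n at x,
        # so check the gap on both sides of the jump.
        d = max(d, abs((i + 1) / n - cdf(x)), abs(cdf(x) - i / n))
    return d

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(500)]
d_normal = ks_statistic(sample, normal_cdf)  # small, since the data fit N(0,1)
```

In practice you would compare this statistic to the Kolmogorov distribution (or just use a library routine such as scipy.stats.kstest) to get a p-value.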
|
How do I determine how well a dataset approximates a distribution?
|
For visualization purposes, try a Q-Q plot, which is a plot of the quantiles of your data against the quantiles of the expected distribution.
If you want a statistical test, the Kolmogorov-Smirnov s
|
How do I determine how well a dataset approximates a distribution?
For visualization purposes, try a Q-Q plot, which is a plot of the quantiles of your data against the quantiles of the expected distribution.
If you want a statistical test, the Kolmogorov-Smirnov statistic provides a non-parametric test for whether the data come from $p(x)$, using the maximum difference in the empirical and analytic cdf.
Of course, you could also evaluate the log-probability of your data under the two distributions: $L_1 = \sum_i p_1(X_i)$ vs. $L_2 = \sum_i p_2(X_i)$, and take whichever is larger. This is equivalent to maximum likelihood density comparison. (However, this may not be valid if $p_1$ and $p_2$ are distributions fit to your data, especially if they have different numbers of fitted parameters; in that case you want to do "model comparison", and there are a variety of tools for this— AIC, BIC, Bayes Factors, Likelihood-ratio test, Cross-validation, etc.)
|
How do I determine how well a dataset approximates a distribution?
For visualization purposes, try a Q-Q plot, which is a plot of the quantiles of your data against the quantiles of the expected distribution.
If you want a statistical test, the Kolmogorov-Smirnov s
|
47,523
|
Why is the tick marker for zero after the bar in this qplot bar chart?
|
The reason it appears to the left is that it is putting the $0$ projects into a bin $(-33,0]$, which it then treats as negative. To solve this you need right=FALSE. You could then have a similar problem at the other end, with the $999$ projects put into the bin $[999,1032)$, which would appear above $1000$; so it would be better to have a binwidth which is a factor of $1000$ - I would suggest binwidth=25.
For example
library(ggplot2)
set.seed(1)
df <- data.frame(projects = rgeom(10000,.005) )
qplot(projects, data=subset(df, projects<1000), geom="bar",
binwidth=25, right=FALSE )
produces
|
Why is the tick marker for zero after the bar in this qplot bar chart?
|
The reason it appears to the left is that it is putting the $0$ projects into a bin $(-33,0]$ which it then treats as negative. To solve this you need right=FALSE. You could then have a similar problem
|
Why is the tick marker for zero after the bar in this qplot bar chart?
The reason it appears to the left is that it is putting the $0$ projects into a bin $(-33,0]$, which it then treats as negative. To solve this you need right=FALSE. You could then have a similar problem at the other end, with the $999$ projects put into the bin $[999,1032)$, which would appear above $1000$; so it would be better to have a binwidth which is a factor of $1000$ - I would suggest binwidth=25.
For example
library(ggplot2)
set.seed(1)
df <- data.frame(projects = rgeom(10000,.005) )
qplot(projects, data=subset(df, projects<1000), geom="bar",
binwidth=25, right=FALSE )
produces
|
Why is the tick marker for zero after the bar in this qplot bar chart?
The reason it appears to the left is that it is putting the $0$ projects into a bin $(-33,0]$ which it then treats as negative. To solve this you need right=FALSE. You could then have a similar problem
|
47,524
|
Friedman's test and post-hoc analysis
|
As @caracal said, this script implements a permutation-based approach to Friedman's test with the coin package.
The maxT procedure is rather complex and bears no relation to the traditional $\chi^2$ statistic you're probably used to getting after a Friedman ANOVA. The general idea is to control the FWER. Say you perform 1000 permutations for every variable of interest; then you can derive not only pointwise empirical p-values for each variable (as you would with a single permutation test) but also a value that accounts for the fact that you tested all those variables at the same time. The latter is achieved by comparing each observed test statistic against the maximum of the permuted statistics over all variables. In other words, this p-value reflects the chance of seeing a test statistic as large as the one you observed, given that you've performed this many tests.
More information (in a genomic context, and with algorithmic considerations) can be found in
Dudoit, S., Shaffer, J.P., and
Boldrick, J.C. (2003). Multiple
Hypothesis Testing in Microarray
Experiments. Statistical
Science, 18(1), 71–103.
(Here are some slides from the same author with applications in R with the multtest package.)
Another good reference is Multiple Testing Procedures with Applications to Genomics, by Dudoit and van der Laan (Springer, 2008).
Now, if you want a more "traditional" statistic, you can use the agricolae package, which has a friedman() function that performs the overall Friedman test followed by post-hoc comparisons.
The permutation method yields a maxT=3.24, p=0.003394, suggesting an overall effect of the target when accounting for the blocking factor. The post-hoc tests basically indicate that only results for Wine A vs. Wine C (p=0.003400) are statistically different at the 5% level.
Using the non-parametric test, we have
> library(agricolae)
> with(WineTasting, friedman(Taster, Wine, Taste, group=FALSE))
Friedman's Test
===============
Adjusted for ties
Value: 11.14286
Pvalue chisq : 0.003805041
F value : 7.121739
Pvalue F: 0.002171298
Alpha : 0.05
t-Student : 2.018082
Comparison between treatments
Sum of the ranks
Difference pvalue sig LCL UCL
Wine A - Wine B 6 0.301210 -5.57 17.57
Wine A - Wine C 21 0.000692 *** 9.43 32.57
Wine B - Wine C 15 0.012282 * 3.43 26.57
The two global tests agree and basically say there is a significant effect of Wine type. We would, however, reach different conclusions about the pairwise difference. It should be noted that the above pairwise tests (Fisher's LSD) are not really corrected for multiple comparisons, although the difference B-C would remain significant even after Holm's correction (which also provides strong control of the FWER).
|
Friedman's test and post-hoc analysis
|
As @caracal said, this script implements a permutation-based approach to Friedman's test with the coin package.
The maxT procedure is rather complex and there is no relation with the traditional $\
|
Friedman's test and post-hoc analysis
As @caracal said, this script implements a permutation-based approach to Friedman's test with the coin package.
The maxT procedure is rather complex and bears no relation to the traditional $\chi^2$ statistic you're probably used to getting after a Friedman ANOVA. The general idea is to control the FWER. Say you perform 1000 permutations for every variable of interest; then you can derive not only pointwise empirical p-values for each variable (as you would with a single permutation test) but also a value that accounts for the fact that you tested all those variables at the same time. The latter is achieved by comparing each observed test statistic against the maximum of the permuted statistics over all variables. In other words, this p-value reflects the chance of seeing a test statistic as large as the one you observed, given that you've performed this many tests.
More information (in a genomic context, and with algorithmic considerations) can be found in
Dudoit, S., Shaffer, J.P., and
Boldrick, J.C. (2003). Multiple
Hypothesis Testing in Microarray
Experiments. Statistical
Science, 18(1), 71–103.
(Here are some slides from the same author with applications in R with the multtest package.)
Another good reference is Multiple Testing Procedures with Applications to Genomics, by Dudoit and van der Laan (Springer, 2008).
Now, if you want a more "traditional" statistic, you can use the agricolae package, which has a friedman() function that performs the overall Friedman test followed by post-hoc comparisons.
The permutation method yields a maxT=3.24, p=0.003394, suggesting an overall effect of the target when accounting for the blocking factor. The post-hoc tests basically indicate that only results for Wine A vs. Wine C (p=0.003400) are statistically different at the 5% level.
Using the non-parametric test, we have
> library(agricolae)
> with(WineTasting, friedman(Taster, Wine, Taste, group=FALSE))
Friedman's Test
===============
Adjusted for ties
Value: 11.14286
Pvalue chisq : 0.003805041
F value : 7.121739
Pvalue F: 0.002171298
Alpha : 0.05
t-Student : 2.018082
Comparison between treatments
Sum of the ranks
Difference pvalue sig LCL UCL
Wine A - Wine B 6 0.301210 -5.57 17.57
Wine A - Wine C 21 0.000692 *** 9.43 32.57
Wine B - Wine C 15 0.012282 * 3.43 26.57
The two global tests agree and basically say there is a significant effect of Wine type. We would, however, reach different conclusions about the pairwise difference. It should be noted that the above pairwise tests (Fisher's LSD) are not really corrected for multiple comparisons, although the difference B-C would remain significant even after Holm's correction (which also provides strong control of the FWER).
|
Friedman's test and post-hoc analysis
As @caracal said, this script implements a permutation-based approach to Friedman's test with the coin package.
The maxT procedure is rather complex and there is no relation with the traditional $\
|
47,525
|
Conditions for Central Limit Theorem for dependent sequences
|
Additional conditions are needed. (A near-proof of this fact is that many incredibly smart individuals have been thinking deeply about these issues for over 100 years. It is highly unlikely that something like this would have escaped all of them.)
First of all, note that the formula for $V$ that you give is part of the conclusion of the associated central limit theorem. See, for example, Theorem 7.6 on pages 416–417 of R. Durrett, Probability: Theory and Examples, 3rd. ed., which based on your link, you appear to have access to.
At any rate, here is a simple counterexample to your claim.
Let $X_0$ equal $+1$ with probability $1/2$ and $-1$ with probability $1/2$. Define $X_n = (-1)^n X_0$. Then $\{X_n\}$ is a stationary ergodic process with mean 0 and variance 1, but the Central Limit Theorem fails.
The properties of stationarity and ergodicity should be pretty easy to see as we can construct this process by defining a function over the states of a two-state Markov chain with stationary probability measure $\pi(x) = 1/2$ for $x \in \{0,1\}$.
Observe that this process yields a sequence of the form $-X_0, X_0, -X_0, \ldots$, and so, even without appealing to any notions about ergodicity, it is easy to see that $\newcommand{\e}{\mathbb{E}}\bar{X}_n \to \e X_0 = 0$ almost surely, and,
$\newcommand{\Var}{\mathbb{V}\mathrm{ar}}\Var(S_n) = 0$ if $n$ is even and $1$ if $n$ is odd.
This already is enough to conclude that there is no way that any rescaling of $S_n$ can make it converge in distribution to a normal random variable. In fact, for every function $f$ such that $f(n) \to \infty$, $S_n / f(n) \to 0$ almost surely no matter how slowly $f$ diverges.
Note also that this example should make it clear that the formula for $V$ is a conclusion of the theorem. Indeed, for the example above,
$$
V_n = 1 + 2 \sum_{i = 1}^n \e X_0 X_i = \left\{
\begin{array}{rl}
-1, & n \text{ odd}, \\
1, & n \text{ even},
\end{array}
\right.
$$
which, of course, (a) makes no sense as a variance, (b) does not have a limit, and (c) is not asymptotically equivalent to $\Var(S_n)$. (NB: I use a slightly different form for $V_n$ than you do where mine matches that given in Durrett.)
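The counterexample is easy to check numerically; a short Python simulation (illustrative only, not part of the original argument):

```python
import random

random.seed(42)

def sample_S_n(n):
    """One realization of S_n = X_1 + ... + X_n with X_k = (-1)^k * X_0."""
    x0 = random.choice([-1, 1])
    return sum((-1) ** k * x0 for k in range(1, n + 1))

# Consecutive terms cancel: S_n is identically 0 for even n,
# and equals -X_0 (so +1 or -1) for odd n -- no rescaling is normal.
evens = [sample_S_n(10) for _ in range(1000)]
odds = [sample_S_n(11) for _ in range(1000)]
```

The simulated sums take at most two values no matter how large n gets, matching the claim that no rescaling of $S_n$ can converge in distribution to a normal.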
|
47,526
|
Power analysis for moderator effect in regression with two continuous predictors
|
If I had to do this, I would use a simulation approach. This would involve making assumptions about the regression coefficients, predictor distributions, correlation between predictors, and error variance (with help from the researcher), generating data sets using the assumed model, and seeing what proportion of these give a significant p-value for the interaction. Then use trial and error to find the minimum sample size giving the required power.
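The simulation approach above can be sketched in code. The original context would typically call for R, but here is a dependency-free Python sketch; every number in it (main-effect coefficients of 0.2, interaction of 0.3, predictor correlation of 0.3, error SD of 1, the significance test via a normal approximation to the t test) is an illustrative assumption the researcher would replace:

```python
import math
import random

def ols(X, y):
    """Least squares via the normal equations; returns (beta, se)."""
    p = len(X[0])
    xtx = [[sum(row[i] * row[j] for row in X) for j in range(p)] for i in range(p)]
    xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(p)]
    # Gauss-Jordan on [X'X | X'y | I] yields [I | beta | (X'X)^-1].
    aug = [xtx[i] + [xty[i]] + [float(i == j) for j in range(p)] for i in range(p)]
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(aug[r][c]))
        aug[c], aug[piv] = aug[piv], aug[c]
        d = aug[c][c]
        aug[c] = [v / d for v in aug[c]]
        for r in range(p):
            if r != c:
                f = aug[r][c]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[c])]
    beta = [aug[i][p] for i in range(p)]
    resid = [yi - sum(b * xi for b, xi in zip(beta, row)) for row, yi in zip(X, y)]
    s2 = sum(e * e for e in resid) / (len(y) - p)
    se = [math.sqrt(s2 * aug[i][p + 1 + i]) for i in range(p)]
    return beta, se

def interaction_power(n, b_int, rho=0.3, sigma=1.0, sims=300, alpha=0.05, seed=1):
    """Estimated power of the two-sided test of the interaction coefficient."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        rows, y = [], []
        for _ in range(n):
            x = rng.gauss(0, 1)
            m = rho * x + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)  # corr(x, m) = rho
            rows.append([1.0, x, m, x * m])          # intercept, X, M, X*M
            y.append(0.2 * x + 0.2 * m + b_int * x * m + rng.gauss(0, sigma))
        beta, se = ols(rows, y)
        z = beta[3] / se[3]
        p_val = math.erfc(abs(z) / math.sqrt(2))     # normal approx. to the t test
        hits += p_val < alpha
    return hits / sims
```

The trial-and-error step is then just a loop: increase `n` until `interaction_power(n, ...)` reaches the required power (e.g. 0.80).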
|
47,527
|
Power analysis for moderator effect in regression with two continuous predictors
|
Assuming that the IV (X) and the Moderator (M) are continuous variables, and your research question is: Is the relationship between X and Y moderated by M?
Your regression model would have 3 predictors X, M, and their (centered) interaction (X*M).
If you run the analysis using GPower (http://gpower.hhu.de/) you would set it up using the following parameters.
F tests - Linear multiple regression: Fixed model, R² deviation from zero
Analysis: A priori: Compute required sample size
Input: Effect size f² = 0.15
α err prob = 0.05
Power (1-β err prob) = 0.80
Number of predictors = 3
Output: Noncentrality parameter λ = 11.5500000
Critical F = 2.7300187
Numerator df = 3
Denominator df = 73
Total sample size = 77
Actual power = 0.8017655
You could vary the effect size f² among the conventional small (.02), medium (.15), or large (.35) values.
In my example above f² was set to .15.
Alpha should be set to .05, and power (1−β err prob) should be set to .80.
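One sanity check that ties the reported numbers together: for this test GPower's noncentrality parameter is simply λ = f² · N, so the output above is internally consistent.

```python
# GPower's noncentrality parameter for this fixed-model F test is lambda = f^2 * N.
def noncentrality(f2, n):
    return f2 * n

# Matches the output above: f^2 = 0.15, N = 77 gives lambda = 11.55.
assert abs(noncentrality(0.15, 77) - 11.55) < 1e-9
```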
|
47,528
|
Binning raw data prior to building a logistic regression model
|
Binning will result in a more complex model, i.e., you will need more terms in the model to predict the outcome as well as a model that treats the predictors as continuous. Bins also bring a degree of arbitrariness into the model. Take a look at regression splines as an alternative. Notes about this may be found at http://biostat.mc.vanderbilt.edu/rms. Also make sure that your outcome is truly dichotomous, i.e., that the time until the event is irrelevant and you have no censoring.
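As a concrete illustration of the spline alternative (a simplified linear spline rather than the restricted cubic splines recommended in the notes linked above; the knot placement is arbitrary):

```python
# Instead of indicator columns for arbitrary bins, keep the predictor
# continuous and add hinge terms max(0, x - k) at a few knots.
def linear_spline_basis(x, knots):
    return [x] + [max(0.0, x - k) for k in knots]
```

For example, `linear_spline_basis(5, [2, 4, 6])` gives the columns `[5, 3, 1, 0]`, which let the fitted relationship change slope at each knot without the discontinuities that bins introduce.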
|
47,529
|
Binning raw data prior to building a logistic regression model
|
You could specify your binning algorithm in a function, define a utility function, and optimize the input parameters...
Ideas for the utility function include:
Predictive power (weight of evidence and information value)
Monotonically decreasing average default rate from one bin to another (as you increase the age of history...)
You can also constrain your optimization to look only for, say, three to five bins.
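The weight-of-evidence and information-value criterion mentioned above has a standard closed form; a minimal sketch (the bin counts below are made up):

```python
import math

# Weight of evidence per bin: ln(share of goods / share of bads);
# information value: sum over bins of (share difference) * WoE.
def woe_iv(goods, bads):
    G, B = sum(goods), sum(bads)
    woe, iv = [], 0.0
    for g, b in zip(goods, bads):
        w = math.log((g / G) / (b / B))
        woe.append(w)
        iv += (g / G - b / B) * w
    return woe, iv
```

A binning with higher information value is more predictive, so `iv` is a natural term in the utility function being optimized.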
|
47,530
|
Active learning using SVM Regression
|
Active learning requires a compromise between exploration and exploitation. If the model you have so far is bad and you exploit it to determine the best place to label more data, it will probably suggest bad places, as your current hypothesis is poor. It is a good idea to do some random exploration as well, as that is about the best way to ensure that eventually you will label the data that shows the current hypothesis to be incorrect.
For regression models, I would suggest that Gaussian Process regression is a better bet for active learning, as it gives you predictive error bars, so you can query the labels for points where the model is most uncertain. See for example this paper, which looks like an interesting place to start.
I have worked on active learning in classification, and the results have been rather mixed for all strategies. Often just picking points randomly (i.e. all exploration, no exploitation) works best. I am looking into active learning for regression problems at the moment and intending to use GPs; I'll add to my answer if I find out anything that seems to work better than exploration only.
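The exploration/exploitation compromise can be sketched without a GP library by using a bootstrap "committee" of simple fits as a stand-in for GP predictive variance. Everything here — the epsilon, the committee size, the linear base model — is an illustrative assumption:

```python
import random
import statistics

def fit_line(pts):
    """Ordinary least squares for y = a + b*x on a list of (x, y) pairs."""
    xs = [x for x, _ in pts]
    ys = [y for _, y in pts]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    den = sum((x - mx) ** 2 for x in xs)
    if den == 0.0:                       # degenerate bootstrap sample
        return my, 0.0
    b = sum((x - mx) * (y - my) for x, y in pts) / den
    return my - b * mx, b

def choose_query(labeled, pool, eps=0.2, committees=20, seed=0):
    """Pick the next point to label: explore with probability eps, otherwise
    pick the pool point where a bootstrap committee of fits disagrees most."""
    rng = random.Random(seed)
    if rng.random() < eps:               # exploration: random query
        return rng.choice(pool)
    models = [fit_line([rng.choice(labeled) for _ in labeled])
              for _ in range(committees)]
    def disagreement(x):
        return statistics.pvariance([a + b * x for a, b in models])
    return max(pool, key=disagreement)   # exploitation: most uncertain point
```

With `eps > 0` the strategy keeps labeling some random points, which is the safeguard against a poor current hypothesis discussed above.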
|
47,531
|
Active learning using SVM Regression
|
I have worked on active learning in classification with SVMs, and the problem was the same for me: if the boundary found by the first model isn't good, the probability of getting good labels for new points decreases. If you have any other method to label your newly generated points rather than using the boundary, that can be a good approach, and the accuracy of the newly generated boundary will be better.
|
47,532
|
How can I compute regression for several longitudinal data sets (thus, with auto-correlated error)?
|
As we have strong reasons to believe that the cooling will follow the $y(t) = a + e^{-kt}$ function for each beaker, I would first check whether this model does indeed fit the data well.
If it does I wouldn't bother with analysing the autocorrelation at all, but focus on the estimation of $k_1$, $k_2$ and $k_3$, and testing the hypothesis about them.
To estimate $k_1$, $k_2$ and $k_3$ you need a non-linear model. Your idea of log transformation followed by linear modelling is best when the error (difference between the measured $y$ temperature and the one predicted by the formula) is proportional to the temperature. However, I suspect that the error will be primarily due to temperature measurement and thus normally distributed with the same variance for any temperature (you need to check this). If so, a non-linear model would be more appropriate.
A model using the above function will give you estimates for the parameters of the cooling of a single beaker, $a$ and $k$. We may however assume that $a$ should be the same for each beaker, that the $k$s should be similar for the same substance, and that the standard deviation ($\sigma$) is the same across all temperature measurements. These can be expressed in a model accounting for all the beakers at the same time (the second index $j$ is the beaker ID):
$$y_j(t) = a + e^{-(k_i + \alpha_j)t} + \epsilon$$
where
$\epsilon$ is normally distributed error of SD $\sigma$, $k_i$ is one of 3 mean $k$ values for substance $i$, $\alpha_j$ is normally distributed random deviation of a specific beaker from the $k_i$ substance mean, with a substance specific SD ($\sigma_{\alpha{}i}$). This is now a non-linear mixed effect model, that can be fitted using various software. After this you have the $k_i$ values and their standard errors.
The next question is how to test the hypothesis that $k_1 > k_2 > k_3$. It may be “cleaner” to formulate such a hypothesis in the Bayesian way. However you used the word test, so you probably want a significance test – but in order to do that you have to have a more specific alternative hypothesis (or family of hypotheses).
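To make the single-beaker estimation step concrete, here is a minimal Python sketch (not the mixed model itself — that needs dedicated software) that recovers $k$ for one beaker by direct least squares on $y(t) = a + e^{-kt}$. For simplicity $a$ is treated as known here, and the parameter values are made up:

```python
import math

def sse(k, data, a):
    """Residual sum of squares for the model y(t) = a + exp(-k t)."""
    return sum((y - (a + math.exp(-k * t))) ** 2 for t, y in data)

def fit_k(data, a, lo=1e-4, hi=3.0, iters=100):
    # For data generated by the model, sse is unimodal in k, so a simple
    # ternary search suffices in place of a general non-linear optimiser.
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if sse(m1, data, a) < sse(m2, data, a):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2
```

This is the non-linear analogue of the log-transform-then-regress idea: it minimises error on the temperature scale directly, which matches the additive-measurement-error assumption discussed above.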
|
47,533
|
How can I compute regression for several longitudinal data sets (thus, with auto-correlated error)?
|
If I understand your question correctly, you should be able to achieve what you want to do using a non-linear mixed-effects model. If you use R, you can use the nlme package. Basically as fixed factors you have a covariate (a) and a factor (substance or $i$ in $k_{i}$). You also have a random effect (individual measurements units or unitID). The good thing about nlme is that it also allows you to model the correlations in the residuals with e.g. an AR covariance structure.
edit: I always like to use a mixed model when dealing with repeated measures. Still, if you don't want to include a random factor, you can model it with gnls in the same package. gnls still lets you select AR as the covariance structure of the residuals.
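For intuition about what the AR covariance structure buys you: AR(1) residuals follow $e_t = \varphi\, e_{t-1} + w_t$, which implies correlation $\varphi^{|h|}$ at lag $h$. A quick sketch (the $\varphi$ value is arbitrary):

```python
import random

def ar1(n, phi, sigma=1.0, seed=0):
    """Generate AR(1) errors e_t = phi * e_{t-1} + w_t, w_t ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    e = [rng.gauss(0, sigma)]
    for _ in range(n - 1):
        e.append(phi * e[-1] + rng.gauss(0, sigma))
    return e
```

The lag-1 sample autocorrelation of a long simulated series comes out close to `phi`, which is exactly the dependence structure the gnls/nlme correlation argument models.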
|
47,534
|
Longitudinal relationship between chocolate consumption and happiness: repeated measures ANOVA?
|
You may need to clarify what you mean by
"accounting for the fact that I have taken repeated measures..."
You say that
"I would like to know if the mean chocolate consumption per day is higher among happy
people than those who are not happy..."
This suggests to me that time is not really relevant to your research question. Thus, you could do one of the following.
You could correlate mean happiness ([time1 + time2] / 2) with mean chocolate consumption.
You could correlate happiness with chocolate consumption at a given time point.
You could correlate happiness with chocolate consumption across times (e.g., 1 with 2).
A variant on the above would involve performing a regression or other predictive model predicting one variable from the other.
Alternatively, you may find that you can rephrase your research question more clearly to incorporate what you are interested in with regards to the effect of time.
You could correlate chocolate change scores with happiness change scores.
You could predict time 2 chocolate from time 1 chocolate and time 1 happiness to see whether time 1 happiness predicts over and above time 1 chocolate.
As a side point, while it may be an artificial example, it seems strange to measure happiness as a Yes / No variable. I would measure it as a scale. It is also a little strange talking about the mean of a Yes / No variable.
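For instance, the change-score option in the list above can be computed directly (the data below are made up, with happiness on a scale as recommended):

```python
import math
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den

# Made-up per-subject measurements at the two time points.
choc_t1, choc_t2 = [2, 4, 1, 5, 3], [3, 5, 1, 6, 4]
happy_t1, happy_t2 = [4, 6, 3, 7, 5], [5, 8, 3, 9, 6]
d_choc = [b - a for a, b in zip(choc_t1, choc_t2)]    # chocolate change scores
d_happy = [b - a for a, b in zip(happy_t1, happy_t2)]  # happiness change scores
r = pearson(d_choc, d_happy)
```

The change-score correlation `r` asks whether subjects whose chocolate consumption rose also became happier, regardless of their baseline levels.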
|
47,535
|
Longitudinal relationship between chocolate consumption and happiness: repeated measures ANOVA?
|
You may think about happiness as the dependent variable, and you could use logistic regression with chocolate consumption as a predictor. Some people may be generally happier or less happy independently from chocolate consumption. This can be modelled by including subject id as a random effect categorical predictor. Age might also influence happiness. After these the model would look like this: logit(happy) ~ choc + age + id, where age is either 14 or 18, and the data are in the long format, a mixed effect logistic regression including a random categorical, a fixed categorical and a continuous predictor. (As an analogue of the repeated measures approach you could use a covariance pattern model, where id is not a predictor, but used in the specification of the covariance.)
Alternatively chocolate consumption can be regarded as dependent variable. choc ~ happy + age + id could be the model (long data format), where id is a random effect, mixed effect ANOVA; or choc ~ happy + age, where repeated measures are considered, repeated measures ANOVA.
I have no idea if happiness causes increased/decreased chocolate consumption or vice versa. You are safe asking about a "relationship" between the two.
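To illustrate the long data format the model formulas above assume (field names and values are invented), each subject's wide record is split into one row per age:

```python
# One row per subject-by-age measurement.
wide = [{"id": 1, "choc14": 2, "choc18": 3, "happy14": 0, "happy18": 1},
        {"id": 2, "choc14": 5, "choc18": 4, "happy14": 1, "happy18": 1}]

long = [
    {"id": r["id"], "age": age, "choc": r[f"choc{age}"], "happy": r[f"happy{age}"]}
    for r in wide
    for age in (14, 18)
]
```

Each subject then contributes two rows, which is what lets `id` enter the model as a random effect.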
|
47,536
|
How to compare the effectiveness of medical diagnostic techniques?
|
As it is described in the original post, the experiment is a randomized block.
Pathologist (4 levels) is a blocking factor; the experiment is repeated within each pathologist.
Instrument (3 levels) and the true result (2 levels) of the test are the two treatments, which I assume were assigned randomly.
Consider the different specimens to be replications of each treatment combination.
The one response variable is whether the pathologist's diagnosis is correct (2 levels).
Because the result is categorical, the link function will need to be something like logit or probit. Here's some R code that does that. It may need to be extended depending on your friend's hypotheses.
library(lme4)
glmer(correctness ~ instrument*trueresult + (1|pathologist),
family = binomial)
The coefficients from a logit model can be interpreted in relation to odds ratios. For a particular combination of predictors, the model estimates an odds ratio. The individual coefficients indicate how the odds ratio changes depending on the predictors.
If your friend doesn't care about distinguishing between type I and type II error, he or she can drop the interaction between instrument and true result from the model.
library(lme4)
glmer(correctness ~ instrument+trueresult + (1|pathologist),
family = binomial)
The measurement in multiple sessions may be an incomplete block design, so your friend should look at those if he or she is concerned about the assumption of independence among measurements.
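A small numeric illustration of the odds-ratio reading of logit coefficients (the coefficient values below are made up, not model output):

```python
import math

# A logit coefficient b means each unit increase in the predictor
# multiplies the odds of a correct diagnosis by exp(b).
def odds_ratio(b):
    return math.exp(b)

def probability(eta):
    """Convert a linear predictor (log-odds) to a probability."""
    return 1.0 / (1.0 + math.exp(-eta))
```

For example, a coefficient of `log(3)` for an instrument dummy would mean that instrument triples the odds of a correct diagnosis relative to the reference instrument.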
|
47,537
|
How to compare the effectiveness of medical diagnostic techniques?
|
The ROC (Receiver Operating Characteristic) curve is one of the techniques available. You can check the questions with the tag roc on this site for further details. The Wikipedia article
http://en.wikipedia.org/wiki/Receiver_operating_characteristic and the external links in it may also be useful.
Some other methods can be found here
http://onbiostatistics.blogspot.com/2011/01/agreement-statistics-and-kappa.html
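As a concrete anchor for the ROC approach: the area under the ROC curve equals the probability that a randomly chosen positive case scores above a randomly chosen negative one (ties counted half). A minimal sketch:

```python
def auc(pos_scores, neg_scores):
    """AUC as the Mann-Whitney probability P(score_pos > score_neg)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

Computing this per diagnostic technique gives a single discrimination number (0.5 = chance, 1.0 = perfect) on which the techniques can be compared.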
|
47,538
|
What does $d$ mean in this notation of the "usual noninformative prior of $\mu_i$ and $\sigma_i$?"
|
This is shorthand notation for a "differential" of the mean and variance parameters. The longhand version goes:
$$p(\mu\in[\mu_1,\mu_1+d\mu_1)|I)\propto d\mu_1$$
This indicates a uniform probability with respect to $\mu$. A more familiar notation is:
$$p(\mu|I)\propto 1$$
It comes from the "proper" derivation of a PDF from a CDF.
$$f(y)=\lim_{dy\rightarrow 0}\frac{P(Y\in[y,y+dy))}{dy}$$
EDIT: I initially wrote this answer in a hasty fashion, and so had a bit of unclear notation myself. In my example I only had a one-dimensional variable $\mu_1$, and all of the above relates to a one-dimensional random variable. I think the statistical physics literature ("maxent" people) uses this notation (but I am not entirely sure) - Edwin Jaynes, Larry Bretthorst, Stephen Gull, and others. I've never seen it explained in any more detail than what I have given.
The second point is that $I$ stands for "prior information", not an identity matrix. It is a good habit to express $I$ explicitly as part of your assumptions, so that you don't forget that 1) they are there, and 2) your answer depends on the prior information just as much as it depends on the data.
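The limiting relationship between a CDF and its PDF can be checked numerically; here is a small sketch for the standard normal case (the evaluation point and step sizes are arbitrary):

```python
import math

def norm_cdf(y):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(y / math.sqrt(2.0)))

def norm_pdf(y):
    return math.exp(-y * y / 2.0) / math.sqrt(2.0 * math.pi)

y = 0.7
# P(Y in [y, y+dy)) / dy should approach f(y) as dy -> 0
estimates = [(norm_cdf(y + dy) - norm_cdf(y)) / dy for dy in (0.1, 0.01, 0.001)]
errors = [abs(est - norm_pdf(y)) for est in estimates]
```

The approximation error shrinks roughly in proportion to $dy$, as the limit suggests.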
|
What does $d$ mean in this notation of the "usual noninformative prior of $\mu_i$ and $\sigma_i$?"
|
This is shorthand notation for a "differential" of the mean and variance parameters. The longhand version goes:
$$p(\mu\in[\mu_1,\mu_1+d\mu_1)|I)\propto d\mu_1$$
This indicates a uniform probability
|
What does $d$ mean in this notation of the "usual noninformative prior of $\mu_i$ and $\sigma_i$?"
This is shorthand notation for a "differential" of the mean and variance parameters. The longhand version goes:
$$p(\mu\in[\mu_1,\mu_1+d\mu_1)|I)\propto d\mu_1$$
This indicates a uniform probability with respect to $\mu$. A more familiar notation is:
$$p(\mu|I)\propto 1$$
It comes from the "proper" derivation of a PDF from a CDF.
$$f(y)=\lim_{dy\rightarrow 0}\frac{P(Y\in[y,y+dy))}{dy}$$
EDIT: I initially wrote this answer in a hasty fashion, and so had a bit of unclear notation myself. In my example I only had a one-dimensional variable $\mu_1$, and all of the above relates to a one-dimensional random variable. I think the statistical physics literature ("maxent" people) uses this notation (but I am not entirely sure) - Edwin Jaynes, Larry Bretthorst, Stephen Gull, and others. I've never seen it explained in any more detail than what I have given.
The second point is that $I$ stands for "prior information", not an identity matrix. It is a good habit to express $I$ explicitly as part of your assumptions, so that you don't forget that 1) they are there, and 2) your answer depends on the prior information just as much as it depends on the data.
|
What does $d$ mean in this notation of the "usual noninformative prior of $\mu_i$ and $\sigma_i$?"
This is shorthand notation for a "differential" of the mean and variance parameters. The longhand version goes:
$$p(\mu\in[\mu_1,\mu_1+d\mu_1)|I)\propto d\mu_1$$
This indicates a uniform probability
|
47,539
|
Calculating the mean using regression data
|
Contrary to @whuber's claim, the means of $x$ and $y$ are contained in the information given.
Okay, so you have the line equation
$$y_i=\alpha +x_i\beta + e_i$$
estimates $\hat{\beta}=r\frac{s_y}{s_x}$ and $\hat{\alpha}=\overline{y}-\hat{\beta}\overline{x}$.
where $r$ is the correlation. The question doesn't state whether the standard deviation (0.482) is for $s_y$ or $s_x$ (the MLE standard deviation, with divisor $n$). Either way, you can work out the other one from the info given, for their ratio must satisfy:
$$\frac{\hat{\beta}}{r}=\frac{s_y}{s_x}$$
The slope can't be negative if the correlation is positive, so I have assumed that you have done something incorrectly (for you have correlation of 0.117, and slope of -0.00024; this is impossible). This will affect the numbers, but not the general method. So I will assume the standard deviations are both known, but not write in the specific values. The same goes for the rest of the actual numbers.
Now the variance of $\hat{\beta}$ is given by:
$$var(\hat{\beta})=s_e^2(X^TX)^{-1}_{22}=\frac{s_e^2 (X^TX)_{11}}{|X^TX|}$$
Note that $(X^TX)_{11}=n$ and $s_e^2$ is the "mean square error". The variance of $\alpha$ is given by:
$$var(\hat{\alpha})=s_e^2(X^TX)^{-1}_{11}=\frac{s_e^2 (X^TX)_{22}}{|X^TX|}$$
Now $(X^TX)_{22}=\sum_i x_i^2 = n(s_x^2+\overline{x}^2)$
And dividing these two variances gives:
$$\frac{var(\hat{\alpha})}{var(\hat{\beta})}=\frac{(X^TX)_{22}}{(X^TX)_{11}}=\frac{n(s_x^2+\overline{x}^2)}{n}=s_x^2+\overline{x}^2$$
Now all quantities in the equation are known, except for the mean $\overline{x}$. So we can re-arrange this equation and solve for the mean:
$$\overline{x}=\pm\sqrt{\frac{var(\hat{\alpha})}{var(\hat{\beta})}-s_x^2}$$
But we know from the start that $x_i>0$ - you can't drive "negative miles". So only the positive square root is to be taken. The rest is straight-forward CI stuff. The estimate of the mean $\hat{\overline{y}}$ is given by:
$$\hat{\overline{y}}=\hat{\alpha}+\hat{\beta}\overline{x}=\hat{\alpha}+\hat{\beta}\sqrt{\frac{var(\hat{\alpha})}{var(\hat{\beta})}-s_x^2}=\overline{y}$$
And the variance is given by:
$$var(\hat{\overline{y}})=var(\hat{\alpha})+\overline{x}^2 var(\hat{\beta})+2\overline{x}cov(\hat{\alpha},\hat{\beta})$$
Now the covariance is equal to:
$$cov(\hat{\alpha},\hat{\beta})=s_e^2(X^TX)^{-1}_{21}=-\frac{s_e^2 (X^TX)_{21}}{|X^TX|}=-\frac{s_e^2 n\overline{x}}{n^2s_x^2}=-\frac{s_e^2 \overline{x}}{ns_x^2}=-\overline{x}\,var(\hat{\beta})$$
using $|X^TX|=n\sum_i x_i^2-(\sum_i x_i)^2=n^2s_x^2$ and $var(\hat{\beta})=s_e^2/(ns_x^2)$. And so the variance is given by:
$$var(\hat{\overline{y}})=var(\hat{\alpha})+\overline{x}^2 var(\hat{\beta})-2\overline{x}^2 var(\hat{\beta})=var(\hat{\alpha})-\overline{x}^2 var(\hat{\beta})=s_x^2\,var(\hat{\beta})=\frac{s_e^2}{n}$$
where the last two equalities use $var(\hat{\alpha})=(s_x^2+\overline{x}^2)var(\hat{\beta})$ from above - the familiar variance of a sample mean, as it should be. So you construct your $100(1-P)$% confidence interval by choosing $T_{1-P/2}^{(n-2)}$ as the $1-P/2$ quantile of the standard T distribution with $n-2$ degrees of freedom (effectively equal to the standard normal here, as $n-2\approx 100$), and you have:
$$CI=\overline{y}\pm T_{1-P/2}^{(n-2)}\sqrt{var(\hat{\overline{y}})}$$
And all quantities are calculable, given the information.
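A quick simulation (the data are invented) confirms that the mean of $x$ really is recoverable from the ratio of the coefficient variances, via $var(\hat{\alpha})/var(\hat{\beta})=\sum_i x_i^2/n=s_x^2+\overline{x}^2$:

```python
import random

random.seed(1)
n = 101
x = [random.uniform(5.0, 15.0) for _ in range(n)]   # invented positive x values
xbar = sum(x) / n
s_x2 = sum((xi - xbar) ** 2 for xi in x) / n        # MLE variance, divisor n
ratio = sum(xi ** 2 for xi in x) / n                # (X'X)_22 / (X'X)_11
xbar_recovered = (ratio - s_x2) ** 0.5              # positive root, since x > 0
```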
|
Calculating the mean using regression data
|
Contrary to @whuber's claim, the mean of x and y are contained in the information given.
Okay, so you have the line equation
$$y_i=\alpha +x_i\beta + e_i$$
estimates $\hat{\beta}=r\frac{s_y}{s_x}$ and
|
Calculating the mean using regression data
Contrary to @whuber's claim, the means of $x$ and $y$ are contained in the information given.
Okay, so you have the line equation
$$y_i=\alpha +x_i\beta + e_i$$
estimates $\hat{\beta}=r\frac{s_y}{s_x}$ and $\hat{\alpha}=\overline{y}-\hat{\beta}\overline{x}$.
where $r$ is the correlation. The question doesn't state whether the standard deviation (0.482) is for $s_y$ or $s_x$ (the MLE standard deviation, with divisor $n$). Either way, you can work out the other one from the info given, for their ratio must satisfy:
$$\frac{\hat{\beta}}{r}=\frac{s_y}{s_x}$$
The slope can't be negative if the correlation is positive, so I have assumed that you have done something incorrectly (for you have correlation of 0.117, and slope of -0.00024; this is impossible). This will affect the numbers, but not the general method. So I will assume the standard deviations are both known, but not write in the specific values. The same goes for the rest of the actual numbers.
Now the variance of $\hat{\beta}$ is given by:
$$var(\hat{\beta})=s_e^2(X^TX)^{-1}_{22}=\frac{s_e^2 (X^TX)_{11}}{|X^TX|}$$
Note that $(X^TX)_{11}=n$ and $s_e^2$ is the "mean square error". The variance of $\alpha$ is given by:
$$var(\hat{\alpha})=s_e^2(X^TX)^{-1}_{11}=\frac{s_e^2 (X^TX)_{22}}{|X^TX|}$$
Now $(X^TX)_{22}=\sum_i x_i^2 = n(s_x^2+\overline{x}^2)$
And dividing these two variances gives:
$$\frac{var(\hat{\alpha})}{var(\hat{\beta})}=\frac{(X^TX)_{22}}{(X^TX)_{11}}=\frac{n(s_x^2+\overline{x}^2)}{n}=s_x^2+\overline{x}^2$$
Now all quantities in the equation are known, except for the mean $\overline{x}$. So we can re-arrange this equation and solve for the mean:
$$\overline{x}=\pm\sqrt{\frac{var(\hat{\alpha})}{var(\hat{\beta})}-s_x^2}$$
But we know from the start that $x_i>0$ - you can't drive "negative miles". So only the positive square root is to be taken. The rest is straight-forward CI stuff. The estimate of the mean $\hat{\overline{y}}$ is given by:
$$\hat{\overline{y}}=\hat{\alpha}+\hat{\beta}\overline{x}=\hat{\alpha}+\hat{\beta}\sqrt{\frac{var(\hat{\alpha})}{var(\hat{\beta})}-s_x^2}=\overline{y}$$
And the variance is given by:
$$var(\hat{\overline{y}})=var(\hat{\alpha})+\overline{x}^2 var(\hat{\beta})+2\overline{x}cov(\hat{\alpha},\hat{\beta})$$
Now the covariance is equal to:
$$cov(\hat{\alpha},\hat{\beta})=s_e^2(X^TX)^{-1}_{21}=-\frac{s_e^2 (X^TX)_{21}}{|X^TX|}=-\frac{s_e^2 n\overline{x}}{n^2s_x^2}=-\frac{s_e^2 \overline{x}}{ns_x^2}=-\overline{x}\,var(\hat{\beta})$$
using $|X^TX|=n\sum_i x_i^2-(\sum_i x_i)^2=n^2s_x^2$ and $var(\hat{\beta})=s_e^2/(ns_x^2)$. And so the variance is given by:
$$var(\hat{\overline{y}})=var(\hat{\alpha})+\overline{x}^2 var(\hat{\beta})-2\overline{x}^2 var(\hat{\beta})=var(\hat{\alpha})-\overline{x}^2 var(\hat{\beta})=s_x^2\,var(\hat{\beta})=\frac{s_e^2}{n}$$
where the last two equalities use $var(\hat{\alpha})=(s_x^2+\overline{x}^2)var(\hat{\beta})$ from above - the familiar variance of a sample mean, as it should be. So you construct your $100(1-P)$% confidence interval by choosing $T_{1-P/2}^{(n-2)}$ as the $1-P/2$ quantile of the standard T distribution with $n-2$ degrees of freedom (effectively equal to the standard normal here, as $n-2\approx 100$), and you have:
$$CI=\overline{y}\pm T_{1-P/2}^{(n-2)}\sqrt{var(\hat{\overline{y}})}$$
And all quantities are calculable, given the information.
|
Calculating the mean using regression data
Contrary to @whuber's claim, the mean of x and y are contained in the information given.
Okay, so you have the line equation
$$y_i=\alpha +x_i\beta + e_i$$
estimates $\hat{\beta}=r\frac{s_y}{s_x}$ and
|
47,540
|
Interactions between non-linear predictors
|
You could try generalized additive mixed models, handily implemented in the gamm4 package. The way I've used them, you can do something like:
fit1 = gamm4(
formula = V1 ~ V2 + s(V3)
, random = ~ (1|V4)
)
fit2 = gamm4(
formula = V1 ~ V2 + s(V3,by=V2)
, random = ~ (1|V4)
)
fit1 seeks to predict V1 using V4 as a random effect and V2 and V3 as fixed effects, but where V3 is spline-smoothed. fit2 seeks the same, except with the addition of permitting the smooth of V3 to vary within levels of V2, thus implementing an interaction. Comparison of fit1 to fit2 evaluates the necessity of permitting the interaction.
|
Interactions between non-linear predictors
|
You could try generalized additive mixed models, handily implemented in the gamm4 package. The way I've used them, you can do something like:
fit1 = gamm4(
formula = V1 ~ V2 + s(V3)
, random =
|
Interactions between non-linear predictors
You could try generalized additive mixed models, handily implemented in the gamm4 package. The way I've used them, you can do something like:
fit1 = gamm4(
formula = V1 ~ V2 + s(V3)
, random = ~ (1|V4)
)
fit2 = gamm4(
formula = V1 ~ V2 + s(V3,by=V2)
, random = ~ (1|V4)
)
fit1 seeks to predict V1 using V4 as a random effect and V2 and V3 as fixed effects, but where V3 is spline-smoothed. fit2 seeks the same, except with the addition of permitting the smooth of V3 to vary within levels of V2, thus implementing an interaction. Comparison of fit1 to fit2 evaluates the necessity of permitting the interaction.
|
Interactions between non-linear predictors
You could try generalized additive mixed models, handily implemented in the gamm4 package. The way I've used them, you can do something like:
fit1 = gamm4(
formula = V1 ~ V2 + s(V3)
, random =
|
47,541
|
Propagation of large errors
|
For large error, the standard error of $A/B$ depends on the distributions of $A$ and $B$, not just on their standard errors. The distribution of $A/B$ is known as a ratio distribution, but which ratio distribution depends on the distributions of $A$ and $B$.
If we assume that $A$ and $B$ both have Gaussian (normal) distributions, then $A/B$ has a Gaussian ratio distribution, for which a closed form exists but is rather complicated. In general, this will be an asymmetric distribution, so it is not well characterised simply by its mean and standard deviation. However, it is possible to find a confidence interval for $A/B$ using Fieller's theorem.
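A small Monte Carlo sketch (the parameters are invented, with the denominator kept safely away from zero) shows the asymmetry: the mean of $A/B$ sits above both its median and $E[A]/E[B]$:

```python
import random
import statistics

random.seed(2)
N = 100_000
# A ~ N(10, 2), B ~ N(5, 1): roughly 20% relative error in each
ratios = [random.gauss(10, 2) / random.gauss(5, 1) for _ in range(N)]
mean_r = statistics.fmean(ratios)
median_r = statistics.median(ratios)
naive = 10 / 5   # ratio of the means, which is NOT the mean of the ratio
```

The median lands close to 2, but the mean is pulled upward by the long right tail, so a symmetric "plus or minus one standard error" summary is misleading here.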
|
Propagation of large errors
|
For large error, the standard error of $A/B$ depends on the distributions of $A$ and $B$, not just on their standard errors. The distribution of $A/B$ is known as a ratio distribution, but which ratio
|
Propagation of large errors
For large error, the standard error of $A/B$ depends on the distributions of $A$ and $B$, not just on their standard errors. The distribution of $A/B$ is known as a ratio distribution, but which ratio distribution depends on the distributions of $A$ and $B$.
If we assume that $A$ and $B$ both have Gaussian (normal) distributions, then $A/B$ has a Gaussian ratio distribution, for which a closed form exists but is rather complicated. In general, this will be an asymmetric distribution, so it is not well characterised simply by its mean and standard deviation. However, it is possible to find a confidence interval for $A/B$ using Fieller's theorem.
|
Propagation of large errors
For large error, the standard error of $A/B$ depends on the distributions of $A$ and $B$, not just on their standard errors. The distribution of $A/B$ is known as a ratio distribution, but which ratio
|
47,542
|
Propagation of large errors
|
The first problem with large errors is that the expected value of the multiplication or division of the uncertain values will not be the multiplication or the division of the expected values. So while it is true that $E[X+Y]=E[X]+E[Y]$ and $E[X-Y]=E[X]-E[Y]$, it would usually not be true to say $E[XY]=E[X]E[Y]$ or $E[X/Y]=E[X]/E[Y]$, though they will be close for small errors. But for large errors, that effect will disrupt your propagation of error calculations.
The second problem will be that the propagation of errors is asymmetric in multiplication and division, and that also becomes more important as the relative errors increase.
Suppose, for example, you had $A$ being 270, 540 or 810 and $B$ being 3, 6 or 9. Then $A/B$ could be 30, 45, 60, 90 (three ways), 135, 180, or 270. While 90 is the mode and the median, as well as being 540/6, the mean is 110, and 30 is much closer to 90 (or to 110) than 270 is.
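The enumeration can be checked directly (treating all nine $(A,B)$ combinations as equally likely):

```python
from statistics import mean, median, multimode

# All nine equally likely ratios from A in {270, 540, 810}, B in {3, 6, 9}
ratios = sorted(a / b for a in (270, 540, 810) for b in (3, 6, 9))
ratio_mean = mean(ratios)        # 110
ratio_median = median(ratios)    # 90
ratio_modes = multimode(ratios)  # [90]
```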
|
Propagation of large errors
|
The first problem with large errors is that the expected value of the multiplication or division of the uncertain values will not be the multiplication or the division of the expected values. So whil
|
Propagation of large errors
The first problem with large errors is that the expected value of the multiplication or division of the uncertain values will not be the multiplication or the division of the expected values. So while it is true that $E[X+Y]=E[X]+E[Y]$ and $E[X-Y]=E[X]-E[Y]$, it would usually not be true to say $E[XY]=E[X]E[Y]$ or $E[X/Y]=E[X]/E[Y]$, though they will be close for small errors. But for large errors, that effect will disrupt your propagation of error calculations.
The second problem will be that the propagation of errors is asymmetric in multiplication and division, and that also becomes more important as the relative errors increase.
Suppose, for example, you had $A$ being 270, 540 or 810 and $B$ being 3, 6 or 9. Then $A/B$ could be 30, 45, 60, 90 (three ways), 135, 180, or 270. While 90 is the mode and the median, as well as being 540/6, the mean is 110, and 30 is much closer to 90 (or to 110) than 270 is.
|
Propagation of large errors
The first problem with large errors is that the expected value of the multiplication or division of the uncertain values will not be the multiplication or the division of the expected values. So whil
|
47,543
|
Propagation of large errors
|
The formula for error propagation
$\sigma_f^2 = \sum_i \left(\frac{\partial f}{\partial x_i} \sigma_{x_i}\right)^2$
works exactly for normally distributed errors and linear functions $f(x_1,x_2,...)$
Since (most) functions can be linearly approximated, the above also works for small errors. For large errors, a symmetric distribution of $x$ can lead to a very asymmetric distribution of the error in $f$ (e.g. if $f(x)=x^{10}$, then $f(0)=0$, $f(1)=1$ and $f(2)=1024$, so the formula won't hold if $x=1$ and $\sigma_x=1$).
With large errors, you may be able to calculate the transformation of the error distribution analytically; otherwise you can perform a Monte Carlo simulation to estimate the distribution of the error in $f$.
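For the $f(x)=x^{10}$ example, a Monte Carlo sketch (sample size arbitrary) makes the failure of the linearised formula concrete:

```python
import random
import statistics

def f(x):
    return x ** 10

random.seed(3)
x0, sigma_x = 1.0, 1.0
linear_sigma_f = abs(10 * x0 ** 9) * sigma_x   # |df/dx| * sigma_x = 10
samples = [f(random.gauss(x0, sigma_x)) for _ in range(200_000)]
mc_sigma_f = statistics.stdev(samples)         # orders of magnitude larger
```

The linearised formula predicts a spread of 10; the simulated spread is enormously larger (and heavily right-skewed), because the tail of the input distribution is raised to the tenth power.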
|
Propagation of large errors
|
The formula for error propagation
$\sigma_f^2 = \Sigma (\frac{\delta f}{\delta x} \sigma_x)^2$
works exactly for normally distributed errors and linear functions $f(x_1,x_2,...)$
Since (most) functio
|
Propagation of large errors
The formula for error propagation
$\sigma_f^2 = \sum_i \left(\frac{\partial f}{\partial x_i} \sigma_{x_i}\right)^2$
works exactly for normally distributed errors and linear functions $f(x_1,x_2,...)$
Since (most) functions can be linearly approximated, the above also works for small errors. For large errors, a symmetric distribution of $x$ can lead to a very asymmetric distribution of the error in $f$ (e.g. if $f(x)=x^{10}$, then $f(0)=0$, $f(1)=1$ and $f(2)=1024$, so the formula won't hold if $x=1$ and $\sigma_x=1$).
With large errors, you may be able to calculate the transformation of the error distribution analytically; otherwise you can perform a Monte Carlo simulation to estimate the distribution of the error in $f$.
|
Propagation of large errors
The formula for error propagation
$\sigma_f^2 = \Sigma (\frac{\delta f}{\delta x} \sigma_x)^2$
works exactly for normally distributed errors and linear functions $f(x_1,x_2,...)$
Since (most) functio
|
47,544
|
Semantic distance between excerpts of text
|
Let's suppose we can calculate the distance from one noun to another in the following way: use WordNet (which I guess you know) and utilize a function (one exists, but you can build it yourself) that counts how many steps in the taxonomy of words you need to get from one word to another (for example, from cat to dog you might have 4, but from nail to music you might have 25). Then, using these numbers calculated among the nouns of the sentences, just invent a metric (for example, simply the average of the distances between the nouns, or the minimum distance between the nouns) that will help you carry out your task.
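A toy sketch of the idea (the miniature taxonomy below is invented; WordNet is vastly larger, and libraries such as NLTK already ship path-distance functions):

```python
from collections import deque

# Invented miniature is-a taxonomy: word -> parent
parent = {
    "cat": "feline", "dog": "canine",
    "feline": "carnivore", "canine": "carnivore",
    "carnivore": "mammal", "mammal": "animal", "animal": "entity",
    "nail": "fastener", "fastener": "artifact", "artifact": "entity",
    "music": "communication", "communication": "entity",
}

# Undirected adjacency over the taxonomy links
edges = {}
for child, par in parent.items():
    edges.setdefault(child, set()).add(par)
    edges.setdefault(par, set()).add(child)

def path_distance(a, b):
    """Number of taxonomy links on the shortest path between two words (BFS)."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return d
        for nxt in edges.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None

def sentence_distance(nouns_a, nouns_b):
    # One possible metric: the average pairwise distance between the nouns
    dists = [path_distance(x, y) for x in nouns_a for y in nouns_b]
    return sum(dists) / len(dists)
```

In this toy taxonomy, cat and dog are 4 links apart while nail and music are further apart, mirroring the intuition in the text.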
|
Semantic distance between excerpts of text
|
Lets suppose we can calculate the distance from one noun to another in the following way. Use the Worldnet (which I guess you know), and utilize a function that exists, but you can build it yourself,
|
Semantic distance between excerpts of text
Let's suppose we can calculate the distance from one noun to another in the following way: use WordNet (which I guess you know) and utilize a function (one exists, but you can build it yourself) that counts how many steps in the taxonomy of words you need to get from one word to another (for example, from cat to dog you might have 4, but from nail to music you might have 25). Then, using these numbers calculated among the nouns of the sentences, just invent a metric (for example, simply the average of the distances between the nouns, or the minimum distance between the nouns) that will help you carry out your task.
|
Semantic distance between excerpts of text
Lets suppose we can calculate the distance from one noun to another in the following way. Use the Worldnet (which I guess you know), and utilize a function that exists, but you can build it yourself,
|
47,545
|
Semantic distance between excerpts of text
|
It is far from obvious, and indeed is highly task-specific, when two sentences are similar enough to, say, group together in a cluster. The problem is not determining which of
I cleaned my truck up this morning.
Bananas are an excellent source of potassium.
is more similar to
Early today, I got up and washed my car.
It's determining which of these is more similar:
Early today, I got up.
Early today, I took a bath.
Yesterday I went to a car wash.
Today I'm going to look at new cars.
It rained on my car yesterday.
I do plenty of work around the house.
The kids got the car filthy today. Why doesn't my husband discipline them more?
Jane's car, freshly washed, was hit by three of the eggs.
etc, etc. etc.
One can make up task contexts in which any of the above, and many more, are the most similar. You want to think very carefully about your goals first, before you assume a particular general-purpose technology (WordNet, a particular unsupervised learner, whatever) will do what you want.
And it's a good idea to have someone not invested in the technology do a blind evaluation of it.
|
Semantic distance between excerpts of text
|
It is far from obvious, and indeed is highly task-specific, when two sentences are similar enough to, say, group together in a cluster. The problem is not determining which of
I cleaned my truck u
|
Semantic distance between excerpts of text
It is far from obvious, and indeed is highly task-specific, when two sentences are similar enough to, say, group together in a cluster. The problem is not determining which of
I cleaned my truck up this morning.
Bananas are an excellent source of potassium.
is more similar to
Early today, I got up and washed my car.
It's determining which of these is more similar:
Early today, I got up.
Early today, I took a bath.
Yesterday I went to a car wash.
Today I'm going to look at new cars.
It rained on my car yesterday.
I do plenty of work around the house.
The kids got the car filthy today. Why doesn't my husband discipline them more?
Jane's car, freshly washed, was hit by three of the eggs.
etc, etc. etc.
One can make up task contexts in which any of the above, and many more, are the most similar. You want to think very carefully about your goals first, before you assume a particular general-purpose technology (WordNet, a particular unsupervised learner, whatever) will do what you want.
And it's a good idea to have someone not invested in the technology do a blind evaluation of it.
|
Semantic distance between excerpts of text
It is far from obvious, and indeed is highly task-specific, when two sentences are similar enough to, say, group together in a cluster. The problem is not determining which of
I cleaned my truck u
|
47,546
|
Semantic distance between excerpts of text
|
Check out the work by Jones & Mewhort (2007). This more recent work may also be of interest, particularly their online tool.
|
Semantic distance between excerpts of text
|
Check out the work by Jones & Mewhort (2007). This more recent work may also be of interest, particularly their online tool.
|
Semantic distance between excerpts of text
Check out the work by Jones & Mewhort (2007). This more recent work may also be of interest, particularly their online tool.
|
Semantic distance between excerpts of text
Check out the work by Jones & Mewhort (2007). This more recent work may also be of interest, particularly their online tool.
|
47,547
|
Question about combining hazard ratios - Maybe Simpson's paradox?
|
Strictly, Simpson's paradox refers to a reversal in the direction of effect, which hasn't happened here as all the hazard ratios are above 1, so I'd refer to this by the more general term confounding. You can certainly have confounding in survival analysis. I agree it appears sensible to only present the heart and lung results separately.
|
Question about combining hazard ratios - Maybe Simpson's paradox?
|
Strictly, Simpson's paradox refers to a reversal in the direction of effect, which hasn't happened here as all the hazard ratios are above 1, so I'd refer to this by the more general term confounding.
|
Question about combining hazard ratios - Maybe Simpson's paradox?
Strictly, Simpson's paradox refers to a reversal in the direction of effect, which hasn't happened here as all the hazard ratios are above 1, so I'd refer to this by the more general term confounding. You can certainly have confounding in survival analysis. I agree it appears sensible to only present the heart and lung results separately.
|
Question about combining hazard ratios - Maybe Simpson's paradox?
Strictly, Simpson's paradox refers to a reversal in the direction of effect, which hasn't happened here as all the hazard ratios are above 1, so I'd refer to this by the more general term confounding.
|
47,548
|
Question about combining hazard ratios - Maybe Simpson's paradox?
|
Yes. It is certainly possible that this is due to something like Simpson's paradox. If the data looked like
$$\begin{array}{rrrrrr}
\textit{Organ}&\textit{Outcome}&A&B&C&D\\
\textrm{Lung}&\textrm{Bad}&371&2727&2374&418\\
\textrm{Lung}&\textrm{Good}&556&3199&2740&558\\
\textrm{Heart}&\textrm{Bad}&214&245&195&273\\
\textrm{Heart}&\textrm{Good}&8859&3828&4691&8752\\
\end{array}$$
then I think you would get something like your hazard ratios (if those are meant as ratios of the bad-outcome fractions). Many other patterns of numbers would too.
If you are reviewing the article, it seems reasonable to ask for the underlying numbers to be presented. If they look anything like mine, then it does seem a little strange to add Lung and Heart numbers without a good reason.
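With the hypothetical counts above, the confounding is easy to exhibit by comparing within-organ and pooled bad-outcome fractions (simple risk ratios stand in for the hazard ratios here):

```python
# Counts from the hypothetical table above: organ -> group -> (bad, good)
counts = {
    "Lung":  {"A": (371, 556),  "B": (2727, 3199), "C": (2374, 2740), "D": (418, 558)},
    "Heart": {"A": (214, 8859), "B": (245, 3828),  "C": (195, 4691),  "D": (273, 8752)},
}

def risk(bad, good):
    return bad / (bad + good)

# Group B versus group A, within each organ and after pooling the organs
lung_rr = risk(*counts["Lung"]["B"]) / risk(*counts["Lung"]["A"])
heart_rr = risk(*counts["Heart"]["B"]) / risk(*counts["Heart"]["A"])
pooled_risk = {g: risk(counts["Lung"][g][0] + counts["Heart"][g][0],
                       counts["Lung"][g][1] + counts["Heart"][g][1])
               for g in "ABCD"}
pooled_rr = pooled_risk["B"] / pooled_risk["A"]
```

Within each organ, B versus A is a modest effect, yet the pooled ratio is around 5, because B has a far higher share of the riskier lung cases than A does.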
|
Question about combining hazard ratios - Maybe Simpson's paradox?
|
Yes. It is certainly possible that this is due to something like Simpson's paradox. If the data looked like
$$\begin{array}{rrrrrr}
\textit{Organ}&\textit{Outcome}&A&B&C&D\\
\textrm{Lung}&\textrm
|
Question about combining hazard ratios - Maybe Simpson's paradox?
Yes. It is certainly possible that this is due to something like Simpson's paradox. If the data looked like
$$\begin{array}{rrrrrr}
\textit{Organ}&\textit{Outcome}&A&B&C&D\\
\textrm{Lung}&\textrm{Bad}&371&2727&2374&418\\
\textrm{Lung}&\textrm{Good}&556&3199&2740&558\\
\textrm{Heart}&\textrm{Bad}&214&245&195&273\\
\textrm{Heart}&\textrm{Good}&8859&3828&4691&8752\\
\end{array}$$
then I think you would get something like your hazard ratios (if those are meant as ratios of the bad-outcome fractions). Many other patterns of numbers would too.
If you are reviewing the article, it seems reasonable to ask for the underlying numbers to be presented. If they look anything like mine, then it does seem a little strange to add Lung and Heart numbers without a good reason.
|
Question about combining hazard ratios - Maybe Simpson's paradox?
Yes. It is certainly possible that this is due to something like Simpson's paradox. If the data looked like
$$\begin{array}{rrrrrr}
\textit{Organ}&\textit{Outcome}&A&B&C&D\\
\textrm{Lung}&\textrm
|
47,549
|
When to use Equal-Frequency-Histograms
|
This is not a proper or complete answer, but two observations from my personal experience:
An equal-frequency histogram will hide outliers (I've seen them in long, low bins).
The heights of the individual bins in an equal-frequency histogram seem more stable than in an equal-width histogram.
I use equal-frequency histograms mainly for exploratory analysis. They give me a better intuitive feel for the shape of the distribution than an equal-width histogram.
I am trying them now for an application where I am using a function of a histogram of the data as a distance metric for two very skewed distributions. An equal-width histogram would have almost all of the samples in one bin, whereas an equal-frequency histogram with the same number of bins will have many narrow bins in that area. Intuitively, if we consider the height of a bin as a variable, the equal-frequency histogram will better spread the available distribution information among the variables.
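A minimal sketch of equal-frequency binning (the data are invented; ties across a bin edge are ignored here, though they matter with discrete data):

```python
import random

def equal_frequency_edges(data, k):
    """Bin edges placing (nearly) the same number of points in each of k bins."""
    xs = sorted(data)
    n = len(xs)
    return [xs[0]] + [xs[i * n // k] for i in range(1, k)] + [xs[-1]]

random.seed(4)
data = [random.expovariate(1.0) for _ in range(1000)]   # a very skewed sample
k = 10
edges = equal_frequency_edges(data, k)
counts = [sum(edges[i] <= x < edges[i + 1] for x in data) for i in range(k - 1)]
counts.append(sum(edges[k - 1] <= x <= edges[k] for x in data))
```

The bins come out narrow near the mode and wide in the tail, which is exactly the behaviour described above.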
|
When to use Equal-Frequency-Histograms
|
This is not a proper or complete answer, but two observations from my personal experience:
An equal-frequency histogram will hide outliers (I've seen them in long, low bins).
The heights of the indi
|
When to use Equal-Frequency-Histograms
This is not a proper or complete answer, but two observations from my personal experience:
An equal-frequency histogram will hide outliers (I've seen them in long, low bins).
The heights of the individual bins in an equal-frequency histogram seem more stable than in an equal-width histogram.
I use equal-frequency histograms mainly for exploratory analysis. They give me a better intuitive feel for the shape of the distribution than an equal-width histogram.
I am trying them now for an application where I am using a function of a histogram of the data as a distance metric for two very skewed distributions. An equal-width histogram would have almost all of the samples in one bin, whereas an equal-frequency histogram with the same number of bins will have many narrow bins in that area. Intuitively, if we consider the height of a bin as a variable, the equal-frequency histogram will better spread the available distribution information among the variables.
|
When to use Equal-Frequency-Histograms
This is not a proper or complete answer, but two observations from my personal experience:
An equal-frequency histogram will hide outliers (I've seen them in long, low bins).
The heights of the indi
|
47,550
|
When to use Equal-Frequency-Histograms
|
Equi-depth histograms are a solution to the problem of quantization (mapping continuous values to discrete values).
For finding the best number of bins, I think it really depends on what you are trying to do with the histogram. In general, I think it would be best to ensure your error measure of choice is below some threshold (e.g. sum of squared errors < THRESH) and bin the values in that manner.
Alternatively, the number of bins can be passed in as a parameter (if you're concerned about the space consumption of the histogram).
|
When to use Equal-Frequency-Histograms
|
Equi-depth histograms are a solution to the problem of quantization (mapping continuous values to discrete values).
For finding the best number of bins, I think it really depends on what you are tryin
|
When to use Equal-Frequency-Histograms
Equi-depth histograms are a solution to the problem of quantization (mapping continuous values to discrete values).
For finding the best number of bins, I think it really depends on what you are trying to do with the histogram. In general, I think it would be best to ensure your error measure of choice is below some threshold (e.g. sum of squared errors < THRESH) and bin the values in that manner.
Alternatively, the number of bins can be passed in as a parameter (if you're concerned about the space consumption of the histogram).
|
When to use Equal-Frequency-Histograms
Equi-depth histograms are a solution to the problem of quantization (mapping continuous values to discrete values).
For finding the best number of bins, I think it really depends on what you are tryin
|
47,551
|
Constrained versus unconstrained formulation of SVM optimisation
|
It seems to me that at the solution of the first problem, the inequality constraint becomes an equality whenever $\xi_i > 0$, i.e. $1 - \xi_i = y_i(w^Tx_i + b)$, because we are minimising the $\xi_i$s and the smallest value that satisfies the constraints occurs at equality. So, as $\xi_i \geq 0$, $\xi_i = \max(0, 1 - y_i(w^Tx_i+b))$, which, on substituting, gives something rather similar to your second formulation.
Having checked the paper by Chapelle, it looks like the second formulation is missing a "1 -" in the second half of the max operation (see definition of L(.,.) following equation 2.8). In that case both formulations are identical, they are both equivalent representations of the primal optimisation problem (the dual formulation is in terms of the Lagrange multipliers $\alpha_i$). The advantages and disadvantages are therefore purely computational.
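The equivalence of the two formulations at the optimal slacks is easy to check numerically; the data, $w$, $b$ and $C$ below are arbitrary:

```python
C = 1.0
w, b = 0.8, -0.2
data = [(-2.0, -1), (-0.5, -1), (0.3, 1), (1.5, 1), (0.1, -1)]  # (x_i, y_i)
margins = [y * (w * x + b) for x, y in data]

def min_feasible_slack(m, step=1e-4):
    """Brute-force the smallest xi satisfying xi >= 0 and xi >= 1 - m."""
    xi = 0.0
    while xi < 1.0 - m:
        xi += step
    return xi

slacks = [min_feasible_slack(m) for m in margins]   # matches max(0, 1 - m)
constrained_obj = 0.5 * w * w + C * sum(slacks)
hinge_obj = 0.5 * w * w + C * sum(max(0.0, 1.0 - m) for m in margins)
```

For each point, the smallest feasible slack coincides with the hinge loss, so the constrained objective evaluated at its optimal slacks equals the unconstrained one.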
|
Constrained versus unconstrained formulation of SVM optimisation
|
It seems to me that at the solution of the first problem, the inequality constraint becomes an equality, i.e. $1 - \xi_i = y_i(w^Tx_i + b)$, because we are minimising the $\xi_i$s and the smallest val
|
Constrained versus unconstrained formulation of SVM optimisation
It seems to me that at the solution of the first problem, the inequality constraint becomes an equality, i.e. $1 - \xi_i = y_i(w^Tx_i + b)$, because we are minimising the $\xi_i$s and the smallest value that satisfies the constraint occurs at equality. So as $\xi_i \geq 0$, $\xi_i = max(0, 1 - y_i(w^Tx_i+b))$, which, on substitution, gives something rather similar to your second formulation.
Having checked the paper by Chapelle, it looks like the second formulation is missing a "1 -" in the second half of the max operation (see definition of L(.,.) following equation 2.8). In that case both formulations are identical, they are both equivalent representations of the primal optimisation problem (the dual formulation is in terms of the Lagrange multipliers $\alpha_i$). The advantages and disadvantages are therefore purely computational.
|
Constrained versus unconstrained formulation of SVM optimisation
It seems to me that at the solution of the first problem, the inequality constraint becomes an equality, i.e. $1 - \xi_i = y_i(w^Tx_i + b)$, because we are minimising the $\xi_i$s and the smallest val
|
47,552
|
Constrained versus unconstrained formulation of SVM optimisation
|
Please see the first page of https://davidrosenberg.github.io/mlcourse/Notes/svm-lecture-prep.pdf for a more formal answer, i.e. the two problems are "equivalent" in the sense that the minimizer and the minimum of the first problem are the minimizer and minimum of the second, and vice versa.
Replacing $g(x)$ in the doc with $1-y_i(w^T x_i +b) $ will answer your question.
To prove it in both directions:
The second problem in the doc -> the first problem in the doc
Suppose we have $(x^\star, \xi^\star)$ as the minimizer of the second problem in the doc. Then $\xi^\star=g(x^\star)$ (because otherwise the objective function can get a smaller value by setting $\xi$ smaller). Due to being a minimizer, we have "$\forall x, \forall \xi, f(x)+\xi \ge f(x^\star)+g(x^\star)$". By setting $\xi=g(x)$ as a special case, we have "$\forall x, f(x)+g(x) \ge f(x^\star)+g(x^\star)$", which shows it's also the minimum of the first problem. QED.
The first problem -> the second problem
Suppose we have $x^\star$ as the minimizer of the first problem in the doc.
Therefore $x^\star$ minimizes "$f(x)+\xi, s.t. \xi=g(x)$".
Therefore $x^\star$ minimizes "$f(x)+\xi, s.t. \xi \ge g(x)$" (because when this problem attains its minimum, $\xi$ must be equal to $g(x)$; which means it reduces to the above problem). This problem is exactly the second problem in the doc. QED.
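Both directions can also be illustrated on a one-dimensional toy problem (the quadratic $f$ and affine $g$ below are my own stand-ins, not from the notes): a brute-force search over the unconstrained objective $f(x) + \max(0, g(x))$ and over feasible $(x, \xi)$ pairs of the constrained problem lands on the same minimizer and minimum.

```python
def f(x):                 # toy objective, standing in for the f in the doc
    return x * x

def g(x):                 # toy constraint function, standing in for g
    return 1.0 - x

grid = [i / 1000.0 for i in range(-2000, 2001)]

# Unconstrained form: minimise f(x) + max(0, g(x)) over x alone
x_unc = min(grid, key=lambda x: f(x) + max(0.0, g(x)))
val_unc = f(x_unc) + max(0.0, g(x_unc))

# Constrained form: minimise f(x) + xi over feasible (x, xi),
# with xi >= g(x) and xi >= 0
best = None
for x in grid:
    lo = max(0.0, g(x))                  # smallest feasible slack at this x
    for xi in (lo, lo + 0.5, lo + 1.0):  # a few feasible slack values
        if best is None or f(x) + xi < best[0]:
            best = (f(x) + xi, x, xi)
val_con, x_con, xi_con = best

# Same minimizer and minimum, and the optimal slack equals g(x*)
assert x_unc == x_con and abs(val_unc - val_con) < 1e-12
```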
|
Constrained versus unconstrained formulation of SVM optimisation
|
Please see the first page of https://davidrosenberg.github.io/mlcourse/Notes/svm-lecture-prep.pdf for a more formal answer. aka, the 2 problems are "equivalent" in the sense that the minimizer and the
|
Constrained versus unconstrained formulation of SVM optimisation
Please see the first page of https://davidrosenberg.github.io/mlcourse/Notes/svm-lecture-prep.pdf for a more formal answer, i.e. the two problems are "equivalent" in the sense that the minimizer and the minimum of the first problem are the minimizer and minimum of the second, and vice versa.
Replacing $g(x)$ in the doc with $1-y_i(w^T x_i +b) $ will answer your question.
To prove it in both directions:
The second problem in the doc -> the first problem in the doc
Suppose we have $(x^\star, \xi^\star)$ as the minimizer of the second problem in the doc. Then $\xi^\star=g(x^\star)$ (because otherwise the objective function can get a smaller value by setting $\xi$ smaller). Due to being a minimizer, we have "$\forall x, \forall \xi, f(x)+\xi \ge f(x^\star)+g(x^\star)$". By setting $\xi=g(x)$ as a special case, we have "$\forall x, f(x)+g(x) \ge f(x^\star)+g(x^\star)$", which shows it's also the minimum of the first problem. QED.
The first problem -> the second problem
Suppose we have $x^\star$ as the minimizer of the first problem in the doc.
Therefore $x^\star$ minimizes "$f(x)+\xi, s.t. \xi=g(x)$".
Therefore $x^\star$ minimizes "$f(x)+\xi, s.t. \xi \ge g(x)$" (because when this problem attains its minimum, $\xi$ must be equal to $g(x)$; which means it reduces to the above problem). This problem is exactly the second problem in the doc. QED.
|
Constrained versus unconstrained formulation of SVM optimisation
Please see the first page of https://davidrosenberg.github.io/mlcourse/Notes/svm-lecture-prep.pdf for a more formal answer. aka, the 2 problems are "equivalent" in the sense that the minimizer and the
|
47,553
|
Compare rank orders of population members across different variables
|
I don't know how useful the following approach is, but one might conceptualize the situation slightly differently: imagine the different variables are raters who simply order the units from "best" to "worst". You expect that the rank order will be similar among "raters". This seems to be an application for Kendall's concordance coefficient $W$ of inter-rater agreement. In R
> rtr1 <- c(1, 6, 3, 2, 5, 4) # rank order from "rater" 1
> rtr2 <- c(1, 5, 6, 2, 4, 3) # "rater" 2
> rtr3 <- c(2, 3, 6, 5, 4, 1) # "rater" 3
> ratings <- cbind(rtr1, rtr2, rtr3)
> library(irr) # for kendall()
> kendall(ratings)
Kendall's coefficient of concordance W
Subjects = 6
Raters = 3
W = 0.568
Chisq(5) = 8.52
p-value = 0.130
Edit: This is equivalent to the Friedman-Test for dependent samples:
> rtrAll <- c(rtr1, rtr2, rtr3)
> nBl <- 3 # number of blocks / raters
> P <- 6 # number of dependent samples / units
> IV <- factor(rep(1:P, nBl)) # factor sample / unit
> blocks <- factor(rep(1:nBl, each=P)) # factor blocks / raters
> friedman.test(rtrAll, IV, blocks)
Friedman rank sum test
data: rtrAll, IV and blocks
Friedman chi-squared = 8.5238, df = 5, p-value = 0.1296
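The same $W$ and chi-square can be recomputed by hand. This is a plain-Python sketch of the textbook formula $W = 12\sum_i (R_i - \bar R)^2 / (m^2(n^3 - n))$, applied to the rank matrix from the R example above:

```python
def kendalls_w(ratings):
    """ratings: one list of ranks per rater, each ranking the same n subjects."""
    m = len(ratings)               # number of raters
    n = len(ratings[0])            # number of subjects
    totals = [sum(r[i] for r in ratings) for i in range(n)]   # rank sums R_i
    mean = sum(totals) / n
    s = sum((t - mean) ** 2 for t in totals)
    w = 12.0 * s / (m ** 2 * (n ** 3 - n))
    chisq = m * (n - 1) * w        # Friedman chi-square, df = n - 1
    return w, chisq

w, chisq = kendalls_w([[1, 6, 3, 2, 5, 4],
                       [1, 5, 6, 2, 4, 3],
                       [2, 3, 6, 5, 4, 1]])
print(round(w, 3), round(chisq, 4))  # → 0.568 8.5238, matching the irr output
```

Note this version assumes no tied ranks within a rater; `irr::kendall` applies a tie correction when ties are present.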
|
Compare rank orders of population members across different variables
|
I don't know how useful the following approach is, but one might conceptualize the situation slightly differently: imagine the different variables are raters who simply order the units from "best" to
|
Compare rank orders of population members across different variables
I don't know how useful the following approach is, but one might conceptualize the situation slightly differently: imagine the different variables are raters who simply order the units from "best" to "worst". You expect that the rank order will be similar among "raters". This seems to be an application for Kendall's concordance coefficient $W$ of inter-rater agreement. In R
> rtr1 <- c(1, 6, 3, 2, 5, 4) # rank order from "rater" 1
> rtr2 <- c(1, 5, 6, 2, 4, 3) # "rater" 2
> rtr3 <- c(2, 3, 6, 5, 4, 1) # "rater" 3
> ratings <- cbind(rtr1, rtr2, rtr3)
> library(irr) # for kendall()
> kendall(ratings)
Kendall's coefficient of concordance W
Subjects = 6
Raters = 3
W = 0.568
Chisq(5) = 8.52
p-value = 0.130
Edit: This is equivalent to the Friedman-Test for dependent samples:
> rtrAll <- c(rtr1, rtr2, rtr3)
> nBl <- 3 # number of blocks / raters
> P <- 6 # number of dependent samples / units
> IV <- factor(rep(1:P, nBl)) # factor sample / unit
> blocks <- factor(rep(1:nBl, each=P)) # factor blocks / raters
> friedman.test(rtrAll, IV, blocks)
Friedman rank sum test
data: rtrAll, IV and blocks
Friedman chi-squared = 8.5238, df = 5, p-value = 0.1296
|
Compare rank orders of population members across different variables
I don't know how useful the following approach is, but one might conceptualize the situation slightly differently: imagine the different variables are raters who simply order the units from "best" to
|
47,554
|
Optimal parameter $\alpha$ for exponential smoothing using least squares
|
Minimize the sum of squared one-step forecast errors. If $\hat{Y}_t$ is the prediction of $Y_t$ given $Y_1,\dots,Y_{t-1}$, then $e_t=Y_t-\hat{Y}_t$ is the one-step forecast error. So minimize $e_2^2+\cdots+e_n^2$.
You can also use maximum likelihood estimation as discussed in my Springer book.
If you're just using simple exponential smoothing, and are happy to assume normal errors with constant variance, then an ARIMA(0,1,1) model is equivalent.
When you use a state space representation (such as in the innovations state space form, or by writing the ARIMA model in state space form), then handling missing values is easy. For example, the R function arima() will handle missing values without complaint.
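A minimal sketch of that least-squares criterion in plain Python (grid search over $\alpha$; the toy series is invented, and the level is initialised at the first observation, which is one common convention):

```python
def ses_sse(alpha, y):
    """Sum of squared one-step forecast errors for simple exponential
    smoothing, with the level initialised at the first observation."""
    level = y[0]
    sse = 0.0
    for obs in y[1:]:
        err = obs - level           # e_t = Y_t - Y_t_hat
        sse += err * err
        level += alpha * err        # level update: l_t = l_{t-1} + alpha * e_t
    return sse

y = [10.0, 12.0, 11.0, 13.0, 12.5, 14.0, 13.5, 15.0]
grid = [i / 100.0 for i in range(1, 100)]
best_alpha = min(grid, key=lambda a: ses_sse(a, y))
print(best_alpha, round(ses_sse(best_alpha, y), 3))
```

In practice an optimiser would replace the grid, and the initial level would be estimated jointly with $\alpha$ (as in the maximum likelihood approach mentioned above).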
|
Optimal parameter $\alpha$ for exponential smoothing using least squares
|
Minimize the sum of squared one-step forecast errors. If $\hat{Y}_t$ is the prediction of $Y_t$ given $Y_1,\dots,Y_{t-1}$, then $e_t=Y_t-\hat{Y}_t$ is the one-step forecast error. So minimize $e_2^2+\
|
Optimal parameter $\alpha$ for exponential smoothing using least squares
Minimize the sum of squared one-step forecast errors. If $\hat{Y}_t$ is the prediction of $Y_t$ given $Y_1,\dots,Y_{t-1}$, then $e_t=Y_t-\hat{Y}_t$ is the one-step forecast error. So minimize $e_2^2+\cdots+e_n^2$.
You can also use maximum likelihood estimation as discussed in my Springer book.
If you're just using simple exponential smoothing, and are happy to assume normal errors with constant variance, then an ARIMA(0,1,1) model is equivalent.
When you use a state space representation (such as in the innovations state space form, or by writing the ARIMA model in state space form), then handling missing values is easy. For example, the R function arima() will handle missing values without complaint.
|
Optimal parameter $\alpha$ for exponential smoothing using least squares
Minimize the sum of squared one-step forecast errors. If $\hat{Y}_t$ is the prediction of $Y_t$ given $Y_1,\dots,Y_{t-1}$, then $e_t=Y_t-\hat{Y}_t$ is the one-step forecast error. So minimize $e_2^2+\
|
47,555
|
Random permutation of a vector with a fixed expected sample correlation to the original?
|
The answers are no, not for all $r$ in general; yes, for a restricted range of $r$ that is readily computed; but there remain a wide set of choices to be made.
I will use a standard notation where the action of a permutation $\sigma$ is written $ X^\sigma_i = X_{\sigma (i)}$ and the set of all permutations of the $n$ coordinates is $S_n$.
As you note in the question, upon standardizing $X$ it suffices to investigate $\mathbb{E}[{X^\sigma}'X]$. Because $X'X = 1$, a correlation of $r = 1$ is certainly attainable by means of the identity permutation $\epsilon$ (where $\epsilon(i) = i$ for all $i$). However, for any given $X$ there is a minimum attainable correlation: it is realized by associating the $k^\text{th}$ smallest component of $X^\sigma$ with the $k^\text{th}$ largest component of $X$. For example, with $X = (-2,1,1)/\sqrt{6}$ the smallest possible correlation of $-1/2$ is achieved by $X^\sigma = (1,1,-2)/\sqrt{6}$. Let's call this minimum correlation $r_{min}(X)$ and let $\sigma_{min}(X)$ be any permutation achieving this minimum value.
Every possible expected correlation of value between $r_{min}(X)$ and $1$ is attainable by means of a distribution supported on just $\sigma_{min}$ and $\epsilon$. Specifically, set
$$p = \frac{r - r_{min}}{1 - r_{min}}$$
and generate the permutation $\sigma_{min}$ with probability $1 - p$ and the permutation $\epsilon$ with probability $p$. (If $r_{min} = 1$ this formula is undefined but there's nothing to do anyway.)
I suspect you would like a more "interesting" distribution of permutations than this. To create this you will need to add more conditions. Here's one way to frame your problem: to every permutation $\sigma$ corresponds the number $f(\sigma) = {X^\sigma}'X$. An arbitrary probability distribution over the permutations assigns a non-negative value $p(\sigma)$ to each permutation according to the axioms of probability. The expectation of $f$, which is the expected correlation between $X$ and $X^\sigma$, of course equals
$$\sum_{\sigma \in S_n}{p(\sigma)f(\sigma)}.$$
Given a desired expected correlation $r$, you therefore have freedom to choose the $n!$ values $p(\sigma)$ subject to the conditions
$$\sum_{\sigma \in S_n}{p(\sigma)} = 1,$$
$$\sum_{\sigma \in S_n}{p(\sigma)f(\sigma)} = r,$$
$$p(\sigma) \ge 0 \text{ for all } \sigma \in S_n.$$
I have merely demonstrated that this linear program is feasible if and only if $r_{min} \le r \le 1$. You are free to choose among the solutions (a convex set of distributions) in any way you like. For instance, you might prefer to use as uniform a choice of permutations as possible, in which case you might seek to minimize the variance of the $p(\sigma)$ (thought of just as a set of numbers) subject to the preceding conditions. That's a quadratic program, for which there are many good solution methods and much available software. Solving this (exactly) will become problematic once $n$ exceeds about $8$ or so, because it involves $n!$ variables and you'll just overwhelm the software. In such cases you might want to restrict the distributions further, such as requiring them to be only cyclic and anti-cyclic permutations of the sorted coordinates (just $2n$ variables). Another possibility is to choose a bunch of permutations randomly--making sure to include the order-reversing permutation among them so the minimum correlation can be included--and then finding an approximately uniform distribution among them.
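The two-point mixture construction is easy to verify numerically. In this plain-Python sketch the vector and the target $r$ are arbitrary choices of mine; it standardizes $X$, builds $\sigma_{min}$ by pairing smallest against largest, and checks that the mixture weight $p$ reproduces the desired expected correlation:

```python
def standardize(x):
    n = len(x)
    m = sum(v for v in x) / n
    d = [v - m for v in x]
    norm = sum(v * v for v in d) ** 0.5
    return [v / norm for v in d]

def corr(a, b):
    return sum(u * v for u, v in zip(a, b))  # valid once both are standardized

x = standardize([3.0, 1.0, 4.0, 1.0, 5.0, 9.0])

# sigma_min: put the k-th largest value where the k-th smallest of x sits
order = sorted(range(len(x)), key=lambda i: x[i])
asc = sorted(x)
x_min = [0.0] * len(x)
for rank, i in enumerate(order):
    x_min[i] = asc[len(x) - 1 - rank]
r_min = corr(x_min, x)

r = 0.3                                   # desired expected correlation
# Feasible only when r_min <= r <= 1, as argued above
p = (r - r_min) / (1.0 - r_min)           # weight on the identity permutation
expected = p * 1.0 + (1.0 - p) * r_min    # mixture's expected correlation
print(r_min, p, expected)
```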
|
Random permutation of a vector with a fixed expected sample correlation to the original?
|
The answers are no, not for all $r$ in general; yes, for a restricted range of $r$ that is readily computed; but there remain a wide set of choices to be made.
I will use a standard notation where the
|
Random permutation of a vector with a fixed expected sample correlation to the original?
The answers are no, not for all $r$ in general; yes, for a restricted range of $r$ that is readily computed; but there remain a wide set of choices to be made.
I will use a standard notation where the action of a permutation $\sigma$ is written $ X^\sigma_i = X_{\sigma (i)}$ and the set of all permutations of the $n$ coordinates is $S_n$.
As you note in the question, upon standardizing $X$ it suffices to investigate $\mathbb{E}[{X^\sigma}'X]$. Because $X'X = 1$, a correlation of $r = 1$ is certainly attainable by means of the identity permutation $\epsilon$ (where $\epsilon(i) = i$ for all $i$). However, for any given $X$ there is a minimum attainable correlation: it is realized by associating the $k^\text{th}$ smallest component of $X^\sigma$ with the $k^\text{th}$ largest component of $X$. For example, with $X = (-2,1,1)/\sqrt{6}$ the smallest possible correlation of $-1/2$ is achieved by $X^\sigma = (1,1,-2)/\sqrt{6}$. Let's call this minimum correlation $r_{min}(X)$ and let $\sigma_{min}(X)$ be any permutation achieving this minimum value.
Every possible expected correlation of value between $r_{min}(X)$ and $1$ is attainable by means of a distribution supported on just $\sigma_{min}$ and $\epsilon$. Specifically, set
$$p = \frac{r - r_{min}}{1 - r_{min}}$$
and generate the permutation $\sigma_{min}$ with probability $1 - p$ and the permutation $\epsilon$ with probability $p$. (If $r_{min} = 1$ this formula is undefined but there's nothing to do anyway.)
I suspect you would like a more "interesting" distribution of permutations than this. To create this you will need to add more conditions. Here's one way to frame your problem: to every permutation $\sigma$ corresponds the number $f(\sigma) = {X^\sigma}'X$. An arbitrary probability distribution over the permutations assigns a non-negative value $p(\sigma)$ to each permutation according to the axioms of probability. The expectation of $f$, which is the expected correlation between $X$ and $X^\sigma$, of course equals
$$\sum_{\sigma \in S_n}{p(\sigma)f(\sigma)}.$$
Given a desired expected correlation $r$, you therefore have freedom to choose the $n!$ values $p(\sigma)$ subject to the conditions
$$\sum_{\sigma \in S_n}{p(\sigma)} = 1,$$
$$\sum_{\sigma \in S_n}{p(\sigma)f(\sigma)} = r,$$
$$p(\sigma) \ge 0 \text{ for all } \sigma \in S_n.$$
I have merely demonstrated that this linear program is feasible if and only if $r_{min} \le r \le 1$. You are free to choose among the solutions (a convex set of distributions) in any way you like. For instance, you might prefer to use as uniform a choice of permutations as possible, in which case you might seek to minimize the variance of the $p(\sigma)$ (thought of just as a set of numbers) subject to the preceding conditions. That's a quadratic program, for which there are many good solution methods and much available software. Solving this (exactly) will become problematic once $n$ exceeds about $8$ or so, because it involves $n!$ variables and you'll just overwhelm the software. In such cases you might want to restrict the distributions further, such as requiring them to be only cyclic and anti-cyclic permutations of the sorted coordinates (just $2n$ variables). Another possibility is to choose a bunch of permutations randomly--making sure to include the order-reversing permutation among them so the minimum correlation can be included--and then finding an approximately uniform distribution among them.
|
Random permutation of a vector with a fixed expected sample correlation to the original?
The answers are no, not for all $r$ in general; yes, for a restricted range of $r$ that is readily computed; but there remain a wide set of choices to be made.
I will use a standard notation where the
|
47,556
|
What is a meaning of "p-value F" from Friedman test?
|
It seems the output is from the agricolae package using the method friedman. The relevant lines for computing the two statistics in that function are:
T1.aj <- (m[2] - 1) * (t(s) %*% s - m[1] * C1)/(A1 - C1)
T2.aj <- (m[1] - 1) * T1.aj/(m[1] * (m[2] - 1) - T1.aj)
Comparing this with the formula in chl's answer, you'll notice that T2.aj ("F value") corresponds to $F_{obs}$ and T1.aj ("Value") to $F_r$.
|
What is a meaning of "p-value F" from Friedman test?
|
It seems the output is from the agricolae package using the method friedman. The relevant lines for computing the two statistics in that function are:
T1.aj <- (m[2] - 1) * (t(s) %*% s - m[1] * C1)/(
|
What is a meaning of "p-value F" from Friedman test?
It seems the output is from the agricolae package using the method friedman. The relevant lines for computing the two statistics in that function are:
T1.aj <- (m[2] - 1) * (t(s) %*% s - m[1] * C1)/(A1 - C1)
T2.aj <- (m[1] - 1) * T1.aj/(m[1] * (m[2] - 1) - T1.aj)
Comparing this with the formula in chl's answer, you'll notice that T2.aj ("F value") corresponds to $F_{obs}$ and T1.aj ("Value") to $F_r$.
|
What is a meaning of "p-value F" from Friedman test?
It seems the output is from the agricolae package using the method friedman. The relevant lines for computing the two statistics in that function are:
T1.aj <- (m[2] - 1) * (t(s) %*% s - m[1] * C1)/(
|
47,557
|
What is a meaning of "p-value F" from Friedman test?
|
I generally used friedman.test() which doesn't return any F statistic. If you consider that you have $b$ blocks, for which you assigned ranks to observations belonging to each of them, and that you sum these ranks for each of your $a$ groups (denote their sums $R_i$), then the Friedman statistic is defined as
$$
F_r=\frac{12}{ba(a+1)}\sum_{i=1}^aR_i^2-3b(a+1)
$$
and follows a $\chi^2(a-1)$, for $a$ and $b$ sufficiently large. Quoting Zar (Biostatistical Analysis, 4th ed., pp. 263-264), this approximation is conservative (hence, the test has low power) and we can use an F-test, with
$$
F_{\text{obs}}=\frac{(b-1)F_r}{b(a-1)-F_r}
$$
which is to be compared to an F distribution with $a-1$ and $(a-1)(b-1)$ degrees of freedom.
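Both statistics are straightforward to compute directly from the rank sums. A plain-Python sketch of the two formulas above, on an invented rank table with $b=3$ blocks and $a=4$ groups:

```python
def friedman_stats(rank_blocks):
    """rank_blocks: one list of within-block ranks per block (b lists of a ranks)."""
    b = len(rank_blocks)                    # blocks
    a = len(rank_blocks[0])                 # groups
    R = [sum(blk[i] for blk in rank_blocks) for i in range(a)]   # rank sums R_i
    f_r = 12.0 * sum(r * r for r in R) / (b * a * (a + 1)) - 3 * b * (a + 1)
    # Zar's F approximation: compare to F with a-1 and (a-1)(b-1) df
    f_obs = (b - 1) * f_r / (b * (a - 1) - f_r)
    return f_r, f_obs

f_r, f_obs = friedman_stats([[1, 2, 3, 4],
                             [1, 3, 2, 4],
                             [2, 1, 3, 4]])
print(f_r, f_obs)  # → 7.0 7.0 for these made-up ranks
```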
|
What is a meaning of "p-value F" from Friedman test?
|
I generally used friedman.test() which doesn't return any F statistic. If you consider that you have $b$ blocks, for which you assigned ranks to observations belonging to each of them, and that you su
|
What is a meaning of "p-value F" from Friedman test?
I generally used friedman.test() which doesn't return any F statistic. If you consider that you have $b$ blocks, for which you assigned ranks to observations belonging to each of them, and that you sum these ranks for each of your $a$ groups (denote their sums $R_i$), then the Friedman statistic is defined as
$$
F_r=\frac{12}{ba(a+1)}\sum_{i=1}^aR_i^2-3b(a+1)
$$
and follows a $\chi^2(a-1)$, for $a$ and $b$ sufficiently large. Quoting Zar (Biostatistical Analysis, 4th ed., pp. 263-264), this approximation is conservative (hence, the test has low power) and we can use an F-test, with
$$
F_{\text{obs}}=\frac{(b-1)F_r}{b(a-1)-F_r}
$$
which is to be compared to an F distribution with $a-1$ and $(a-1)(b-1)$ degrees of freedom.
|
What is a meaning of "p-value F" from Friedman test?
I generally used friedman.test() which doesn't return any F statistic. If you consider that you have $b$ blocks, for which you assigned ranks to observations belonging to each of them, and that you su
|
47,558
|
What is a meaning of "p-value F" from Friedman test?
|
Probably $p_F$ refers to the F-statistic developed by Iman and Davenport? They showed that Friedman’s $\chi^2$ is undesirably conservative and derived a "better" statistic
$F_F=\frac{(N-1)\chi^2_F}{N(k-1)-\chi^2_F}$
which is distributed according to the F-distribution with $k-1$ and $(k-1)(N-1)$ degrees of freedom.
References:
Demsar, J. (2006). Statistical Comparisons of Classifiers over Multiple Data Sets. Journal of Machine Learning Research, 7, p. 11.
Iman, R. L. and Davenport, J. M. (1980). Approximations of the critical region of the Friedman statistic. Communications in Statistics, pages 571–595.
|
What is a meaning of "p-value F" from Friedman test?
|
Probably $p_F$ refers to the F-statistic developed by Iman and Davenport? They showed that Friedman’s $\chi^2$ is undesirably conservative and derived a "better" statistic
$F_F=\frac{(N-1)\chi^2_F}{N
|
What is a meaning of "p-value F" from Friedman test?
Probably $p_F$ refers to the F-statistic developed by Iman and Davenport? They showed that Friedman’s $\chi^2$ is undesirably conservative and derived a "better" statistic
$F_F=\frac{(N-1)\chi^2_F}{N(k-1)-\chi^2_F}$
which is distributed according to the F-distribution with $k-1$ and $(k-1)(N-1)$ degrees of freedom.
References:
Demsar, J. (2006). Statistical Comparisons of Classifiers over Multiple Data Sets. Journal of Machine Learning Research, 7, p. 11.
Iman, R. L. and Davenport, J. M. (1980). Approximations of the critical region of the Friedman statistic. Communications in Statistics, pages 571–595.
|
What is a meaning of "p-value F" from Friedman test?
Probably $p_F$ refers to the F-statistic developed by Iman and Davenport? They showed that Friedman’s $\chi^2$ is undesirably conservative and derived a "better" statistic
$F_F=\frac{(N-1)\chi^2_F}{N
|
47,559
|
Significance of the slope of a straight line fit
|
No; F tests are based on the assumption that the lowest sum of squared residuals is optimal. That does not hold for robust regression, where the fitting criterion is different.
For instance, one may effectively view robust regression as least squares on data stripped of outliers; using $r^2$ on all the data in that case adds an unwarranted penalty for the large residuals of the outliers.
|
Significance of the slope of a straight line fit
|
No, F tests are based on the assumption that lowest sum of residual squares is optimal. It does not hold in case of robust regression, where the criterion is different.
For instance, effectively one m
|
Significance of the slope of a straight line fit
No; F tests are based on the assumption that the lowest sum of squared residuals is optimal. That does not hold for robust regression, where the fitting criterion is different.
For instance, one may effectively view robust regression as least squares on data stripped of outliers; using $r^2$ on all the data in that case adds an unwarranted penalty for the large residuals of the outliers.
|
Significance of the slope of a straight line fit
No, F tests are based on the assumption that lowest sum of residual squares is optimal. It does not hold in case of robust regression, where the criterion is different.
For instance, effectively one m
|
47,560
|
Significance of the slope of a straight line fit
|
No need to reinvent the wheel. There is an alternative, robust, R^2 measure with very good statistical properties:
A robust coefficient of determination for regression, O. Renaud
Edit:
*Is there any reason why this would NOT be a valid approach?* For one, this does not make your method any more robust. There is a large literature on this issue, and fortunately, good tools have been designed to address these points.
|
Significance of the slope of a straight line fit
|
No need to reinvent the wheel. There is an alternative, robust, R^2 measure with very good statistical properties:
A robust coefficient of determination for regression, O Renauda
Edit:
*Is there any r
|
Significance of the slope of a straight line fit
No need to reinvent the wheel. There is an alternative, robust, R^2 measure with very good statistical properties:
A robust coefficient of determination for regression, O. Renaud
Edit:
*Is there any reason why this would NOT be a valid approach?* For one, this does not make your method any more robust. There is a large literature on this issue, and fortunately, good tools have been designed to address these points.
|
Significance of the slope of a straight line fit
No need to reinvent the wheel. There is an alternative, robust, R^2 measure with very good statistical properties:
A robust coefficient of determination for regression, O Renauda
Edit:
*Is there any r
|
47,561
|
Significance of the slope of a straight line fit
|
I would simply use the standard regression output to evaluate the significance of the slope coefficient. By that I mean looking at the coefficient itself, its standard error, t stat (# of standard errors = coefficient/standard error), p value, and confidence interval. The p value directly addresses the statistical significance of the slope or coefficient you have in mind.
R Square of the model tells how well the model explains the dependent variable, or how well the model fits the dependent variable.
The p value of each coefficient tells you how statistically significant those coefficients are.
Very often you can have a model with a high R Square, but that includes one variable with a coefficient that is not statistically significant (its p value is too high). In such a case, it suggests your model would be nearly as good if you took that variable out. By the way, you should really focus on the Adjusted R Square instead of R Square. The Adjusted R Square correctly penalizes the model for having more variables and potentially over-fitting the data with independent variables that are not so relevant.
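The adjusted R Square penalty mentioned here follows a standard formula; a quick sketch (the $R^2$ values and sample sizes below are made up for illustration):

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 for n observations and p predictors
    (p counts the slopes, excluding the intercept)."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# Adding a barely-useful third predictor nudges R^2 up from 0.800 to 0.805
# but *lowers* the adjusted R^2 -- the over-fitting penalty at work.
print(adjusted_r2(0.800, 30, 2), adjusted_r2(0.805, 30, 3))
```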
|
Significance of the slope of a straight line fit
|
I would simply use the standard regression output to evaluate the significance of the slope coefficient. I mean by that looking at the coefficient itself, its standard error, t stat (# of standard er
|
Significance of the slope of a straight line fit
I would simply use the standard regression output to evaluate the significance of the slope coefficient. By that I mean looking at the coefficient itself, its standard error, t stat (# of standard errors = coefficient/standard error), p value, and confidence interval. The p value directly addresses the statistical significance of the slope or coefficient you have in mind.
R Square of the model tells how well the model explains the dependent variable, or how well the model fits the dependent variable.
The p value of each coefficient tells you how statistically significant those coefficients are.
Very often you can have a model with a high R Square, but that includes one variable with a coefficient that is not statistically significant (its p value is too high). In such a case, it suggests your model would be nearly as good if you took that variable out. By the way, you should really focus on the Adjusted R Square instead of R Square. The Adjusted R Square correctly penalizes the model for having more variables and potentially over-fitting the data with independent variables that are not so relevant.
|
Significance of the slope of a straight line fit
I would simply use the standard regression output to evaluate the significance of the slope coefficient. I mean by that looking at the coefficient itself, its standard error, t stat (# of standard er
|
47,562
|
Significance of the slope of a straight line fit
|
It should be possible to use a permutation test to test the significance of the slope.
Under the null, the slope is zero.
Under the assumptions of the model and the null together, there's therefore no association between y and x.
Hence the y's can be shuffled relative to the x to obtain the permutation distribution of the test statistic.
The p-value can be determined by finding the proportion of values at least as extreme as the observed statistic in the null distribution.
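The steps above can be sketched in a few lines of plain Python (the simulated data and seed are my own choices, not part of the procedure):

```python
import random

def slope(x, y):
    """OLS slope: Sxy / Sxx."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def perm_test_slope(x, y, n_perm=999, seed=1):
    rng = random.Random(seed)
    observed = slope(x, y)
    ys = list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(ys)                    # break any x-y association
        if abs(slope(x, ys)) >= abs(observed):
            hits += 1
    # The +1 counts the observed arrangement itself among the permutations
    return (hits + 1) / (n_perm + 1)

x = [float(i) for i in range(20)]
y = [2.0 * xi + ((-1) ** i) * 0.5 for i, xi in enumerate(x)]   # strong trend
print(perm_test_slope(x, y))  # small p-value: the slope is significant
```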
|
Significance of the slope of a straight line fit
|
It should be possible to use a permutation test to test the significance of the slope.
Under the null, the slope is zero.
Under the assumptions of the model and the null together, there's therefore n
|
Significance of the slope of a straight line fit
It should be possible to use a permutation test to test the significance of the slope.
Under the null, the slope is zero.
Under the assumptions of the model and the null together, there's therefore no association between y and x.
Hence the y's can be shuffled relative to the x to obtain the permutation distribution of the test statistic.
The p-value can be determined by finding the proportion of values at least as extreme as the observed statistic in the null distribution.
|
Significance of the slope of a straight line fit
It should be possible to use a permutation test to test the significance of the slope.
Under the null, the slope is zero.
Under the assumptions of the model and the null together, there's therefore n
|
47,563
|
Visualization of a multivariate function
|
Given that you are at the initial, exploratory stages of the analysis I would start simple. Consider sampling your inputs using a Latin Hypercube strategy. Then, a tornado chart can be used to get a quick assessment of the multiple, one-way sensitivities f() has to the various input variables. Here is an example chart (from here)
This chart is not that interesting, but an interpretation would be "NPV is most sensitive to Shipments, all other things being equal. But, the sensitivity is mostly on the upside, which is good. The Escalation variable induces sensitivity into NPV, but what looks to be skewed negatively a bit...".
You could do something similar for Mean(f) on the X-axis as well as Var(f)
Given what you find from some first glance visualizations like this, you could then slice and dice more and focus on specific variables or relationships between variables. Maybe you can revisit this thread in coming months and post the visualizations you found useful :)
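A small sketch of that workflow in plain Python (the hand-rolled Latin Hypercube sampler and the three-input toy function `f` are my own stand-ins for your real model):

```python
import random

def latin_hypercube(n, dims, seed=0):
    """n stratified samples in [0, 1)^dims: one point per stratum per dimension."""
    rng = random.Random(seed)
    cols = []
    for _ in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)
        cols.append([(s + rng.random()) / n for s in strata])
    return [tuple(col[i] for col in cols) for i in range(n)]

def f(a, b, c):            # toy model: 'a' dominates, then 'b', then 'c'
    return 10.0 * a + 3.0 * b + 0.5 * c

samples = latin_hypercube(200, 3)

# One-way sensitivity: swing of f when one input sweeps low -> high
# with the others held at their midpoint (the tornado-chart bar lengths).
swings = []
for d in range(3):
    base = [0.5, 0.5, 0.5]
    lo, hi = list(base), list(base)
    lo[d] = min(s[d] for s in samples)
    hi[d] = max(s[d] for s in samples)
    swings.append(abs(f(*hi) - f(*lo)))
print(swings)  # sorted descending, these order the tornado chart
```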
|
Visualization of a multivariate function
|
Given that you are at the initial, exploratory stages of the analysis I would start simple. Consider sampling your inputs using a Latin Hypercube strategy. Then, a tornado chart can be used to get a q
|
Visualization of a multivariate function
Given that you are at the initial, exploratory stages of the analysis I would start simple. Consider sampling your inputs using a Latin Hypercube strategy. Then, a tornado chart can be used to get a quick assessment of the multiple, one-way sensitivities f() has to the various input variables. Here is an example chart (from here)
This chart is not that interesting, but an interpretation would be "NPV is most sensitive to Shipments, all other things being equal. But, the sensitivity is mostly on the upside, which is good. The Escalation variable induces sensitivity into NPV, but what looks to be skewed negatively a bit...".
You could do something similar for Mean(f) on the X-axis as well as Var(f)
Given what you find from some first glance visualizations like this, you could then slice and dice more and focus on specific variables or relationships between variables. Maybe you can revisit this thread in coming months and post the visualizations you found useful :)
|
Visualization of a multivariate function
Given that you are at the initial, exploratory stages of the analysis I would start simple. Consider sampling your inputs using a Latin Hypercube strategy. Then, a tornado chart can be used to get a q
|
47,564
|
Visualization of a multivariate function
Just a thought, although I've never tried it.
you could obtain a large number of values from the function across different parameter values
take a tour of the resulting data in ggobi (check out Mat Kelcey's video)
Visualization of a multivariate function
You could apply some sort of dimensionality reduction technique like principal components and plot the value of the function as you vary the first, second, third etc. principal components, holding all others fixed. This would show you how the function varies in the directions of the maximal variance of the inputs.
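That idea can be sketched in a few lines of Python/NumPy (the function f, the input covariance, and the sweep range are all made-up stand-ins): sample the input space, take the leading eigenvector of the sample covariance, then evaluate the function along that direction with everything else held at the center.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-input function standing in for the real one.
def f(x):
    return x[0] ** 2 + 0.5 * x[1] - 0.1 * x[2] * x[3]

# Sample the input space (correlated Gaussian inputs, assumed for illustration).
cov = np.array([[1.0, 0.8, 0.0, 0.0],
                [0.8, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.2],
                [0.0, 0.0, 0.2, 1.0]])
X = rng.multivariate_normal(np.zeros(4), cov, size=2000)

# Principal directions = eigenvectors of the sample covariance.
center = X.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov(X.T))
pc1 = evecs[:, -1]            # direction of maximal input variance

# Evaluate f along the first principal component, all else held at the center.
ts = np.linspace(-3, 3, 61)
curve = [f(center + t * pc1) for t in ts]
print(curve[0], curve[-1])
```

Plotting `curve` against `ts` (and repeating with `evecs[:, -2]`, etc.) gives one 1-D slice per principal direction.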
How to compute efficiency?
I think the standard solution goes as follows. I'll just do the scalar case, the multi parameter case is similar. Your objective function is $g_N(p,X_1,\dots,X_N)$
where $p$ is the parameter you want to estimate and $X_1,\dots,X_N$ are the observed random variables. For notational simplicity I will just write the objective function as $g(p)$ from now on.
We need some assumptions. Firstly I'll assume that you have already shown that the maximiser of $g$ is a consistent estimator (this actually tends to be the hardest part!). So, if the `true' value of the parameter is $p_0$ and the estimator is
$$ \hat{p} = \arg\max_{p} g(p) $$
then we have that $\hat{p} \rightarrow p_0$ almost surely as $N \rightarrow \infty$. Our second assumption is that $g$ is twice differentiable in a neighbourhood about $p_0$ (you can sometimes get away without this assumption, but the solution becomes more problem dependent). In view of strong consistency we can and will assume that $\hat{p}$ is inside this neighbourhood.
Denote by $g'$ and $g''$ the first and second derivatives of $g$ with respect to $p$. Then
$$ g'(p_0) - g'(\hat{p}) = (p_0 - \hat{p})g''(\bar{p})$$
where $\bar{p}$ lies between $\hat{p}$ and $p_0$. Now because $\hat{p}$ maximises $g$ we have $g'(\hat{p}) = 0$ so
$$(p_0 - \hat{p}) = \frac{g'(p_0)}{g''(\bar{p})}$$
and because $\hat{p} \rightarrow p_0$ almost surely then $\bar{p} \rightarrow p_0$ almost surely so $g''(\bar{p}) \rightarrow g''(p_0)$ almost surely and
$$(p_0 - \hat{p}) \rightarrow \frac{g'(p_0)}{g''(p_0)}$$
almost surely. So, in order to describe the distribution of $p_0 - \hat{p}$, i.e. the estimator's central limit theorem, you need to find the distribution of $\frac{g'(p_0)}{g''(p_0)}$. This now becomes problem dependent.
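As a concrete numerical check of the final display, take $g$ to be the exponential log-likelihood $g(p) = n\log p - p\sum_i x_i$, whose maximiser is $\hat{p} = n/\sum_i x_i$. A small simulation (pure Python; the rate and sample size are arbitrary choices) confirms that $p_0 - \hat{p}$ and $g'(p_0)/g''(p_0)$ agree for large $n$:

```python
import random

random.seed(1)

# Exponential model: g(p) = n*log(p) - p*sum(x), maximised at p_hat = n/sum(x).
p0 = 2.0                     # true rate (arbitrary choice)
n = 100_000
x = [random.expovariate(p0) for _ in range(n)]
s = sum(x)

p_hat = n / s                # maximiser of g
g1 = n / p0 - s              # g'(p0)
g2 = -n / p0 ** 2            # g''(p0)

lhs = p0 - p_hat
rhs = g1 / g2
print(lhs, rhs)              # nearly identical for large n
```

The two quantities differ only by a term of order $(\bar{x} - 1/p_0)^2$, which vanishes faster than either side.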
How to compute efficiency?
The consistency and asymptotic normality of the maximum likelihood estimator is demonstrated using some regularity conditions on the likelihood function. The wiki link on consistency and asymptotic normality has the conditions necessary to prove these properties. The conditions at the wiki may be stronger than what you need as they are used to prove asymptotic normality whereas you simply want to compute the variance of the estimator.
I am guessing that if your function satisfies the same conditions then the proof will carry over to your function as well. If not then we need to know one or both of the following: (a) the specific condition that $g(.)$ does not satisfy from the list at the wiki and (b) the specifics of $g(.)$ to give a better answer to your question.
Robust version of Hotelling $T^2$ test
Sure: two answers.
a) If by robustness you mean robust to outliers, then run Hotelling's T-test using a robust estimation of scale/scatter: you will find all the explanations and R code here:
http://www.statsravingmad.com/blog/statistics/a-robust-hotelling-test/
b) If by robustness you mean optimal under a large group of distributions, then you should go for a sign-based T2 (ask if this is what you want; by the tone of your question I think not).
PS: this is the paper you want:
Roelant, E., Van Aelst, S., and Willems, G. (2008), “Fast Bootstrap for Robust Hotelling Tests,” COMPSTAT 2008: Proceedings in Computational Statistics (P. Brito, Ed.) Heidelberg: Physika-Verlag, to appear.
Robust version of Hotelling $T^2$ test
Some robust alternatives are discussed in A class of robust stepwise alternatives to Hotelling's $T^2$ tests, which deals with trimmed means of the marginals of residuals produced by stepwise regression, and in A comparison of robust alternatives to Hotelling's $T^2$ control chart, which outlines some robust alternatives based on MVE, MCD, RMCD and trimmed means.
In OLS, does the uncorrelatedness between regressors and residuals require a constant?
You are right.
Maybe because most regressions do contain a constant, the property $X'e=0$ (often called, more precisely, "orthogonality") and the terminology "uncorrelatedness" are often used interchangeably, even though they amount to the same thing only if the regression contains a constant (or, more precisely, if the residuals have mean zero, which can also be the case if the regressors can be linearly combined into a constant, say with an exhaustive set of dummies).
A little numerical illustration:
n <- 10
y <- rnorm(n)
x <- rnorm(n)
regwcst <- lm(y~x)
regwocst <- lm(y~x-1)
d1 <- c(rep(1,5), rep(0,5)) # two exhaustive dummies
d2 <- 1-d1
regwdumm <- lm(y~x-1+d1+d2)
> crossprod(x, resid(regwcst)) # all numerically zero
[,1]
[1,] -2.081668e-17
> crossprod(x, resid(regwdumm))
[,1]
[1,] -1.249001e-16
> crossprod(x, resid(regwocst))
[,1]
[1,] 1.804112e-16
> cor(x, resid(regwcst)) # numerically zero
[1] -2.721791e-17
> cor(x, resid(regwocst)) # not numerically zero
[1] 0.01718539
> cor(x, resid(regwdumm)) # numerically zero
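The same point can be sketched in plain Python for anyone without R at hand (data simulated, coefficients arbitrary): with an intercept the residuals have mean zero, so $X'e=0$ and zero correlation coincide; through the origin, $x'e=0$ still holds but the residual mean, and hence the correlation, need not vanish.

```python
import random

random.seed(0)
n = 500
x = [random.random() for _ in range(n)]
y = [1.0 + 2.0 * xi + random.gauss(0, 0.1) for xi in x]  # true intercept = 1

xbar, ybar = sum(x) / n, sum(y) / n

# OLS with a constant: b = Sxy/Sxx, a = ybar - b*xbar.
sxx = sum((xi - xbar) ** 2 for xi in x)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
b = sxy / sxx
a = ybar - b * xbar
e = [yi - a - b * xi for xi, yi in zip(x, y)]

# OLS through the origin: b0 = sum(x*y)/sum(x^2).
b0 = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi ** 2 for xi in x)
e0 = [yi - b0 * xi for xi, yi in zip(x, y)]

print(sum(e), sum(xi * ei for xi, ei in zip(x, e)))        # both ~0
print(sum(xi * ei for xi, ei in zip(x, e0)), sum(e0) / n)  # x'e0 ~ 0, mean(e0) != 0
```

With an intercept, both the residual sum and the cross product with x are (numerically) zero; without one, only the cross product is.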
Convergence of a confidence interval for the variance of a not normal distribution
Denote $\chi^2_{n - 1, \alpha/2}$ and $\chi^2_{n - 1, 1 - \alpha/2}$ by $\xi_n$ and $\eta_n$ respectively. In the following we show that as $n \to \infty$,
\begin{align}
P[A_n \geq \sigma^2] = P[(n - 1)S_n^2/\sigma^2 \geq \xi_n] \to \Phi\left(-\sqrt{2}\sigma^2z_{\alpha/2}/\tau\right), \tag{1}
\end{align}
where $\Phi$ is the cdf of the standard normal distribution, $z_{\alpha/2} = \Phi^{-1}(1 - \alpha/2), \tau^2 = \sigma^4(\kappa - 1)$.
To prove $(1)$, we first show that
\begin{align}
\xi_n = (n - 1) + \sqrt{2(n - 1)}(z_{\alpha/2} + o(1)). \tag{2}
\end{align}
To show $(2)$, note that provided $Y_n \sim \chi^2_{n - 1}$, CLT implies that
\begin{align}
Z_n := \frac{Y_n - (n - 1)}{\sqrt{2(n - 1)}}\to_d N(0, 1),
\end{align}
whence
\begin{align}
\xi_n &= F_{Y_n}^{-1}(1 - \alpha/2) = (n - 1) + \sqrt{2(n - 1)}F_{Z_n}^{-1}(1 - \alpha/2) \\
&= (n - 1) + \sqrt{2(n - 1)}(z_{\alpha/2} + o(1)),
\end{align}
i.e., $(2)$ holds.
By $\Delta_n := \sqrt{n}(S_n^2 - \sigma^2)/\tau \to_d N(0, 1)$ and Polya's Theorem, we have
\begin{align}
\sup_{x \in \mathbb{R}}|F_{\Delta_n}(x) - \Phi(x)| \to 0 \tag{3}
\end{align}
as $n \to \infty$. It then follows that
\begin{align}
& \left|P[(n - 1)S_n^2/\sigma^2 \geq \xi_n] - \Phi\left(-\sqrt{2}\sigma^2z_{\alpha/2}/\tau\right)\right| \\
=& \left|P[\Delta_n \geq \sqrt{n}\tau^{-1}((n - 1)^{-1}\xi_n - 1)\sigma^2] - \Phi\left(-\sqrt{2}\sigma^2z_{\alpha/2}/\tau\right)\right| \\
\leq & |F_{\Delta_n}(\sqrt{n}\tau^{-1}((n - 1)^{-1}\xi_n - 1)\sigma^2) -
\Phi(\sqrt{n}\tau^{-1}((n - 1)^{-1}\xi_n - 1)\sigma^2)| + o(1) \\
&+ |\Phi(-\sqrt{n}\tau^{-1}((n - 1)^{-1}\xi_n - 1)\sigma^2) -
\Phi\left(-\sqrt{2}\sigma^2z_{\alpha/2}/\tau\right)| \\
\leq & \sup_{x \in \mathbb{R}}|F_{\Delta_n}(x) - \Phi(x)| + o(1) \\
&+ |\Phi(-\sqrt{n}\tau^{-1}((n - 1)^{-1}\xi_n - 1)\sigma^2) -
\Phi\left(-\sqrt{2}\sigma^2z_{\alpha/2}/\tau\right)| \\
\to & 0
\end{align}
as $n \to \infty$. The "$o(1)$" term stands for $P[\Delta_n = \sqrt{n}\tau^{-1}((n - 1)^{-1}\xi_n - 1)\sigma^2]$, which tends to $0$ as $n \to \infty$. The last step is a consequence of $(2)$ and $(3)$.
By a similar argument, it can be shown that
\begin{align}
P[B_n \leq \sigma^2] = P[(n - 1)S_n^2/\sigma^2 \leq \eta_n] \to \Phi\left(-\sqrt{2}\sigma^2z_{\alpha/2}/\tau\right).
\end{align}
Therefore,
\begin{align}
P[A_n < \sigma^2 < B_n] = 1 - P[A_n \geq \sigma^2] - P[B_n \leq \sigma^2]
\to 1 - 2\Phi\left(-\sqrt{2}\sigma^2z_{\alpha/2}/\tau\right).
\end{align}
The above asymptotic result may be verified by considering $X_1, \ldots, X_n \text{ i.i.d. } \sim N(\mu, \sigma^2)$, for which case $\tau^2 = 2\sigma^4$, whence $P[A_n < \sigma^2 < B_n] \to 1 - \alpha$. On the other hand, it is well-known that $(A_n, B_n)$ is the exact $1 - \alpha$ confidence interval for $\sigma^2$ under the normality condition.
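A quick Monte Carlo illustrates both limits (pure-Python sketch; the $\chi^2$ quantiles use the Wilson–Hilferty approximation, and the sample sizes are arbitrary): near-nominal coverage for normal data, and the degraded limiting coverage for a skewed distribution such as the exponential, for which $\kappa = 9$ so $\tau^2 = 8\sigma^4$.

```python
import random
from statistics import NormalDist

random.seed(2)
z = NormalDist().inv_cdf  # standard normal quantile

def chi2_quantile(p, k):
    """Wilson-Hilferty approximation to the chi-square p-quantile with k df."""
    return k * (1 - 2 / (9 * k) + z(p) * (2 / (9 * k)) ** 0.5) ** 3

def coverage(draw, sigma2, n=50, reps=2000, alpha=0.05):
    """Fraction of replications in which (A_n, B_n) covers the true variance."""
    upper = chi2_quantile(1 - alpha / 2, n - 1)   # xi_n
    lower = chi2_quantile(alpha / 2, n - 1)       # eta_n
    hits = 0
    for _ in range(reps):
        xs = [draw() for _ in range(n)]
        m = sum(xs) / n
        s2 = sum((xi - m) ** 2 for xi in xs) / (n - 1)
        a_n = (n - 1) * s2 / upper
        b_n = (n - 1) * s2 / lower
        hits += a_n < sigma2 < b_n
    return hits / reps

cov_normal = coverage(lambda: random.gauss(0, 1), sigma2=1.0)
cov_expo = coverage(lambda: random.expovariate(1.0), sigma2=1.0)
print(cov_normal, cov_expo)   # ~0.95 for normal data, well below 0.95 for exponential
```

For the exponential case the limiting coverage is $1 - 2\Phi(-z_{\alpha/2}/2) \approx 0.67$, far from the nominal 95%, exactly as the asymptotic result predicts.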
Statistical test to assess significant difference in landcover selection
Your problem can be rephrased in terms of testing if the cell probabilities of a Multinomial distribution follow a given pattern.
In particular, given the sample $(X_1,\ldots,X_5)\sim \text{Mn}(466,\theta_1,\ldots,\theta_5)$ the problem is to test
$$H_0: \theta_1=\cdots=\theta_5=1/5$$ against
$$H_1:\theta_i\neq\theta_j\, \text{for at least one pair } i,j, \text{with } i\neq j.$$
Note that $466$ is the sum of cell counts.
There are several ways to implement this test, and in R the simplest way is perhaps this
counts = c(105, 327, 30, 2, 2)
expected = rep(1/5,5)
chisq.test(counts, p = expected)
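The test statistic is simple enough to compute by hand, which also makes clear how extreme these counts are (pure Python; 9.488 is the 95th percentile of $\chi^2$ with 4 degrees of freedom):

```python
counts = [105, 327, 30, 2, 2]
expected = sum(counts) / len(counts)   # 466/5 = 93.2 under H0

# Pearson chi-square statistic: sum of (observed - expected)^2 / expected.
chi2 = sum((o - expected) ** 2 / expected for o in counts)
print(round(chi2, 2))   # 809.34, far beyond the 5% critical value 9.488
```

So the equal-probability null is rejected overwhelmingly, driven mostly by the Savanna count.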
Statistical test to assess significant difference in landcover selection
I think there are issues with the multinomial test proposed by @utobi, the null hypothesis of equal probabilities for the five landcover types and with the resulting "pattern of selection" interpretation.
Is the multinomial distribution justified? The counts are number of times the tagged animal has been in each landcover area. How reasonable it is to assume that the observations (sightings) are independent? With movement data, we would expect that if the animal is in the Evergreen now, it is more likely to stay in that area than to move to another area.
Along those lines, if the animal wants to move to Grassland but Grassland doesn't directly adjoin Evergreen, it would have to cross other landcovers to reach it. Again, the independence of sightings required by the multinomial distribution would be violated.
Is the null hypothesis $H_0:\theta_1=\cdots=\theta_5=\frac{1}{5}$ justified? The availability of the landcover types in the animal's habitat matters. If the habitat consists mostly of Savanna, why would we be interested in testing the naive null hypothesis that the animal spends an equal amount of time in each landcover type? The null hypothesis that the probabilities are all equal sounds unrealistic.
We don't need a formal statistical test to reject the null hypothesis that the animal has been in each landcover class an equal number of times; it's enough to look at the table of counts. However, rejecting this specific null hypothesis may in fact say little about the behavior of the animal.
References
A quick search for "habitat selection" turns up a number of relevant articles and even an R package. These could be a starting point for looking into how to analyze GPS tracking data in a meaningful way.
[1] William, G., Jean-Michel, G., Sonia, S. et al. Same habitat types but different use: evidence of context-dependent habitat selection in roe deer across populations. Sci Rep 8, 5102 (2018). https://doi.org/10.1038/s41598-018-23111-0
[2] Fattorini, L., Pisani, C., Riga, F. et al. A permutation-based combination of sign tests for assessing habitat selection. Environ Ecol Stat 21, 161–187 (2014). https://doi.org/10.1007/s10651-013-0250-7
[3] phuasses: Proportional Habitat Use Assessment
Is it true that currently we cannot compute p-values or confidence intervals for the coefficients from a Lasso regression problem? [duplicate]
Confidence intervals are a frequentist measure of uncertainty.
The researcher determines a population parameter of interest (say average income in a country) that they want to learn.
Then, the researcher collects a random sample from the population and feeds this data into a formula that puts out an interval. The formula is designed so that, if it is applied to a random sample, it yields an interval that covers the true population parameter with a fixed probability - say 95%. In other words, before the researcher collects the data they anticipate that in 95% of cases they will collect a sample on which they will compute a confidence interval that covers the true parameter.
Translating this idea to LASSO regression, one may declare the "true values" of the estimated non-zero coefficients to be the parameters of interest. In this case, the justification above does not go through since parameter selection is a property of the data sample, i.e., the researcher does not know what the parameters of interest are before they collect the data sample.
The framework of conditional inference provides a way of translating the idea of a confidence interval to scenarios where the parameters of interest depend on the data. This is not a strategy for computing traditional confidence intervals. This is a strategy for computing something that is similar to confidence intervals. The interpretation and justification are slightly different though and, as pointed out in the answer by Pananos, not familiar to most practitioners.
I also want to note that in many applications of LASSO regression it is not really meaningful to compute any kind of measure of significance for the estimated coefficients, since the coefficients do not (and are not meant to) have any kind of interpretation as population parameters. In pure prediction exercises, the researcher is only interested in computing good predictive values, and very different configurations of parameter values will give similar predicted values. To assess the predictive power of the estimated model one would typically assess the properties of the predicted values (e.g. via out-of-sample error), not the "significance" of the estimated coefficients.
Lastly, if it is known that, e.g., out of 100 variables 90 have a zero coefficient and the others have "sufficiently large" coefficients (we don't know which ones!), then it can be shown that the Lasso selects the 10 variables with non-zero coefficients with probability tending to one as the sample size grows. In this case, fitting a normal OLS ("post-Lasso") using only the variables selected by the Lasso will yield valid confidence intervals.
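One way to see why the selected set is data-dependent: in the special case of an orthonormal design, the Lasso coefficient is just the soft-thresholded OLS estimate, so whether a variable is "selected" flips with the sampling noise in its OLS estimate. A pure-Python sketch (the threshold and the two example estimates are arbitrary):

```python
def soft_threshold(b_ols, lam):
    """Lasso coefficient under an orthonormal design: shrink the OLS
    estimate toward zero by lam, and set it to zero if it falls inside
    [-lam, lam]."""
    if b_ols > lam:
        return b_ols - lam
    if b_ols < -lam:
        return b_ols + lam
    return 0.0

lam = 0.5
# Two noisy draws of the same underlying OLS estimate: the selected set differs.
print(soft_threshold(0.45, lam))   # 0.0 -> variable dropped
print(soft_threshold(0.62, lam))   # ~0.12 -> variable kept
```

Since selection hinges on which side of the threshold the noisy estimate lands, the "parameters of interest" in a naive confidence-interval construction would themselves be random, which is exactly what breaks the classical justification.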
|
Is it true that currently we cannot compute p-values or confidence intervals for the coefficients fr
|
Confidence intervals are a frequentist measure of uncertainty.
The researcher determines a population parameter of interest (say average income in a country) that they want to learn.
Then, the researc
|
Is it true that currently we cannot compute p-values or confidence intervals for the coefficients from a Lasso regression problem? [duplicate]
Confidence intervals are a frequentist measure of uncertainty.
The researcher determines a population parameter of interest (say average income in a country) that they want to learn.
Then, the researcher collects a random sample from the population and feeds this data into a formula that puts out an interval. The formula is designed so that, if it is applied to a random sample, it yields an interval that covers the true population parameter with a fixed probability - say 95%. In other words, before the researcher collects the data they anticipate that in 95% of cases they will collect a sample on which they will compute a confidence interval that covers the true parameter.
Translating this idea to LASSO regression, one may declare the "true values" of the estimated non-zero coefficients to be the parameters of interest. In this case, the justification above does not go through since parameter selection is a property of the data sample, i.e., the researcher does not know what the parameters of interest are before they collect the data sample.
The framework of conditional inference provides a way of translating the idea of a confidence interval to scenarios where the parameters of interest depend on the data. This is not a strategy for computing traditional confidence intervals. This is a strategy for computing something that is similar to confidence intervals. The interpretation and justification is slightly different though and, as pointed out in the answer by Pananos, not familiar to most practitioners.
I also want to note that in many applications of LASSO regression it is not really meaningful to compute any kind of measure of significance for the estimated coefficients, since the coefficients do not (and are not meant to) have any kind of interpretation as population parameters. In pure prediction exercises, the researcher is only interested in computing good predictive values, and very different configurations of parameter values will give similar predicted values. To assess the predictive power of the estimated model one would typically assess the properties of the predicted values (e.g. via out-of-sample error), not "significance" of the estimated coefficients.
Lastly, if it is known that e.g. out of 100 variables 90 have a zero coefficient and the others have "sufficiently large" coefficients (we don't know which ones!) then it can be shown that the Lasso selects the 10 variables with non-zero coefficients with probability one if the sample size is sufficiently large. In this case, fitting a normal OLS ("post-Lasso") using only the variables selected by the Lasso will yield valid confidence intervals.
|
Is it true that currently we cannot compute p-values or confidence intervals for the coefficients fr
Confidence intervals are a frequentist measure of uncertainty.
The researcher determines a population parameter of interest (say average income in a country) that they want to learn.
Then, the researc
|
47,575
|
Is it true that currently we cannot compute p-values or confidence intervals for the coefficients from a Lasso regression problem? [duplicate]
|
Why is it that currently inference on the coefficients is not possible? Is it that structurally the variance of the coefficient estimators have no closed form? Or is it something else?
Prior to some work in the area of selective inference, the bias in the estimates of the coefficients complicated the theory for testing coefficients. What's more, there may (or may not, I can't recall) have been some theory for fixed regularization strength, but we almost always estimate the regularization strength from the data, which further added to the complexity.
Around 2014/2015, work started to come out on selective inference which provided some theoretical grounding on how to do inference on penalized models such as LASSO. I'm not sure if it is very mainstream as of yet, I haven't seen it be used often, but it is an active area of research.
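A small simulation (my own illustrative sketch, with pure-noise data and a nominal 5% level assumed) shows the core problem selective inference addresses: if you first pick the most promising coefficient and then test it with an ordinary z-test as if it were pre-specified, the test rejects far more often than 5% even when no effect exists at all.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, reps = 100, 20, 2000
rejections = 0
for _ in range(reps):
    X = rng.normal(size=(n, p))
    y = rng.normal(size=n)                  # pure noise: every true coefficient is 0
    # Per-variable z statistics (error sd known to be 1, so each is N(0,1) under the null).
    z = np.abs(X.T @ y) / np.sqrt((X ** 2).sum(axis=0))
    rejections += z.max() > 1.96            # select the best variable, then test it naively
rate = rejections / reps
print(rate)                                 # far above the nominal 0.05
```

With 20 candidate variables the naive "select then test" procedure rejects in roughly $1-0.95^{20}\approx 64\%$ of datasets, which is the distortion that selective-inference methods are designed to correct.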
|
Is it true that currently we cannot compute p-values or confidence intervals for the coefficients fr
|
Why is it that currently inference on the coefficients is not possible? Is it that structurally the variance of the coefficient estimators have no closed form? Or is it something else?
Prior to some
|
Is it true that currently we cannot compute p-values or confidence intervals for the coefficients from a Lasso regression problem? [duplicate]
Why is it that currently inference on the coefficients is not possible? Is it that structurally the variance of the coefficient estimators have no closed form? Or is it something else?
Prior to some work in the area of selective inference, the bias in the estimates of the coefficients complicated the theory for testing coefficients. What's more, there may (or may not, I can't recall) have been some theory for fixed regularization strength, but we almost always estimate the regularization strength from the data, which further added to the complexity.
Around 2014/2015, work started to come out on selective inference which provided some theoretical grounding on how to do inference on penalized models such as LASSO. I'm not sure if it is very mainstream as of yet, I haven't seen it be used often, but it is an active area of research.
|
Is it true that currently we cannot compute p-values or confidence intervals for the coefficients fr
Why is it that currently inference on the coefficients is not possible? Is it that structurally the variance of the coefficient estimators have no closed form? Or is it something else?
Prior to some
|
47,576
|
Outlier/anomaly detection on histograms
|
Outlier or anomaly detection methods always rely on some notion of distance between the "data points" to be subjected to the detection algorithm. So your first step needs to be to decide on a distance metric between your "data points" - which in your case are your histograms.
There are various ways of doing this. If your histograms all contain the same number of points, and all have the same breaks, you can simply take the average of the squared difference in bin counts. If the breaks are the same, but the counts differ, you can normalize first. Alternatively, you can use the Earth Mover's Distance, which is a general distance between distributions - you can estimate this even on the raw data, before binning into histograms.
Once you have a distance matrix between your histograms, one way forward would be to cluster your histograms, e.g., with a DBSCAN method, which explicitly allows for treating some data points (i.e., histograms) as "noise". You would need to fiddle around with the tuning parameters until you get results you are comfortable with. They will depend on the bumpiness and bin counts of your histograms.
As an example, here are 20 histograms, which one is the outlier?
Our approach correctly identifies the one at the bottom right as "noise", i.e., as an outlier.
R code:
library(dbscan)
set.seed(1)
n_obs <- 2e3
sims <- cbind(replicate(19,runif(n_obs)),rbeta(n_obs,2,2))
histograms <- matrix(NA,nrow=20,ncol=10)
opar <- par(mfrow=c(4,5),las=1,mai=c(.3,.3,0,0))
for ( ii in 1:20 ) {
histograms[ii,] <- hist(sims[,ii],xlab="",ylab="",
breaks=seq(0,1,by=0.1),main="")$counts
}
distances <- matrix(NA,20,20)
for ( xx in 1:20 ) {
for ( yy in 1:20 ) {
distances[xx,yy] <- mean((histograms[xx,]-histograms[yy,])^2)
}
}
clustering <- dbscan(as.dist(distances),eps=10000,minPts=2)
clustering$cluster
Alternatively, since you have no more than 20 histograms, you could use an "inter-ocular trauma test for significance". Something like that might be a good idea for calibrating the clustering-based approach above, in any case.
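The Earth Mover's Distance mentioned above is easy to compute for histograms that share the same bins. Here is a short Python sketch (my addition, with made-up counts; the answer's own code is in R): for 1-D histograms on common equal-width bins, the EMD reduces to the L1 distance between the normalized cumulative counts.

```python
import numpy as np

def emd_1d(h1, h2):
    """1-D Earth Mover's Distance between two histograms on the same bins.

    Counts are normalized to probability vectors first; for equal-width bins
    the EMD is the L1 distance between the two CDFs (bin width taken as 1).
    """
    p = np.asarray(h1, float)
    p = p / p.sum()
    q = np.asarray(h2, float)
    q = q / q.sum()
    return np.abs(np.cumsum(p) - np.cumsum(q)).sum()

# Invented counts on 10 bins: flat vs. center-peaked.
flat = np.full(10, 200)
peaked = np.array([20, 60, 140, 260, 320, 320, 260, 140, 60, 20])
print(emd_1d(flat, peaked), emd_1d(flat, flat))
```

Because the counts are normalized, this distance also handles histograms with differing totals, which is exactly the situation where raw bin-count differences would mislead.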
|
Outlier/anomaly detection on histograms
|
Outlier or anomaly detection methods always rely on some notion of distance between the "data points" to be subjected to the detection algorithm. So your first step needs to be to decide on a distance
|
Outlier/anomaly detection on histograms
Outlier or anomaly detection methods always rely on some notion of distance between the "data points" to be subjected to the detection algorithm. So your first step needs to be to decide on a distance metric between your "data points" - which in your case are your histograms.
There are various ways of doing this. If your histograms all contain the same number of points, and all have the same breaks, you can simply take the average of the squared difference in bin counts. If the breaks are the same, but the counts differ, you can normalize first. Alternatively, you can use the Earth Mover's Distance, which is a general distance between distributions - you can estimate this even on the raw data, before binning into histograms.
Once you have a distance matrix between your histograms, one way forward would be to cluster your histograms, e.g., with a DBSCAN method, which explicitly allows for treating some data points (i.e., histograms) as "noise". You would need to fiddle around with the tuning parameters until you get results you are comfortable with. They will depend on the bumpiness and bin counts of your histograms.
As an example, here are 20 histograms, which one is the outlier?
Our approach correctly identifies the one at the bottom right as "noise", i.e., as an outlier.
R code:
library(dbscan)
set.seed(1)
n_obs <- 2e3
sims <- cbind(replicate(19,runif(n_obs)),rbeta(n_obs,2,2))
histograms <- matrix(NA,nrow=20,ncol=10)
opar <- par(mfrow=c(4,5),las=1,mai=c(.3,.3,0,0))
for ( ii in 1:20 ) {
histograms[ii,] <- hist(sims[,ii],xlab="",ylab="",
breaks=seq(0,1,by=0.1),main="")$counts
}
distances <- matrix(NA,20,20)
for ( xx in 1:20 ) {
for ( yy in 1:20 ) {
distances[xx,yy] <- mean((histograms[xx,]-histograms[yy,])^2)
}
}
clustering <- dbscan(as.dist(distances),eps=10000,minPts=2)
clustering$cluster
Alternatively, since you have no more than 20 histograms, you could use an "inter-ocular trauma test for significance". Something like that might be a good idea for calibrating the clustering-based approach above, in any case.
|
Outlier/anomaly detection on histograms
Outlier or anomaly detection methods always rely on some notion of distance between the "data points" to be subjected to the detection algorithm. So your first step needs to be to decide on a distance
|
47,577
|
Product of two independent Student distributions
|
When $X$ and $Y$ are independent random variables with densities $f_X$ and $f_Y,$ the density of their product can be found with a change of variables as
$$f_{XY}(z) = \int_{\mathbb R} f_X(x) f_Y(z/x)\,\frac{\mathrm{d}x}{|x|}.$$
Ignoring normalizing constants (we'll consider these at the end), for two Student t densities with $\nu$ and $\mu$ degrees of freedom this integrand is proportional to
$$h(x,z) = \left(1 + \frac{x^2}{\mu}\right)^{-(\mu+1)/2}\, \left(1 + \frac{z^2}{x^2\nu}\right)^{-(\nu+1)/2}\,\frac{1}{|x|}.$$
Let's find a lower bound for $f(z)$ when $z$ is small. To do so, we may restrict the region of integration and we may replace the integrand by anything that never exceeds it.
Let $z$ be positive but less than $1$ and consider the integration region where $x^2\nu$ ranges between $z^2$ and $1.$ To get an appreciation for what's going on, here (with $\mu=\nu=1$) are plots of $h(x,z)$ for $|z| = 1$ (blue), $1/2, 1/4,$ and $1/8$ (red).
You can see that as $|z|$ approaches $0,$ there's more and more area pushed into this region. That's no surprise: we would expect the largest area (which corresponds to the highest density of the product) to be at the center of the product distribution, which (by symmetry) must be $0.$ But how large does it get?
Over the region $x^2\nu \in[z^2, 1]$ the first factor of $h$ is smallest when $x$ is largest and the second factor is smallest when $x$ is smallest, whence throughout this region
$$\begin{aligned}
h(x,z) &\ge \left(1 + \frac{1}{\mu\nu}\right)^{-(\mu+1)/2}\, \left(1 + \frac{z^2}{x^2\nu}\right)^{-(\nu+1)/2}\,\frac{1}{|x|} \\
&\ge \left(1 + \frac{1}{\mu\nu}\right)^{-(\mu+1)/2}\, \left(1 + 1\right)^{-(\nu+1)/2}\,\frac{1}{|x|}.
\end{aligned}$$
The first inequality uses $x^2 \le 1/\nu$ and the second uses $x^2\nu \ge z^2.$
The factors before $1/|x|$ are constant (but nonzero), depending only on $\mu$ and $\nu,$ so again let's consider them later and ignore them now. As $x$ varies over just the positive part of this region it runs from $z/\sqrt{\nu}$ to $1/\sqrt{\nu},$ giving a lower bound proportional to
$$\int_{z/\sqrt{\nu}}^{1/\sqrt{\nu}} \frac{\mathrm{d}x}{|x|} = -\log z.$$
As $z\to 0,$ this lower bound diverges. Consequently, no matter what the constants of proportionality are that we ignored, $f_{XY}(z)$ diverges at $0.$
Here, to illustrate, is a histogram from a simulation of ten million products (with $\nu=\mu=1/2$). Almost a million of those products are represented. The red curve is the negative logarithm. Clearly it approximates the density well near zero.
However, any Student t distribution with (say) $\kappa \gt 0$ degrees of freedom has a value proportional to $(1 + 0^2/\kappa)^{-(\kappa+1)/2} = 1$ at the origin, which is finite. Consequently, the product of two independent Student t distributions is never (even remotely like) a Student t distribution.
The product density can be found analytically as a polynomial combination of Riemann hypergeometric functions. Since this product is never a Student t distribution, though, I did not see any point in providing further details.
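A quick numerical check of the divergence at the origin (my own sketch, not from the answer; both factors are taken as standard Cauchy, i.e. $\nu=\mu=1$, and the sample size and seed are arbitrary): the empirical density of the product over shrinking windows around $0$ keeps growing instead of settling at a finite value, as it would for any Student t density.

```python
import numpy as np

rng = np.random.default_rng(2)
nu = 1.0                        # degrees of freedom for both Student t factors
N = 1_000_000
z = rng.standard_t(nu, N) * rng.standard_t(nu, N)

# Average density of the product over shrinking symmetric windows around 0:
dens = {}
for eps in (0.1, 0.01, 0.001):
    dens[eps] = np.mean(np.abs(z) < eps) / (2 * eps)
    print(eps, dens[eps])
```

The estimates increase roughly like $-\log\varepsilon$ as the window shrinks, in line with the logarithmic divergence derived above.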
|
Product of two independent Student distributions
|
When $X$ and $Y$ are independent random variables with densities $f_X$ and $f_Y,$ the density of their product can be found with a change of variables as
$$f_{XY}(z) = \int_{\mathbb R} f_X(x) f_Y(z/x)
|
Product of two independent Student distributions
When $X$ and $Y$ are independent random variables with densities $f_X$ and $f_Y,$ the density of their product can be found with a change of variables as
$$f_{XY}(z) = \int_{\mathbb R} f_X(x) f_Y(z/x)\,\frac{\mathrm{d}x}{|x|}.$$
Ignoring normalizing constants (we'll consider these at the end), for two Student t densities with $\nu$ and $\mu$ degrees of freedom this integrand is proportional to
$$h(x,z) = \left(1 + \frac{x^2}{\mu}\right)^{-(\mu+1)/2}\, \left(1 + \frac{z^2}{x^2\nu}\right)^{-(\nu+1)/2}\,\frac{1}{|x|}.$$
Let's find a lower bound for $f(z)$ when $z$ is small. To do so, we may restrict the region of integration and we may replace the integrand by anything that never exceeds it.
Let $z$ be positive but less than $1$ and consider the integration region where $x^2\nu$ ranges between $z^2$ and $1.$ To get an appreciation for what's going on, here (with $\mu=\nu=1$) are plots of $h(x,z)$ for $|z| = 1$ (blue), $1/2, 1/4,$ and $1/8$ (red).
You can see that as $|z|$ approaches $0,$ there's more and more area pushed into this region. That's no surprise: we would expect the largest area (which corresponds to the highest density of the product) to be at the center of the product distribution, which (by symmetry) must be $0.$ But how large does it get?
Over the region $x^2\nu \in[z^2, 1]$ the first factor of $h$ is smallest when $x$ is largest and the second factor is smallest when $x$ is smallest, whence throughout this region
$$\begin{aligned}
h(x,z) &\ge \left(1 + \frac{1}{\mu\nu}\right)^{-(\mu+1)/2}\, \left(1 + \frac{z^2}{x^2\nu}\right)^{-(\nu+1)/2}\,\frac{1}{|x|} \\
&\ge \left(1 + \frac{1}{\mu\nu}\right)^{-(\mu+1)/2}\, \left(1 + 1\right)^{-(\nu+1)/2}\,\frac{1}{|x|}.
\end{aligned}$$
The first inequality uses $x^2 \le 1/\nu$ and the second uses $x^2\nu \ge z^2.$
The factors before $1/|x|$ are constant (but nonzero), depending only on $\mu$ and $\nu,$ so again let's consider them later and ignore them now. As $x$ varies over just the positive part of this region it runs from $z/\sqrt{\nu}$ to $1/\sqrt{\nu},$ giving a lower bound proportional to
$$\int_{z/\sqrt{\nu}}^{1/\sqrt{\nu}} \frac{\mathrm{d}x}{|x|} = -\log z.$$
As $z\to 0,$ this lower bound diverges. Consequently, no matter what the constants of proportionality are that we ignored, $f_{XY}(z)$ diverges at $0.$
Here, to illustrate, is a histogram from a simulation of ten million products (with $\nu=\mu=1/2$). Almost a million of those products are represented. The red curve is the negative logarithm. Clearly it approximates the density well near zero.
However, any Student t distribution with (say) $\kappa \gt 0$ degrees of freedom has a value proportional to $(1 + 0^2/\kappa)^{-(\kappa+1)/2} = 1$ at the origin, which is finite. Consequently, the product of two independent Student t distributions is never (even remotely like) a Student t distribution.
The product density can be found analytically as a polynomial combination of Riemann hypergeometric functions. Since this product is never a Student t distribution, though, I did not see any point in providing further details.
|
Product of two independent Student distributions
When $X$ and $Y$ are independent random variables with densities $f_X$ and $f_Y,$ the density of their product can be found with a change of variables as
$$f_{XY}(z) = \int_{\mathbb R} f_X(x) f_Y(z/x)
|
47,578
|
Testing the difference of proportions equal to a certain value
|
I don't think there is an exact test in this case, but there is an approximate test. In general, concerns about the poor approximation of the approximate test are likely to be exaggerated unless you are exceedingly unfortunate to have both very very low $p_1$, $p_2$ and very very low $n$, $m$.
Certainly, higher $n$, $m$ will help, as the plot below shows. Note: I defined $p_2$=$p_1$+$p$.
Let x and y denote the numbers of successes observed in two independent sets of n and m Bernoulli trials, respectively, where $p_1$ and $p_2$ are the true success probabilities associated with each set of trials. Let $p_e=\frac{x+y}{n+m}$ and define:
$$z=\frac{\frac{x}{n}-\frac{y}{m}-(p_1-p_2)}{\sqrt{\frac{p_e(1-p_e)}{n}+\frac{p_e(1-p_e)}{m}}}$$
$z$ is approximately $Normal(0,1)$.
In your particular case, with m and n around 150, the approximation is very good as long as the smallest of the 2 probabilities is no less than ~ 0.04. I colored the sampling distribution in blue when $p_1>=0.04$ and in red otherwise.
#code for the first plot
p1=0.05
p=0.03
p2=p1+p
nvalue=20
mvalue=25
zhats<-NULL
for (i in 1:10000) {
set.seed(i)
data1<-rbinom(n=nvalue,size=1,p=p1)
set.seed(i+20)
data2<-rbinom(n=mvalue,size=1,p=p2)
p_e<-(sum(data1)+sum(data2))/(nvalue+mvalue)
z_hat<-(mean(data1)-mean(data2)+p)/sqrt(p_e*(1-p_e)/nvalue+p_e*(1-p_e)/mvalue)
zhats<-c(zhats,z_hat)
}
plot(density(zhats),col="red",xlab="",main=paste0("n=",nvalue," m=",mvalue," p1=",p1," p=",p),lwd=2,
xlim=c(-4,4),ylim=c(0,0.5),cex.main=1.7)
par(new=T)
plot(density(rnorm(n=1000000)),xlim=c(-4,4),ylim=c(0,0.5),ann=F,ylab=F,lwd=2)
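For reference, the same test is easy to write in Python using only the standard library (the counts below are invented for illustration). The function mirrors the $z$ formula above, with `delta` playing the role of $p_1-p_2$:

```python
from math import erf, sqrt

def diff_prop_test(x, n, y, m, delta):
    """Approximate z-test of H0: p1 - p2 = delta for two binomial samples."""
    p_e = (x + y) / (n + m)                         # pooled success proportion
    se = sqrt(p_e * (1 - p_e) * (1 / n + 1 / m))
    z = (x / n - y / m - delta) / se
    # Two-sided normal p-value via Phi(t) = 0.5 * (1 + erf(t / sqrt(2))).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented counts: 30/150 vs 45/150 successes.
z0, p0 = diff_prop_test(30, 150, 45, 150, delta=-0.10)  # observed diff equals delta, so z = 0
z1, p1 = diff_prop_test(30, 150, 45, 150, delta=0.0)
print(z0, p0)
print(z1, p1)
```

With the observed difference exactly equal to the hypothesized `delta` the statistic is $0$ and the p-value is $1$; testing `delta=0` instead gives $z=-2$ and a p-value of about $0.046$.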
|
Testing the difference of proportions equal to a certain value
|
I don't think there is an exact test in this case, but there is an approximate test. In general, concerns about the poor approximation of the approximate test are likely to be exaggerated unless you a
|
Testing the difference of proportions equal to a certain value
I don't think there is an exact test in this case, but there is an approximate test. In general, concerns about the poor approximation of the approximate test are likely to be exaggerated unless you are exceedingly unfortunate to have both very very low $p_1$, $p_2$ and very very low $n$, $m$.
Certainly, higher $n$, $m$ will help, as the plot below shows. Note: I defined $p_2$=$p_1$+$p$.
Let x and y denote the numbers of successes observed in two independent sets of n and m Bernoulli trials, respectively, where $p_1$ and $p_2$ are the true success probabilities associated with each set of trials. Let $p_e=\frac{x+y}{n+m}$ and define:
$$z=\frac{\frac{x}{n}-\frac{y}{m}-(p_1-p_2)}{\sqrt{\frac{p_e(1-p_e)}{n}+\frac{p_e(1-p_e)}{m}}}$$
$z$ is approximately $Normal(0,1)$.
In your particular case, with m and n around 150, the approximation is very good as long as the smallest of the 2 probabilities is no less than ~ 0.04. I colored the sampling distribution in blue when $p_1>=0.04$ and in red otherwise.
#code for the first plot
p1=0.05
p=0.03
p2=p1+p
nvalue=20
mvalue=25
zhats<-NULL
for (i in 1:10000) {
set.seed(i)
data1<-rbinom(n=nvalue,size=1,p=p1)
set.seed(i+20)
data2<-rbinom(n=mvalue,size=1,p=p2)
p_e<-(sum(data1)+sum(data2))/(nvalue+mvalue)
z_hat<-(mean(data1)-mean(data2)+p)/sqrt(p_e*(1-p_e)/nvalue+p_e*(1-p_e)/mvalue)
zhats<-c(zhats,z_hat)
}
plot(density(zhats),col="red",xlab="",main=paste0("n=",nvalue," m=",mvalue," p1=",p1," p=",p),lwd=2,
xlim=c(-4,4),ylim=c(0,0.5),cex.main=1.7)
par(new=T)
plot(density(rnorm(n=1000000)),xlim=c(-4,4),ylim=c(0,0.5),ann=F,ylab=F,lwd=2)
|
Testing the difference of proportions equal to a certain value
I don't think there is an exact test in this case, but there is an approximate test. In general, concerns about the poor approximation of the approximate test are likely to be exaggerated unless you a
|
47,579
|
Determine the limiting distribution of $n[g(\bar{X}_n)-1/e]$ of iid Poisson samples with two estimators
|
Suppose $\hat g_1(\lambda)=g(\overline X_n)=\overline X_ne^{-\overline X_n}$ and $\hat g_2(\lambda)=\frac1n\sum\limits_{i=1}^n I(X_i=1)$.
Provided $\lambda\ne 1$ (so that $g'(\lambda)\ne 0$ ), by delta method,
$$\operatorname{Var}(\hat g_1) \approx \frac{\lambda (g'(\lambda))^2}{n}=\frac{\lambda e^{-2\lambda}(1-\lambda)^2}{n} \quad , \text{ for large }n$$
And the exact variance of $\hat g_2$ is
$$\operatorname{Var}(\hat g_2)=\frac{\lambda e^{-\lambda}(1-\lambda e^{-\lambda})}{n}$$
Note that $\hat g_1$ is asymptotically unbiased for $g(\lambda)$ (by delta method) and $\hat g_2$ is exactly unbiased.
Asymptotic relative efficiency of $\hat g_2$ with respect to $\hat g_1$ is the limit of the ratio of the variances of $\hat g_1$ and $\hat g_2$ as $n\to \infty$.
Your answer for the second part is correct.
When $\lambda=1$, by CLT,
$$\sqrt n(\overline X_n-1) \stackrel{d}\longrightarrow Z \,,\quad\text{ where }Z\sim N(0,1)$$
Delta method in this case says that (provided $g''(1)\ne 0$, which holds here)
$$n\left(g(\overline X_n)-g(1)\right)\stackrel{d}\longrightarrow \frac{Z^2}{2} g''(1)$$
[ The proof is similar to the proof for the usual delta method, except here you need a second order approximation. Note that $$n\left(g(\overline X_n)-g(1)\right)=\frac{n(\overline X_n-1)^2}{2!}g''(\overline X_n^*)=\frac{(\sqrt n(\overline X_n-1))^2}{2}g''(\overline X_n^*)\,,$$ where $\overline X_n^*$ lies between $\overline X_n$ and $1$. ]
Therefore, $$n\left(g(\overline X_n)-\frac1e\right)\stackrel{d}\longrightarrow -\frac{1}{2e}\chi^2_1 $$
In other words,
$$2ne\left(\frac1e-g(\overline X_n)\right) \stackrel{d}\longrightarrow \chi^2_1$$
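This limit is easy to check by simulation. A Python sketch (my addition; the sample size, replication count and seed are arbitrary): at $\lambda=1$, the statistic $2ne\left(\tfrac1e-g(\overline X_n)\right)$ should behave like a $\chi^2_1$ draw, so its average should be near $1$ and about 95% of draws should fall below $3.84$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 2000, 5000
xbar = rng.poisson(1.0, size=(reps, n)).mean(axis=1)  # lambda = 1, the boundary case
g = xbar * np.exp(-xbar)                              # g(xbar) = xbar * e^{-xbar}
stat = 2 * n * np.e * (1 / np.e - g)                  # should be approx chi-square(1)
print(stat.mean(), np.mean(stat < 3.841))
```

Note that $g(x)=xe^{-x}$ attains its maximum $1/e$ at $x=1$, so the statistic is nonnegative by construction, exactly as a $\chi^2_1$ variable must be.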
|
Determine the limiting distribution of $n[g(\bar{X}_n)-1/e]$ of iid Poisson samples with two estimat
|
Suppose $\hat g_1(\lambda)=g(\overline X_n)=\overline X_ne^{-\overline X_n}$ and $\hat g_2(\lambda)=\frac1n\sum\limits_{i=1}^n I(X_i=1)$.
Provided $\lambda\ne 1$ (so that $g'(\lambda)\ne 0$ ), by delt
|
Determine the limiting distribution of $n[g(\bar{X}_n)-1/e]$ of iid Poisson samples with two estimators
Suppose $\hat g_1(\lambda)=g(\overline X_n)=\overline X_ne^{-\overline X_n}$ and $\hat g_2(\lambda)=\frac1n\sum\limits_{i=1}^n I(X_i=1)$.
Provided $\lambda\ne 1$ (so that $g'(\lambda)\ne 0$ ), by delta method,
$$\operatorname{Var}(\hat g_1) \approx \frac{\lambda (g'(\lambda))^2}{n}=\frac{\lambda e^{-2\lambda}(1-\lambda)^2}{n} \quad , \text{ for large }n$$
And the exact variance of $\hat g_2$ is
$$\operatorname{Var}(\hat g_2)=\frac{\lambda e^{-\lambda}(1-\lambda e^{-\lambda})}{n}$$
Note that $\hat g_1$ is asymptotically unbiased for $g(\lambda)$ (by delta method) and $\hat g_2$ is exactly unbiased.
Asymptotic relative efficiency of $\hat g_2$ with respect to $\hat g_1$ is the limit of the ratio of the variances of $\hat g_1$ and $\hat g_2$ as $n\to \infty$.
Your answer for the second part is correct.
When $\lambda=1$, by CLT,
$$\sqrt n(\overline X_n-1) \stackrel{d}\longrightarrow Z \,,\quad\text{ where }Z\sim N(0,1)$$
Delta method in this case says that (provided $g''(1)\ne 0$, which holds here)
$$n\left(g(\overline X_n)-g(1)\right)\stackrel{d}\longrightarrow \frac{Z^2}{2} g''(1)$$
[ The proof is similar to the proof for the usual delta method, except here you need a second order approximation. Note that $$n\left(g(\overline X_n)-g(1)\right)=\frac{n(\overline X_n-1)^2}{2!}g''(\overline X_n^*)=\frac{(\sqrt n(\overline X_n-1))^2}{2}g''(\overline X_n^*)\,,$$ where $\overline X_n^*$ lies between $\overline X_n$ and $1$. ]
Therefore, $$n\left(g(\overline X_n)-\frac1e\right)\stackrel{d}\longrightarrow -\frac{1}{2e}\chi^2_1 $$
In other words,
$$2ne\left(\frac1e-g(\overline X_n)\right) \stackrel{d}\longrightarrow \chi^2_1$$
|
Determine the limiting distribution of $n[g(\bar{X}_n)-1/e]$ of iid Poisson samples with two estimat
Suppose $\hat g_1(\lambda)=g(\overline X_n)=\overline X_ne^{-\overline X_n}$ and $\hat g_2(\lambda)=\frac1n\sum\limits_{i=1}^n I(X_i=1)$.
Provided $\lambda\ne 1$ (so that $g'(\lambda)\ne 0$ ), by delt
|
47,580
|
Is the third moment of an AR(1) dependent on $t$?
|
It may or may not be:
If $\epsilon_t$ is independent WN, the $MA(\infty)$ representation $X_t=\sum_{j=0}^\infty\phi^j\epsilon_{t-j}$ gives, for $|\phi|<1$,
$$
E(X_t^3)=\sum_{j=0}^\infty\phi^{3j}E(\epsilon_{t-j}^3),
$$
as products $\epsilon_i\epsilon_j\epsilon_k$ for which we do not have $i=j=k$ will yield terms of the form, e.g., $E(\epsilon_j^2)E(\epsilon_k)=0$.
If $E(\epsilon_{t-j}^3)$ is constant over time, and if we denote that quantity by $\gamma$, we obtain
$$
E(X_t^3)=\frac{\gamma}{1-\phi^3}
$$
A little illustration:
n <- 21000
k <- 2
epsilon <- rchisq(n, k)-k # a skewed mean zero distribution
phi <- 0.9
X <- arima.sim(model = list(ar=phi), n = n-1000, innov = epsilon[-(1:1000)], n.start = 1000, start.innov=epsilon[1:1000])
gamma <- k*(k+2)*(k+4) - 3*k^2*(k+2) + 3*k^3 - k^3 # 3rd moment of epsilon, see https://en.wikipedia.org/wiki/Chi-squared_distribution
gamma
mean(epsilon^3)
gamma/(1-phi^3)
mean(X^3)
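The same check can be written in Python (my translation of the idea above, not the answer's R code; the sample size, $\phi$ and seed are arbitrary). With $\epsilon_t = \chi^2_k - k$ the third central moment is $\gamma = 8k$, so the simulated mean of $X_t^3$ should be close to $\gamma/(1-\phi^3)$:

```python
import numpy as np

rng = np.random.default_rng(4)
n_burn, n = 10_000, 500_000
k, phi = 2.0, 0.5
eps = rng.chisquare(k, n_burn + n) - k       # mean-zero, skewed innovations
x = np.empty(n_burn + n)
x[0] = eps[0]
for t in range(1, n_burn + n):
    x[t] = phi * x[t - 1] + eps[t]           # AR(1) recursion
x = x[n_burn:]                               # drop burn-in so the series is stationary

gamma = 8 * k                                # third central moment of chi2(k)
target = gamma / (1 - phi ** 3)
m3 = np.mean(x ** 3)
print(m3, target)
```

The two printed numbers agree up to simulation noise, confirming that the third moment is constant in $t$ once the innovations' third moment is constant.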
|
Is the third moment of an AR(1) dependent on $t$?
|
It may or may not be:
If $\epsilon_t$ is independent WN, the $MA(\infty)$ representation $X_t=\sum_{j=0}^\infty\phi^j\epsilon_{t-j}$ gives, for $|\phi|<1$,
$$
E(X_t^3)=\sum_{j=0}^\infty\phi^{3j}E(\eps
|
Is the third moment of an AR(1) dependent on $t$?
It may or may not be:
If $\epsilon_t$ is independent WN, the $MA(\infty)$ representation $X_t=\sum_{j=0}^\infty\phi^j\epsilon_{t-j}$ gives, for $|\phi|<1$,
$$
E(X_t^3)=\sum_{j=0}^\infty\phi^{3j}E(\epsilon_{t-j}^3),
$$
as products $\epsilon_i\epsilon_j\epsilon_k$ for which we do not have $i=j=k$ will yield terms of the form, e.g., $E(\epsilon_j^2)E(\epsilon_k)=0$.
If $E(\epsilon_{t-j}^3)$ is constant over time, and if we denote that quantity by $\gamma$, we obtain
$$
E(X_t^3)=\frac{\gamma}{1-\phi^3}
$$
A little illustration:
n <- 21000
k <- 2
epsilon <- rchisq(n, k)-k # a skewed mean zero distribution
phi <- 0.9
X <- arima.sim(model = list(ar=phi), n = n-1000, innov = epsilon[-(1:1000)], n.start = 1000, start.innov=epsilon[1:1000])
gamma <- k*(k+2)*(k+4) - 3*k^2*(k+2) + 3*k^3 - k^3 # 3rd moment of epsilon, see https://en.wikipedia.org/wiki/Chi-squared_distribution
gamma
mean(epsilon^3)
gamma/(1-phi^3)
mean(X^3)
|
Is the third moment of an AR(1) dependent on $t$?
It may or may not be:
If $\epsilon_t$ is independent WN, the $MA(\infty)$ representation $X_t=\sum_{j=0}^\infty\phi^j\epsilon_{t-j}$ gives, for $|\phi|<1$,
$$
E(X_t^3)=\sum_{j=0}^\infty\phi^{3j}E(\eps
|
47,581
|
Are there any weight matrices of residual connections in ResNet?
|
There are two cases in the ResNet paper.
For shortcut connections where the summands have the same shape, the identity mapping is used, so there is no weight matrix.
When the summands would have different shapes, then there is a weight matrix that has the purpose of projecting the shortcut output to be the same shape as the direct output.
From the ResNet paper Kaiming He et al., "Deep Residual Learning for Image Recognition"
We adopt residual learning to every few stacked layers.
A building block is shown in Fig. 1. Formally, in this paper we consider a building block defined as:
\begin{equation}\label{eq:identity}
y= \mathcal{F}(x, \{W_{i}\}) + x.
\end{equation}
Here $x$ and $y$ are the input and output vectors of the layers considered. The function $\mathcal{F}(x, \{W_{i}\})$ represents the residual mapping to be learned. For the example in Fig. 2 that has two layers, $\mathcal{F}=W_{2}\sigma(W_{1}{x})$ in which $\sigma$ denotes ReLU (Nair 2010) and the biases are omitted for simplifying notations. The operation $\mathcal{F}+{x}$ is performed by a shortcut connection and element-wise addition. We adopt the second nonlinearity after the addition (i.e., $\sigma({y}),$ see Fig. 2).
The shortcut connections in Eqn.(1) introduce neither extra parameter nor computation complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition).
The dimensions of ${x}$ and $\mathcal{F}$ must be equal in Eqn. 1. If this is not the case (e.g., when changing the input/output channels), we can perform a linear projection $W_{s}$ by the shortcut connections to match the dimensions:
\begin{equation}\label{eq:transform}
{y}= \mathcal{F}({x}, \{W_{i}\}) + W_{s}{x}.
\end{equation}
We can also use a square matrix $W_{s}$ in Eqn.1. But we will show by experiments that the identity mapping is sufficient for addressing the degradation problem and is economical, and thus $W_{s}$ is only used when matching dimensions.
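The two cases can be sketched in a few lines of numpy (my illustration, not code from the paper; the vector shapes and the use of plain dense matrices in place of convolutions are simplifying assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

def relu(v):
    return np.maximum(v, 0.0)

def residual_block(x, W1, W2, Ws=None):
    """y = relu(F(x) + shortcut(x)) with F(x) = W2 @ relu(W1 @ x).

    If the output has the same shape as x, the shortcut is the identity
    (no weights); otherwise a projection Ws maps x to the right shape.
    """
    fx = W2 @ relu(W1 @ x)
    shortcut = x if Ws is None else Ws @ x
    return relu(fx + shortcut)

x = rng.normal(size=8)
# Same-shape case: identity shortcut, no extra parameters.
W1, W2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
y_same = residual_block(x, W1, W2)
# Shape-changing case (8 -> 16): a projection matrix Ws is required.
V1, V2, Vs = rng.normal(size=(16, 8)), rng.normal(size=(16, 16)), rng.normal(size=(16, 8))
y_proj = residual_block(x, V1, V2, Vs)
print(y_same.shape, y_proj.shape)
```

A side effect worth noticing: with the residual mapping $\mathcal{F}$ set to zero, the same-shape block reduces to `relu(x)`, which is exactly the "identity is easy to learn" property motivating residual learning.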
|
Are there any weight matrices of residual connections in ResNet?
|
There are two cases in the ResNet paper.
When shortcut connections where the summands have the same shape, the identity mapping is used, so there is no weight matrix.
When the summands would have di
|
Are there any weight matrices of residual connections in ResNet?
There are two cases in the ResNet paper.
For shortcut connections where the summands have the same shape, the identity mapping is used, so there is no weight matrix.
When the summands would have different shapes, then there is a weight matrix that has the purpose of projecting the shortcut output to be the same shape as the direct output.
From the ResNet paper Kaiming He et al., "Deep Residual Learning for Image Recognition"
We adopt residual learning to every few stacked layers.
A building block is shown in Fig. 1. Formally, in this paper we consider a building block defined as:
\begin{equation}\label{eq:identity}
y= \mathcal{F}(x, \{W_{i}\}) + x.
\end{equation}
Here $x$ and $y$ are the input and output vectors of the layers considered. The function $\mathcal{F}(x, \{W_{i}\})$ represents the residual mapping to be learned. For the example in Fig. 2 that has two layers, $\mathcal{F}=W_{2}\sigma(W_{1}{x})$ in which $\sigma$ denotes ReLU (Nair 2010) and the biases are omitted for simplifying notations. The operation $\mathcal{F}+{x}$ is performed by a shortcut connection and element-wise addition. We adopt the second nonlinearity after the addition (i.e., $\sigma({y}),$ see Fig. 2).
The shortcut connections in Eqn.(1) introduce neither extra parameter nor computation complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition).
The dimensions of ${x}$ and $\mathcal{F}$ must be equal in Eqn. 1. If this is not the case (e.g., when changing the input/output channels), we can perform a linear projection $W_{s}$ by the shortcut connections to match the dimensions:
\begin{equation}\label{eq:transform}
{y}= \mathcal{F}({x}, \{W_{i}\}) + W_{s}{x}.
\end{equation}
We can also use a square matrix $W_{s}$ in Eqn.1. But we will show by experiments that the identity mapping is sufficient for addressing the degradation problem and is economical, and thus $W_{s}$ is only used when matching dimensions.
|
Are there any weight matrices of residual connections in ResNet?
There are two cases in the ResNet paper.
When shortcut connections where the summands have the same shape, the identity mapping is used, so there is no weight matrix.
When the summands would have di
|
47,582
|
Randomized controlled trial and DAG
|
Quite simply an RCT ensures no backdoor paths (technically it reduces the possibility of backdoor confounding to a chance which is inversely related to sample size) from outcome $Y$ to treatment $A$, because by definition random assignment $R$ is the only prior cause of treatment:
$$\boxed{R} \to A \to Y$$
In the simple DAG above, randomization is the only cause of treatment. If there were a backdoor path through some third variable like disease severity, or smoking history, then randomization would not actually assign treatment.
In this DAG notation, the box around $R$ (the randomizing process) indicates that it has no prior cause (i.e. it is a purely probabilistic phenomenon).
This specific fact about random assignment—that it reduces the role of confounding via a backdoor path to chance—is rather the entire point of random assignment to treatment.
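A small simulation makes this concrete (my own sketch; all names and parameter values are invented). The true effect of $A$ on $Y$ is zero by construction; self-selection through a confounder $U$ opens a backdoor path $A \leftarrow U \rightarrow Y$ and produces a spurious "effect", while random assignment does not:

```python
import random

def effect_estimate(randomized, n=100_000, seed=0):
    """Difference in mean outcome between treated and untreated units."""
    rng = random.Random(seed)
    treated, control = [], []
    for _ in range(n):
        u = rng.random()               # confounder U
        if randomized:
            a = rng.random() < 0.5     # boxed R -> A: assignment ignores U
        else:
            a = rng.random() < u       # backdoor: U -> A
        y = u + rng.gauss(0.0, 0.1)    # U -> Y; A does not enter at all
        (treated if a else control).append(y)
    return sum(treated) / len(treated) - sum(control) / len(control)

biased = effect_estimate(randomized=False)   # about 1/3, pure confounding
unbiased = effect_estimate(randomized=True)  # about 0
```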
|
Randomized controlled trial and DAG
|
Quite simply an RCT ensures no backdoor paths (technically it reduces the possibility of backdoor confounding to a chance which is inversely related to sample size) from outcome $Y$ to treatment $A$,
|
Randomized controlled trial and DAG
Quite simply an RCT ensures no backdoor paths (technically it reduces the possibility of backdoor confounding to a chance which is inversely related to sample size) from outcome $Y$ to treatment $A$, because by definition random assignment $R$ is the only prior cause of treatment:
$$\boxed{R} \to A \to Y$$
In the simple DAG above, randomization is the only cause of treatment. If there were a backdoor path through some third variable like disease severity, or smoking history, then randomization would not actually assign treatment.
In this DAG notation, the box around $R$ (the randomizing process) indicates that it has no prior cause (i.e. it is a purely probabilistic phenomenon).
This specific fact about random assignment—that it reduces the role of confounding via a backdoor path to chance—is rather the entire point of random assignment to treatment.
|
Randomized controlled trial and DAG
Quite simply an RCT ensures no backdoor paths (technically it reduces the possibility of backdoor confounding to a chance which is inversely related to sample size) from outcome $Y$ to treatment $A$,
|
47,583
|
Is there any gain by adding correlated categorical variables (e.g.: city and neighborhood) in a model?
|
It's possible the model may get better, yes.
Cities and neighborhoods are a particularly good example. The price of homes in Ontario varies quite drastically. In Toronto, single family dwellings top out around a million dollars on average, whereas in my home town they are just about half that. But anyone who has searched for a home to buy knows that price varies within the city, not just between cities, and that variation can be used to obtain a more accurate estimate.
These sorts of approaches (neighborhoods within cities, models within brands) are often handled using a mixed effect model. Let $y_{i, c, n}$ be the price of house $i$ which resides in city $c$ in neighbourhood $n$. One possible model for the neighbourhood example might be as follows.
$$ \beta_{c} \sim \mathcal{N}(\beta_0, \sigma)$$
$$ \beta_{n} \sim \mathcal{N}(\beta_0 + \beta_c, \sigma_c)$$
$$ y_{i, c, n} \sim \mathcal{N}(\beta_0 + \beta_c + \beta_n, \sigma_n)$$
Here, there is some population level average housing price $\beta_0$. The city level average housing price varies about $\beta_0$ (here, we idealize the variation between cities as coming from a normal distribution with some variance $\sigma^2_c$). The neighbourhood level average housing price is again idealized as varying around the city level average housing price, and the individual homes around this mean.
In short, yes it can be useful to keep those variables. They are correlated (only in so far as Shoreditch cannot appear when London is not the city, for example), but they can be used to further explain variation within the larger class.
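To make the hierarchy concrete, here is a toy simulation of that generative story (my own sketch; all parameter values are invented):

```python
import random
import statistics

rng = random.Random(1)

beta0 = 500.0                    # population-level mean price (in thousands)
sigma_c, sigma_n, sigma_y = 150.0, 50.0, 20.0

data = {}
for city in range(5):
    city_mean = rng.gauss(beta0, sigma_c)          # city mean varies around beta0
    for nbhd in range(10):
        nbhd_mean = rng.gauss(city_mean, sigma_n)  # neighbourhood varies around city
        data[(city, nbhd)] = [rng.gauss(nbhd_mean, sigma_y) for _ in range(30)]

# The spread of neighbourhood means *within* one city is real signal that
# a city-only model would leave as unexplained residual variation.
nbhd_means_city0 = [statistics.mean(data[(0, n)]) for n in range(10)]
within_city_spread = statistics.stdev(nbhd_means_city0)
```

A city-only model collapses all ten neighbourhood means into a single number per city, so `within_city_spread` would end up in the residual.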
|
Is there any gain by adding correlated categorical variables (e.g.: city and neighborhood) in a mode
|
It's possible the model may get better, yes.
Cities and neighborhoods are a particularly good example. The price of homes in Ontario varies quite drastically. In Toronto, single family dwellings top
|
Is there any gain by adding correlated categorical variables (e.g.: city and neighborhood) in a model?
It's possible the model may get better, yes.
Cities and neighborhoods are a particularly good example. The price of homes in Ontario varies quite drastically. In Toronto, single family dwellings top out around a million dollars on average, whereas in my home town they are just about half that. But anyone who has searched for a home to buy knows that price varies within the city, not just between cities, and that variation can be used to obtain a more accurate estimate.
These sorts of approaches (neighborhoods within cities, models within brands) are often handled using a mixed effect model. Let $y_{i, c, n}$ be the price of house $i$ which resides in city $c$ in neighbourhood $n$. One possible model for the neighbourhood example might be as follows.
$$ \beta_{c} \sim \mathcal{N}(\beta_0, \sigma)$$
$$ \beta_{n} \sim \mathcal{N}(\beta_0 + \beta_c, \sigma_c)$$
$$ y_{i, c, n} \sim \mathcal{N}(\beta_0 + \beta_c + \beta_n, \sigma_n)$$
Here, there is some population level average housing price $\beta_0$. The city level average housing price varies about $\beta_0$ (here, we idealize the variation between cities as coming from a normal distribution with some variance $\sigma^2_c$). The neighbourhood level average housing price is again idealized as varying around the city level average housing price, and the individual homes around this mean.
In short, yes it can be useful to keep those variables. They are correlated (only in so far as Shoreditch cannot appear when London is not the city, for example), but they can be used to further explain variation within the larger class.
|
Is there any gain by adding correlated categorical variables (e.g.: city and neighborhood) in a mode
It's possible the model may get better, yes.
Cities and neighborhoods are a particularly good example. The price of homes in Ontario varies quite drastically. In Toronto, single family dwellings top
|
47,584
|
Is there any gain by adding correlated categorical variables (e.g.: city and neighborhood) in a model?
|
Presumably the neighbourhoods are unique to their city, so once you know the neighbourhood you know the city. Assuming this is the case, adding both variables will lead to an over-parameterised model; you should use the neighbourhood variable but not the city variable. The problem is not merely that neighbourhood and city are correlated, but that the latter is a deterministic function of the former.
|
Is there any gain by adding correlated categorical variables (e.g.: city and neighborhood) in a mode
|
Presumably the neighbourhoods are unique to their city, so once you know the neighbourhood you know the city. Assuming this is the case, adding both variables will lead to an over-parameterised model
|
Is there any gain by adding correlated categorical variables (e.g.: city and neighborhood) in a model?
Presumably the neighbourhoods are unique to their city, so once you know the neighbourhood you know the city. Assuming this is the case, adding both variables will lead to an over-parameterised model; you should use the neighbourhood variable but not the city variable. The problem is not merely that neighbourhood and city are correlated, but that the latter is a deterministic function of the former.
|
Is there any gain by adding correlated categorical variables (e.g.: city and neighborhood) in a mode
Presumably the neighbourhoods are unique to their city, so once you know the neighbourhood you know the city. Assuming this is the case, adding both variables will lead to an over-parameterised model
|
47,585
|
Does Student's T require normally-distributed data?
|
It seems from the comments that the short answer here is: for large samples, "no", because the sampling distribution of the mean converges to a normal distribution for large $n$. For small samples, the answer is "maybe", depending on lots of things. It seems from some simulations that one of the biggest factors is the skewness of your population distribution:
All of these are done taking 10000 samples of size n=10. The "theoretical" sampling distributions are normal, chi-square and Student's t respectively.
Uniformly distributed population:
Beta distributed population with $\alpha = \beta = 0.5$
Weibull distributed population with $\lambda = 1, k = 0.5$
As $k$ gets smaller, the t distribution does a worse and worse job for the Weibull distributed population.
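The skewness effect is easy to reproduce (my own sketch, using an exponential population rather than the Weibull above): with $n=10$, the two tail rates of the one-sample t statistic, nominally 2.5% each, come out badly lopsided.

```python
import math
import random
import statistics

rng = random.Random(2)

def t_stat(sample, mu):
    n = len(sample)
    return (statistics.mean(sample) - mu) / (statistics.stdev(sample) / math.sqrt(n))

# Exponential(1) population: true mean 1, strongly right-skewed.
n, reps = 10, 20_000
tstats = [t_stat([rng.expovariate(1.0) for _ in range(n)], mu=1.0)
          for _ in range(reps)]

# Two-sided 5% critical value for t with 9 df is about 2.262.
lower_tail = sum(t < -2.262 for t in tstats) / reps   # nominally 2.5%
upper_tail = sum(t > 2.262 for t in tstats) / reps    # nominally 2.5%
# Right skew in the population pushes mass into the *lower* tail of t.
```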
|
Does Student's T require normally-distributed data?
|
It seems from the comments that the short answer here is for large samples, "no", because the sampling distribution of the mean converges to a normal distribution for large $n$. For small samples, th
|
Does Student's T require normally-distributed data?
It seems from the comments that the short answer here is: for large samples, "no", because the sampling distribution of the mean converges to a normal distribution for large $n$. For small samples, the answer is "maybe", depending on lots of things. It seems from some simulations that one of the biggest factors is the skewness of your population distribution:
All of these are done taking 10000 samples of size n=10. The "theoretical" sampling distributions are normal, chi-square and Student's t respectively.
Uniformly distributed population:
Beta distributed population with $\alpha = \beta = 0.5$
Weibull distributed population with $\lambda = 1, k = 0.5$
As $k$ gets smaller, the t distribution does a worse and worse job for the Weibull distributed population.
|
Does Student's T require normally-distributed data?
It seems from the comments that the short answer here is for large samples, "no", because the sampling distribution of the mean converges to a normal distribution for large $n$. For small samples, th
|
47,586
|
Does ( P(B|A) - P(B|~A) ) / P(B|A) have a name?
|
At least in epidemiology, the term is
Relative Risk reduction:
the relative risk reduction (RRR) or efficacy is the
relative decrease in the risk of an adverse event in the exposed group
compared to an unexposed group. It is computed as ${\displaystyle
(I_{u}-I_{e})/I_{u}}$, where $I_e$ is the incidence in the exposed
group, and $I_{u}$ is the incidence in the unexposed group.
If the risk of an adverse event is increased by the exposure rather than decreased, the term relative risk increase (RRI) is used, and computed as ${\displaystyle (I_{e}-I_{u})/I_{u}}$.
If the direction of risk change is not assumed, the term relative effect is used and computed as ${\displaystyle (I_{e}-I_{u})/I_{u}}$
Dictionary of Epidemiology - Oxford Reference
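In code, the definition is a one-liner (my own sketch; the function name is invented):

```python
def relative_risk_reduction(i_exposed, i_unexposed):
    """RRR = (I_u - I_e) / I_u. Negative values correspond to a
    relative risk *increase* of the same magnitude."""
    return (i_unexposed - i_exposed) / i_unexposed

# Incidence 2% in the exposed group vs 5% in the unexposed group:
rrr = relative_risk_reduction(0.02, 0.05)   # ~0.6, i.e. a 60% relative reduction
```

A negative value, e.g. `relative_risk_reduction(0.06, 0.05)` of about -0.2, corresponds to a 20% relative risk increase.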
|
Does ( P(B|A) - P(B|~A) ) / P(B|A) have a name?
|
At least in epidemiology, the term is
Relative Risk reduction:
the relative risk reduction (RRR) or efficacy is the
relative decrease in the risk of an adverse event in the exposed group
compared to
|
Does ( P(B|A) - P(B|~A) ) / P(B|A) have a name?
At least in epidemiology, the term is
Relative Risk reduction:
the relative risk reduction (RRR) or efficacy is the
relative decrease in the risk of an adverse event in the exposed group
compared to an unexposed group. It is computed as ${\displaystyle
(I_{u}-I_{e})/I_{u}}$, where $I_e$ is the incidence in the exposed
group, and $I_{u}$ is the incidence in the unexposed group.
If the risk of an adverse event is increased by the exposure rather than decreased, the term relative risk increase (RRI) is used, and computed as ${\displaystyle (I_{e}-I_{u})/I_{u}}$.
If the direction of risk change is not assumed, the term relative effect is used and computed as ${\displaystyle (I_{e}-I_{u})/I_{u}}$
Dictionary of Epidemiology - Oxford Reference
|
Does ( P(B|A) - P(B|~A) ) / P(B|A) have a name?
At least in epidemiology, the term is
Relative Risk reduction:
the relative risk reduction (RRR) or efficacy is the
relative decrease in the risk of an adverse event in the exposed group
compared to
|
47,587
|
Is threshold moving unnecessary in balanced classification problem?
|
No, that is not correct.
First off, please take a look at Reduce Classification Probability Threshold, where I argue that discussions about thresholds belong to the decision stage of the analysis, not the modeling stage. Thresholds can only be set if we include the costs of misclassification - and that holds even for balanced datasets.
That thresholds are mistakenly discussed in the context of modeling is a consequence of the reliance on accuracy as an evaluation measure, which is an extremely misleading practice: Why is accuracy not the best measure for assessing classification models?
Now, to your question. We can easily simulate calibrated probabilistic predictions ("calibrated" meaning that an instance with a predicted probability $\hat{p}$ of belonging to the target class actually belongs to the target class with probability $p=\hat{p}$, so we are not dealing with artifacts of mispredicting) for a balanced dataset, simply by drawing predictions $\hat{p}_i\sim U[0,1]$, then assigning instance $i$ to the True class with probability $\hat{p}_i$. Now, using a threshold $t$ amounts to treating instance $i$ as True if $\hat{p}_i>t$, and as False if not.
What is an "optimal" threshold? This will, as above, depend on the costs of misclassification. Whether you treat a True as False, or a False as True, may have very different costs indeed. And this holds even for balanced datasets. If the costs of treating a False as True are much higher than the reverse costs of treating a True as False, then it makes sense to increase the (decision!) threshold.
As an example, perhaps you have a database of people that you may want to sell something to. If you do not offer the product to someone who would have bought it (treating a True as False), you lose the sale. If you offer the product to someone who does not want it (treating a False as True), you may send this person unwanted emails, and majorly tick them off. Thus, the costs are asymmetric. It only makes sense to pitch your product to people where you are highly certain they will buy it (and not be ticked off), i.e., you want to choose a high decision threshold.
Below, I make some assumptions on the relevant costs and simulate the costs depending on the threshold. As you see, the lowest-cost threshold is definitely not at 0.5. This is R code; you can adapt and run it yourself as you see fit.
cost_of_treating_T_as_T <- 0 # incurred if outcome==T & probabilities_of_T>=threshold
cost_of_treating_T_as_F <- 10 # incurred if outcome==T & probabilities_of_T<threshold
cost_of_treating_F_as_T <- 50 # incurred if outcome==F & probabilities_of_T>=threshold
cost_of_treating_F_as_F <- 1 # incurred if outcome==F & probabilities_of_T<threshold
nn <- 1e5
probabilities_of_T <- runif(nn)
outcomes <- runif(nn)<probabilities_of_T
sum(outcomes)/nn # balanced data
thresholds <- seq(.01,.99,by=.01)
average_costs <- sapply(thresholds,function(tt)
cost_of_treating_T_as_T*sum((probabilities_of_T>=tt)*outcomes) +
cost_of_treating_T_as_F*sum((probabilities_of_T<tt)*outcomes) +
cost_of_treating_F_as_T*sum((probabilities_of_T>=tt)*(!outcomes)) +
cost_of_treating_F_as_F*sum((probabilities_of_T<tt)*(!outcomes))
)/nn
plot(thresholds,average_costs,type="l",las=1,xlab="Threshold",ylab="Average Costs")
Finally, my answer to Example when using accuracy as an outcome measure will lead to a wrong conclusion discusses this situation from a closely related angle.
|
Is threshold moving unnecessary in balanced classification problem?
|
No, that is not correct.
First off, please take a look at Reduce Classification Probability Threshold, where I argue that discussions about thresholds belong to the decision stage of the analysis, not
|
Is threshold moving unnecessary in balanced classification problem?
No, that is not correct.
First off, please take a look at Reduce Classification Probability Threshold, where I argue that discussions about thresholds belong to the decision stage of the analysis, not the modeling stage. Thresholds can only be set if we include the costs of misclassification - and that holds even for balanced datasets.
That thresholds are mistakenly discussed in the context of modeling is a consequence of the reliance on accuracy as an evaluation measure, which is an extremely misleading practice: Why is accuracy not the best measure for assessing classification models?
Now, to your question. We can easily simulate calibrated probabilistic predictions ("calibrated" meaning that an instance with a predicted probability $\hat{p}$ of belonging to the target class actually belongs to the target class with probability $p=\hat{p}$, so we are not dealing with artifacts of mispredicting) for a balanced dataset, simply by drawing predictions $\hat{p}_i\sim U[0,1]$, then assigning instance $i$ to the True class with probability $\hat{p}_i$. Now, using a threshold $t$ amounts to treating instance $i$ as True if $\hat{p}_i>t$, and as False if not.
What is an "optimal" threshold? This will, as above, depend on the costs of misclassification. Whether you treat a True as False, or a False as True, may have very different costs indeed. And this holds even for balanced datasets. If the costs of treating a False as True are much higher than the reverse costs of treating a True as False, then it makes sense to increase the (decision!) threshold.
As an example, perhaps you have a database of people that you may want to sell something to. If you do not offer the product to someone who would have bought it (treating a True as False), you lose the sale. If you offer the product to someone who does not want it (treating a False as True), you may send this person unwanted emails, and majorly tick them off. Thus, the costs are asymmetric. It only makes sense to pitch your product to people where you are highly certain they will buy it (and not be ticked off), i.e., you want to choose a high decision threshold.
Below, I make some assumptions on the relevant costs and simulate the costs depending on the threshold. As you see, the lowest-cost threshold is definitely not at 0.5. This is R code; you can adapt and run it yourself as you see fit.
cost_of_treating_T_as_T <- 0 # incurred if outcome==T & probabilities_of_T>=threshold
cost_of_treating_T_as_F <- 10 # incurred if outcome==T & probabilities_of_T<threshold
cost_of_treating_F_as_T <- 50 # incurred if outcome==F & probabilities_of_T>=threshold
cost_of_treating_F_as_F <- 1 # incurred if outcome==F & probabilities_of_T<threshold
nn <- 1e5
probabilities_of_T <- runif(nn)
outcomes <- runif(nn)<probabilities_of_T
sum(outcomes)/nn # balanced data
thresholds <- seq(.01,.99,by=.01)
average_costs <- sapply(thresholds,function(tt)
cost_of_treating_T_as_T*sum((probabilities_of_T>=tt)*outcomes) +
cost_of_treating_T_as_F*sum((probabilities_of_T<tt)*outcomes) +
cost_of_treating_F_as_T*sum((probabilities_of_T>=tt)*(!outcomes)) +
cost_of_treating_F_as_F*sum((probabilities_of_T<tt)*(!outcomes))
)/nn
plot(thresholds,average_costs,type="l",las=1,xlab="Threshold",ylab="Average Costs")
Finally, my answer to Example when using accuracy as an outcome measure will lead to a wrong conclusion discusses this situation from a closely related angle.
|
Is threshold moving unnecessary in balanced classification problem?
No, that is not correct.
First off, please take a look at Reduce Classification Probability Threshold, where I argue that discussions about thresholds belong to the decision stage of the analysis, not
|
47,588
|
Train-Test Splits in Random Forest approach with small sample sizes
|
Yes, RFs' in-built OOB mse can be seen as an indicator for model performance. But you won't be able to compare its performance to different models (or different hyperparameters). Generally, you still want a "clean" hold-out set for validation.
Train-test splits are often quite inaccurate for small data sets. Consider the bootstrap or repeated k-fold CV; e.g., 100 times (stratified) 10-fold CV or similar will probably not be too bad.
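A minimal index generator for repeated k-fold CV could look like this (my own sketch; stratification and the model-fitting loop are omitted):

```python
import random

def repeated_kfold(n, k=10, repeats=100, seed=0):
    """Yield (train_idx, test_idx) for `repeats` independent k-fold splits."""
    rng = random.Random(seed)
    for _ in range(repeats):
        idx = list(range(n))
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]
        for held_out in range(k):
            test = folds[held_out]
            train = [i for f, fold in enumerate(folds) if f != held_out
                     for i in fold]
            yield train, test
```

Each of the `repeats * k` (train, test) pairs gets a model fit and a held-out MSE; averaging those gives a far less noisy performance estimate than a single split.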
|
Train-Test Splits in Random Forest approach with small sample sizes
|
Yes, RFs' in-built OOB mse can be seen as an indicator for model performance. But you won't be able to compare its performance to different models (or different hyperparameters). Generally, you still
|
Train-Test Splits in Random Forest approach with small sample sizes
Yes, RFs' in-built OOB mse can be seen as an indicator for model performance. But you won't be able to compare its performance to different models (or different hyperparameters). Generally, you still want a "clean" hold-out set for validation.
Train-test splits are often quite inaccurate for small data sets. Consider the bootstrap or repeated k-fold CV; e.g., 100 times (stratified) 10-fold CV or similar will probably not be too bad.
|
Train-Test Splits in Random Forest approach with small sample sizes
Yes, RFs' in-built OOB mse can be seen as an indicator for model performance. But you won't be able to compare its performance to different models (or different hyperparameters). Generally, you still
|
47,589
|
Help understand the virtue of generalized linear models
|
The ordinary least squares regression model assumes that the errors are normally distributed (and with constant variance). Equivalently, you could say that the conditional distributions of $Y$ are normal. However, they often aren't; for example, they can be badly skewed, with differing residual variances, the appearance of probable 'outliers', etc. One way to deal with these somewhat common problems is to transform $Y$. For instance, it often turns out to be helpful to take the logarithm of $Y$ and all those problems go away. In such a case, the conditional distributions of $Y$ become normal. That's what they're referring to. However, with Bernoulli data ($Y \in \{0, 1\}$), no transformation will ever make the conditional distribution normal—it will always be Bernoulli. The point of the link function is not to make $Y$ normal. (In fact, the link function is not even applied to $Y$, it is applied to the parameter that governs the behavior of the conditional distribution. In the case of the Bernoulli, that's the conditional probability, $p$.) Instead, the reason for the link function is to make it possible for the right hand side to model the needed parameter.
It may help to read some of my existing answers that are related to this:
Difference between logit and probit models
Is the logit function always the best for regression modeling of binary data?
I'm not sure how to answer this. It seems to be based on a mistaken premise.
The first set of transformations are members of the set of power transformations. They are (some of the) ways to transform $Y$ values for OLS regression. The second set are possible link functions for Bernoulli data. I don't see "arbitrary" in the quote from the book. It is certainly true that there are essentially infinite transformations to normalize the conditional distribution of $Y$, and there are essentially infinite transformations that can be used as link functions in a binomial regression model, but in general these are different infinite sets and there are also infinite sets that cannot be used for each. For a power transformation to correct skew, you want a monotonic transformation that will progressively shrink larger values down (e.g., $\sqrt{Y}$) or progressively expand them up (e.g., $Y^2$); for a link function for a binary response, you want a function that will transform $(0, 1) \rightarrow (-\infty, \infty)$.
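For the binary case, the logit link and its inverse make that last requirement concrete (a minimal sketch):

```python
import math

def logit(p):
    """Link function: maps a probability in (0, 1) to the whole real line."""
    return math.log(p / (1.0 - p))

def inv_logit(eta):
    """Inverse link: maps any real linear predictor back into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-eta))
```

Whatever real value the linear predictor $X\beta$ takes, the inverse link returns a valid probability, which is exactly why the link is applied to the parameter $p$ rather than to $Y$ itself.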
|
Help understand the virtue of generalized linear models
|
The ordinary least squares regression model assumes that the errors are normally distributed (and with constant variance). Equivalently, you could say that the conditional distributions of $Y$ are no
|
Help understand the virtue of generalized linear models
The ordinary least squares regression model assumes that the errors are normally distributed (and with constant variance). Equivalently, you could say that the conditional distributions of $Y$ are normal. However, they often aren't; for example, they can be badly skewed, with differing residual variances, the appearance of probable 'outliers', etc. One way to deal with these somewhat common problems is to transform $Y$. For instance, it often turns out to be helpful to take the logarithm of $Y$ and all those problems go away. In such a case, the conditional distributions of $Y$ become normal. That's what they're referring to. However, with Bernoulli data ($Y \in \{0, 1\}$), no transformation will ever make the conditional distribution normal—it will always be Bernoulli. The point of the link function is not to make $Y$ normal. (In fact, the link function is not even applied to $Y$, it is applied to the parameter that governs the behavior of the conditional distribution. In the case of the Bernoulli, that's the conditional probability, $p$.) Instead, the reason for the link function is to make it possible for the right hand side to model the needed parameter.
It may help to read some of my existing answers that are related to this:
Difference between logit and probit models
Is the logit function always the best for regression modeling of binary data?
I'm not sure how to answer this. It seems to be based on a mistaken premise.
The first set of transformations are members of the set of power transformations. They are (some of the) ways to transform $Y$ values for OLS regression. The second set are possible link functions for Bernoulli data. I don't see "arbitrary" in the quote from the book. It is certainly true that there are essentially infinite transformations to normalize the conditional distribution of $Y$, and there are essentially infinite transformations that can be used as link functions in a binomial regression model, but in general these are different infinite sets and there are also infinite sets that cannot be used for each. For a power transformation to correct skew, you want a monotonic transformation that will progressively shrink larger values down (e.g., $\sqrt{Y}$) or progressively expand them up (e.g., $Y^2$); for a link function for a binary response, you want a function that will transform $(0, 1) \rightarrow (-\infty, \infty)$.
|
Help understand the virtue of generalized linear models
The ordinary least squares regression model assumes that the errors are normally distributed (and with constant variance). Equivalently, you could say that the conditional distributions of $Y$ are no
|
47,590
|
Can long run variance of a time series be used to test mean of the series?
|
Basically, yes and yes - you can replace the long-run variance with a consistent estimator thereof and, by Slutsky's theorem, the test statistic will still be standard normal under the null.
And indeed, kernel-based long-run variance estimators are sometimes also referred to as nonparametric estimators that do not (there still are some assumptions, of course) require you to postulate a parametric model for the dependence.
That said, indeed, if you knew that your series follows a specific structure, you could exploit that. Why if in OLS the autocorrelation between residuals is positive, it will lead to inflated t-stats? discusses that the long-run variance of an AR(1) is
$$
\sigma^2/(1-\phi)^2,
$$
which you could estimate parametrically.
And indeed, as discussed for example in Newey-West t-stats and critical values, there is a price to pay for the above nonparametric inference, namely relatively poor finite-sample performance.
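For illustration, a Bartlett-kernel (Newey-West-type) long-run variance estimator and the resulting t-statistic for the mean can be sketched as follows (my own toy code; the bandwidth rule of thumb is one common choice, not the only one):

```python
import math
import random

def long_run_variance(x, bandwidth):
    """Bartlett-kernel estimate: gamma(0) + 2 * sum_j w_j * gamma(j)."""
    n = len(x)
    m = sum(x) / n
    d = [xi - m for xi in x]

    def gamma(j):  # sample autocovariance at lag j
        return sum(d[t] * d[t - j] for t in range(j, n)) / n

    lrv = gamma(0)
    for j in range(1, bandwidth + 1):
        lrv += 2.0 * (1.0 - j / (bandwidth + 1.0)) * gamma(j)
    return lrv

def mean_tstat(x, mu0=0.0, bandwidth=None):
    """t-statistic for H0: E[x] = mu0, using the long-run variance."""
    n = len(x)
    if bandwidth is None:
        bandwidth = int(4 * (n / 100.0) ** (2.0 / 9.0))  # a common rule of thumb
    return (sum(x) / n - mu0) / math.sqrt(long_run_variance(x, bandwidth) / n)

# quick sanity check on white noise: the long-run variance should be near 1
rng = random.Random(3)
white = [rng.gauss(0.0, 1.0) for _ in range(5000)]
lrv_white = long_run_variance(white, bandwidth=5)
```

On serially correlated data (e.g., an AR(1) with positive $\phi$) the estimate would instead approach $\sigma^2/(1-\phi)^2$, in line with the formula above.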
|
Can long run variance of a time series be used to test mean of the series?
|
Basically, yes and yes - you can replace the long-run variance with a consistent estimator thereof and, by Slutsky's theorem, the test statistic will still be standard normal under the null.
And indee
|
Can long run variance of a time series be used to test mean of the series?
Basically, yes and yes - you can replace the long-run variance with a consistent estimator thereof and, by Slutsky's theorem, the test statistic will still be standard normal under the null.
And indeed, kernel-based long-run variance estimators are sometimes also referred to as nonparametric estimators that do not (there still are some assumptions, of course) require you to postulate a parametric model for the dependence.
That said, indeed, if you knew that your series follows a specific structure, you could exploit that. Why if in OLS the autocorrelation between residuals is positive, it will lead to inflated t-stats? discusses that the long-run variance of an AR(1) is
$$
\sigma^2/(1-\phi)^2,
$$
which you could estimate parametrically.
And indeed, as discussed for example in Newey-West t-stats and critical values, there is a price to pay for the above nonparametric inference, namely relatively poor finite-sample performance.
|
Can long run variance of a time series be used to test mean of the series?
Basically, yes and yes - you can replace the long-run variance with a consistent estimator thereof and, by Slutsky's theorem, the test statistic will still be standard normal under the null.
And indee
|
47,591
|
Is pooling countries together or running a regression model for each country alone is more suitable for comparison?
|
The two models yield different results because they are, well, different models.
Clearly you are interested in the "effects" of education and year and naturally you include an interaction between them, which is fine. age and gender are presumably a potential confounder, hence this is also correclty included.
So the question really comes down to how to treat country. First, you have repeated measures within country - earnings in one country are more likely to be similar to earnings in the same country, than other countries and this needs to be accounted for. Pooling will not do this. country might also be considered a confounder, since it seems likely that country will influence both earnings, and education. So your 2nd model is more approriate. You are correct that the output will change depending on the reference level of country, but this is just a simple re-parameterisation. To understand this, you need to realise that the intercept include the reference level, so when you change the reference level, a few things will change, but the overall model is the same.
Since you have only 6 countries, you are rightly concerned about fitting a mixed effects / multilevel model - however there is some consensus in some domains that 6 is (just) OK. So I would also try to fit the model:
earnings ~ education*year + age + gender + (1 | country)
and compare its inferences with those from your 2nd model.
|
Is pooling countries together or running a regression model for each country alone is more suitable
|
The two models yield different results because they are, well, different models.
Clearly you are interested in the "effects" of education and year and naturally you include an interaction between them
|
Is pooling countries together or running a regression model for each country alone is more suitable for comparison?
The two models yield different results because they are, well, different models.
Clearly you are interested in the "effects" of education and year and naturally you include an interaction between them, which is fine. age and gender are presumably potential confounders, hence these are also correctly included.
So the question really comes down to how to treat country. First, you have repeated measures within country - earnings within the same country are more likely to be similar to each other than to earnings in other countries, and this needs to be accounted for. Pooling will not do this. country might also be considered a confounder, since it seems likely that country will influence both earnings and education. So your 2nd model is more appropriate. You are correct that the output will change depending on the reference level of country, but this is just a simple re-parameterisation. To understand this, you need to realise that the intercept includes the reference level, so when you change the reference level, a few things will change, but the overall model is the same.
Since you have only 6 countries, you are rightly concerned about fitting a mixed effects / multilevel model - however there is some consensus in some domains that 6 is (just) OK. So I would also try to fit the model:
earnings ~ education*year + age + gender + (1 | country)
and compare the inferences from it with those from your 2nd model.
|
Is pooling countries together or running a regression model for each country alone more suitable
The two models yield different results because they are, well, different models.
Clearly you are interested in the "effects" of education and year and naturally you include an interaction between them
|
47,592
|
How to estimate the sample variance of the estimator of the parameter $P(x≤0)$ where $x \sim N(\mu,\sigma)$?
|
The variable $\frac{\hat\mu}{\hat\sigma}$
This follows a non-central t-distribution scaled by $\sqrt{n}$ and has approximately the following variance (see a related question: What is the formula for the standard error of Cohen's d )
\begin{array}{crl}
\text{Var}\left(\frac{\hat\mu}{\hat\sigma}\right) &=& \frac{1}{n}\left(\frac{\nu(1+n(\mu/\sigma)^2)}{\nu-2} - \frac{n(\mu/\sigma)^2 \nu}{2} \left(\frac{\Gamma((\nu-1)/2)}{\Gamma(\nu/2)}\right)^2 \right) \\ &\approx& \frac{1+\frac{1}{2}(\mu/\sigma)^2}{n} \end{array}
The transformed variable $\hat p = \Phi \left(- \frac{\hat \mu}{\hat \sigma} \right)$
This can be related to the variance above by using a Delta approximation. The variance scales with the slope/derivative of the transformation. So you get approximately
$$\begin{array}{rcl}
\text{Var}(\hat{p}) &\approx& \text{Var}\left( \frac{\hat\mu}{\hat\sigma}\right) \phi\left(- \frac{\hat \mu}{\hat \sigma} \right)^2 \\ &\approx& \frac{1}{n} \cdot \phi\left(- \frac{\hat \mu}{\hat \sigma} \right)^2 \cdot\left({1+ \frac{1}{2} (\mu/\sigma)^2} \right)
\end{array}$$
Or by approximation
$$\begin{array}{rcl}
\text{Var}(\hat{p}) &\approx& c \cdot \frac{p}{n}
\end{array}$$
where the factor $c = (1+\frac{1}{2}x^2)\phi(x)^2/\Phi(-x)$ is like:
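No figure is reproduced here, but the factor $c$ can be evaluated directly. A quick sketch in Python (the function name and the use of scipy are my own additions, not part of the answer's R code):

```python
from scipy.stats import norm

def c_factor(x):
    # c = (1 + x^2/2) * phi(x)^2 / Phi(-x), with x = mu/sigma
    return (1 + 0.5 * x**2) * norm.pdf(x)**2 / norm.cdf(-x)

print(c_factor(2.0))   # roughly 0.384
```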
Code for comparison:
The code below shows that this Delta approximation works reasonably for $\mu/\sigma = 2$. But for higher values of this parameter, the difference is large if $n$ is small, and a higher-order approximation should be made.
### settings
set.seed(1)
mu = 2
sig = 1
d = mu/sig
n = 10
p <- pnorm(-d)
### function to simulate the sample and estimating it
sample = function(n) {
x <- rnorm(n,mu,sig)
d = mean(x)/var(x)^0.5
p_est <- 1-pnorm(d)
p_est
}
#### perform the simulation 1000 times
#smp <- replicate(1000,sample(n))
#var(smp) ### simulation variance
#dnorm(d)^2 * (1 + 0.5 * d^2)/n ### formula variance
### simulate for a range of n
n_rng = 10:100
### simulated variance
v1 <- sapply(n_rng, FUN = function(x) {
smp <- replicate(1000,sample(x)) ### simulate
return(var(smp)) ### compute variance
})
### estimated variance
v2 <- dnorm(d)^2 * (1/n_rng) * (1 + 0.5 * d^2)
### estimated variance with higher precision
tmu <- d*sqrt(n_rng)
nu <- n_rng-1
fc <- gamma((nu-1)/2)/gamma(nu/2)
v3 <- dnorm(d)^2 * (1/n_rng) * ( nu/(nu-2)*(1+tmu^2) - tmu^2*nu/2 * fc^2 )
### plot results
plot(n_rng,v2, ylim = range(c(v1,v2)), log = "xy", type = "l",
main = "compare simulated variance \n with estimated variance",
xlab = "n", ylab = expression(hat(p)))
lines(n_rng,v3, col = 1, lty = 2)
points(n_rng, v1, col = 1, bg = 0, pch = 21, cex = 0.7)
|
How to estimate the sample variance of the estimator of the parameter $P(x≤0)$ where $x \sim N(\mu,\
|
The variable $\frac{\hat\mu}{\hat\sigma}$
This follows a non-central t-distribution scaled by $\sqrt{n}$ and has approximately the following variance (see a related question: What is the formula for t
|
How to estimate the sample variance of the estimator of the parameter $P(x≤0)$ where $x \sim N(\mu,\sigma)$?
The variable $\frac{\hat\mu}{\hat\sigma}$
This follows a non-central t-distribution scaled by $\sqrt{n}$ and has approximately the following variance (see a related question: What is the formula for the standard error of Cohen's d )
\begin{array}{crl}
\text{Var}\left(\frac{\hat\mu}{\hat\sigma}\right) &=& \frac{1}{n}\left(\frac{\nu(1+n(\mu/\sigma)^2)}{\nu-2} - \frac{n(\mu/\sigma)^2 \nu}{2} \left(\frac{\Gamma((\nu-1)/2)}{\Gamma(\nu/2)}\right)^2 \right) \\ &\approx& \frac{1+\frac{1}{2}(\mu/\sigma)^2}{n} \end{array}
The transformed variable $\hat p = \Phi \left(- \frac{\hat \mu}{\hat \sigma} \right)$
This can be related to the variance above by using a Delta approximation. The variance scales with the slope/derivative of the transformation. So you get approximately
$$\begin{array}{rcl}
\text{Var}(\hat{p}) &\approx& \text{Var}\left( \frac{\hat\mu}{\hat\sigma}\right) \phi\left(- \frac{\hat \mu}{\hat \sigma} \right)^2 \\ &\approx& \frac{1}{n} \cdot \phi\left(- \frac{\hat \mu}{\hat \sigma} \right)^2 \cdot\left({1+ \frac{1}{2} (\mu/\sigma)^2} \right)
\end{array}$$
Or by approximation
$$\begin{array}{rcl}
\text{Var}(\hat{p}) &\approx& c \cdot \frac{p}{n}
\end{array}$$
where the factor $c = (1+\frac{1}{2}x^2)\phi(x)^2/\Phi(-x)$ is like:
Code for comparison:
The code below shows that this Delta approximation works reasonably for $\mu/\sigma = 2$. But for higher values of this parameter, the difference is large if $n$ is small, and a higher-order approximation should be made.
### settings
set.seed(1)
mu = 2
sig = 1
d = mu/sig
n = 10
p <- pnorm(-d)
### function to simulate the sample and estimating it
sample = function(n) {
x <- rnorm(n,mu,sig)
d = mean(x)/var(x)^0.5
p_est <- 1-pnorm(d)
p_est
}
#### perform the simulation 1000 times
#smp <- replicate(1000,sample(n))
#var(smp) ### simulation variance
#dnorm(d)^2 * (1 + 0.5 * d^2)/n ### formula variance
### simulate for a range of n
n_rng = 10:100
### simulated variance
v1 <- sapply(n_rng, FUN = function(x) {
smp <- replicate(1000,sample(x)) ### simulate
return(var(smp)) ### compute variance
})
### estimated variance
v2 <- dnorm(d)^2 * (1/n_rng) * (1 + 0.5 * d^2)
### estimated variance with higher precision
tmu <- d*sqrt(n_rng)
nu <- n_rng-1
fc <- gamma((nu-1)/2)/gamma(nu/2)
v3 <- dnorm(d)^2 * (1/n_rng) * ( nu/(nu-2)*(1+tmu^2) - tmu^2*nu/2 * fc^2 )
### plot results
plot(n_rng,v2, ylim = range(c(v1,v2)), log = "xy", type = "l",
main = "compare simulated variance \n with estimated variance",
xlab = "n", ylab = expression(hat(p)))
lines(n_rng,v3, col = 1, lty = 2)
points(n_rng, v1, col = 1, bg = 0, pch = 21, cex = 0.7)
|
How to estimate the sample variance of the estimator of the parameter $P(x≤0)$ where $x \sim N(\mu,\
The variable $\frac{\hat\mu}{\hat\sigma}$
This follows a non-central t-distribution scaled by $\sqrt{n}$ and has approximately the following variance (see a related question: What is the formula for t
|
47,593
|
Is gamma actually an efficient way to weigh future rewards in reinforcement learning?
|
Exponential discounting is "time-consistent" in a way that other forms of discounting are not. For example, with $\gamma = 0.9$, you would prefer 1 reward today to 1 reward tomorrow, and 1 reward in 10 days to 1 reward in 11 days. You would also prefer 2 rewards tomorrow over 1 reward today, and 2 rewards in 11 days over 1 reward in 10 days.
Under your scheme, it seems like you'd prefer 1 reward tomorrow over 1 reward today, but you'd prefer 1 reward in 10 days over 1 reward in 11 days. You might prefer 1 reward today over 2 tomorrow, but 2 in 11 days rather than 1 in 10 days.
So you answer differently to the same questions depending on how far away something is, which is a bit strange. If taking these rewards required longer-term planning and preparation, you might find yourself spending a few days to prepare to do X, only to later change your mind and throw it all away to do Y.
Another popular alternative to exponential discounting is hyperbolic discounting, which is supposedly what humans use. However this is also not time-consistent.
Practically speaking, it's a bit nontrivial to use alternate discount functions because the Bellman equation, the basis of many reinforcement learning algorithms, assumes exponential discounting. Fedus et al show you can tweak some things to make hyperbolic discounting work with Q-learning.
Another practical reason for exponential discounting is that it converges, whereas a hyperbolic sum of rewards might diverge to infinity. So it makes things nice for theoretical analysis.
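The time-consistency and convergence points can be made concrete with a small sketch (Python; the values of $\gamma$ and the hyperbolic parameter $k$ are illustrative choices of mine):

```python
gamma, k = 0.6, 2.0

# choice: "1 reward at time t" vs "2 rewards at time t+1"
# exponential weights compare 2*gamma^(t+1) with gamma^t, i.e. 2*gamma with 1,
# so the answer cannot depend on t (time consistency)
prefer_later_exp = [2 * gamma ** (t + 1) > gamma ** t for t in range(15)]

# hyperbolic weights 1/(1 + k*t): the same comparison drifts with t,
# so the choice can flip as the rewards draw nearer (preference reversal)
prefer_later_hyp = [2 / (1 + k * (t + 1)) > 1 / (1 + k * t) for t in range(15)]

# exponential weights also sum to the finite value 1/(1 - gamma),
# while the hyperbolic (harmonic-like) sum keeps growing with the horizon
total_exp = sum(gamma ** t for t in range(10_000))
print(prefer_later_exp[:3], prefer_later_hyp[:3], total_exp)
```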
|
Is gamma actually an efficient way to weigh future rewards in reinforcement learning?
|
Exponential discounting is "time-consistent" in a way that other forms of discounting are not. For example, with $\gamma = 0.9$, you would prefer 1 reward today to 1 reward tomorrow, and 1 reward in 1
|
Is gamma actually an efficient way to weigh future rewards in reinforcement learning?
Exponential discounting is "time-consistent" in a way that other forms of discounting are not. For example, with $\gamma = 0.9$, you would prefer 1 reward today to 1 reward tomorrow, and 1 reward in 10 days to 1 reward in 11 days. You would also prefer 2 rewards tomorrow over 1 reward today, and 2 rewards in 11 days over 1 reward in 10 days.
Under your scheme, it seems like you'd prefer 1 reward tomorrow over 1 reward today, but you'd prefer 1 reward in 10 days over 1 reward in 11 days. You might prefer 1 reward today over 2 tomorrow, but 2 in 11 days rather than 1 in 10 days.
So you answer differently to the same questions depending on how far away something is, which is a bit strange. If taking these rewards required longer-term planning and preparation, you might find yourself spending a few days to prepare to do X, only to later change your mind and throw it all away to do Y.
Another popular alternative to exponential discounting is hyperbolic discounting, which is supposedly what humans use. However this is also not time-consistent.
Practically speaking, it's a bit nontrivial to use alternate discount functions because the Bellman equation, the basis of many reinforcement learning algorithms, assumes exponential discounting. Fedus et al show you can tweak some things to make hyperbolic discounting work with Q-learning.
Another practical reason for exponential discounting is that it converges, whereas a hyperbolic sum of rewards might diverge to infinity. So it makes things nice for theoretical analysis.
|
Is gamma actually an efficient way to weigh future rewards in reinforcement learning?
Exponential discounting is "time-consistent" in a way that other forms of discounting are not. For example, with $\gamma = 0.9$, you would prefer 1 reward today to 1 reward tomorrow, and 1 reward in 1
|
47,594
|
Can we go from $X_n = \mu + O_p(n^{-1})$ to $E[X_n] = \mu + O(n^{-1})$?
|
Here is a counterexample:
$P(X_n = 1) = \frac{1}{\sqrt{n}}$
$P(X_n = 0) = 1 - \frac{1}{\sqrt{n}}$
To show that $X_n = O_p(\frac{1}{n})$: given $\epsilon > 0$, let $M = N > \frac{1}{\epsilon^2}$.
Then for $n > N$, $P(n|X_n| > M) = P(|X_n| > \frac{M}{n}) = P(X_n = 1) = \frac{1}{\sqrt{n}} < \epsilon$ as required.
But $E(X_n) = \frac{1}{\sqrt{n}}$, which is not $O(\frac{1}{n})$.
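A quick numeric illustration of the two claims (Python; the values of $n$ are arbitrary):

```python
import math

# X_n takes the value 1 with probability 1/sqrt(n), else 0
for n in [10**2, 10**4, 10**6]:
    p_one = 1 / math.sqrt(n)   # = P(n|X_n| > M) for any fixed 0 < M < n
    mean = p_one               # E[X_n] = 1 * 1/sqrt(n)
    # the tail probability of n*X_n shrinks to 0 (so X_n = O_p(1/n)),
    # while n*E[X_n] = sqrt(n) grows without bound (so E[X_n] is not O(1/n))
    print(n, p_one, n * mean)
```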
|
Can we go from $X_n = \mu + O_p(n^{-1})$ to $E[X_n] = \mu + O(n^{-1})$?
|
Here is a counterexample:
$P(X_n = 1) = \frac{1}{\sqrt{n}}$
$P(X_n = 0) = 1 - \frac{1}{\sqrt{n}}$
To show that $X_n = O_p(\frac{1}{n})$: given $\epsilon > 0$, let $M = N > \frac{1}{\epsilon^2}$.
Then
|
Can we go from $X_n = \mu + O_p(n^{-1})$ to $E[X_n] = \mu + O(n^{-1})$?
Here is a counterexample:
$P(X_n = 1) = \frac{1}{\sqrt{n}}$
$P(X_n = 0) = 1 - \frac{1}{\sqrt{n}}$
To show that $X_n = O_p(\frac{1}{n})$: given $\epsilon > 0$, let $M = N > \frac{1}{\epsilon^2}$.
Then for $n > N$, $P(n|X_n| > M) = P(|X_n| > \frac{M}{n}) = P(X_n = 1) = \frac{1}{\sqrt{n}} < \epsilon$ as required.
But $E(X_n) = \frac{1}{\sqrt{n}}$, which is not $O(\frac{1}{n})$.
|
Can we go from $X_n = \mu + O_p(n^{-1})$ to $E[X_n] = \mu + O(n^{-1})$?
Here is a counterexample:
$P(X_n = 1) = \frac{1}{\sqrt{n}}$
$P(X_n = 0) = 1 - \frac{1}{\sqrt{n}}$
To show that $X_n = O_p(\frac{1}{n})$: given $\epsilon > 0$, let $M = N > \frac{1}{\epsilon^2}$.
Then
|
47,595
|
What is a medcouple?
|
This concept concerns a batch of data $(x_1, x_2, \ldots, x_n):$ the medcouple is a way to measure how much a batch deviates from being symmetric.
The center of a symmetry, should it exist, would be the median $M.$ To study symmetry, then, it suffices to examine how far each value is from the median. Accordingly, recenter the data to their median residuals
$$y_i = x_i - M.$$
By the very definition of the median, at least half the $y_i$ are zero or greater ("non-negative") and at least half the $y_i$ are zero or smaller ("non-positive").
In a perfectly symmetric distribution, each nonzero $y_i$ has a counterpart $y_{i^\prime} = -y_i$ an equal distance away from $0$ but of the opposite sign. (Let's say the corresponding $x_i$ and $x_{i^\prime}$ are counterparts of each other, too.)
We may therefore measure the imbalance of any $y_j \ge 0$ compared to any $y_i \le 0$ by comparing their absolute values $|y_j| = y_j$ and $|y_i| = -y_i.$
Your reference adopts a relative measure of imbalance,
$$h(y_i, y_j) = \frac{|y_j| - |y_i|}{|y_j| + |y_i|} = \frac{y_j + y_i}{y_j - y_i} = \frac{(x_j - M) + (x_i - M)}{x_j - x_i}.$$
(This is half the "relative percent difference" of the absolute values of the median residuals. It is not, by far, the only such relative measure one could use. See https://stats.stackexchange.com/a/201864/919 for a discussion and a characterization of all such possible measures.)
Your reference remarks there will be problems whenever the denominator is zero, a situation it (incorrectly) dismisses as being of no interest in its intended applications (to samples of distributions that are continuous near their medians). (This remark is incorrect because in any sample of odd size $n$ there will always be one fraction with denominator $0;$ namely, $h(M,M).$ For a full definition of $h,$ see Wikipedia on medcouples.)
The salient properties of this measure are
Location invariance: when a constant is added to all $x_i,$ $h$ does not change. This is by construction: the $y_i$ are unaffected by this change of location of the $x_i.$
Scale invariance: when all $x_i$ are multiplied by a positive value, $h$ does not change.
Universal finite range: $-1 \le h \le 1$ always. This is obvious from the expression for $h$ in terms of absolute values (apply the triangle inequality for the Euclidean line $\mathbb R$ for a rigorous proof).
Small values of $h(x_i,x_j)$ indicate $x_i$ and $x_j$ are close to being counterparts. ("Small" of course means relative to $1,$ the largest possible absolute value of $h.$)
Sign equivariance: when all the data are negated, all the $h(x_i,x_j)$ are negated, too, because $h(x_i,x_j) = -h(-x_j, -x_i).$
Indication of skewness. The sign of $h(x_i, x_j)$ is positive when $x_j$ is further above the median than $x_i$ is below the median.
Absolute values near $1$ indicate one of the values is much further from $M$ than the other is, relative to the distance between $x_j$ and $x_i.$ Positive values mean $x_j$ is further and negative values mean $x_i$ is further.
This all justifies calling $h(x_i,x_j)$ something like a "two-point skewness measure" whenever $x_i \le M \le x_j.$ However, it's only one indication of the overall distribution of the data. The medcouple summarizes these two-point skewnesses.
Thus, if there is an overall tendency for positive deviations of data to exceed the magnitudes of negative deviations, an average of the $h(x_i, x_j)$ will measure the "overall skewness" (again restricting to $x_i\le M$ and $x_j\ge M$).
Continuing in the spirit of using robust statistics, for the average we may use the median. Thus,
the medcouple of the batch $(x_1, x_2, \ldots, x_n)$ is the median of all the two-point skewness measures.
Consider, as a simple example, the batch $(4, 4, 6, 12).$ Its median can be taken to be midway between $4$ and $6,$ equal to $5.$ The deviations $y_i$ are $(-1,-1,1,7).$ The two nonpositive deviations $(y_1,y_2)=(-1, -1)$ can be taken to be the $y_i$ and the two nonnegative deviations $(y_3,y_4)=(1,7)$ will serve as $y_j,$ thereby giving four possible two-point skewness indicators:
$$\begin{aligned}
h(y_3,y_1) &= h(1,-1) = 0;\\
h(y_4,y_1) &= h(7,-1) = 6/8;\\
h(y_3,y_2) &= h(1,-1) = 0;\\
h(y_4,y_2) &= h(7,-1) = 6/8.
\end{aligned}$$
The resulting batch of two-point skewness indicators $(0, 6/8, 0, 6/8)$ has $3/8$ as its median: this is the "medcouple" of the original batch $(x_1, \ldots, x_4).$ It tells us a typical two-point skewness measure is $3/8:$ this batch is positively skewed by this amount.
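The worked example can be checked with a naive implementation (a Python sketch; the function name is mine, and the special kernel the full definition uses for observations tied with the median is omitted):

```python
import numpy as np

def medcouple_naive(x):
    # median of h(y_i, y_j) = (y_j + y_i)/(y_j - y_i) over all pairs with
    # y_i <= 0 <= y_j; assumes no median residual is exactly 0
    y = np.asarray(x, dtype=float) - np.median(x)
    lo = y[y <= 0]                       # the non-positive residuals y_i
    hi = y[y >= 0]                       # the non-negative residuals y_j
    h = [(b + a) / (b - a) for a in lo for b in hi]
    return float(np.median(h))

print(medcouple_naive([4, 4, 6, 12]))    # 0.375, i.e. the 3/8 derived above
```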
|
What is a medcouple?
|
This concept concerns a batch of data $(x_1, x_2, \ldots, x_n):$ the medcouple is a way to measure how much a batch deviates from being symmetric.
The center of a symmetry, should it exist, would be t
|
What is a medcouple?
This concept concerns a batch of data $(x_1, x_2, \ldots, x_n):$ the medcouple is a way to measure how much a batch deviates from being symmetric.
The center of a symmetry, should it exist, would be the median $M.$ To study symmetry, then, it suffices to examine how far each value is from the median. Accordingly, recenter the data to their median residuals
$$y_i = x_i - M.$$
By the very definition of the median, at least half the $y_i$ are zero or greater ("non-negative") and at least half the $y_i$ are zero or smaller ("non-positive").
In a perfectly symmetric distribution, each nonzero $y_i$ has a counterpart $y_{i^\prime} = -y_i$ an equal distance away from $0$ but of the opposite sign. (Let's say the corresponding $x_i$ and $x_{i^\prime}$ are counterparts of each other, too.)
We may therefore measure the imbalance of any $y_j \ge 0$ compared to any $y_i \le 0$ by comparing their absolute values $|y_j| = y_j$ and $|y_i| = -y_i.$
Your reference adopts a relative measure of imbalance,
$$h(y_i, y_j) = \frac{|y_j| - |y_i|}{|y_j| + |y_i|} = \frac{y_j + y_i}{y_j - y_i} = \frac{(x_j - M) + (x_i - M)}{x_j - x_i}.$$
(This is half the "relative percent difference" of the absolute values of the median residuals. It is not, by far, the only such relative measure one could use. See https://stats.stackexchange.com/a/201864/919 for a discussion and a characterization of all such possible measures.)
Your reference remarks there will be problems whenever the denominator is zero, a situation it (incorrectly) dismisses as being of no interest in its intended applications (to samples of distributions that are continuous near their medians). (This remark is incorrect because in any sample of odd size $n$ there will always be one fraction with denominator $0;$ namely, $h(M,M).$ For a full definition of $h,$ see Wikipedia on medcouples.)
The salient properties of this measure are
Location invariance: when a constant is added to all $x_i,$ $h$ does not change. This is by construction: the $y_i$ are unaffected by this change of location of the $x_i.$
Scale invariance: when all $x_i$ are multiplied by a positive value, $h$ does not change.
Universal finite range: $-1 \le h \le 1$ always. This is obvious from the expression for $h$ in terms of absolute values (apply the triangle inequality for the Euclidean line $\mathbb R$ for a rigorous proof).
Small values of $h(x_i,x_j)$ indicate $x_i$ and $x_j$ are close to being counterparts. ("Small" of course means relative to $1,$ the largest possible absolute value of $h.$)
Sign equivariance: when all the data are negated, all the $h(x_i,x_j)$ are negated, too, because $h(x_i,x_j) = -h(-x_j, -x_i).$
Indication of skewness. The sign of $h(x_i, x_j)$ is positive when $x_j$ is further above the median than $x_i$ is below the median.
Absolute values near $1$ indicate one of the values is much further from $M$ than the other is, relative to the distance between $x_j$ and $x_i.$ Positive values mean $x_j$ is further and negative values mean $x_i$ is further.
This all justifies calling $h(x_i,x_j)$ something like a "two-point skewness measure" whenever $x_i \le M \le x_j.$ However, it's only one indication of the overall distribution of the data. The medcouple summarizes these two-point skewnesses.
Thus, if there is an overall tendency for positive deviations of data to exceed the magnitudes of negative deviations, an average of the $h(x_i, x_j)$ will measure the "overall skewness" (again restricting to $x_i\le M$ and $x_j\ge M$).
Continuing in the spirit of using robust statistics, for the average we may use the median. Thus,
the medcouple of the batch $(x_1, x_2, \ldots, x_n)$ is the median of all the two-point skewness measures.
Consider, as a simple example, the batch $(4, 4, 6, 12).$ Its median can be taken to be midway between $4$ and $6,$ equal to $5.$ The deviations $y_i$ are $(-1,-1,1,7).$ The two nonpositive deviations $(y_1,y_2)=(-1, -1)$ can be taken to be the $y_i$ and the two nonnegative deviations $(y_3,y_4)=(1,7)$ will serve as $y_j,$ thereby giving four possible two-point skewness indicators:
$$\begin{aligned}
h(y_3,y_1) &= h(1,-1) = 0;\\
h(y_4,y_1) &= h(7,-1) = 6/8;\\
h(y_3,y_2) &= h(1,-1) = 0;\\
h(y_4,y_2) &= h(7,-1) = 6/8.
\end{aligned}$$
The resulting batch of two-point skewness indicators $(0, 6/8, 0, 6/8)$ has $3/8$ as its median: this is the "medcouple" of the original batch $(x_1, \ldots, x_4).$ It tells us a typical two-point skewness measure is $3/8:$ this batch is positively skewed by this amount.
|
What is a medcouple?
This concept concerns a batch of data $(x_1, x_2, \ldots, x_n):$ the medcouple is a way to measure how much a batch deviates from being symmetric.
The center of a symmetry, should it exist, would be t
|
47,596
|
What is a medcouple?
|
I'm sorry, I'm not very good with formulas/formatting, but I will still try my best to share my understanding of the medcouple.
We have the MC itself as
MC = med h(xi, xj)
and we have h as
h(xi, xj) = [(xj−Q2)−(Q2−xi)] / (xj−xi)
We have two indices: i and j.
They are used to form the couples to be compared.
The goal is to compare the biggest value in the data-set to the smallest one,
then compare the second-biggest to the second-smallest,
the 3rd-biggest to the 3rd-smallest, and so on.
This can be done by sorting the dataset descending, with j starting at 1 and i starting at [length of data].
With every step j gets increased by 1 and i decreased by 1.
Example dataset, already sorted descending: {10, 8, 5, 2, 1}
x[j=1] will start on the left side and select 10.
x[i=5] will start on the right side and select 1.
That's the first couple for the medcouple calculation.
For step 2, j gets increased by 1 and i decreased by 1.
x[j=2] will start on the left side and select 8.
x[i=4] will start on the right side and select 2.
That's the second couple for the medcouple calculation (and also the final one in our example, since there are no pairs left on either side of the median).
Now we can plug these couples into the h(xi, xj) = [(xj−Q2)−(Q2−xi)] / (xj−xi) formula.
(xj−Q2) is a measure of how much the bigger of the 2 couple-values lies above the median.
(Q2−xi) is a measure of how much the smaller of the 2 couple-values lies under the median.
You could also see these as the distances to the median for both of the couple-values.
(xj−Q2) − (Q2−xi) evaluates the difference in the distances of the couple-values to the median. If this value is negative, you know that the smaller value of the couple lies further away from the median than the bigger value of the couple.
By dividing (xj−Q2) − (Q2−xi) by (xj−xi) you standardise the difference of the couple-values' distances to the median by their distance to each other.
For each of your value couples (from step 1 to step n, see above) you now have a value that tells you which of the two values lies further away from the median and how impactful this difference is in relation to the distance between both values of the couple (remember: when this value is < 0, it means that the lower value is further away from the median than the higher value).
Now we take the median over all these calculated values to get a measurement for the skewness of the distribution. If we find that this calculated median lies below 0, we know that there are more couples where the smaller value is further away from the median than the larger value, or in other words, the left-side tail of the distribution is longer than the right-side one.
|
What is a medcouple?
|
I'm sorry, I'm not very good with formulas/formatting, but I will still try my best to share my understanding of the medcouple.
We have the MC itself as
MC = med h(xi, xj)
and we have h as
h(xi, xj) = (xj−Q2)−(
|
What is a medcouple?
I'm sorry, I'm not very good with formulas/formatting, but I will still try my best to share my understanding of the medcouple.
We have the MC itself as
MC = med h(xi, xj)
and we have h as
h(xi, xj) = [(xj−Q2)−(Q2−xi)] / (xj−xi)
We have two indices: i and j.
They are used to form the couples to be compared.
The goal is to compare the biggest value in the data-set to the smallest one,
then compare the second-biggest to the second-smallest,
the 3rd-biggest to the 3rd-smallest, and so on.
This can be done by sorting the dataset descending, with j starting at 1 and i starting at [length of data].
With every step j gets increased by 1 and i decreased by 1.
Example dataset, already sorted descending: {10, 8, 5, 2, 1}
x[j=1] will start on the left side and select 10.
x[i=5] will start on the right side and select 1.
That's the first couple for the medcouple calculation.
For step 2, j gets increased by 1 and i decreased by 1.
x[j=2] will start on the left side and select 8.
x[i=4] will start on the right side and select 2.
That's the second couple for the medcouple calculation (and also the final one in our example, since there are no pairs left on either side of the median).
Now we can plug these couples into the h(xi, xj) = [(xj−Q2)−(Q2−xi)] / (xj−xi) formula.
(xj−Q2) is a measure of how much the bigger of the 2 couple-values lies above the median.
(Q2−xi) is a measure of how much the smaller of the 2 couple-values lies under the median.
You could also see these as the distances to the median for both of the couple-values.
(xj−Q2) − (Q2−xi) evaluates the difference in the distances of the couple-values to the median. If this value is negative, you know that the smaller value of the couple lies further away from the median than the bigger value of the couple.
By dividing (xj−Q2) − (Q2−xi) by (xj−xi) you standardise the difference of the couple-values' distances to the median by their distance to each other.
For each of your value couples (from step 1 to step n, see above) you now have a value that tells you which of the two values lies further away from the median and how impactful this difference is in relation to the distance between both values of the couple (remember: when this value is < 0, it means that the lower value is further away from the median than the higher value).
Now we take the median over all these calculated values to get a measurement for the skewness of the distribution. If we find that this calculated median lies below 0, we know that there are more couples where the smaller value is further away from the median than the larger value, or in other words, the left-side tail of the distribution is longer than the right-side one.
|
What is a medcouple?
I'm sorry, I'm not very good with formulas/formatting, but I will still try my best to share my understanding of the medcouple.
We have the MC itself as
MC = med h(xi, xj)
and we have h as
h(xi, xj) = (xj−Q2)−(
|
47,597
|
How to calculate prediction interval in GLM (Gamma) / TweedieRegression in Python?
|
It's a bit involved, but it should be doable.
As that post says, in order to get a prediction interval you have to integrate over the uncertainty in the coefficients. That is hard to do analytically, but we can instead simulate it. Here is some gamma regression data
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from scipy.stats import gamma

N = 100
x = np.random.normal(size = N)
true_beta = np.array([0.3])
eta = 0.8 + x*true_beta
mu = np.exp(eta)
shape = 10
# parameterize gamma in terms of shape and scale
y = gamma(a=shape, scale=mu/shape).rvs()
Now, I will fit the gamma regression to this data
X = sm.tools.add_constant(x)
gamma_model = sm.GLM(y, X, family=sm.families.Gamma(link = sm.families.links.log()))
gamma_results = gamma_model.fit()
gamma_results.summary()
                 Generalized Linear Model Regression Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  100
Model:                            GLM   Df Residuals:                       98
Model Family:                   Gamma   Df Model:                            1
Link Function:                    log   Scale:                        0.075594
Method:                          IRLS   Log-Likelihood:                -96.426
Date:                Mon, 30 Nov 2020   Deviance:                       7.7252
Time:                        22:45:07   Pearson chi2:                     7.41
No. Iterations:                     7
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
const          0.8172      0.028     29.264      0.000       0.762       0.872
x1             0.2392      0.029      8.333      0.000       0.183       0.296
==============================================================================
So long as I have enough data, we can make a normal approximation to the sampling distribution of the coefficients.
The mean and covariance can be obtained from the model summary.
beta_samp_mean = gamma_results.params
beta_samp_cov = gamma_results.cov_params()
dispersion = gamma_results.scale
Now, it is just a matter of sampling fake data using these estimates and taking quantiles.
X_pred = np.linspace(-2, 2)
X_pred = sm.tools.add_constant(X_pred)
num_samps = 100_000
possible_coefficients = np.random.multivariate_normal(mean = beta_samp_mean, cov = beta_samp_cov, size = num_samps)
linear_predictions = [X_pred@b for b in possible_coefficients]
y_hyp = gamma(a=1/dispersion, scale = np.exp(linear_predictions)*dispersion).rvs()
# Here is the prediction interval
l, u = np.quantile(y_hyp, q=[0.025, 0.975], axis = 0)
It's easy to then plot the prediction interval
yhat = gamma_results.predict(X_pred)
fig, ax = plt.subplots(dpi = 120)
plt.plot(X_pred[:,1], yhat, color = 'red', label = 'Estimated')
plt.plot(X_pred[:, 1], np.exp(0.8 + X_pred[:, 1]*true_beta), label = 'Truth')
plt.fill_between(X_pred[:, 1], l, u, color = 'red', alpha = 0.1, label = 'Prediction Interval')
for i in range(10):
y_tilde = gamma(a=shape, scale=np.exp(0.8 + X_pred[:, 1]*true_beta)/shape).rvs()
plt.scatter(X_pred[:, 1], y_tilde, s = 1, color = 'k')
plt.scatter(X_pred[:, 1], y_tilde, s = 1, color = 'k', label = 'New Data')
plt.legend()
Math of what is going on
Our data $y$ are distributed according to
$$ y\vert X \sim \mbox{Gamma}(\phi, \mu(x)/\phi) $$
At least I think that is the correct parameterization of the Gamma, I can never get it right. In any case, assuming we use a log link for the model, this means
$$ \mu(x) = \exp(X\beta)$$
The thing is, we never know $\beta$, we only get $\hat{\beta}$ because we have to estimate the parameters of the model. The parameters are thus a random variable (because different data can yield different parameters). Theory says that with enough data, we can consider
$$ \hat{\beta} \sim \mbox{Normal}(\beta, \Sigma) $$
and some more theory says that plugging in our estimate for $\beta$ and $\Sigma$ should be good enough. Let $\tilde{y}\vert X$ be data I might see for observations with covariates $X$. If I could, I would really compute
$$ \tilde{y} \vert X \sim \int p(y\vert X,\beta)p (\beta) \, d \beta $$
and then take quantiles of this distribution. But this integral is really hard, so instead we just approximate it by simulating from $p(\beta)$ (the normal distribution) and passing whatever we simulated to $p(y\vert X, \beta)$ (in this case, the gamma distribution).
Now, I realize I've been quite fast and loose here, so if any readers want to put a little more rigour into my explanation, please let me know in a comment and I will clean it up. I think this should be good enough to give OP an idea of how this works.
|
How to calculate prediction interval in GLM (Gamma) / TweedieRegression in Python?
|
It's a bit involved, but it should be doable.
As that post says, in order to get a prediction interval you have to integrate over the uncertainty in the coefficients. That is hard to do analytically,
|
How to calculate prediction interval in GLM (Gamma) / TweedieRegression in Python?
It's a bit involved, but it should be doable.
As that post says, in order to get a prediction interval you have to integrate over the uncertainty in the coefficients. That is hard to do analytically, but we can instead simulate it. Here is some gamma regression data
import numpy as np
from scipy.stats import gamma
import statsmodels.api as sm

N = 100
x = np.random.normal(size = N)
true_beta = np.array([0.3])
eta = 0.8 + x*true_beta
mu = np.exp(eta)
shape = 10
# parameterize the gamma in terms of shape and scale
y = gamma(a=shape, scale=mu/shape).rvs()
Now, I will fit the gamma regression to this data
X = sm.tools.add_constant(x)
gamma_model = sm.GLM(y, X, family=sm.families.Gamma(link = sm.families.links.log()))
gamma_results = gamma_model.fit()
gamma_results.summary()
Generalized Linear Model Regression Results
Dep. Variable:                y   No. Observations:       100
Model:                      GLM   Df Residuals:            98
Model Family:             Gamma   Df Model:                 1
Link Function:              log   Scale:             0.075594
Method:                    IRLS   Log-Likelihood:     -96.426
Date:          Mon, 30 Nov 2020   Deviance:            7.7252
Time:                  22:45:07   Pearson chi2:          7.41
No. Iterations:               7
Covariance Type:      nonrobust

          coef   std err        z    P>|z|   [0.025   0.975]
const   0.8172     0.028   29.264    0.000    0.762    0.872
x1      0.2392     0.029    8.333    0.000    0.183    0.296
So long as we have enough data, we can make a normal approximation to the sampling distribution of the coefficients.
The mean and covariance can be obtained from the model summary.
beta_samp_mean = gamma_results.params
beta_samp_cov = gamma_results.cov_params()
dispersion = gamma_results.scale
Now, it is just a matter of sampling fake data using these estimates and taking quantiles.
X_pred = np.linspace(-2, 2)
X_pred = sm.tools.add_constant(X_pred)
num_samps = 100_000
possible_coefficients = np.random.multivariate_normal(mean = beta_samp_mean, cov = beta_samp_cov, size = num_samps)
linear_predictions = [X_pred@b for b in possible_coefficients]
y_hyp = gamma(a=1/dispersion, scale = np.exp(linear_predictions)*dispersion).rvs()
# Here is the prediction interval
l, u = np.quantile(y_hyp, q=[0.025, 0.975], axis = 0)
It's then easy to plot the prediction interval:
yhat = gamma_results.predict(X_pred)
fig, ax = plt.subplots(dpi = 120)
plt.plot(X_pred[:,1], yhat, color = 'red', label = 'Estimated')
plt.plot(X_pred[:, 1], np.exp(0.8 + X_pred[:, 1]*true_beta), label = 'Truth')
plt.fill_between(X_pred[:, 1], l, u, color = 'red', alpha = 0.1, label = 'Prediction Interval')
for i in range(10):
y_tilde = gamma(a=shape, scale=np.exp(0.8 + X_pred[:, 1]*true_beta)/shape).rvs()
plt.scatter(X_pred[:, 1], y_tilde, s = 1, color = 'k')
plt.scatter(X_pred[:, 1], y_tilde, s = 1, color = 'k', label = 'New Data')
plt.legend()
Math of what is going on
Our data $y$ are distributed according to
$$ y\vert X \sim \mbox{Gamma}(\mbox{shape} = \phi,\ \mbox{scale} = \mu(x)/\phi) $$
With this shape/scale parameterization the mean is $\phi \cdot \mu(x)/\phi = \mu(x)$, which is exactly what we want. Assuming we use a log link for the model, this means
$$ \mu(x) = \exp(X\beta)$$
The thing is, we never know $\beta$; we only get $\hat{\beta}$, because we have to estimate the parameters of the model. The estimates are thus random variables (different data yield different estimates). Theory says that with enough data, we can consider
$$ \hat{\beta} \sim \mbox{Normal}(\beta, \Sigma) $$
and some more theory says that plugging in our estimate for $\beta$ and $\Sigma$ should be good enough. Let $\tilde{y}\vert X$ be data I might see for observations with covariates $X$. If I could, I would really compute
$$ p(\tilde{y} \vert X) = \int p(\tilde{y}\vert X,\beta)\, p(\beta) \, d \beta $$
and then take quantiles of this distribution. But this integral is really hard, so instead we just approximate it by simulating from $p(\beta)$ (the normal distribution) and passing whatever we simulated to $p(y\vert X, \beta)$ (in this case, the gamma distribution).
Now, I realize I've been quite fast and loose here, so if any readers want to put a little more rigour into my explanation, please let me know in a comment and I will clean it up. I think this should be good enough to give OP an idea of how this works.
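As a quick numerical check of the gamma parameterization used above (the shape $\phi = 10$ and mean $\mu = 3.5$ here are hypothetical values, chosen only for illustration): with shape $\phi$ and scale $\mu/\phi$, the sample mean should land close to $\mu$.

```python
import numpy as np
from scipy.stats import gamma

phi, mu = 10.0, 3.5  # hypothetical shape and target mean, chosen only for illustration
samples = gamma(a=phi, scale=mu / phi).rvs(size=200_000, random_state=0)

# mean of Gamma(shape, scale) is shape * scale = phi * (mu / phi) = mu,
# and the variance is shape * scale**2 = mu**2 / phi
print(samples.mean(), samples.var())
```

The printed sample mean comes out close to 3.5 and the variance close to $\mu^2/\phi = 1.225$, confirming that this shape/scale pairing has mean $\mu$.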
|
47,598
|
Understanding how to tell whether random effects assumption is sufficiently violated to pose a problem in practice
|
In my experience, the issue of the correlation of predictors / exposures with the random effects only becomes a problem when
the correlation is very high - typically in the region of 0.8 or higher.
when the cluster sizes are small.
when the goal of the analysis is inference rather than prediction.
Regarding 1, in healthcare settings, this is fairly implausible.
Regarding 2, even with small cluster sizes, mixed models are quite robust, as we will see from the simulations below.
Regarding 3, you specifically mention prediction as the goal of your analysis, so again, we will see below that predictions from mixed models with correlated fixed and random effects are not greatly affected by the degree of correlation.
It is also worth noting here that in this kind of applied setting, we are not talking about a problem of confounding - it is mediation. The exposure causes the outcome, and also the group (hospital) assignment, and the hospital has a causal effect on the outcome. So, in a causal framework, if we were interested in the total effect of the exposure on the outcome we would not adjust for the hospital effect, either as fixed effects or random effects, but we would do so if we were only interested in the direct effect. Again, if we are interested in prediction rather than inference, then this problem wanes.
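To make the total-vs-direct distinction concrete, here is a minimal Python sketch (mine, not part of the original answer, with made-up coefficients): leaving the mediator out of the regression recovers the total effect, while adjusting for it recovers the direct effect.

```python
import numpy as np

# Hypothetical setup: E -> M -> Y plus a direct E -> Y path
rng = np.random.default_rng(0)
n = 200_000
E = rng.normal(size=n)
M = 0.5 * E + rng.normal(size=n)             # exposure affects the mediator
Y = 0.3 * E + 0.4 * M + rng.normal(size=n)   # direct effect 0.3; total effect 0.3 + 0.5*0.4 = 0.5

# Total effect: regress Y on E alone (mediator not adjusted for)
total = np.polyfit(E, Y, 1)[0]

# Direct effect: regress Y on E and M (mediator adjusted for)
X = np.column_stack([E, M, np.ones(n)])
direct = np.linalg.lstsq(X, Y, rcond=None)[0][0]
print(total, direct)
```

With this many simulated observations the two printed slopes land close to 0.5 and 0.3 respectively.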
So here is a simple simulation where we look at varying levels of correlation between an exposure E and a grouping variable X, from 0.5 to 0.95, and we look at the impact of this on the estimate for E and on the mean squared error of predictions:
library(MASS)
library(lme4)     # for lmer
library(ggplot2)  # for the plots
set.seed(15)
N <- 100
n.sim <- 100
simvec.E <- numeric(n.sim) # a vector to hold the estimates for E
simvec.mse <- numeric(n.sim) # a vector to hold the mse for the predictions
rhos <- seq(0.5, 0.95, by = 0.05)
simvec.rho <- numeric(length(rhos)) # vector for the mean estimates at each rho
simvec.rho.mse <- numeric(length(rhos)) # vector for mse at each rho
for (j in 1:length(rhos)) {
Sigma = matrix(c(1, rhos[j], rhos[j], 1), byrow = TRUE, nrow = 2)
for(i in 1:n.sim) {
dt <- data.frame(mvrnorm(N, mu = c(0,0), Sigma = Sigma, empirical = TRUE))
# put them on a bigger scale, so it's easy to create the group factor
dt1 <- dt + 5
dt1 <- dt1 * 10
X <- as.integer(dt1$X1)
E <- dt1$X2
Y <- E + X + rnorm(N) # so the estimate for E that we want to recover is 1
X <- as.factor(X)
lmm <- lmer(Y ~ E + (1|X))
simvec.E[i] <- summary(lmm)$coef[2]
simvec.mse[i] <- sum((Y - predict(lmm))^2)
}
simvec.rho[j] <- mean(simvec.E)
simvec.rho.mse[j] <- mean(simvec.mse)
}
ggplot(data.frame(rho = rhos, E = simvec.rho), aes(x = rho, y = E)) + geom_point()+ geom_line()
ggplot(data.frame(rho = rhos, mse = simvec.rho.mse), aes(x = rho, y = mse))+ geom_point() + geom_line()
So here we see that the estimates for E (simulated with a true value of 1) are largely unbiased up to correlations of around 0.8. Even at 0.95 the bias is only 6%.
Here we see no marked effect on mean squared error of prediction.
As mentioned above, small cluster sizes exacerbate the bias. In these simulations each dataset has only 100 observations with 35-40 groups, so the cluster sizes are small.
We can easily create more clusters by increasing N to 1000 which results in around 50-60 groups
Here we see that the bias is smaller.
And here again we see no discernible impact of correlation on mean squared error of prediction.
I would encourage you to play around with these or similar simulations; there are many parameters that can be changed, as well as changing the way the data are simulated to better reflect your actual use case.
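As a complementary illustration, here is a simplified Python analogue (my sketch, using within-group demeaning, i.e. fixed effects, rather than lmer) of the mechanism being simulated: with a strong exposure/group-effect correlation, ignoring the grouping entirely biases the slope badly, while removing the group means recovers the true coefficient of 1.

```python
import numpy as np

# My sketch, not from the answer: exposure E correlated ~0.9 with the group effect u
rng = np.random.default_rng(42)
n_groups, per_group = 50, 20
u = rng.normal(size=n_groups)                 # group (e.g. hospital) effects
u_rep = np.repeat(u, per_group)
g = np.repeat(np.arange(n_groups), per_group)
E = 0.9 * u_rep + np.sqrt(1 - 0.9**2) * rng.normal(size=n_groups * per_group)
Y = 1.0 * E + u_rep + rng.normal(size=n_groups * per_group)  # true coefficient of E is 1

# Ignoring the grouping entirely: slope is biased upward because E is correlated with u
naive = np.polyfit(E, Y, 1)[0]

# Within-group (fixed-effects) estimator: demean E and Y within each group
gm_E = np.bincount(g, weights=E) / per_group
gm_Y = np.bincount(g, weights=Y) / per_group
within = np.polyfit(E - gm_E[g], Y - gm_Y[g], 1)[0]
print(naive, within)  # naive is far above 1; within is close to 1
```

The within-group estimator works here because the group effect is constant within each group, so demeaning removes it exactly; a mixed model sits between these two extremes.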
|
47,599
|
Which $\mu$ hold so that integral of CDF (from $\mu$ to $\infty$) equals to integral of 1-CDF (from $-\infty$ to $\mu$)?
|
The mean of a variable $X$ can be computed as
$$\mu_X = \int_{0}^{\infty}\big(1-F(x)\big)\,dx - \int_{-\infty}^{0} F(x)\,dx $$
The mean of a shifted variable $X-\mu_X$ (which equals zero) is computed as
$$0 = \int_{0}^{\infty}\big(1-F(x+\mu_X)\big)\,dx - \int_{-\infty}^{0} F(x+\mu_X)\,dx $$
Substituting $u = x + \mu_X$ in both integrals (and then renaming $u$ back to $x$) gives
$$ 0 = \int_{\mu_X}^{\infty}\big(1-F(x)\big)\,dx -\int_{-\infty}^{\mu_X} F(x)\,dx$$
Which is equivalent to your equation.
Therefore the mean $\mu_X$ in these computations is the same as the parameter $\mu$ in your question.
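A quick numerical check of this identity (my sketch; the choice of a Normal(1.7, 2) distribution is arbitrary, not from the question):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

mu = 1.7  # the mean of the distribution, chosen arbitrarily for the check
F = norm(loc=mu, scale=2.0).cdf
upper, _ = quad(lambda x: 1.0 - F(x), mu, np.inf)  # integral of 1 - F over [mu, inf)
lower, _ = quad(F, -np.inf, mu)                    # integral of F over (-inf, mu]
print(upper, lower)  # the two integrals agree
```

For this normal case each integral also equals $\sigma/\sqrt{2\pi}$, since both are $\mathbb{E}[\max(X-\mu, 0)]$.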
|
47,600
|
How do you write the expected value of an arbitrary random variable $X$ in terms of $F_X$? [duplicate]
|
While the "Darth Vader rule" (a silly name) applies to any non-negative random variable, I am going to simplify the analysis by looking only at continuous random variables. Extension to discrete and mixed random variables should also be possible, but I will not pursue that here. In a related answer here we show a partial extension of the expectation rule. Specifically, it is shown that for an arbitrary continuous random variable $X$ and any constant $a \in \mathbb{R}$ you have the general rule:$^\dagger$
$$\mathbb{E}[\max(X-a,0)] = \int \limits_{a}^\infty (1-F_X(x)) \ dx.$$
We can go further by writing the expectation of an arbitrary continuous random variable $X$ as:
$$\begin{align}
\mathbb{E}[X]
&= \lim_{a\rightarrow -\infty} \mathbb{E}[\max(X,a)] \\[12pt]
&= \lim_{a\rightarrow -\infty} \Big( a+\mathbb{E}[\max(X-a,0)] \Big) \\[12pt]
&= \lim_{a\rightarrow -\infty} \Big( - \int \limits_{a}^0 \ dx + \mathbb{E}[\max(X-a,0)] \Big) \\[12pt]
&= \lim_{a\rightarrow -\infty} \Bigg( - \int \limits_{a}^\infty \mathbb{I}(x < 0) \ dx + \int \limits_{a}^\infty (1-F_X(x)) \ dx \Bigg) \\[6pt]
&= \lim_{a\rightarrow -\infty} \Bigg( \int \limits_{a}^\infty (1-\mathbb{I}(x < 0)-F_X(x)) \ dx \Bigg) \\[6pt]
&= \lim_{a\rightarrow -\infty} \int \limits_{a}^\infty (\mathbb{I}(x \geqslant 0)-F_X(x)) \ dx \\[6pt]
&= \int \limits_{-\infty}^\infty (\mathbb{I}(x \geqslant 0)-F_X(x)) \ dx. \\[6pt]
\end{align}$$
In cases where the individual integrals are convergent, this can be written in simple form as:
$$\mathbb{E}[X] = \int \limits_0^\infty (1-F_X(x)) \ dx - \int \limits_0^\infty F_X(-x) \ dx.$$
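As a sanity check, this simple form can be verified numerically (a sketch of mine, using an arbitrary Normal(-0.3, 1.5) chosen only because it takes negative values):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# Arbitrary choice of a distribution with negative support: E[X] = -0.3
X = norm(loc=-0.3, scale=1.5)
pos, _ = quad(lambda x: 1.0 - X.cdf(x), 0, np.inf)  # first integral
neg, _ = quad(lambda x: X.cdf(-x), 0, np.inf)       # second integral
print(pos - neg)  # recovers the mean, -0.3
```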
This integral rule extends the Darth Vader rule for continuous non-negative random variables. (Extension for discrete random variables is similar, but you have to be a bit more careful with the boundaries of the integrals.) In the special case where $X$ is continuous and non-negative we have $F_X(-x) = 0$ for all $x < 0$ and so the second term in this equation vanishes, giving the standard expectation rule. I have not seen this integral expression in any textbooks or papers, so it does not seem to be one that is used much (if at all?) in statistical practice. Nevertheless, it does provide one possible extension of the standard integral rule to deal with random variables that can be negative.
$^\dagger$ In the special case where $X$ is non-negative and $a=0$ this reduces down to the standard expectation rule for non-negative random variables shown in the question.
|