Dataset columns:
  idx: int64 (1 to 56k)
  question: string (length 15 to 155)
  answer: string (length 2 to 29.2k)
  question_cut: string (length 15 to 100)
  answer_cut: string (length 2 to 200)
  conversation: string (length 47 to 29.3k)
  conversation_cut: string (length 47 to 301)
47,501
Train a SVM-based classifier while taking into account the weight information
Try this package: https://CRAN.R-project.org/package=WeightSVM It uses a modified version of 'libsvm' and is able to deal with instance weighting. You can assign lower weights to some subjects.
47,502
Are Cohen's d (effect size) and d prime from the signal detection theory measuring the same thing?
They are essentially the same thing: differences between means measured in units of standard deviations, as you say. There are some theoretical differences in the substance from which they arise. Cohen's d (and the closely related Hedges' g) are calculated on real observations, whereas the distributions underlying ob...
47,503
Is additive logistic regression equivalent to boosted decision stumps?
Boosted decision stumps is just a special case of generalized additive models (i.e. if the logistic loss function is used then, technically, one could call boosted decision stumps an additive logistic model). Having said that, people typically use specialized names for boosted models - for example Gradient Boosting Mac...
47,504
Logistic regression performs better on validation data
No, this isn't necessarily a problem, especially if the sample size is small. It could easily be that purely by chance more of the "easy" patterns are in the validation set and more of the "difficult" ones are in the training set. If you were to repeatedly re-sample the data to form randomly partitioned training and ...
47,505
Logistic regression performs better on validation data
The sample size is too small for single-split validation. To obtain a sufficiently precise estimate, all steps of 10-fold cross-validation should be repeated 100 times (or at least 50). Or use the bootstrap with perhaps 300 resamples. The problem can be uncovered by doing another 70-30 split and noting differences in t...
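The repeated 10-fold scheme recommended above can be sketched as index generation (a minimal pure-Python illustration; the function name and the toy size n=50 are made up for the example -- in R one would typically let a package such as caret or rms handle this):

```python
import random

def repeated_kfold(n, k=10, repeats=100, seed=0):
    """Yield (repeat, fold, test_indices): every repeat reshuffles the
    data and re-partitions it into k folds, so the split-to-split noise
    averages out instead of driving the conclusion."""
    rng = random.Random(seed)
    for rep in range(repeats):
        idx = list(range(n))
        rng.shuffle(idx)
        for fold in range(k):
            yield rep, fold, idx[fold::k]

# 100 repeats of 10-fold CV on 50 observations -> 1000 model fits.
folds = list(repeated_kfold(n=50, k=10, repeats=100))
```

Each repeat partitions all n observations exactly once; the model would be refit on the complement of every test fold and the 1000 performance estimates averaged.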
47,506
More details on bootstrap procedure to estimate confidence interval of sample SD
Yes, you missed something, or rather added something. You're doing a parametric bootstrap, which is only appropriate if you know something about the kind of distribution you expect. Furthermore, you'd estimate that parametric distribution using MLE. In your case, where you have no idea of the distribution, leave out the ...
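The nonparametric version the answer points toward can be sketched as follows (a minimal pure-Python illustration; the function name, resample count, and toy data are made up for the example):

```python
import random
import statistics

def bootstrap_sd_ci(data, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the sample SD: resample the observed
    data itself with replacement -- no distribution is fitted anywhere."""
    rng = random.Random(seed)
    sds = sorted(
        statistics.stdev(rng.choices(data, k=len(data)))
        for _ in range(n_boot)
    )
    lo = sds[int(n_boot * alpha / 2)]
    hi = sds[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Toy data: 50 draws from N(0, sd=2); the CI should bracket values near 2.
rng = random.Random(1)
data = [rng.gauss(0, 2) for _ in range(50)]
lo, hi = bootstrap_sd_ci(data)
```

The only modelling assumption left is that the observed sample is representative; the percentile interval simply reads off quantiles of the resampled SDs.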
47,507
Are HAC estimators used for estimation of regression coefficients?
HAC procedures are just about providing consistent estimates of the standard errors. They do not change the estimation of the coefficients. If you have strict exogeneity with serial correlation, your coefficients are unbiased, but the standard errors are incorrect. HAC standard errors address the latter point. As you a...
47,508
How do I propagate error values through a matrix diagonalization?
The propagation will depend on the diagonalization algorithm--which might be a black box--as well as the multivariate distribution of the errors. Pursuing an analytical solution therefore looks unpromising. Why not just compute an empirical distribution? That is, draw a large number of variants of the original matri...
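The empirical approach suggested above can be sketched for a tiny case (a pure-Python illustration; the 2x2 symmetric matrix is used only because its eigenvalues have a closed form -- in practice one would call a numeric eigensolver, and all names and values here are made up for the example):

```python
import math
import random
import statistics

def eig2x2_sym(a, b, c):
    """Eigenvalues of [[a, b], [b, c]] (symmetric 2x2, closed form)."""
    mean = (a + c) / 2
    r = math.hypot((a - c) / 2, b)
    return mean - r, mean + r

def eigenvalue_spread(a, b, c, sigma, n=5000, seed=0):
    """Draw many perturbed matrices with i.i.d. Gaussian errors on each
    entry and collect the resulting eigenvalues -- the empirical
    distribution stands in for an analytical error propagation."""
    rng = random.Random(seed)
    lo, hi = [], []
    for _ in range(n):
        da, db, dc = (rng.gauss(0, sigma) for _ in range(3))
        l1, l2 = eig2x2_sym(a + da, b + db, c + dc)
        lo.append(l1)
        hi.append(l2)
    return statistics.stdev(lo), statistics.stdev(hi)

# Entry-wise error sd of 0.05 on a fixed matrix; report eigenvalue sds.
sd_lo, sd_hi = eigenvalue_spread(2.0, 0.5, 1.0, sigma=0.05)
```

Summarizing the simulated eigenvalues (standard deviations, quantiles, histograms) gives exactly the "empirical distribution" the answer recommends, with the diagonalization routine treated as a black box.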
47,509
Vertical line graphs in R
You can use the plot function with type="h" to get the vertical lines and col to specify the colors, using rep to create the vector of colors that you want, as follows:

# simulate some data
x <- runif(15000)
x[sample(15000, 50)] <- runif(50, 0, 5)
# make the plot
plot(x, type="h", col=rep(c("red", "blue", "green"), ea...
47,510
Vertical line graphs in R
Use a barplot in combination with the grDevices package to create a color palette.

require(grDevices)
# data
dat <- sample(1:10, 15000, prob=runif(10), replace=T)
dat <- sort(dat)
plotdat <- as.data.frame(table(dat))
plotdat[,2] <- plotdat[,2]/sum(plotdat[,2])
# generate colors
colors <- heat.colors(10)
# and sort them ...
47,511
What is a reasonable sample size for correlation analysis for both overall and sub-group analyses?
When it comes to sample size, bigger is better, but we often have to take what we get. With the smaller sample sizes, your estimates of the correlation are going to become extremely noisy, and comparisons between different estimates (which I expect is your primary goal in the subsets analyses) are going to be particul...
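How noisy small-sample correlations get is easy to see by simulation (a pure-Python sketch; the function names, true correlation rho=0.5, and sample sizes are made up for the example):

```python
import math
import random

def pearson_r(xs, ys):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def r_spread(n, rho=0.5, sims=2000, seed=0):
    """Rough 90% range of the sample correlation across simulated
    samples of size n from a bivariate normal with true correlation rho."""
    rng = random.Random(seed)
    rs = []
    for _ in range(sims):
        xs = [rng.gauss(0, 1) for _ in range(n)]
        ys = [rho * x + math.sqrt(1 - rho**2) * rng.gauss(0, 1) for x in xs]
        rs.append(pearson_r(xs, ys))
    rs.sort()
    return rs[len(rs) // 20], rs[-(len(rs) // 20)]

lo10, hi10 = r_spread(n=10)      # subgroup-sized sample
lo100, hi100 = r_spread(n=100)   # overall-sized sample
```

The interval for n=10 is several times wider than for n=100, which is exactly why comparisons between subgroup correlations are so fragile.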
47,512
Performing contrasts among treatment levels in survival analysis
The methods description does not match up with anything I see in Crawley's chapter on survival analysis. His discussion of the example he used with three levels seems pretty rudimentary (one might even say naive, but I am not a big fan of his book). There is no surv function, and the closest function, Surv, is not a reg...
47,513
Performing contrasts among treatment levels in survival analysis
In the R rms package there are wrapper functions for the survival package's coxph and survreg functions. When you use one of these two functions you can use contrast.rms to easily obtain single d.f. or multiple d.f. contrasts. Type ?contrast.rms for guidance. You need to substitute cph for coxph when using rms.
47,514
What if a numerator term is zero in Naive Bayes?
One method to deal with this is to increment all counts by 1. This is known as Laplace smoothing. If you Google Laplace smoothing and Naive Bayes you will find many references.
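The add-one idea can be sketched directly (a minimal Python illustration; the function name, the smoothing parameter alpha, and the toy counts are made up for the example):

```python
from collections import Counter

def smoothed_likelihoods(feature_counts, vocabulary, alpha=1):
    """Laplace-smoothed P(feature | class): add alpha to every count so
    that no conditional probability -- and hence no product of them in
    Naive Bayes -- is exactly zero."""
    total = sum(feature_counts.values()) + alpha * len(vocabulary)
    return {f: (feature_counts.get(f, 0) + alpha) / total for f in vocabulary}

# Toy class: "a" seen 3 times, "b" once, "c" never seen in this class.
counts = Counter({"a": 3, "b": 1})
probs = smoothed_likelihoods(counts, vocabulary=["a", "b", "c"])
```

The unseen feature "c" now gets a small positive probability instead of zeroing out the whole posterior, and the smoothed probabilities still sum to one.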
47,515
What if a numerator term is zero in Naive Bayes?
I start all counts with 1, in pseudo-code: Count=max(1,Count).
47,516
Representing the anova's interaction in R
Be very careful with ':' - it means a bunch of different things depending on the context in which you use it in R. See ?interaction, ?formula, ?lm, and ?':'. Here's an example of interaction:

df <- data.frame(X=sample(letters[1:10], 200, replace=T),
                 Y=sample(letters[1:10], 200, replace=T))
> df$X:df$Y
[1] a:a i:i c:e g:g...
47,517
Assumptions and pitfalls in competing risks model
Pintilie's book is an excellent book for understanding competing risks, but if you want to study the theoretical side of competing risks, take a look at Martin J. Crowder's book, Classical Competing Risks. I wrote my master's thesis on competing risks and from what I remember, there are some drawbacks/disadvantages when c...
47,518
How do I visualize changes in proportions compared to another period?
What is more important for you - the between-group comparison or the intra-group composition? For the former, a parallel coordinates plot seems to be a natural choice: http://charliepark.org/slopegraphs/ For the latter, a time series of percent stacked charts might look fine - you do not have to use 7 colors, just alternat...
47,519
How do I visualize changes in proportions compared to another period?
To me the slope graph looks really messy and I think I'd have trouble looking at it, especially across eight time series. I am not an expert in graph design, so this may also be a no-go, but have you considered four colors with three plot types? Though, I think there is an even better approach. I know you ...
47,520
Estimating PDF of continuous distribution from (few) data points
What you are looking for is kernel density estimation. You should find numerous hits on an internet search for these terms, and it is even on Wikipedia, so that should get you started. If you have R at your disposal, the function density provides what you need:

histAndDensity <- function(x, ...) {
  retval <- hist(x, fre...
47,521
Interpretation of MDS factor plot
I'm answering my own question for two reasons: 1) I want to check whether what I've understood is correct or not. 2) If somebody is looking for the same thing, he/she should find it here. I hardly found a book that gives a clear explanation of the interpretation of MDS biplots. I'll also give a few references where people can read more ...
47,522
How do I determine how well a dataset approximates a distribution?
For visualization purposes, try a Q-Q plot, which is a plot of the quantiles of your data against the quantiles of the expected distribution. If you want a statistical test, the Kolmogorov-Smirnov statistic provides a non-parametric test for whether the data come from $p(x)$, using the maximum difference in the empir...
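The KS statistic is just the largest vertical gap between the empirical CDF and the hypothesised one, which is easy to sketch in pure Python (the function names and the toy samples here are made up for the example):

```python
import math
import random

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of N(mu, sigma^2) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(data, cdf):
    """Kolmogorov-Smirnov D: the maximum difference between the
    empirical CDF of the data and the hypothesised CDF, checked on
    both sides of each jump of the empirical CDF."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, (i + 1) / n - f, f - i / n)
    return d

rng = random.Random(0)
normal_sample = [rng.gauss(0, 1) for _ in range(500)]
uniform_sample = [rng.random() * 10 for _ in range(500)]
d_good = ks_statistic(normal_sample, normal_cdf)   # small gap
d_bad = ks_statistic(uniform_sample, normal_cdf)   # large gap
```

In practice scipy.stats.kstest computes both D and its p-value in one call; the sketch above only shows where the statistic comes from.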
47,523
Why is the tick marker for zero after the bar in this qplot bar chart?
The reason it appears to the left is that it is putting the $0$ projects into a bin $(-33,0]$, which it then treats as negative. To solve this you need right=FALSE. You could then have a similar problem at the other end with the $999$ projects put into the bin $[999,1032)$, which would appear above $1000$; so it would be b...
47,524
Friedman's test and post-hoc analysis
As @caracal said, this script implements a permutation-based approach to Friedman's test with the coin package. The maxT procedure is rather complex and bears no relation to the traditional $\chi^2$ statistic you're probably used to getting after a Friedman ANOVA. The general idea is to control the FWER. Let's say ...
47,525
Conditions for Central Limit Theorem for dependent sequences
Additional conditions are needed. (A near-proof of this fact is that many incredibly smart individuals have been thinking deeply about these issues for over 100 years. It is highly unlikely that something like this would have escaped all of them.) First of all, note that the formula for $V$ that you give is part of the...
47,526
Power analysis for moderator effect in regression with two continuous predictors
If I had to do this, I would use a simulation approach. This would involve making assumptions about the regression coefficients, predictor distributions, correlation between predictors, and error variance (with help from the researcher), generating data sets using the assumed model, and seeing what proportion of these ...
47,527
Power analysis for moderator effect in regression with two continuous predictors
Assuming that the IV (X) and the Moderator (M) are continuous variables, and your research question is: Is the relationship between X and Y moderated by M? Your regression model would have 3 predictors X, M, and their (centered) interaction (X*M). If you run the analysis using GPower (http://gpower.hhu.de/) you would s...
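The number G*Power reports for a single added coefficient can also be reproduced directly from Cohen's f² via the noncentral F distribution; a sketch with scipy, where the f² and N values in any call are placeholders (λ = f²·N is the noncentrality convention G*Power appears to use for this test):

```python
from scipy import stats

def f2_power(f2, n, n_predictors, n_tested=1, alpha=0.05):
    """Power of the F test that `n_tested` coefficient(s) (e.g. the X*M
    interaction) are zero, in a model with `n_predictors` predictors in
    total, using Cohen's f^2 = (R2_full - R2_reduced) / (1 - R2_full)."""
    dfn, dfd = n_tested, n - n_predictors - 1
    ncp = f2 * n                              # noncentrality lambda = f^2 * N
    f_crit = stats.f.ppf(1.0 - alpha, dfn, dfd)
    return stats.ncf.sf(f_crit, dfn, dfd, ncp)
```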
47,528
Binning raw data prior to building a logistic regression model
Binning will result in a more complex model, i.e., you will need more terms in the model to predict the outcome as well as a model that treats the predictors as continuous does. Bins also bring a degree of arbitrariness into the model. Take a look at regression splines as an alternative. Notes about this may be found at ...
47,529
Binning raw data prior to building a logistic regression model
You could specify your binning algorithm in a function, define a utility function, and optimize the input parameters... The ideas for the utility function can be: Predictive power (weight of evidence and information value) Monotonically decreasing average default rate from one bin to another (as you increase the age of history......
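The weight-of-evidence and information-value criteria suggested here are easy to compute per bin; a minimal sketch (the good/bad counts passed in are hypothetical):

```python
import math

def woe_iv(goods, bads):
    """Weight of evidence per bin and total information value.

    `goods[i]` and `bads[i]` are counts of non-defaults and defaults in
    bin i; empty bins are assumed away for simplicity.
    """
    g_tot, b_tot = sum(goods), sum(bads)
    woe, iv = [], 0.0
    for g, b in zip(goods, bads):
        dist_g, dist_b = g / g_tot, b / b_tot
        w = math.log(dist_g / dist_b)      # WoE of this bin
        woe.append(w)
        iv += (dist_g - dist_b) * w        # this bin's IV contribution
    return woe, iv
```

The total IV is then the quantity to maximise (or threshold) when searching over candidate binnings.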
47,530
Active learning using SVM Regression
Active learning requires a compromise between exploration and exploitation. If the model you have so far is bad, if you exploit this model to determine the best place to label mode data, it will probably suggest bad places to label the data as your current hypothesis is poor. It is a good idea to do some random explo...
47,531
Active learning using SVM Regression
I have worked on active learning in classification and with SVMs, and that problem was the same for me: if the boundary found by the first model isn't that good, the probability of obtaining a good label for new points decreases. If you have any other method of labelling your newly generated points rather than using the boundary t...
47,532
How can I compute regression for several longitudinal data sets (thus, with auto-correlated error)?
As we have strong reasons to believe that the cooling will follow the $y(t) = a + e^{-kt}$ function for each beaker I would first check if this model fits the data well indeed. If it does I wouldn't bother with analysing the autocorrelation at all, but focus on the estimation of $k_1$, $k_2$ and $k_3$, and testing the ...
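Fitting the assumed $y(t) = a + e^{-kt}$ model to each beaker separately is straightforward with scipy's curve_fit; a sketch on simulated data, where the "true" a = 0.5 and k = 0.2 are made up:

```python
import numpy as np
from scipy.optimize import curve_fit

def cooling(t, a, k):
    # the model assumed in the answer: y(t) = a + exp(-k t)
    return a + np.exp(-k * t)

# one simulated beaker (a = 0.5, k = 0.2 are invented "true" values)
rng = np.random.default_rng(1)
t = np.linspace(0.0, 20.0, 50)
y = cooling(t, 0.5, 0.2) + rng.normal(0.0, 0.01, t.size)

(a_hat, k_hat), pcov = curve_fit(cooling, t, y, p0=[0.0, 0.1])
k_se = np.sqrt(pcov[1, 1])   # standard error of k for this beaker
```

Repeating this per beaker gives (k̂, se) pairs that can then be compared, e.g. with a Wald-type test of $k_1 = k_2 = k_3$.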
47,533
How can I compute regression for several longitudinal data sets (thus, with auto-correlated error)?
If I understand your question correctly, you should be able to achieve what you want to do using a non-linear mixed-effects model. If you use R, you can use the nlme package. Basically as fixed factors you have a covariate (a) and a factor (substance or $i$ in $k_{i}$). You also have a random effect (individual measure...
47,534
Longitudinal relationship between chocolate consumption and happiness: repeated measures ANOVA?
You may need to clarify what you mean by "accounting for the fact that I have taken repeated measures..." You say that "I would like to know if the mean chocolate consumption per day is higher among happy people than those who are not happy..." This suggests to me that time is not really relevant to your resear...
47,535
Longitudinal relationship between chocolate consumption and happiness: repeated measures ANOVA?
You may think about happiness as the dependent variable, and you could use logistic regression with chocolate consumption as a predictor. Some people may be generally happier or less happy independently from chocolate consumption. This can be modelled by including subject id as a random effect categorical predictor. Ag...
47,536
How to compare the effectiveness of medical diagnostic techniques?
As it is described in the original post, the experiment is a randomized block. Pathologist (4 levels) is a blocking factor; the experiment is repeated within each pathologist. Instrument (3 levels) and the true result (2 levels) of the test are the two treatments, which I assume were assigned randomly. Consider the di...
47,537
How to compare the effectiveness of medical diagnostic techniques?
The ROC curve (Receiver Operating Characteristics) is one of the techniques available. You can check the questions with the tag roc on this site for further details. The wikipedia article http://en.wikipedia.org/wiki/Receiver_operating_characteristic and the external links to it may also be useful. Some other methods ...
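As a concrete illustration of the ROC approach, the AUC can be computed from its rank (Mann-Whitney) formulation; the scores and labels in any call are hypothetical:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney formulation: the probability that a
    randomly chosen positive case scores higher than a randomly chosen
    negative case (ties count half)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()   # pairwise comparisons
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```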
47,538
What does $d$ mean in this notation of the "usual noninformative prior of $\mu_i$ and $\sigma_i$?"
This is shorthand notation for a "differential" of the mean and variance parameters. The longhand version goes: $$p(\mu\in[\mu_1,\mu_1+d\mu_1)|I)\propto d\mu_1$$ This indicates a uniform probability with respect to $\mu$. A more familiar notation is: $$p(\mu|I)\propto 1$$ It comes from the "proper" derivation of a PD...
47,539
Calculating the mean using regression data
Contrary to @whuber's claim, the means of x and y are contained in the information given. Okay, so you have the line equation $$y_i=\alpha +x_i\beta + e_i$$ and the estimates $\hat{\beta}=r\frac{s_y}{s_x}$ and $\hat{\alpha}=\overline{y}-\hat{\beta}\overline{x}$, where $r$ is the correlation. The question doesn't state whether ...
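The identity this answer leans on, $\hat{\alpha}=\overline{y}-\hat{\beta}\overline{x}$ (the fitted OLS line passes through the point of means), is easy to verify numerically on simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, 100)
y = 1.5 + 0.8 * x + rng.normal(0.0, 0.5, 100)

beta_hat, alpha_hat = np.polyfit(x, y, 1)   # slope first, then intercept

# the OLS line passes through (x_bar, y_bar), so this gap is ~0
identity_gap = alpha_hat - (y.mean() - beta_hat * x.mean())
```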
47,540
Interactions between non-linear predictors
You could try generalized additive mixed models, handily implemented in the gamm4 package. The way I've used them, you can do something like:

fit1 = gamm4( formula = V1 ~ V2 + s(V3) , random = ~ (1|V4) )
fit2 = gamm4( formula = V1 ~ V2 + s(V3,by=V2) , random = ~ (1|V4) )

fit1 seeks to predict V1 using ...
47,541
Propagation of large errors
For large error, the standard error of $A/B$ depends on the distributions of $A$ and $B$, not just on their standard errors. The distribution of $A/B$ is known as a ratio distribution, but which ratio distribution depends on the distributions of $A$ and $B$. If we assume that $A$ and $B$ both have Gaussian (normal) dis...
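The point that the result depends on the full distributions of $A$ and $B$, not just their standard errors, is easy to see in a quick Monte Carlo; all means and SDs below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

a = rng.normal(10.0, 1.0, n)

# small relative error in B: A/B is close to Gaussian and the usual
# first-order propagation formula works well
b_narrow = rng.normal(5.0, 0.1, n)
ratio_small = a / b_narrow
se_naive = (10.0 / 5.0) * np.sqrt((1.0 / 10.0) ** 2 + (0.1 / 5.0) ** 2)

# large relative error in B: B can come close to zero, and the ratio
# distribution develops very heavy tails
b_wide = rng.normal(5.0, 3.0, n)
ratio_large = a / b_wide
```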
47,542
Propagation of large errors
The first problem with large errors is that the expected value of the multiplication or division of the uncertain values will not be the multiplication or the division of the expected values. So while it is true that $E[X+Y]=E[X]+E[Y]$ and $E[X-Y]=E[X]-E[Y]$, it would usually not be true to say $E[XY]=E[X]E[Y]$ or $E[...
47,543
Propagation of large errors
The formula for error propagation $\sigma_f^2 = \sum_i \left(\frac{\partial f}{\partial x_i} \sigma_{x_i}\right)^2$ works exactly for normally distributed errors and linear functions $f(x_1,x_2,...)$. Since (most) functions can be linearly approximated, the above also works for small errors. For large errors, a symmetric distribution of $x...
47,544
Semantic distance between excerpts of text
Let's suppose we can calculate the distance from one noun to another in the following way. Use WordNet (which I guess you know), and utilize a function that exists, but which you could also build yourself, that counts how many steps through the taxonomy of words you need to get from one word to another (for example from cat ...
47,545
Semantic distance between excerpts of text
It is far from obvious, and indeed is highly task-specific, when two sentences are similar enough to, say, group together in a cluster. The problem is not determining which of I cleaned my truck up this morning. Bananas are an excellent source of potassium. is more similar to Early today, I got up and washed my ...
47,546
Semantic distance between excerpts of text
Check out the work by Jones & Mewhort (2007). This more recent work may also be of interest, particularly their online tool.
47,547
Question about combining hazard ratios - Maybe Simpson's paradox?
Strictly, Simpson's paradox refers to a reversal in the direction of effect, which hasn't happened here as all the hazard ratios are above 1, so I'd refer to this by the more general term confounding. You can certainly have confounding in survival analysis. I agree it appears sensible to only present the heart and lung...
47,548
Question about combining hazard ratios - Maybe Simpson's paradox?
Yes. It is certainly possible that this is due to something like Simpson's paradox. If the data looked like $$\begin{array}{rrrrrr} \textit{Organ}&\textit{Outcome}&A&B&C&D\\ \textrm{Lung}&\textrm{Bad}&371&2727&2374&418\\ \textrm{Lung}&\textrm{Good}&556&3199&2740&558\\ \textrm{Heart}&\textrm{Bad}&214&245&195&273\...
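The aggregation effect can be reproduced with small hypothetical counts (not the numbers in the answer's table): within each organ, centre A is as good as or better than centre B, yet pooling makes A look far worse because A's case mix is heavier in the riskier organ:

```python
def bad_rate(bad, good):
    return bad / (bad + good)

# hypothetical (bad, good) counts; the confounder is the organ mix:
# hearts fail more often, and centre A does proportionally more hearts
lung  = {"A": (10, 90), "B": (80, 720)}   # 10% bad at both centres
heart = {"A": (45, 55), "B": (5, 5)}      # 45% at A vs 50% at B

pooled = {c: bad_rate(lung[c][0] + heart[c][0], lung[c][1] + heart[c][1])
          for c in ("A", "B")}
# pooled["A"] = 55/200 = 27.5%, pooled["B"] = 85/810 ~ 10.5%:
# A looks much worse overall despite being no worse within each organ
```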
47,549
When to use Equal-Frequency-Histograms
This is not a proper or complete answer, but two observations from my personal experience: An equal-frequency histogram will hide outliers (I've seen them in long, low bins). The heights of the individual bins in an equal-frequency histogram seem more stable than in an equal-width histogram. I use equal-frequency hi...
47,550
When to use Equal-Frequency-Histograms
Equi-depth histograms are a solution to the problem of quantization (mapping continuous values to discrete values). For finding the best number of bins, I think it really depends on what you are trying to do with the histogram. In general I think it would be best to ensure your error of choice was below some threshold ...
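Equal-frequency (equi-depth) edges are just sample quantiles, so they take one line of numpy; a sketch on skewed simulated data, where equal-width bins would be badly unbalanced:

```python
import numpy as np

def equal_frequency_edges(x, n_bins):
    """Bin edges such that each bin holds (roughly) the same number of points."""
    return np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))

rng = np.random.default_rng(0)
x = rng.exponential(1.0, 1000)            # skewed data
edges = equal_frequency_edges(x, 5)
counts, _ = np.histogram(x, bins=edges)   # ~200 observations per bin
```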
47,551
Constrained versus unconstrained formulation of SVM optimisation
It seems to me that at the solution of the first problem, the inequality constraint becomes an equality, i.e. $1 - \xi_i = y_i(w^Tx_i + b)$, because we are minimising the $\xi_i$s and the smallest value that satisfies the constraint occurs at equality. So as $\xi_i \geq 0$, $\xi_i = max(0, 1 - y_i(w^Tx_i+b))$, which s...
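The argument that the constrained optimum forces $\xi_i = \max(0, 1 - y_i(w^Tx_i+b))$ can be checked numerically, by brute-force minimisation of $\xi$ over a grid of feasible values:

```python
import numpy as np

def optimal_xi(margin):
    """Brute-force minimise xi subject to xi >= 0 and xi >= 1 - margin,
    where margin = y_i (w^T x_i + b)."""
    grid = np.linspace(0.0, 5.0, 50001)                 # candidate xi values
    feasible = grid[(grid >= 0.0) & (grid >= 1.0 - margin)]
    return feasible.min()

# the constrained optimum matches the hinge loss max(0, 1 - margin)
for margin in (-1.5, 0.3, 2.0):
    assert abs(optimal_xi(margin) - max(0.0, 1.0 - margin)) < 1e-3
```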
47,552
Constrained versus unconstrained formulation of SVM optimisation
Please see the first page of https://davidrosenberg.github.io/mlcourse/Notes/svm-lecture-prep.pdf for a more formal answer. That is, the two problems are "equivalent" in the sense that the minimizer and the minimum of the first problem are the minimizer and minimum of the second, and vice versa. Replacing $g(x)$ in the doc with $1-y_i(w^...
47,553
Compare rank orders of population members across different variables
I don't know how useful the following approach is, but one might conceptualize the situation slightly differently: imagine the different variables are raters who simply order the units from "best" to "worst". You expect that the rank order will be similar among "raters". This seems to be an application for Kendall's co...
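Kendall's W for m "raters" (here, variables) ranking the same n units is short to compute; a sketch assuming integer ranks with no ties:

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's coefficient of concordance for an (m raters) x (n items)
    array of ranks 1..n; assumes no ties."""
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape
    col_sums = ranks.sum(axis=0)                    # rank sum per item
    s = ((col_sums - col_sums.mean()) ** 2).sum()   # spread of rank sums
    return 12.0 * s / (m ** 2 * (n ** 3 - n))       # W in [0, 1]
```

W = 1 indicates identical orderings across all variables; W near 0 indicates no agreement.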
47,554
Optimal parameter $\alpha$ for exponential smoothing using least squares
Minimize the sum of squared one-step forecast errors. If $\hat{Y}_t$ is the prediction of $Y_t$ given $Y_1,\dots,Y_{t-1}$, then $e_t=Y_t-\hat{Y}_t$ is the one-step forecast error. So minimize $e_2^2+\cdots+e_n^2$. You can also use maximum likelihood estimation as discussed in my Springer book. If you're just using simp...
47,555
Random permutation of a vector with a fixed expected sample correlation to the original?
The answers are no, not for all $r$ in general; yes, for a restricted range of $r$ that is readily computed; but there remain a wide set of choices to be made. I will use a standard notation where the action of a permutation $\sigma$ is written $ X^\sigma_i = X_{\sigma (i)}$ and the set of all permutations of the $n$ c...
47,556
What is a meaning of "p-value F" from Friedman test?
It seems the output is from the agricolae package using the method friedman. The relevant lines for computing the two statistics in that function are: T1.aj <- (m[2] - 1) * (t(s) %*% s - m[1] * C1)/(A1 - C1) T2.aj <- (m[1] - 1) * T1.aj/(m[1] * (m[2] - 1) - T1.aj) Comparing this with the formula in chl's answer, you'l...
47,557
What is a meaning of "p-value F" from Friedman test?
I generally used friedman.test() which doesn't return any F statistic. If you consider that you have $b$ blocks, within each of which you assigned ranks to the observations, and that you sum these ranks for each of your $a$ groups (denote these sums $R_i$), then the Friedman statistic is defined as $$ F_r...
47,558
What is a meaning of "p-value F" from Friedman test?
Probably $p_F$ refers to the F-statistic developed by Iman and Davenport? They showed that Friedman’s $\chi^2$ is undesirably conservative and derived a "better" statistic $F_F=\frac{(N-1)\chi^2_F}{N(k-1)-\chi^2_F}$ which is distributed according to the F-distribution with $k-1$ and $(k-1)(N-1)$ degrees of freedom. Refere...
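As a sketch, the correction is easy to compute alongside the Friedman statistic itself; this numpy version assumes continuous (untied) data, and the simulated shifts are illustrative:

```python
import numpy as np

def friedman_chi2(x):
    """Friedman chi-square for an (N blocks, k treatments) array, no ties."""
    n, k = x.shape
    ranks = x.argsort(axis=1).argsort(axis=1) + 1.0   # within-block ranks
    r = ranks.sum(axis=0)
    return 12.0 / (n * k * (k + 1)) * (r ** 2).sum() - 3.0 * n * (k + 1)

def iman_davenport_f(chi2, n, k):
    """Iman-Davenport statistic F_F = (N-1) chi2 / (N(k-1) - chi2)."""
    return (n - 1) * chi2 / (n * (k - 1) - chi2)

rng = np.random.default_rng(1)
n, k = 12, 4
x = rng.normal(size=(n, k)) + np.array([0.0, 0.5, 1.0, 1.5])  # shifted groups
chi2 = friedman_chi2(x)
f_stat = iman_davenport_f(chi2, n, k)
```

$F_F$ is then compared with the $F(k-1,\,(k-1)(N-1))$ distribution.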
47,559
Significance of the slope of a straight line fit
No: F tests are based on the assumption that the lowest sum of squared residuals is optimal. That does not hold in the case of robust regression, where the criterion is different. For instance, one may effectively view robust regression as least squares on data stripped of outliers; using $r^2$ on all data in this case adds...
47,560
Significance of the slope of a straight line fit
No need to reinvent the wheel. There is an alternative, robust $R^2$ measure with very good statistical properties: A robust coefficient of determination for regression, O. Renaud. Edit: *Is there any reason why this would NOT be a valid approach?* For one, this does not make your method any more robust. There is a large...
47,561
Significance of the slope of a straight line fit
I would simply use the standard regression output to evaluate the significance of the slope coefficient. I mean by that looking at the coefficient itself, its standard error, t stat (# of standard errors = Coefficient/Standard error), p value, and confidence interval. The p value directly addresses the statistical si...
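For a plain least-squares fit, all of those quantities can be reproduced by hand. A sketch on simulated data (the true slope of 0.8 and the noise level are assumptions of the example):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 50)
y = 1.0 + 0.8 * x + rng.normal(scale=2.0, size=x.size)

n = x.size
xc = x - x.mean()
slope = (xc * y).sum() / (xc ** 2).sum()
intercept = y.mean() - slope * x.mean()
resid = y - intercept - slope * x

se_slope = np.sqrt(resid @ resid / (n - 2) / (xc ** 2).sum())
t_stat = slope / se_slope                        # H0: slope = 0
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)
half = stats.t.ppf(0.975, df=n - 2) * se_slope
ci = (slope - half, slope + half)                # 95% confidence interval
```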
47,562
Significance of the slope of a straight line fit
It should be possible to use a permutation test to test the significance of the slope. Under the null, the slope is zero. Under the assumptions of the model and the null together, there's therefore no association between y and x. Hence the y's can be shuffled relative to the x to obtain the permutation distribution o...
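That shuffling scheme is only a few lines of numpy; the simulated data below are purely for illustration:

```python
import numpy as np

def slope(x, y):
    xc = x - x.mean()
    return (xc * (y - y.mean())).sum() / (xc ** 2).sum()

def perm_pvalue_slope(x, y, n_perm=2000, seed=0):
    """Two-sided permutation p-value for the slope: under H0 (slope = 0)
    the y's are exchangeable relative to x, so shuffle y and refit."""
    rng = np.random.default_rng(seed)
    observed = abs(slope(x, y))
    hits = sum(abs(slope(x, rng.permutation(y))) >= observed
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

rng = np.random.default_rng(3)
x = rng.normal(size=40)
y_alt = 1.5 * x + rng.normal(size=40)   # strong association
y_null = rng.normal(size=40)            # no association
p_alt = perm_pvalue_slope(x, y_alt)
p_null = perm_pvalue_slope(x, y_null)
```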
47,563
Visualization of a multivariate function
Given that you are at the initial, exploratory stages of the analysis I would start simple. Consider sampling your inputs using a Latin Hypercube strategy. Then, a tornado chart can be used to get a quick assessment of the multiple, one-way sensitivities f() has to the various input variables. Here is an example chart (...
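A sketch of the sampling step and the one-way sweeps behind such a chart, with a toy stand-in for f() (the function, bounds, and midpoints are made-up assumptions):

```python
import numpy as np
from scipy.stats import qmc

def f(x):
    """Hypothetical stand-in for the expensive function under study."""
    return 3.0 * x[..., 0] + 0.5 * x[..., 1] ** 2 + 0.1 * x[..., 2]

# space-filling Latin Hypercube sample of the unit cube
design = qmc.LatinHypercube(d=3, seed=4).random(n=200)
y = f(design)                        # overall response distribution

# one-way sweeps for a tornado chart: vary one input over its range,
# hold the others at their midpoint
mid = np.full(3, 0.5)
spans = []
for j in range(3):
    lo, hi = mid.copy(), mid.copy()
    lo[j], hi[j] = 0.0, 1.0
    spans.append(abs(f(hi) - f(lo)))
bar_order = np.argsort(spans)[::-1]  # widest bar on top of the chart
```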
47,564
Visualization of a multivariate function
Just a thought, although I've never tried it. you could obtain a large number of values from the function across different parameter values take a tour of the resulting data in ggobi (check out Mat Kelcey's video)
47,565
Visualization of a multivariate function
You could apply some sort of dimensionality reduction technique like principal components and plot the value of the function as you vary the first, second, third etc. principal components, holding all others fixed. This would show you how the function varies in the directions of the maximal variance of the inputs.
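A minimal numpy sketch of that idea (the anisotropic inputs and the toy function are assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 4)) @ np.diag([3.0, 1.0, 0.5, 0.1])  # toy inputs

def f(x):
    """Hypothetical multivariate function to visualise."""
    return np.sin(x[..., 0]) + x[..., 1] ** 2

# principal directions of the inputs via SVD of the centred data
Xc = X - X.mean(axis=0)
_, svals, Vt = np.linalg.svd(Xc, full_matrices=False)

# sweep along the first principal direction, holding the rest at the mean
t = np.linspace(-3, 3, 61)
curve_pc1 = f(X.mean(axis=0) + np.outer(t, Vt[0]))
```

Plotting `curve_pc1` against `t` (and similarly for `Vt[1]`, `Vt[2]`, ...) shows how the function varies along the directions of maximal input variance.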
47,566
How to compute efficiency?
I think the standard solution goes as follows. I'll just do the scalar case, the multi parameter case is similar. Your objective function is $g_N(p,X_1,\dots,X_N)$ where $p$ is the parameter you want to estimate and $X_1,\dots,X_N$ are the observed random variables. For notational simplicity I will just write the obj...
47,567
How to compute efficiency?
The consistency and asymptotic normality of the maximum likelihood estimator is demonstrated using some regularity conditions on the likelihood function. The wiki link on consistency and asymptotic normality has the conditions necessary to prove these properties. The conditions at the wiki may be stronger than what you...
47,568
Robust version of Hotelling $T^2$ test
Sure, two answers: a) If by robustness you mean robust to outliers, then run Hotelling's test using a robust estimation of scale/scatter; you will find all the explanations and R code here: http://www.statsravingmad.com/blog/statistics/a-robust-hotelling-test/ b) if by robustness you mean optimal under large group o...
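For reference, the classical one-sample statistic is a few lines of numpy; the robust variant in (a) just swaps robust location/scatter estimates (e.g. MCD) in for the sample mean and covariance. The data below are simulated for illustration:

```python
import numpy as np

def hotelling_t2(x, mu0):
    """One-sample Hotelling T^2 and its F-scaled version:
    (n - p) / (p (n - 1)) * T^2 ~ F(p, n - p) under H0."""
    n, p = x.shape
    diff = x.mean(axis=0) - mu0
    s = np.cov(x, rowvar=False)
    t2 = n * diff @ np.linalg.solve(s, diff)
    return t2, (n - p) / (p * (n - 1)) * t2

rng = np.random.default_rng(6)
x = rng.normal(size=(60, 3)) + np.array([0.0, 0.0, 1.0])  # mean shifted in dim 3
t2, f_stat = hotelling_t2(x, np.zeros(3))
```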
47,569
Robust version of Hotelling $T^2$ test
Some robust alternatives are discussed in A class of robust stepwise alternatives to Hotelling's $T^2$ tests, which deals with trimmed means of the marginals of residuals produced by stepwise regression, and in A comparison of robust alternatives to Hotelling's $T^2$ control chart, which outlines some robust alternatives ...
47,570
In OLS, does the uncorrelatedness between regressors and residuals require a constant?
You are right. Maybe because most regressions do contain a constant, the property $X'e=0$ (often called, more precisely, "orthogonality") and the terminology "uncorrelatedness" are often used interchangeably, when they do amount to the same thing only if the regression contains a constant (or, more precisely, if the re...
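The distinction is easy to verify numerically; a sketch with simulated data (the data-generating process is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(loc=2.0, size=(100, 1))
y = 1.0 + 3.0 * x[:, 0] + rng.normal(size=100)

# regression WITH a constant: X'e = 0 forces zero-mean residuals,
# hence zero sample correlation between residuals and regressors
X1 = np.column_stack([np.ones(100), x])
b1, *_ = np.linalg.lstsq(X1, y, rcond=None)
e1 = y - X1 @ b1

# regression WITHOUT a constant: X'e = 0 still holds (orthogonality),
# but the residuals need not average to zero
b0, *_ = np.linalg.lstsq(x, y, rcond=None)
e0 = y - x @ b0
```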
47,571
Convergence of a confidence interval for the variance of a not normal distribution
Denote $\chi^2_{n - 1, \alpha/2}$ and $\chi^2_{n - 1, 1 - \alpha/2}$ by $\xi_n$ and $\eta_n$ respectively. In the following we show that as $n \to \infty$, \begin{align} P[A_n \geq \sigma^2] = P[(n - 1)S_n^2/\sigma^2 \geq \xi_n] \to \Phi\left(-\sqrt{2}\sigma^2z_{\alpha/2}/\tau\right), \tag{1} \end{align} where $\Phi$ ...
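The practical upshot, that the normal-theory interval for $\sigma^2$ is not asymptotically valid for non-normal data, can be checked by a quick simulation; sample sizes and distributions below are illustrative:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(8)
n, reps = 100, 2000
lo_q, hi_q = chi2.ppf(0.025, n - 1), chi2.ppf(0.975, n - 1)

def coverage(draw, true_var):
    """Fraction of normal-theory intervals ((n-1)S^2/hi_q, (n-1)S^2/lo_q)
    that cover the true variance."""
    hits = 0
    for _ in range(reps):
        s2 = draw().var(ddof=1)
        if (n - 1) * s2 / hi_q <= true_var <= (n - 1) * s2 / lo_q:
            hits += 1
    return hits / reps

cov_normal = coverage(lambda: rng.normal(size=n), 1.0)      # near nominal 0.95
cov_expon = coverage(lambda: rng.exponential(size=n), 1.0)  # well below nominal
```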
47,572
Statistical test to assess significant difference in landcover selection
Your problem can be rephrased in terms of testing if the cell probabilities of a Multinomial distribution follow a given pattern. In particular, given the sample $(X_1,\ldots,X_5)\sim \text{Mn}(466,\theta_1,\ldots,\theta_5)$ the problem is to test $$H_0: \theta_1=\cdots=\theta_5=1/5$$ against $$H_1:\theta_i\neq\theta_j...
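With $n = 466$ and large counts in every cell, the standard Pearson chi-square goodness-of-fit test against equal cell probabilities is a one-liner; the counts below are hypothetical, not from the question:

```python
import numpy as np
from scipy.stats import chisquare

counts = np.array([120, 95, 80, 101, 70])   # hypothetical counts, sum = 466
stat, p_value = chisquare(counts)           # default H0: equal cell probabilities
```

A small p-value rejects $H_0: \theta_1=\cdots=\theta_5=1/5$, i.e. the animal does not use the five landcover types uniformly.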
47,573
Statistical test to assess significant difference in landcover selection
I think there are issues with the multinomial test proposed by @utobi, the null hypothesis of equal probabilities for the five landcover types and with the resulting "pattern of selection" interpretation. Is the multinomial distribution justified? The counts are number of times the tagged animal has been in each landc...
47,574
Is it true that currently we cannot compute p-values or confidence intervals for the coefficients from a Lasso regression problem? [duplicate]
Confidence intervals are a frequentist measure of uncertainty. The researcher determines a population parameter of interest (say average income in a country) that they want to learn. Then, the researcher collects a random sample from the population and feeds this data into a formula that puts out an interval. The formu...
47,575
Is it true that currently we cannot compute p-values or confidence intervals for the coefficients from a Lasso regression problem? [duplicate]
Why is it that currently inference on the coefficients is not possible? Is it that structurally the variance of the coefficient estimators have no closed form? Or is it something else? Prior to some work in the area of selective inference, the bias in the estimates of the coefficients complicated the theory for testin...
47,576
Outlier/anomaly detection on histograms
Outlier or anomaly detection methods always rely on some notion of distance between the "data points" to be subjected to the detection algorithm. So your first step needs to be to decide on a distance metric between your "data points" - which in your case are your histograms. There are various ways of doing this. If yo...
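A sketch of that first step: normalise the histograms, pick a simple $L_1$ distance, and score each histogram by its average distance to the rest (the data are simulated, with one deliberately different histogram as the planted anomaly):

```python
import numpy as np

rng = np.random.default_rng(9)
raw = [np.histogram(rng.normal(size=1000), bins=20, range=(-4, 4))[0]
       for _ in range(9)]
raw.append(np.histogram(rng.exponential(size=1000), bins=20, range=(-4, 4))[0])
hists = [h / h.sum() for h in raw]          # normalise to probability vectors

def l1(p, q):
    """L1 distance between two normalised histograms (2x total variation)."""
    return np.abs(p - q).sum()

# score each histogram by its mean distance to all the others
scores = np.array([np.mean([l1(h, g) for g in hists if g is not h])
                   for h in hists])
outlier = int(np.argmax(scores))            # index of the anomalous histogram
```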
47,577
Product of two independent Student distributions
When $X$ and $Y$ are independent random variables with densities $f_X$ and $f_Y,$ the density of their product can be found with a change of variables as $$f_{XY}(z) = \int_{\mathbb R} f_X(x) f_Y(z/x)\,\frac{\mathrm{d}x}{|x|}.$$ Ignoring normalizing constants (we'll consider these at the end), for two Student t densiti...
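The change-of-variables integral can be evaluated numerically; a sketch for two $t_5$ densities, splitting the integral at the singularity $x = 0$ (the choice of 5 degrees of freedom is an illustrative assumption):

```python
import numpy as np
from scipy import integrate, stats

def product_density(z, nu=5):
    """Density of XY for independent X, Y ~ t(nu):
    f_XY(z) = int f_X(x) f_Y(z / x) dx / |x|."""
    pdf = stats.t(nu).pdf
    integrand = lambda x: pdf(x) * pdf(z / x) / abs(x)
    left, _ = integrate.quad(integrand, -np.inf, 0)
    right, _ = integrate.quad(integrand, 0, np.inf)
    return left + right

f_pos = product_density(1.0)
f_neg = product_density(-1.0)
f_far = product_density(3.0)
```

The density is symmetric in $z$ and decays in $|z|$; note it has an integrable logarithmic spike at $z = 0$.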
47,578
Testing the difference of proportions equal to a certain value
I don't think there is an exact test in this case, but there is an approximate test. In general, concerns about the poor approximation of the approximate test are likely to be exaggerated unless you are exceedingly unfortunate to have both very very low $p_1$, $p_2$ and very very low $n$, $m$. Certainly, higher $n$, $m...
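A sketch of the approximate test with an unpooled standard error (pooling the proportions is only appropriate when the hypothesised difference is zero); the counts are illustrative:

```python
import numpy as np
from scipy.stats import norm

def two_prop_ztest(x1, n1, x2, n2, delta0=0.0):
    """Approximate z-test of H0: p1 - p2 = delta0, unpooled SE."""
    p1, p2 = x1 / n1, x2 / n2
    se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = (p1 - p2 - delta0) / se
    return z, 2 * norm.sf(abs(z))

z_zero, p_zero = two_prop_ztest(55, 100, 40, 100, delta0=0.0)
z_d, p_d = two_prop_ztest(55, 100, 40, 100, delta0=0.15)  # H0 at the observed diff
```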
47,579
Determine the limiting distribution of $n[g(\bar{X}_n)-1/e]$ of iid Poisson samples with two estimators
Suppose $\hat g_1(\lambda)=g(\overline X_n)=\overline X_ne^{-\overline X_n}$ and $\hat g_2(\lambda)=\frac1n\sum\limits_{i=1}^n I(X_i=1)$. Provided $\lambda\ne 1$ (so that $g'(\lambda)\ne 0$ ), by delta method, $$\operatorname{Var}(\hat g_1) \approx \frac{\lambda (g'(\lambda))^2}{n}=\frac{\lambda e^{-2\lambda}(1-\lambda...
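The delta-method variance for $\hat g_1$ can be checked by simulation; a sketch at $\lambda = 2$ (so that $g'(\lambda) \ne 0$; the sample sizes are illustrative):

```python
import numpy as np

lam, n, reps = 2.0, 400, 4000
rng = np.random.default_rng(10)

g = lambda m: m * np.exp(-m)
g_prime = lambda m: np.exp(-m) * (1.0 - m)

# Monte Carlo variance of g(Xbar) across many samples of size n
xbar = rng.poisson(lam, size=(reps, n)).mean(axis=1)
mc_var = g(xbar).var()

# delta-method approximation: Var(g(Xbar)) ~ lambda * g'(lambda)^2 / n
delta_var = lam * g_prime(lam) ** 2 / n
```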
47,580
Is the third moment of an AR(1) dependent on $t$?
It may or may not be: If $\epsilon_t$ is independent WN, the $MA(\infty)$ representation $X_t=\sum_{j=0}^\infty\phi^j\epsilon_{t-j}$ gives, for $|\phi|<1$, $$ E(X_t^3)=\sum_{j=0}^\infty\phi^{3j}E(\epsilon_{t-j}^3), $$ as pairs $\epsilon_i,\epsilon_j,\epsilon_k$ for which we do not have $i=j=k$ will yield terms of the f...
47,581
Are there any weight matrices of residual connections in ResNet?
There are two cases in the ResNet paper. When the shortcut connection joins summands of the same shape, the identity mapping is used, so there is no weight matrix. When the summands would have different shapes, then there is a weight matrix that has the purpose of projecting the shortcut output to be the same shap...
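A minimal numpy sketch of the two cases (the widths and the projection matrix here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=(4, 8))  # a batch of 4 inputs of width 8

# Case 1: F(x) keeps the shape -> identity shortcut, no weight matrix
f = lambda z: np.tanh(z @ rng.normal(size=(8, 8)))
y_same = f(x) + x                      # y = F(x) + x

# Case 2: F(x) changes the shape -> projection matrix W_s on the shortcut
g = lambda z: np.tanh(z @ rng.normal(size=(8, 16)))
W_s = rng.normal(size=(8, 16))         # projects the shortcut to the new width
y_diff = g(x) + x @ W_s                # y = F(x) + W_s x
```

Without `W_s` the second addition would fail, since `g(x)` has width 16 while `x` has width 8.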
47,582
Randomized controlled trial and DAG
Quite simply an RCT ensures no backdoor paths (technically it reduces the possibility of backdoor confounding to a chance which is inversely related to sample size) from outcome $Y$ to treatment $A$, because by definition random assignment $R$ is the only prior cause of treatment: $$\boxed{R} \to A \to Y$$ In the simpl...
47,583
Is there any gain by adding correlated categorical variables (e.g.: city and neighborhood) in a model?
It's possible the model may get better, yes. Cities and neighborhoods are a particularly good example. The price of homes in Ontario varies quite drastically. In Toronto, single family dwellings top out around a million dollars on average, whereas in my home town they are just about half that. But anyone who has sea...
47,584
Is there any gain by adding correlated categorical variables (e.g.: city and neighborhood) in a model?
Presumably the neighbourhoods are unique to their city, so once you know the neighbourhood you know the city. Assuming this is the case, adding both variables will lead to an over-parameterised model; you should use the neighbourhood variable but not the city variable. The problem is not merely that neighbourhood and...
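The redundancy is easy to see from the design matrix: once every neighbourhood has its own dummy, each city dummy is an exact sum of its neighbourhoods' dummies, so adding the city columns does not increase the rank. A toy sketch with made-up labels:

```python
import numpy as np

# Two cities, each containing two (unique) neighbourhoods
city =          ["A", "A", "A", "B", "B", "B"]
neighbourhood = ["a1", "a1", "a2", "b1", "b2", "b2"]

def dummies(labels):
    """One indicator column per level, in sorted order."""
    levels = sorted(set(labels))
    return np.array([[float(x == lvl) for lvl in levels] for x in labels])

X_nbhd = dummies(neighbourhood)              # 4 columns, one per neighbourhood
X_both = np.hstack([X_nbhd, dummies(city)])  # 6 columns after adding city

rank_nbhd = np.linalg.matrix_rank(X_nbhd)
rank_both = np.linalg.matrix_rank(X_both)    # city adds no rank: over-parameterised
```

Each city column equals the sum of its neighbourhood columns, so the rank is unchanged and the combined model is rank-deficient.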
47,585
Does Student's T require normally-distributed data?
It seems from the comments that the short answer here is for large samples, "no", because the sampling distribution of the mean converges to a normal distribution for large $n$. For small samples, the answer is "maybe" depending on lots of things. It seems from some simulations that one of the biggest factors is the ...
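A quick simulation of the small-sample case (a sketch: strongly skewed exponential(1) data with $n=10$, a two-sided test at the nominal 5% level, and the $t_9$ critical value 2.262):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 10, 50_000
crit = 2.262  # two-sided 5% critical value of t with 9 df

# H0 is true: exponential(1) data has mean 1, but is strongly skewed
x = rng.exponential(1.0, size=(reps, n))
t = (x.mean(axis=1) - 1.0) / (x.std(axis=1, ddof=1) / np.sqrt(n))
rejection_rate = np.mean(np.abs(t) > crit)  # nominal level is 0.05
```

The realised type I error rate is in the right ballpark but not exactly 0.05, illustrating the "maybe" for small, skewed samples.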
47,586
Does ( P(B|A) - P(B|~A) ) / P(B|A) have a name?
At least in epidemiology, the term is Relative Risk reduction: the relative risk reduction (RRR) or efficacy is the relative decrease in the risk of an adverse event in the exposed group compared to an unexposed group. It is computed as ${\displaystyle (I_{u}-I_{e})/I_{u}}$, where $I_e$ is the incidence in the exposed...
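A tiny worked example with made-up incidences:

```python
# Hypothetical incidences: 10% in the unexposed group, 4% in the exposed group
I_u, I_e = 0.10, 0.04

rrr = (I_u - I_e) / I_u  # relative risk reduction = 0.6, i.e. 60%
rr = I_e / I_u           # relative risk = 0.4, and RRR = 1 - RR
```

So a relative risk of 0.4 corresponds to a relative risk reduction (efficacy) of 60%.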
47,587
Is threshold moving unnecessary in balanced classification problem?
No, that is not correct. First off, please take a look at Reduce Classification Probability Threshold, where I argue that discussions about thresholds belong to the decision stage of the analysis, not the modeling stage. Thresholds can only be set if we include the costs of misclassification - and that holds even for b...
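Under the standard decision-theoretic setup, the cost-minimising threshold on a predicted probability $p$ depends only on the misclassification costs, not on class balance: predict positive when $p \ge c_{FP}/(c_{FP}+c_{FN})$. A sketch with made-up costs:

```python
# Hypothetical costs: a false negative is 4x as costly as a false positive
c_fp, c_fn = 1.0, 4.0

def expected_cost(action_positive, p):
    """Expected misclassification cost of an action, given P(positive) = p."""
    return (1 - p) * c_fp if action_positive else p * c_fn

threshold = c_fp / (c_fp + c_fn)  # 0.2 -- not 0.5, even with balanced classes

# Just above the threshold, predicting positive is already the cheaper action
p = 0.25
cheaper_positive = expected_cost(True, p) < expected_cost(False, p)
```

With these costs the optimal cut-off is 0.2, so even a perfectly balanced problem does not imply a 0.5 threshold.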
47,588
Train-Test Splits in Random Forest approach with small sample sizes
Yes, RFs' built-in OOB MSE can be seen as an indicator of model performance. But you won't be able to compare its performance to different models (or different hyperparameters). Generally, you still want a "clean" hold-out set for validation. Train-test splits are often quite inaccurate for small data sets. Consider t...
47,589
Help understand the virtue of generalized linear models
The ordinary least squares regression model assumes that the errors are normally distributed (and with constant variance). Equivalently, you could say that the conditional distributions of $Y$ are normal. However, they often aren't; for example, they can be badly skewed, with differing residual variances, the appeara...
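For example, with count data the conditional variance typically grows with the conditional mean, violating OLS's constant-variance assumption but exactly matching a Poisson GLM's variance function $\operatorname{Var}(Y\mid x)=\mu(x)$. A simulation sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Counts whose conditional mean (and hence variance) depends on the predictor
low = rng.poisson(2.0, size=100_000)    # conditional mean 2
high = rng.poisson(20.0, size=100_000)  # conditional mean 20

# The group variances track the group means -- far from homoscedastic
var_low, var_high = low.var(), high.var()
```

The variance is roughly 2 in the low-mean group and roughly 20 in the high-mean group, exactly what the Poisson family assumes and what OLS cannot accommodate.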
47,590
Can long run variance of a time series be used to test mean of the series?
Basically, yes and yes - you can replace the long-run variance with a consistent estimator thereof and, by Slutsky's theorem, the test statistic will still be standard normal under the null. And indeed, kernel-based long-run variance estimators are sometimes also referred to as nonparametric estimators that do not (the...
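A sketch of the kernel-based version: a Bartlett (Newey-West style) long-run variance estimator plugged into the $t$-statistic for the mean of an AR(1) series. The bandwidth choice here is arbitrary:

```python
import numpy as np

def bartlett_lrv(x, bandwidth):
    """Bartlett-kernel estimate of the long-run variance of x."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    lrv = np.mean(x * x)  # gamma_0
    for k in range(1, bandwidth + 1):
        gamma_k = np.sum(x[k:] * x[:-k]) / n
        lrv += 2.0 * (1.0 - k / (bandwidth + 1)) * gamma_k
    return lrv

rng = np.random.default_rng(0)
n, phi = 5_000, 0.5
eps = rng.normal(size=n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]  # H0: mean zero is true

lrv = bartlett_lrv(x, bandwidth=int(n ** (1 / 3)))
t_stat = np.sqrt(n) * x.mean() / np.sqrt(lrv)  # approx N(0,1) under H0
```

For this process the true long-run variance is $1/(1-\phi)^2=4$, and the Bartlett estimate lands near it, so the studentised mean behaves like a standard normal draw.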
47,591
Is pooling countries together or running a regression model for each country alone is more suitable for comparison?
The two models yield different results because they are, well, different models. Clearly you are interested in the "effects" of education and year and naturally you include an interaction between them, which is fine. age and gender are presumably potential confounders, hence these are also correctly included. So the que...
47,592
How to estimate the sample variance of the estimator of the parameter $P(x≤0)$ where $x \sim N(\mu,\sigma)$?
The variable $\frac{\hat\mu}{\hat\sigma}$ This follows a non-central t-distribution scaled by $\sqrt{n}$ and has approximately the following variance (see a related question: What is the formula for the standard error of Cohen's d ) \begin{array}{crl} \text{Var}\left(\frac{\hat\mu}{\hat\sigma}\right) &=& \frac{1}{n}\le...
47,593
Is gamma actually an efficient way to weigh future rewards in reinforcement learning?
Exponential discounting is "time-consistent" in a way that other forms of discounting are not. For example, with $\gamma = 0.9$, you would prefer 1 reward today to 1 reward tomorrow, and 1 reward in 10 days to 1 reward in 11 days. You would also prefer 2 reward tomorrow over 1 reward today, and 2 reward in 11 days over...
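A small numeric illustration of the reversal that exponential discounting avoids, using hyperbolic discounting $1/(1+kt)$ as the contrast (the value $k=1$ is an arbitrary choice):

```python
gamma = 0.9

def exp_value(reward, delay):
    return reward * gamma ** delay

def hyp_value(reward, delay, k=1.0):
    return reward / (1 + k * delay)

# Exponential: the preference between (r at t) and (2r at t+1) never flips with t
pref_now = exp_value(2, 1) > exp_value(1, 0)      # 1.8 > 1.0
pref_later = exp_value(2, 11) > exp_value(1, 10)  # same ordering, shifted 10 days

# Hyperbolic: the same pair of options can flip as both move into the future
hyp_now = hyp_value(2, 1) > hyp_value(1, 0)       # 2/2 > 1/1 is False
hyp_later = hyp_value(2, 11) > hyp_value(1, 10)   # 2/12 > 1/11 is True
```

The exponential agent's ordering is invariant to shifting both options by the same delay; the hyperbolic agent reverses its preference, which is exactly the time-inconsistency exponential discounting rules out.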
47,594
Can we go from $X_n = \mu + O_p(n^{-1})$ to $E[X_n] = \mu + O(n^{-1})$?
Here is a counterexample: $P(X_n = 1) = \frac{1}{\sqrt{n}}$ $P(X_n = 0) = 1 - \frac{1}{\sqrt{n}}$ To show that $X_n = O_p(\frac{1}{n})$: given $\epsilon > 0$, let $M = N > \frac{1}{\epsilon^2}$. Then for $n > N$, $P(n|X_n| > M) = P(|X_n| > \frac{M}{n}) = P(X_n = 1) = \frac{1}{\sqrt{n}} < \epsilon$ as required. But $E(X...
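Numerically, $E(X_n)=1\cdot P(X_n=1)=1/\sqrt n$, so $n\,E(X_n)=\sqrt n$ diverges, confirming that $E[X_n]\ne O(n^{-1})$ in this counterexample (sketch):

```python
import math

def expected(n):
    # E(X_n) = 1 * P(X_n = 1) + 0 * P(X_n = 0) = 1/sqrt(n)
    return 1.0 / math.sqrt(n)

# n * E(X_n) = sqrt(n), which is unbounded as n grows
scaled = [n * expected(n) for n in (100, 10_000, 1_000_000)]
```

The scaled expectations are 10, 100 and 1000: no constant bounds $n\,E(X_n)$, so $E(X_n)$ is not $O(n^{-1})$ even though $X_n$ itself is $O_p(n^{-1})$.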
47,595
What is a medcouple?
This concept concerns a batch of data $(x_1, x_2, \ldots, x_n):$ the medcouple is a way to measure how much a batch deviates from being symmetric. The center of a symmetry, should it exist, would be the median $M.$ To study symmetry, then, it suffices to examine how far each value is from the median. Accordingly, rec...
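A naive $O(n^2)$ sketch of the medcouple that pairs points strictly below the median with points strictly above it (the full definition also covers points equal to the median via a special kernel, so the toy data here are chosen without ties at the median; real implementations use a faster, tie-safe algorithm):

```python
import numpy as np

def medcouple_naive(x):
    """Median of h(xi, xj) = ((xj - M) - (M - xi)) / (xj - xi) over xi < M < xj."""
    x = np.sort(np.asarray(x, float))
    m = np.median(x)
    left = x[x < m]   # values below the median
    right = x[x > m]  # values above the median
    h = [((xj - m) - (m - xi)) / (xj - xi)
         for xi in left for xj in right]
    return float(np.median(h))

symmetric = [1, 2, 3, 4, 5, 6, 7]
right_skewed = [1, 2, 3, 4, 10, 20, 40]

mc_sym = medcouple_naive(symmetric)    # 0: perfectly symmetric batch
mc_skew = medcouple_naive(right_skewed)  # positive: right skew
```

A symmetric batch yields a medcouple of 0, while a right-skewed batch yields a clearly positive value, matching the intended interpretation of the statistic.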
47,596
What is a medcouple?
I'm sorry, I'm not very good with formulas/formatting, but I'll still try my best to share my understanding of the medcouple. We have the MC itself as MC = med h(xi, xj), and we have h as h(xi, xj) = ((xj−Q2)−(Q2−xi)) / (xj−xi). We have two indices, i and j; they are used to form the couples to be compared. The goal is to compare ...
47,597
How to calculate prediction interval in GLM (Gamma) / TweedieRegression in Python?
It's a bit involved, but it should be doable. As that post says, in order to get a prediction interval you have to integrate over the uncertainty in the coefficients. That is hard to do analytically, but we can instead simulate it. Here is some gamma regression data: N = 100 x = np.random.normal(size = N) true_beta = ...
47,598
Understanding how to tell whether random effects assumption is sufficiently violated to pose a problem in practice
In my experience, the issue of the correlation of predictors / exposures with the random effects only becomes a problem when the correlation is very high - typically in the region of 0.8 or higher. when the cluster sizes are small. when the goal of the analysis is inference rather than prediction. Regarding 1, in ...
47,599
Which $\mu$ hold so that integral of CDF (from $\mu$ to $\infty$) equals to integral of 1-CDF (from $-\infty$ to $\mu$)?
The mean of a variable $X$ can be computed as $$\mu_X = \int_{0}^{\infty}1-F(x)dx - \int_{-\infty}^{0} F(x)dx $$ The mean of a shifted variable $X-\mu_X$ (which equals zero) is computed as $$0 = \int_{0}^{\infty}1-F(x+\mu_X)dx - \int_{-\infty}^{0} F(x+\mu_X)dx $$ Or $$ 0 = \int_{\mu_X}^{\infty}1-F(x)dx -\int_{-\in...
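The identity can be checked numerically for a standard normal, where $\mu_X=0$ and both tail integrals equal $E[X^+]=1/\sqrt{2\pi}$. A numeric sketch using the midpoint rule (the grid and the truncation point 12 are arbitrary numerical choices):

```python
import numpy as np
from math import erf, sqrt, pi

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

mu = 0.0                                 # the mean of a standard normal
dx = 1e-3
xs = np.arange(0.0, 12.0, dx) + dx / 2   # midpoint grid on [0, 12]

upper = sum((1.0 - Phi(mu + x)) * dx for x in xs)  # int_mu^inf (1 - F)
lower = sum(Phi(mu - x) * dx for x in xs)          # int_-inf^mu F

target = 1.0 / sqrt(2.0 * pi)  # both integrals equal E[X^+] ~ 0.3989
```

The two integrals agree with each other and with $1/\sqrt{2\pi}$, as the derivation predicts for $\mu=\mu_X$.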
47,600
How do you write the expected value of an arbitrary random variable $X$ in terms of $F_X$? [duplicate]
While the "Darth Vader rule" (a silly name) applies to any non-negative random variable, I am going to simplify the analysis by looking only at continuous random variables. Extension to discrete and mixed random variables should also be possible, but I will not pursue that here. In a related answer here we show a par...
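For a non-negative variable the lower integral vanishes and the rule reduces to $E[X]=\int_0^\infty (1-F(x))\,dx$. For instance, for an exponential with rate $\lambda$, $\int_0^\infty e^{-\lambda x}\,dx = 1/\lambda$. A numeric sketch (the rate $\lambda=2$ and the truncation point 20 are arbitrary choices):

```python
import math

lam = 2.0        # rate of the exponential; E[X] = 1/lam = 0.5

def survival(x):
    """1 - F(x) = exp(-lam * x) for the exponential distribution."""
    return math.exp(-lam * x)

# midpoint-rule approximation of int_0^20 (1 - F(x)) dx
dx = 1e-4
mean_via_survival = sum(survival(i * dx + dx / 2) * dx
                        for i in range(int(20 / dx)))
```

The survival-function integral reproduces the exponential mean $1/\lambda = 0.5$ to high accuracy.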