Performing multiple linear regressions, in Excel, that have a common x-intercept?
I'm only a chemist, not a statistician, but the easiest way I know is to use dummy variables: for each of the (n−1) extra slopes, assign a 1 in a z column and multiply x*z. This means one batch (the reference) will have 0 in all z columns. For a common intercept with 3 slopes it would look like this:

batch    y     x   xz 48  xz 58
48     0.9     0     0      0
48     0.7    12    12      0
48     0.6    24    24      0
48     0.6    36    36      0
48     0.66   48    48      0
48     0.59   60    60      0
58     1      0      0      0
58     0.9    12     0     12
58     0.8    24     0     24
58     0.75   36     0     36
58     0.82   48     0     48
69     1      0      0      0
69     0.9    12     0      0
69     0.84   24     0      0
69     0.83   36     0      0

Then use the Excel Regression add-in.

SUMMARY OUTPUT

Regression Statistics
Multiple R         0.838
R Square           0.703
Adjusted R Square  0.622
Standard Error     0.0851
Observations       15

ANOVA
            df   SS      MS       F     Significance F
Regression   3  0.1886  0.0629   8.68   0.003
Residual    11  0.0797  0.00725
Total       14  0.2683

           Coefficients  Standard Error  t Stat  P-value  Lower 95%  Upper 95%
Intercept     0.901         0.0381       23.7    9.E-11    0.817      0.984
months       -0.00199       0.00233      -0.85   0.41     -0.00712    0.00315
xz 48        -0.00441       0.00218      -2.02   0.068    -0.00921    0.00039
xz 58        -0.000725      0.00232      -0.31   0.76     -0.00582    0.00437

The ANOVA is rubbish, but the output gives you the equations for all of the lines, e.g.

Y = 0.901 − 0.00199x for batch 69
Y = 0.901 − 0.00199x − 0.00441x for batch 48
Y = 0.901 − 0.00199x − 0.000725x for batch 58

(Each batch's extra slope is its xz coefficient, so batch 48 picks up −0.00441, not its standard error.)
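The same dummy-variable setup can be fit outside Excel. As a rough sketch, here it is in Python with numpy's ordinary least squares, using the batch data from the table above (batch 69 is the reference, with zeros in both xz columns); the fitted coefficients should come out close to the Excel output:

```python
import numpy as np

# Batch aging data from the table: (batch, y, x in months)
rows = [
    (48, 0.9, 0), (48, 0.7, 12), (48, 0.6, 24), (48, 0.6, 36),
    (48, 0.66, 48), (48, 0.59, 60),
    (58, 1.0, 0), (58, 0.9, 12), (58, 0.8, 24), (58, 0.75, 36), (58, 0.82, 48),
    (69, 1.0, 0), (69, 0.9, 12), (69, 0.84, 24), (69, 0.83, 36),
]
batch = np.array([r[0] for r in rows])
y = np.array([r[1] for r in rows])
x = np.array([r[2] for r in rows], dtype=float)

# Design matrix: shared intercept, shared "months" slope, and one
# x*z column per non-reference batch, exactly as in the table.
X = np.column_stack([
    np.ones_like(x),      # common intercept
    x,                    # common slope ("months")
    x * (batch == 48),    # xz 48
    x * (batch == 58),    # xz 58
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, months, xz48, xz58 = beta
print(intercept, months, xz48, xz58)
```

Each batch's fitted line is then intercept + (months + its xz coefficient) * x, as in the equations above.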
Assessing the accuracy of a deterministic mathematical model
A decent first step might be to compute the correlation between your model's predictions ("data A") and the observed temperatures ("data B"). Correlations range from −1 to +1: 0 indicates no (linear) relationship between the predicted and observed values, while higher values suggest that your model better agrees (up to a scale factor) with the observed data. Correlations are easily computed in R with the cor function, and the cor.test function performs a test of association between two variables. You should also just plot the data and take a look at it: your model might not perform equally well under all conditions (e.g., maybe it breaks down around freezing temperatures). There are more sophisticated things you could try, but I think these are reasonable first steps.
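As a minimal sketch of the same computation in Python (the temperature numbers here are made up, purely to illustrate):

```python
import numpy as np

# Hypothetical monthly mean temperatures: model predictions vs. observations
predicted = np.array([2.5, 9.0, 14.1, 20.5, 26.3, 25.0, 18.2, 12.0])
observed = np.array([3.1, 8.4, 14.9, 21.2, 25.6, 24.8, 19.0, 11.2])

# Pearson correlation, equivalent to R's cor(predicted, observed)
r = np.corrcoef(predicted, observed)[0, 1]
print(round(r, 3))
```

Remember that correlation ignores scale and offset, so a high r still warrants a plot of predicted against observed values.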
Assessing the accuracy of a deterministic mathematical model
Your question sounds confusing. When you say accuracy of the model, are you referring to how well it predicts, or to how well it simulates the behavior of weather in New York City? I don't think you can assess the latter. As to the former, I would compute the mean square prediction error: use the model to predict the mean annual temperature for each of the 30 years (presumably based on the available inputs that the model needs to produce the estimates) and take the average squared difference between the predictions and the actual recorded mean annual temperatures. This gives you an estimate, but not the accuracy of the estimate. So you may have a standard for the accuracy, and you want to test the hypothesis that the accuracy is better than a certain level. I can give a somewhat vague description of how to do this; it is admittedly vague because I do not know what inputs go into the model to make the prediction. The idea is to make small perturbations to the inputs and see how these perturbations affect the accuracy of the prediction. This gives you a distribution of mean square errors from which you could estimate a p-value for your hypothesis. All this assumes that you have a sensible way to perturb the inputs that would characterize the sampling variability in the inputs. The resulting estimates then provide a representation of the variability of the individual predictions, and from that the variability in the estimated mean square error of prediction.
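The two steps described above can be sketched in Python; note that the `predict` function, the perturbation scale, and the accuracy threshold below are all placeholders standing in for whatever your actual model and inputs are:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder deterministic "model": maps an input vector to a
# predicted mean annual temperature. Stands in for the real model.
def predict(inputs):
    return 10.0 + 0.5 * inputs.sum()

# 30 years of inputs and observed mean annual temperatures (simulated here)
inputs = rng.normal(size=(30, 3))
observed = np.array([predict(u) for u in inputs]) + rng.normal(0, 1.0, 30)

# Step 1: mean square prediction error over the 30 years
predictions = np.array([predict(u) for u in inputs])
mspe = np.mean((predictions - observed) ** 2)

# Step 2: perturb the inputs to get a distribution of MSPEs, from
# which a p-value for "accuracy better than some level" is estimated
mspe_dist = []
for _ in range(500):
    perturbed = inputs + rng.normal(0, 0.1, inputs.shape)
    preds = np.array([predict(u) for u in perturbed])
    mspe_dist.append(np.mean((preds - observed) ** 2))
mspe_dist = np.array(mspe_dist)

threshold = 2.0  # hypothetical accuracy standard
p_value = np.mean(mspe_dist <= threshold)
```

The perturbation distribution is the part that must be justified by what you know about the sampling variability of the inputs.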
Assessing the accuracy of a deterministic mathematical model
I would suggest two approaches to assessing whether a deterministic mathematical model is performing well, neither of which actually involves a statistical test, and which especially do not involve trying to reduce model performance to a p-value. First, how well does your model predict parameters? If your model estimates parameters from data, how well does each estimate agree with observed parameters from data other than what you fit the model to? Second, does it generate the correct answer when confronted with inputs that result in a known change? For example, if your model is given all the conditions that occur before a heat wave, does it correctly produce said heat wave? As someone else has suggested, you could also compare the error between your predicted output and the actual output of the system, though this just gives you a number that you're trying to minimize, not a statistical estimate. Designing mathematical models to be tested statistically is very hard to do backwards; the elements you need generally have to be settled at the model design step, just as with studies.
Assessing the accuracy of a deterministic mathematical model
I recently devised a validation framework for deterministic solar irradiance forecasts. It is based on the insight that the outcome and the prediction of a perfect forecast must be mathematically exchangeable, and it is generally applicable to forecasts of continuous stochastic variables. See https://doi.org/10.1016/j.renene.2021.08.032
Plotting interval censored follow-up time as a line chart
There must be many ways to make follow-up time plots with interval censored data, although a quick Google search only found this image in an overview of censoring, which looks a bit busy to my eye. Just to give another perspective, here's an approach using the ggplot2 package.

require(ggplot2)

# Your example data
dat <- structure(list(ID = 1:5,
                      eventA = c(0L, 1L, 1L, 0L, 1L),
                      eventB = c(1L, 0L, 0L, 1L, 0L),
                      t1 = c(7, 5, 10, 4.5, 2),
                      t2 = c(7, 5, 10, 4.5, 8),
                      censored = c(0, 0, 0, 0, 1)),
                 .Names = c("ID", "eventA", "eventB", "t1", "t2", "censored"),
                 class = "data.frame", row.names = c(NA, -5L))

# Create event variable
dat$event <- with(dat, ifelse(eventA, "A", "B"))

# Create id.ordered, a factor ordered by t2
# This allows the plot to be sorted by increasing t2, if desired
dat$id.ordered <- factor(x = dat$ID, levels = order(dat$t2, decreasing = TRUE))

# Use ggplot to plot data from the dat object
ggplot(dat, aes(x = id.ordered)) +
  # Solid line for the non-interval-censored time from 0 to t1
  geom_linerange(aes(ymin = 0, ymax = t1)) +
  # Line (dotted for censored time) from t1 to t2
  geom_linerange(aes(ymin = t1, ymax = t2, linetype = as.factor(censored))) +
  # Points for the event; ifelse() moves the censored marker
  # to the middle of the interval
  geom_point(aes(y = ifelse(censored, t1 + (t2 - t1) / 2, t2), shape = event),
             size = 4) +
  # Flip coordinates
  coord_flip() +
  # Custom name for the linetype scale, otherwise it defaults
  # to "as.factor(censored)"
  scale_linetype_manual(name = "Censoring", values = c(1, 2),
                        labels = c("Not censored", "Interval censored")) +
  # Custom shape scale; change the values for different shapes
  scale_shape_manual(name = "Event", values = c(19, 15)) +
  # Main title and axis labels; labs() replaces the opts() call
  # used in older versions of ggplot2
  labs(title = "Patient follow-up") +
  xlab("Patient ID") + ylab("Days") +
  # I think the bw theme looks better for this graph,
  # but leave it out if you prefer the default theme
  theme_bw()

And the result: When making graphs with line type, color, size, etc. conditional on data, I find ggplot2 more intuitive than base graphics and even trellis, although trellis is much faster when plotting bigger data. I'm not sure whether it is preferred to place the event marker in the middle of the censored interval or at the end. I chose the middle here, to emphasize that the event did not necessarily occur near the end of follow-up.

Addendum

Once you decide on a standard plot, and if you find yourself making these plots often, it can be convenient to wrap the plot up in a function and use R's S3 object system to dispatch the plotting method with a call to the plot() generic. The following does not relate directly to the original question, but for the sake of other readers interested in adding methods to the plot() generic I'll include it here.

First, wrap the ggplot call into a plot method function:

plot.interval.censored <- function(x, title = "Patient follow-up",
                                   xlab = "Patient ID", ylab = "Days",
                                   linetype.values = c(1, 2),
                                   shape.values = c(19, 15)) {
  x$event <- with(x, ifelse(eventA, "A", "B"))
  x$id.ordered <- factor(x = x$ID, levels = order(x$t2, decreasing = TRUE))
  out <- ggplot(x, aes(x = id.ordered)) +
    geom_linerange(aes(ymin = 0, ymax = t1)) +
    geom_linerange(aes(ymin = t1, ymax = t2, linetype = as.factor(censored))) +
    geom_point(aes(y = ifelse(censored, t1 + (t2 - t1) / 2, t2), shape = event),
               size = 4) +
    coord_flip() +
    scale_linetype_manual(name = "Censoring", values = linetype.values,
                          labels = c("Not censored", "Interval censored")) +
    scale_shape_manual(name = "Event", values = shape.values) +
    labs(title = title) +
    xlab(xlab) + ylab(ylab) +
    theme_bw()
  return(out)
}

Then, add interval.censored to the class of your data object:

class(dat) <- c("interval.censored", class(dat))

Now the plot can be produced simply by calling:

plot(dat)

You can either put plot.interval.censored() into a package, or add it to your .Rprofile, in which case it will always be available when you start R on your machine. Publishing a package might be preferred, as it is easier to share with others or to install when you are not on your own machine. However, editing .Rprofile might be simpler. Hadley has a great overview of the S3 object system.
Plotting interval censored follow-up time as a line chart
Well, that was... fairly easy. Inspired by an unrelated graph in Visualize This:

plot(data$t2, type = "h", col = "grey", lwd = 2,
     xlab = "Subject", ylab = "Days Since Start")
lines(data$t1, type = "h", col = "lightskyblue", lwd = 2)
points(data$atime, pch = 19, cex = 0.75, col = "black")

I decided that adding markers for both A and B for the entire cohort was... a little crowded, so this is shown with markers for A only, but it's clearly generalizable. It's in the wrong orientation as well, but that's not a huge deal, and it's in the realm of tinkering.
Visualizing high dimensional binary data
Even if the data are binary, you can do a scaled Principal Component Analysis (PCA). By projecting the observations onto the 2D plane of the first two principal components, you get an idea of the clustering of your data. In R:

# data is your data.frame/matrix of data
pca <- prcomp(data, scale. = TRUE)
# Screeplot to see how much variance the leading components capture
plot(pca)
# Projections onto the first two principal components
# (pca$x holds the scores for the centered, scaled data)
plot(pca$x[, 1:2])
Visualizing high dimensional binary data
Sometimes for binary data Parallel Coordinate Plots can work quite well (you will still have to play around with it, but it works much better than with non-binary data).
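For instance, in Python this can be sketched with pandas' parallel_coordinates helper; the tiny binary dataset below is made up purely for illustration:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import pandas as pd
from pandas.plotting import parallel_coordinates

# Made-up high-dimensional binary data with a grouping column
df = pd.DataFrame({
    "f1": [0, 1, 1, 0, 1, 0],
    "f2": [1, 1, 0, 0, 1, 0],
    "f3": [0, 0, 1, 1, 1, 0],
    "f4": [1, 0, 1, 0, 0, 1],
    "group": ["a", "a", "b", "b", "a", "b"],
})

# One polyline per observation, one vertical axis per binary feature;
# with many identical 0/1 profiles, adding jitter or transparency helps
ax = parallel_coordinates(df, class_column="group", alpha=0.7)
```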
What is a method to calculate precisely $P(Y \geq X, Y\leq Z)$, given three independent random variables $X, Y$, and $Z$
One relatively easy approach is to consider $X$, $Y$, and $Z$ as having a joint multivariate normal distribution:

$\left[\begin{array}{c}X\\Y\\Z\end{array}\right]\sim\mathrm{MVN}\left(\left[\begin{array}{c}\mu_{X}\\\mu_{Y}\\\mu_{Z}\end{array}\right],\left[\begin{array}{ccc}\sigma_{X}^{2} & 0 & 0\\0 & \sigma_{Y}^{2} & 0\\0 & 0 & \sigma_{Z}^{2}\end{array}\right]\right)$

Let $\left[\begin{array}{c} U\\ V\end{array}\right]=\left[\begin{array}{c} X-Y\\ Z-Y\end{array}\right]=\left[\begin{array}{ccc} 1 & -1 & 0\\ 0 & -1 & 1\end{array}\right]\left[\begin{array}{c} X\\ Y\\ Z\end{array}\right]$

Then by standard results on affine transformations of multivariate normal distributions,

$\left[\begin{array}{c} U\\ V\end{array}\right]\sim\mathrm{MVN}\left(\left[\begin{array}{c} \mu_{X}-\mu_{Y}\\ \mu_{Z}-\mu_{Y}\end{array}\right],\left[\begin{array}{cc} \sigma_{X}^{2}+\sigma_{Y}^{2} & \sigma_{Y}^{2}\\ \sigma_{Y}^{2} & \sigma_{Z}^{2}+\sigma_{Y}^{2}\end{array}\right]\right)$

And since $P(Y \geq X, Y \leq Z) = P(U \leq 0, V \geq 0)$, you want the probability mass of this bivariate distribution in the second quadrant. This is not analytically solvable in general, but it is easy to compute. If $\mu_X = \mu_Y = \mu_Z$, then there is an analytical expression (from equation 73 here): $P(U \leq 0, V \geq 0) = \frac{1}{2\pi} \cos^{-1}\left(\frac{\sigma^2_{Y}}{\sqrt{(\sigma^2_{X} + \sigma^2_{Y}) (\sigma^2_{Z} + \sigma^2_{Y})}}\right)$.

Added: Here's R code to compute the probability.

# install.packages("mvtnorm")
library(mvtnorm)

mu_x <- -1.4
mu_y <- 2
mu_z <- 1.7
mu_vec <- c(mu_x - mu_y, mu_z - mu_y)

var_x <- 9
var_y <- 9
var_z <- 16
Sigma <- var_y + matrix(c(var_x, 0, 0, var_z), nrow = 2)

pmvnorm(lower = c(-Inf, 0), upper = c(0, Inf), mean = mu_vec, sigma = Sigma)
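As a sanity check on the equal-means case, here is a small Python sketch comparing the quadrant probability $\cos^{-1}(\rho)/(2\pi)$, with $\rho = \sigma^2_Y/\sqrt{(\sigma^2_X+\sigma^2_Y)(\sigma^2_Z+\sigma^2_Y)}$ as above, against a direct Monte Carlo estimate; the variances are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(42)

# Equal means, arbitrary variances
mu = 0.0
var_x, var_y, var_z = 9.0, 9.0, 16.0

# Closed form: P(U <= 0, V >= 0) = arccos(rho) / (2*pi), where rho is
# the correlation of (U, V) = (X - Y, Z - Y)
rho = var_y / np.sqrt((var_x + var_y) * (var_z + var_y))
p_analytic = np.arccos(rho) / (2 * np.pi)

# Monte Carlo estimate of P(Y >= X, Y <= Z) with independent normals
n = 1_000_000
x = rng.normal(mu, np.sqrt(var_x), n)
y = rng.normal(mu, np.sqrt(var_y), n)
z = rng.normal(mu, np.sqrt(var_z), n)
p_mc = np.mean((y >= x) & (y <= z))
```

The two values should agree to about three decimal places at this simulation size.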
What is a method to calculate precisely $P(Y \geq X, Y\leq Z)$, given three independent random variables $X, Y$, and $Z$
I might just make many draws from the distributions and estimate the rate at which the event you are interested in occurs. In R (with the means and standard deviations defined as before):

N <- 10^7
x <- rnorm(N, mu_x, sig_x)
y <- rnorm(N, mu_y, sig_y)
z <- rnorm(N, mu_z, sig_z)
mean(x <= y & y <= z)

Note the event is $Y \geq X$ and $Y \leq Z$, hence x <= y & y <= z. It is just an estimate, so maybe do it a couple of times. Quick and dirty.
Why is generalized linear model (GLM) a semi-parametric model?
A GLM isn't a semi-parametric model, but the output from typical use of GLMs can be justified with only semi-parametric assumptions. If one only assumes that the observations $Y_1, Y_2, ... Y_n$ are independent and that

$$ g(\mathbb{E}[\,Y_i|X_i=x_i\,]) = x_i^T\beta $$

then, under mild regularity conditions, solving the equations

$$ \sum_i\frac{\partial g^{-1}(x_i^T\beta)}{\partial \beta}w(g^{-1}(x_i^T\beta))(Y_i - g^{-1}(x_i^T\beta)) = \mathbf{0} $$

provides consistent estimates for parameter $\beta$. The weighting term $w$ is arbitrary, but it determines the efficiency of this approach, and the best option is to use weights inversely proportional to the variance of $Y_i$, if you know this.

How does this connect to GLMs? Well, the estimating equation above is just the score equation (i.e. the one that determines the MLE), under the assumption of a GLM. A particularly simple case of this is when we use the "canonical" link function, chosen so that part of the derivative term cancels with the inverse-variance weights, and we get

$$ \sum_i x_i(Y_i - g^{-1}(x_i^T\beta)) = \mathbf{0}, $$

which should look familiar to anyone who's studied linear regression, or logistic regression, or Poisson regression.

In general, we can view the point estimates from GLMs as MLEs under a particular fully parametric model for $Y$, or as consistent & efficient estimates resulting from assumptions on only the first and second moments of $Y$ - i.e. a semi-parametric model. Similar arguments apply to the confidence intervals these methods provide; see e.g. McCullagh and Nelder's book for the details.
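As a concrete illustration (my own sketch, not from the answer above; the data and dimensions are invented), here is what solving the canonical-link estimating equation looks like for logistic regression, using Newton's method:

```python
import numpy as np

# Hypothetical sketch: solve sum_i x_i (y_i - g^{-1}(x_i' beta)) = 0 for the
# logit link, where g^{-1} is the logistic function, by Newton's method.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))  # simulated outcome

beta = np.zeros(2)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))       # g^{-1}(x_i' beta)
    score = X.T @ (y - mu)                     # the estimating equation
    W = mu * (1.0 - mu)                        # Var(Y_i) under the mean model
    beta += np.linalg.solve(X.T @ (X * W[:, None]), score)

mu = 1.0 / (1.0 + np.exp(-X @ beta))
# At the solution, sum_i x_i (y_i - mu_i) is numerically zero.
```

These are the same iteratively reweighted least-squares updates that glm-style software performs internally.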
46,313
A Stats 101 question with a real world application
A couple of things you can do to confirm that these are really oddballs. They actually might not be, since someone has to rank #1 and #2. (1) Express the profit ratios as a multiplier (1+rateOfReturn) and plot them to see if they follow some likely distribution (you might start with a Q-Q plot for normality, and a Q-Q plot on log(1+rateOfReturn) for log-normality). There's a good chance your top 2 fall right in line with a log-normal distribution. But maybe not, and you're on to something. (2) Fit a multiple regression model (it's in the Data Analysis add-in for Excel) to predict the rate of return based on possible contributing factors, e.g. case loads, patient mix, etc. If your two hospitals are really unusual, they will have very large regression residuals.
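A minimal numeric sketch of point (2), with invented data in place of the real case loads and patient mix (one "hospital" is given an artificially inflated return to play the oddball): fit the regression, then look for large standardized residuals.

```python
import numpy as np

# Invented data: 30 hospitals, return predicted from case load.
rng = np.random.default_rng(1)
case_load = rng.uniform(50, 500, size=30)
ret = 0.05 + 1e-4 * case_load + rng.normal(0, 0.01, size=30)
ret[-1] += 0.2                                   # the planted oddball

X = np.column_stack([np.ones_like(case_load), case_load])
beta, *_ = np.linalg.lstsq(X, ret, rcond=None)   # ordinary least squares
resid = ret - X @ beta
z = resid / resid.std(ddof=2)                    # rough standardization
# The planted hospital has by far the largest |z|.
```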
46,314
A Stats 101 question with a real world application
Before using Excel for something like this, first read the Spreadsheet Addiction page.

One problem you will have with whatever analysis you do is that you first identified the top 2 as unusual and then want to test them. This will always lead to some skepticism compared to formulating the question before looking at the data.

Also you should look for additional potential explanations. You divided by admissions; did these hospitals have smaller admissions? (Dividing by a small number makes the ratio look bigger by chance.) Also look at the sizes of the hospitals: small groups vary more in aggregate statistics than large groups do. If you have several big hospitals and a few small ones, then some of the small ones will look larger or smaller due to chance and higher variation.

Given all that, there could still be some possibilities. The simplest ones come if you can make reasonable assumptions about what the distribution should look like (but normal does not seem reasonable here). If there is not an obvious distribution then you could still estimate one. One possibility is to estimate a distribution based on the 83 lower values and the fact that there are 2 higher values; the logspline package for R has one possible way to do this. Then you can generate random samples of size 85 from this distribution (this assumes all the hospitals come from the same distribution, i.e. similar size, etc.), and in each sample compare the distance between the top 2 and the rest, and see how that compares to your actual data.

Even better would be to simulate the whole process of deciding how many outliers there are and then testing those outliers, but it is less clear how to automate this and what assumptions would be needed.
46,315
A Stats 101 question with a real world application
It is difficult to use a significance test for this problem (i.e., one data vector with unusual observations). But when I describe the problem that way I have an idea: you want to know whether these two data points are clear outliers (i.e., conceptually what Mike Anderson says). The easiest way to do this is to make a boxplot of your data and see whether they can be considered outliers (i.e., whether they fall outside the whiskers). Following Tukey (1977), the outlier fences extend from the quartiles by 3 times the interquartile range (sometimes you also see 1.5 times the interquartile range; in Tukey 1977 you find both, and I tend to use the more extreme criterion to classify outliers).

If your data are approximately normal (check with either a histogram or a qq-plot) you can simply use the standard normal distribution to see whether a data point is an outlier. You need to transform your data into z-scores (i.e., each value minus the mean, divided by the standard deviation). If the z-score is extreme enough (more extreme than 1.96 for a 5% significance outlier, or more extreme than 2.58 for a 1% outlier) you can even say that this data point is significantly an outlier. This approach is described in Tabachnick and Fidell, I think in chapter 4.

UPDATE: Given that you have an N of 85, perhaps it is better to treat only really rare cases as outliers (i.e., z-scores more extreme than 3.29, which refers to $p < .001$). I would simply write in the text that there were clear outliers that were deleted and put in a footnote: these cases had extremely high z scores (above 3.29, p < .001).
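A small sketch of the z-score rule with made-up numbers (83 unremarkable profit ratios plus two planted extremes), assuming approximate normality of the bulk of the data:

```python
import numpy as np

# 83 ordinary values around 1.0 plus two planted extreme values; flag
# anything with |z| > 3.29 (two-sided p < .001). All numbers are invented.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(1.0, 0.1, size=83), [2.5, 3.0]])
z = (x - x.mean()) / x.std(ddof=1)       # value minus mean, over the SD
outliers = np.flatnonzero(np.abs(z) > 3.29)
# outliers holds the indices of the two planted values
```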
46,316
Is there a reliable recursive formula for a simple moving average (moving mean)?
Just remove the oldest value from the window and add the new one. If $$MA(t)=\frac{1}{w}\sum\limits_{i=t-w+1}^t{y_i}$$ then $$MA(t+1)=MA(t)+\frac{y(t+1)-y(t-w+1)}{w}.$$
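The recursion can be sketched as follows (my Python sketch, assuming the series has at least $w$ observations); a deque holds the current window so the departing value $y(t-w+1)$ is available:

```python
from collections import deque

# Recursive simple moving average: MA(t+1) = MA(t) + (y_new - y_old) / w.
def moving_average(ys, w):
    window = deque(ys[:w])
    ma = sum(window) / w                   # MA for the first full window
    out = [ma]
    for y in ys[w:]:
        ma += (y - window.popleft()) / w   # the recursion above
        window.append(y)
        out.append(ma)
    return out

moving_average([1, 2, 3, 4, 5], 2)   # [1.5, 2.5, 3.5, 4.5]
```

Regarding reliability: in floating point the recursion accumulates rounding error slowly, so for very long streams it is worth recomputing the window mean from scratch every so often.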
46,317
Is there a reliable recursive formula for a simple moving average (moving mean)?
/* Running (cumulative) mean of all values seen so far -- no buffer needed. */
double mean(const double F, const double C, unsigned int *n)
{
    const double m = (F * (*n) + C) / (*n + 1);
    ++*n;   /* incremented after the division: reading and modifying *n
               in a single expression would be undefined behavior in C */
    return m;
}

F is the old average, C is a new value to fold into the average, and *n is the number of values already in F. This does not need a buffer. (Note this is a cumulative mean over all values seen, not a fixed-window moving average.)
Is there a reliable recursive formula for a simple moving average (moving mean)?
double mean(const double F, const double C, unsigned int *n) { return (F*(*n)+C)/(++*n); } F is the old average number, C is a new addition to the avarage. *n is the number of values in F. This doe
Is there a reliable recursive formula for a simple moving average (moving mean)? double mean(const double F, const double C, unsigned int *n) { return (F*(*n)+C)/(++*n); } F is the old average number, C is a new addition to the avarage. *n is the number of values in F. This does not need a buffer.
Is there a reliable recursive formula for a simple moving average (moving mean)? double mean(const double F, const double C, unsigned int *n) { return (F*(*n)+C)/(++*n); } F is the old average number, C is a new addition to the avarage. *n is the number of values in F. This doe
46,318
Probability and log probability in hidden Markov models
A Markov model has probabilities for each individual transition (the transition function). In the case of a Hidden Markov Model (HMM) there is also a probability function mapping the hidden state(s) to observations. These probabilities have to be combined to produce the sequence probability, so they are multiplied (and with log probabilities, the multiplication becomes an addition).
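A toy numeric sketch of this point (all parameters invented): the probability of a particular state path with its observations is a product of start, transition, and emission terms, so in log space the products become sums.

```python
import math

# Toy two-state HMM with invented parameters.
start = {'A': 0.5, 'B': 0.5}
trans = {('A', 'A'): 0.7, ('A', 'B'): 0.3, ('B', 'A'): 0.4, ('B', 'B'): 0.6}
emit = {('A', 'x'): 0.9, ('A', 'y'): 0.1, ('B', 'x'): 0.2, ('B', 'y'): 0.8}

def path_logprob(states, obs):
    # log P(states, obs): every multiplication becomes an addition of logs.
    lp = math.log(start[states[0]]) + math.log(emit[(states[0], obs[0])])
    for t in range(1, len(states)):
        lp += math.log(trans[(states[t - 1], states[t])])
        lp += math.log(emit[(states[t], obs[t])])
    return lp

lp = path_logprob(['A', 'B'], ['x', 'y'])   # log(0.5 * 0.9 * 0.3 * 0.8)
```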
46,319
Probability and log probability in hidden Markov models
Yes, the probability of an observation sequence can be computed using the forward algorithm. Note however that the forward algorithm is an iterative algorithm where a bunch of summations and multiplications are carried out in each iteration. So the answer to your second question is yes as well.
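A minimal sketch of the forward algorithm with invented parameters (not from the answer above); each iteration performs the summations (here a matrix product) and multiplications just mentioned.

```python
import numpy as np

# Invented two-state HMM: A = transitions, B = emissions (state x symbol),
# pi = initial state distribution.
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])

def forward(obs):
    alpha = pi * B[:, obs[0]]           # initialization
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # sum over previous states, then scale
    return alpha.sum()                  # P(observation sequence)

p = forward([0, 1, 0])
```

In practice the alphas are rescaled each step, or kept in log space, to avoid underflow on long sequences.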
46,320
Is using a questionnaire score (EuroQol's EQ-5D) with a bimodal distribution as outcome in linear regression a problem?
First, categorizing continuous variables is generally a bad idea; Royston, Altman and Sauerbrei wrote a good article on why dichotomizing is bad, and the same arguments apply to more categories. Altman wrote an article on categorizing variables, but only the abstract is freely available, and I have not read the whole article.

Second, the assumptions of linear regression are not that the dependent variable is normally distributed, but that the residuals from the model are. So, before you can see if your model violates the assumptions, you need to run it and look at the results.

Third, if the residuals are not normally distributed, you have several choices:

Multinomial logistic regression with the four categories you list
Ordinal logistic regression with "mixed" excluded
Looking at each category separately
Some sort of robust regression

Before doing any of these, my impulse would be to look at the variables graphically, with density plots and possibly normal quantile plots.
46,321
Is using a questionnaire score (EuroQol's EQ-5D) with a bimodal distribution as outcome in linear regression a problem?
One question that you ought to ask first is whether or not using a weighted score is correct for the kind of analysis that you want to do. There is a discussion of that in Parkin, D., Rice, N. and Devlin, N. (2010) Statistical analysis of EQ-5D profiles: Does the use of value sets bias inference? Medical Decision Making. 30(5), 556-565 doi: 10.1177/0272989X09357473 The Paretian classification is described and discussed in Devlin, N., Parkin, D. and Browne, J. (2010) Patient-reported outcomes in the NHS: New methods for analysing and reporting EQ-5D data. Health Economics. 19(8), 886-905.DOI: 10.1002/hec.1608 If using the score is OK, what really matters is the distribution of the regression residuals, not the score. If you are using the Paretian classification, you are correct that an ordered model can't be used. 'Tariff' is a slightly silly label, but for historical reasons has been used in this context. Just means a set of scores attached to categorical EQ-5D health states.
46,322
Is using a questionnaire score (EuroQol's EQ-5D) with a bimodal distribution as outcome in linear regression a problem?
If you have some predictor variables (which I'm assuming you have, as you mention regression in your question), I'm wondering if an ordinal logistic regression, using the Paretian measure as the dependent variable (which appears to be ordered categories of pre- versus post- differences), is the best way forward. I love the UCLA websites for the clarity of their explanations of various methods; here is their outline of ordinal logistic regression using SPSS and here is their example for Stata. As you can see from these sites, you will need to verify that the proportional odds assumption is met with your data. Ordinal logistic regression is an accepted statistical method, and Professor Agresti has written about it in his books on categorical data analysis. I recommend buying any of his books if your work is taking you down categorical data analysis paths.
46,323
Sphering data with SVD components of covariance matrix
I think I figured out the answer after seeing cardinal's suggestion and reading the Wikipedia page on whitening. Assuming $X$ is centered, so that $cov(X) = E[XX^T]$, and writing the eigendecomposition $\hat{\Sigma} = UDU^T$:

$cov(X^*) = E[X^*X^{*T}]$

$= E[D^{-\frac{1}{2}}U^TXX^TUD^{-\frac{1}{2}T}]$

$D^{-\frac{1}{2}T} = D^{-\frac{1}{2}}$ because it is a diagonal matrix

$= D^{-\frac{1}{2}}U^TE[XX^T]UD^{-\frac{1}{2}}$

$= D^{-\frac{1}{2}}U^T\hat{\Sigma}UD^{-\frac{1}{2}}$

$= D^{-\frac{1}{2}}U^TUDU^TUD^{-\frac{1}{2}}$

$U^TU = I$ because the columns of $U$ are orthonormal ($U$ is orthogonal).

$= D^{-\frac{1}{2}}DD^{-\frac{1}{2}}$

$= I$
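The derivation can be checked numerically; here is a sketch with random correlated data (rows are variables, columns observations):

```python
import numpy as np

# Draw correlated, centered data, then whiten with X* = D^{-1/2} U^T X,
# where Sigma_hat = U D U^T is the eigendecomposition of the sample covariance.
rng = np.random.default_rng(3)
X = np.array([[2.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.5, 0.5, 1.0]]) @ rng.normal(size=(3, 10000))
X -= X.mean(axis=1, keepdims=True)

Sigma = X @ X.T / X.shape[1]
D, U = np.linalg.eigh(Sigma)                 # Sigma = U diag(D) U^T
X_star = np.diag(D ** -0.5) @ U.T @ X
cov_star = X_star @ X_star.T / X.shape[1]    # should be the identity
```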
46,324
Temporal analysis of variation in random effects
New answer: 2020

"the main interest lies in the changes in hospital-level variation"

You have 15 years of data, with over 100 hospitals and around 100,000 observations per year, so an average of around 1,000 observations per hospital per year. I think there is only one approach that will answer the research question, and that is to divide the data into subsets. How many subsets will depend on the frequency at which you wish to track the changes in variation: yearly would be one option, quarterly another, or even monthly. You would then fit a model on each subset of the data, with random intercepts for hospitals. Since your outcome is binary, this would be a generalised linear mixed model, with binomial family and logit link. You would use the same model formula for each subset. Then you simply extract the hospital-level variation for each time period, i.e. the variance or standard deviation of the hospital intercept, and present that in whatever way is appropriate - graphically would be my choice. If you think there is likely to be a seasonal component, then you probably want to use quarterly subsets.

Another approach (mentioned in another answer) is to model the time:hospital interaction as random. Here you would fit a model on the whole dataset, but additionally with random intercepts for the time variable interacted with hospital. Again you can choose whatever period for the time variable makes the most sense. You could also fit a model with just the hospital as random and use a likelihood ratio test to determine which model fits best. However, this will not answer the question of how the hospital-level variation changes over time. The same applies to using a correlation structure, such as AR1, because this also will not say anything about changes in hospital-level variation.
46,325
Temporal analysis of variation in random effects
Some more information is needed to figure out the best solution here, so I'm simply answering a number of scenarios with example R code.

Modeling the outcome
If the outcome is binary, use family = binomial(). If it is count data, use family = poisson(), since it's a fixed time interval. You could also consider aggregating the binary data to counts and then using Poisson. I'll assume binomial from here on.

Modeling a fixed hospital:time interaction
If you have few hospitals (say, fewer than five), there will be very little information to infer the random-effects hyperparameters. In that case, they may just be modeled as fixed:
fit_full = glm(status ~ time * hospital, data = df, family = binomial())
Then the resulting inference on the time:hospital interaction could be of interest.

Modeling a random hospital:time interaction
One of the only practical implications of modeling a term as random is that it applies shrinkage. That is, data points far from the model's prediction are regarded as partially random fluctuations, with the true value being closer to the mean. Read more here. Using the same model as above, but allowing a random slope for each hospital:
fit_full = lme4::glmer(status ~ time + (1 + time|hospital), data = df, family = binomial())
To test the random slope, you can do a likelihood ratio test (LRT) by comparing to a (nested) model that does not contain this term:
fit_null = lme4::glmer(status ~ time + (1|hospital), data = df, family = binomial())
anova(fit_full, fit_null)
Personally, I have a preference for Bayesian inference, and you could use the brms package, which is much like glmer. As a quick fix, you can also compute a BIC-based Bayes factor (cf. Wagenmakers et al., 2007):
exp((BIC(fit_full) - BIC(fit_null))/2)

Modeling time series
I know of no package other than brms (and perhaps nlme::lme) that can do the above while also modeling autocorrelation. brms may take hours to fit, though.
For AR(1), it would be something like:
fit = brms::brm(status ~ time + (1 + time|hospital), data = df, family = bernoulli(), autocor = cor_ar(~1, p = 1))
If your dates only come in integers (2013, 2014, 2015, 2016, 2017), then there may be too little information to estimate the autoregressive coefficient(s), and you may consider leaving it out. You do have a lot of data, so this may not be necessary. Your time variable would need a finer resolution for an autoregressive model to be identifiable.
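Since the question is about hospital-level variation, the estimated random-effect spread can be read off the fit directly. A minimal sketch, assuming the fit_full object from above:

```r
## Random-effect (co)variance estimates from the fitted GLMM.
vc <- as.data.frame(lme4::VarCorr(fit_full))

## 'sdcor' holds standard deviations (and correlations); the hospital
## intercept SD is the between-hospital variation on the log-odds scale.
vc[, c("grp", "var1", "var2", "sdcor")]
```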
46,326
Temporal analysis of variation in random effects
You've really been thrown in the deep end! It doesn't seem like a time series problem, but it does seem as though it could naturally be modelled as a multilevel regression. As a first step (after the usual data exploration etc., of course) I would probably fit a generalised linear mixed-effects model. To include a time component, you could then add a time variate (1 = 2003, 2 = 2004, etc.). There are probably better ways to build time into it - I imagine others will have a better idea on that.
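As a sketch of that first step on simulated stand-in data (all column names and effect sizes here are invented for illustration):

```r
library(lme4)

set.seed(1)
## 20 hospitals, 200 patients each, binary outcome, years 2003-2017.
hosp <- factor(rep(1:20, each = 200))
year <- sample(2003:2017, length(hosp), replace = TRUE)
u    <- rnorm(20, sd = 0.5)                 # true hospital effects
df   <- data.frame(
  hospital = hosp,
  time     = year - 2003 + 1,               # 1 = 2003, 2 = 2004, ...
  status   = rbinom(length(hosp), 1,
                    plogis(-1 + 0.02 * (year - 2003) + u[hosp]))
)

## GLMM: fixed effect of time, random intercept per hospital.
fit <- glmer(status ~ time + (1 | hospital), data = df, family = binomial())
summary(fit)
```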
46,327
How to treat holidays when working with time series data?
There is little detail here, so here is a generic answer. First of all, check whether this problem exists: look at the residuals during holidays and test whether there is any significant problem with accuracy there. Your model might already take holidays into account (for instance through some other predictors), or they may just be irrelevant to what you are trying to predict at your current accuracy level. If the difference exists, try adding information about the holidays to the variables in the model; you can start with a binary isHoliday, then think about how to extend this to something more complex (e.g. add some adjacent days where it is handy to take leave to enlarge the break, or think of some continuous measure of "holidayness"). If your model is too dumb to use such variables, consider making two models -- one for normal days and one for holidays. Finally, if it happens that you won't have to deal with predictions during holidays, or it is a lesser problem to generate junk then, you may just throw away this part of the data.
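The first two steps could be sketched as follows (df, its date, resid, y and x columns, and the holidays date vector are all placeholder names):

```r
## 1. Does the problem exist? Compare residuals on holidays vs. other days.
df$isHoliday <- df$date %in% holidays
t.test(resid ~ isHoliday, data = df)

## 2. If it does, hand the flag to the model as an extra predictor.
fit <- lm(y ~ x + isHoliday, data = df)
```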
46,328
How to treat holidays when working with time series data?
When dealing with electricity data, I think the simplest option is to treat holidays as weekends (e.g. you have a dummy variable where 1 is a normal weekday, and 0 is a weekend or holiday). A more complicated option would be to have separate dummy variables (0/1) for weekday vs. weekend and normal day vs. holiday. The holidayNERC function in the timeDate package is extremely useful in this situation. holidayNYSE is useful too.
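For example, the simple dummy could be built with timeDate like this (a sketch; dates is assumed to be a Date vector covering the series):

```r
library(timeDate)

## NERC holidays for the years spanned by the series.
yrs  <- unique(as.integer(format(dates, "%Y")))
hols <- as.Date(holidayNERC(year = yrs))

## 1 = normal weekday, 0 = weekend or holiday.
wd        <- as.integer(format(dates, "%u"))   # 1 = Monday ... 7 = Sunday
isWorkday <- as.integer(wd <= 5 & !(dates %in% hols))
```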
46,329
How to treat holidays when working with time series data?
I have done a lot of work with hourly data and have concluded that a two-pronged approach seems to deliver useful models. First of all, we model the daily totals, taking into account any day-of-the-week effects and any fixed day-of-the-month effects that can be identified, along with any holiday effects. Each holiday can have its own lead, contemporaneous and lag structure, and there may be an accompanying Friday-before-a-Monday-holiday or Monday-after-a-Friday-holiday effect. This model may also include local time trends, level shifts and of course one-time-only events (pulses), and a possible need for either or both variance changes or parameter changes over time. Now, with this model in place, we can make daily forecasts for the future periods.

The second step is to construct 24 individual hourly models reflecting the incorporation of the daily total series. The reason for the 24 separate hourly models is the fact that consumption patterns during the day (intra) are often quite different across days (inter). Each of the hourly models could have ARIMA structure reflecting historical usage for that hour and, of course, level shifts, local trends and pulse effects. Individual hourly demand may or may not reflect daily total demand, so one needs to pay attention to that possibility. Since forecasts exist for the daily totals, these can then be used to predict the hourly values.
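A rough skeleton of the two prongs (daily and hourly are hypothetical data frames; the real daily model would carry the holiday, day-of-week and intervention terms described above):

```r
## Prong 1: model the daily totals (a seasonal ARIMA as a simple stand-in
## for the full model with calendar/holiday regressors and interventions).
daily_fit  <- arima(daily$total, order = c(1, 0, 1),
                    seasonal = list(order = c(1, 0, 0), period = 7))
daily_fcst <- predict(daily_fit, n.ahead = 7)$pred

## Prong 2: 24 hourly models, each tied to the daily total series.
hourly_fits <- lapply(0:23, function(h) {
  d <- subset(hourly, hour == h)
  lm(load ~ daily_total, data = d)   # plus hour-specific ARIMA structure etc.
})
```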
46,330
How to treat holidays when working with time series data?
I've always found it tough to handle the multiple (annual, weekly, daily) seasonality of electricity load/price data using time series methods. I use an approach (very) similar to IrishStat's, except that I forecast the daily peak (MW) and total energy (MWh) using machine learning methods (as opposed to time series), and then construct a linear regression model for each hour (1-24) of the day. The forecasted peak/total energy are features in each of the hourly models. The rest of the features are basically the same for all 26 models, with day of week, holiday and season represented as dummy variables. Obviously, weather and lagged dependent-variable values are also important features. As an aside, six months of data is really not ideal regardless of your approach, because this is clearly a process with an element of annual seasonality. Normally I'd say you need three years to properly train and test your model. With less than a year, you can't even fully assess those annual seasonality effects. If you do go the time series route, just dumping the holidays will be a problem if you have a weekly seasonality term, e.g. if you try to explain the following Thursday's load as a function of the load on Thanksgiving.
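One of the 24 hourly regressions might be sketched like this (all column names here are illustrative placeholders):

```r
## Hour-h model: forecast daily peak and total energy enter as features,
## alongside calendar dummies, weather and the lagged dependent variable.
d_h   <- subset(hourly, hour == h)
fit_h <- lm(load ~ peak_fcst + energy_fcst + dow + is_holiday + season +
              temperature + lag24_load,
            data = d_h)
```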
46,331
Is it legal to publish the code of a published algorithm? [closed]
As mentioned by the OP, this is probably not the right place for expert advice on legal issues, but we all have to live with such things as software licenses and try not to get into trouble, so here are a few things that I have learned. On the NR homepage you can find the license information and information on redistribution. This is solely a copyright issue regarding the source code provided in the books. The algorithms themselves are not copyrighted, and to my understanding you can't get into trouble by implementing and sharing an algorithm that happens to be in NR, unless your implementation is derived from the source code distributed with NR. Hmm, nothing is free ... There is almost always a copyright holder. If the paper is published, the copyright may be transferred to the journal, but in some cases the author retains the right to distribute the paper via his or her homepage, say. Patent law is a completely different ball game, but I actually don't know of any examples related to statistical and machine learning algorithms where patent protection of the algorithm was a problem. Hence, to me, the most important aspect regarding algorithms in papers, whether published in journals or on the author's homepage, is not to violate the copyright. A trademark is a third thing. To my understanding it protects only the name or symbol. So you can't just implement a program and call it "Random Forests", but you can implement and distribute the random forest algorithm. I have no idea whether Frank Masci broke the law, but it is a copyright issue as I see it. Who holds the copyright, and which rights did the copyright holder give to others? From my point of view, though violations of patent rights might be serious, the more important issue for the average statistician is that of copyright. If you implement an algorithm from a paper from scratch and cite the paper, I don't see any obvious problem with distributing the implementation regardless of the medium.

But you might think about what the best way is for you to use your copyright on the implementation. I don't know anything about how blogs are generally copyrighted, but if the copyright is transferred to the blog owner automatically, it might, in principle, be a bad idea to post hours of valuable implementation work on a blog and thereby effectively give up the copyright of the work to somebody else. If you, on the other hand, modify an existing implementation, you could run into problems with the copyright license. If the license for the original implementation is the GPL, the license for the redistribution has to be the GPL. Hence, you have to distribute in such a way that the distribution can be under the GPL. This works the other way too: if you want to distribute an implementation under the GPL but have to rely on a library that is not distributed under the GNU license, then you might not be able to include the library in your distribution - even if the library is open source and "free". The library might be distributed under a copyright license that is incompatible with the GPL.
46,332
Is it legal to publish the code of a published algorithm? [closed]
The classic example of a patented algorithm is RSA, by the way. Rules for patents on algorithms are rather nebulous and changing quite a bit. In practice, implementations are okay, but distribution (including commercialization and free release) is where one tends to run afoul of things. What's more, release can be constrained regardless of patent and copyright - Americans cannot export source code for cryptographic algorithms to certain prohibited countries, for instance, and the same may be true of many other countries. A distinction has to be made between copyright and patents. Copyright is unlikely to affect you, and I believe the primary thing to consider is whether or not a patent may apply (and it need not be patented already: some countries allow some time after disclosure for the innovator to patent their work). Copyrights tend to affect distribution rights for the original materials (so you can't just republish an article - the journal or the author owns the copyright), while patents affect distribution of the implementation of the idea. If there's no patent and the time for patenting has elapsed, then you should be good to go. As I said, patents are nebulous: in some jurisdictions it isn't all that easy to patent algorithms. In some it is, but the laws and interpretations of those laws are changing. I hope this helps in focusing your analysis. I wouldn't worry about the case-by-case approach (e.g. whether it is published in this type of journal or that type), and would instead focus on who holds the patent rights and how they are exercising those rights. It's also courteous to talk with the original innovator. If they intend to patent or want to collaborate, you are better off working with them. They can help suggest ideas for implementations, and they may encourage others to use your implementation as a reference.
46,333
Is it legal to publish the code of a published algorithm? [closed]
My understanding is that publication disqualifies an invention for patenting. Thus any algorithm that has been published can be used freely. That does not apply to the code itself! If you learn the algorithm, understand it, teach it to another person without ever letting them see the code in the original, and they implement it, it is completely unencumbered. That's the gold standard of clean room implementation (used for legally safe reverse engineering), but if you read the paper and write your own version, you're probably fine.
46,334
How to plot results from text mining (e.g. classification or clustering)?
If you're doing classification, this should be fairly straightforward. Just select some aggregate measure of performance (e.g. accuracy), and plot a distribution of that measure for different random initializations of k-means. This gives you some information about how well the algorithm would perform on average. If you're doing clustering (i.e. unsupervised clustering), then you can make a pretty picture of the clusters using a vector-compression technique. A simple technique might be to pick a point in space, and plot each point in your dataset by its Euclidean distance from that point and its category. You could also use more advanced techniques like PCA. If you're interested in finding out which features are good predictors, I suggest running something like a maximum entropy classifier on the data, or one of many feature selection algorithms. These techniques will provide you with a weight for each feature, indicating its importance in predicting the groupings.
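A self-contained sketch of the first idea, using the spread of a fit-quality measure over random k-means restarts (total within-cluster sum of squares stands in for accuracy here, since plain k-means carries no labels):

```r
set.seed(1)
X <- scale(iris[, 1:4])   # any numeric feature matrix

## Fit quality across 50 random initializations of k-means.
scores <- replicate(50, kmeans(X, centers = 3, nstart = 1)$tot.withinss)

## Distribution of the measure over restarts.
hist(scores, main = "k-means fit across random starts",
     xlab = "total within-cluster SS")
```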
46,335
How to plot results from text mining (e.g. classification or clustering)?
I use two different techniques for projecting all the data points in an n-dimensional space down onto two dimensions: PCA or MDS (multidimensional scaling). I use PCA if I have an n-dimensional vector that corresponds to each data point. I use MDS if it's more convenient to generate a distance matrix than exact n-dimensional locations of individual items. Once the points are projected into 2-space, use color and/or shape to illustrate the clusters. If you have a sparsely-connected distance matrix (i.e., most of the entries have no distance and are unconnected), then you could present the data as a sparsely-connected undirected graph. There are hundreds of appropriate graph-layout algorithms you could use. This method isn't terribly useful for identifying the structure of the clusters, but it does at least give you a pretty picture of all your points.
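The PCA projection down to two dimensions is a few lines with an SVD. A numpy sketch with made-up data (MDS would instead start from the distance matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))   # 100 items, 10-dimensional feature vectors
Xc = X - X.mean(axis=0)          # center each feature

# SVD of the centered data: rows of Vt are the principal directions,
# ordered by decreasing variance explained
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
coords2d = Xc @ Vt[:2].T         # project onto the top two components

print(coords2d.shape)            # (100, 2) -- ready to scatter-plot by cluster
```

The first column of `coords2d` carries at least as much variance as the second, so a scatter plot of the two columns is the standard 2-D PCA view.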
46,336
Assessing error of a spatial interpolation algorithm
One option may be to split the original data into two subsets: one that will be used to interpolate values and one that will be used to validate the interpolation results. The error is then estimated by comparing interpolated values at the validation point locations with the actual validation point values. Note that the appropriateness of this approach is largely driven by the sample point density and distribution vis-à-vis the type and scale of the underlying process you are attempting to model.

Edit: This is an expansion of the original answer following @whuber's comments. As noted by @whuber, one disadvantage of the aforementioned technique is the degradation in the interpolation's quality when a subset of the sampled sites is removed. A solution to this problem is described in Maciej Tomczak's 1998 paper. The author applies a cross-validation technique to estimate the optimal interpolation parameters, then uses a jackknife method to estimate the predicted value and its uncertainty at an unsampled site. A brief summary of the techniques described in the paper, with a simple example in R, follows.

In the cross-validation (leave-one-out) method, one data point $s_i$ is removed from the point data set $S$ and its interpolated value is computed using the other $(n-1)$ points of $S$. The interpolated value is then compared with the actual value at $s_i$. This process is repeated for all data points in $S$. The performance of the interpolator is evaluated via the root mean of squared residuals (RMSE). The RMSE can be computed for different interpolation parameters (or different interpolators) and compared; the interpolator with the lowest RMSE is usually preferred.

$$RMSE=\sqrt{\frac{\sum_{i=1}^n (Z_{i(int)} - Z_{i})^2}{n}}$$

Once the interpolation technique is chosen, the jackknife is used to estimate the unknown value $Z_j$ at an unsampled location $s_j$ along with its confidence interval. The method, as implemented by the author, first interpolates the value $Z_{all}$ at $s_j$ using all $n$ sample points of $S$, then computes a pseudo-value $Z_i^*$ from the value $Z_{-i}$ interpolated at $s_j$ using the $(n-1)$ points remaining when sample point $s_i$ is omitted (so $Z_i^*$ is computed once for each omitted point):

$$Z_{i}^* = n Z_{all} - (n-1) Z_{-i}, \qquad i = 1,2,\ldots,n$$

The jackknifed estimator of $Z_j$ at $s_j$ is the average of the pseudo-values:

$$Z_{J} =\frac{ \sum_{i=1}^{n} Z_{i}^*}{n}$$

Its standard error is

$$\sigma_J=\sqrt{\frac{1}{n(n-1)}\sum_{i=1}^{n}(Z_i^*-Z_J)^2}$$

so the estimated value is $Z_J \pm t_{(\alpha/2,\,n-1)}\,\sigma_J$.

A simple R example follows (note that the leave-one-out loop must accumulate the squared residuals):

X <- c(100.0,19.4,9.0,64.4,39.4,50.7,99.0,44.4,82.5,55.9,
       56.2,54.0,14.9,54.8,35.5,34.6,15.2,32.0,23.8,87.4,
       49.4,77.9,63.9,14.8,5.9,45.3,95.6,10.3,59.5,47.2,
       26.7,46.5,41.3,62.9,34.2,3.7,57.7,78.5,73.1,28.3,
       13.1,49.4,24.2,99.2,76.3,93.2,71.6,28.8,49.4,94.0,
       84.4,0.0,90.3,48.4,44.8,5.1,29.8,27.7,93.8,25.6)
Y <- c(0.0,1.9,3.2,6.0,12.4,13.3,13.7,15.3,15.6,
       16.0,18.0,22.3,22.9,23.0,24.3,26.3,26.6,27.4,
       31.6,33.1,33.8,35.0,35.2,42.0,44.9,45.3,45.8,
       48.8,50.5,58.3,60.2,60.8,60.8,61.5,64.4,64.6,
       65.5,69.2,69.3,69.4,71.2,73.3,78.5,80.1,83.6,
       84.7,84.7,91.1,92.1,92.7,93.0,93.7,93.8,95.1,
       96.0,96.4,96.7,97.6,99.8,100.0)
Z <- c(209,478,424,817,866,720,327,833,731,1488,562,868,318,496,488,
      1146,369,735,593,778,771,304,538,669,368,474,391,346,872,556,
       348,765,779,809,357,720,416,544,338,560,455,555,340,307,589,
       280,745,452,1116,442,659,343,385,655,828,490,425,665,276,333)

library(akima)
n <- 60                       # Number of points in S

# Cross-validation (leave-one-out) to compare interpolators
P.spl    <- vector()
sum.dif2 <- 0
for (i in 1:n) {
  P.spl[i] <- interpp(X[-i], Y[-i], Z[-i], X[i], Y[i], linear=FALSE, extrap=TRUE)$z
  sum.dif2 <- sum.dif2 + (P.spl[i] - Z[i])^2
}
rmse <- sqrt(sum.dif2 / n)
rmse

# Jackknife to estimate the uncertainty in the interpolated value
Zi <- numeric()
jx <- 40                      # X coordinate of the location sj where Zj is estimated
jy <- 40                      # Y coordinate of the location sj where Zj is estimated
Zall <- interpp(X, Y, Z, jx, jy, linear=TRUE)$z      # Interpolated value from all si
for (i in 1:n) {
  Z1 <- interpp(X[-i], Y[-i], Z[-i], jx, jy, linear=TRUE)$z  # Value from (n-1) points
  Zi[i] <- n * Zall - (n - 1) * Z1                   # Pseudo-value at sj
}

# Jackknifed estimator of Z at location sj
Zj <- sum(Zi) / n                                    # Estimated value Zj
sig.j <- sqrt(1/(n*(n-1)) * sum((Zi - Zj)^2))        # Estimated standard error
t.value <- qt(1 - 0.05/2, n - 1)
ci <- t.value * sig.j                                # Half-width of the 95% CI for Zj
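A quick way to sanity-check the pseudo-value formula is to apply it to an estimator whose jackknife is known in closed form: for the sample mean, the pseudo-values reduce to the observations themselves, so the jackknifed estimate equals the ordinary mean and $\sigma_J$ equals the usual standard error. A numpy sketch (my own illustration, not from the paper):

```python
import numpy as np

x = np.array([209.0, 478.0, 424.0, 817.0, 866.0, 720.0, 327.0, 833.0])
n = len(x)

est_all = x.mean()                         # estimate from all n points
# Pseudo-values: Z_i* = n*Z_all - (n-1)*Z_{-i}
pseudo = np.array([n * est_all - (n - 1) * np.delete(x, i).mean()
                   for i in range(n)])

est_jack = pseudo.mean()                   # jackknifed estimator
se_jack = np.sqrt(np.sum((pseudo - est_jack) ** 2) / (n * (n - 1)))

# For the mean, pseudo-values equal the data points, so est_jack == x.mean()
# and se_jack == x.std(ddof=1) / sqrt(n)
print(est_jack, se_jack)
```

The same three lines (all-sample estimate, leave-one-out estimates, pseudo-value average) are exactly what the R loop above does with the interpolator in place of the mean.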
46,337
Assessing error of a spatial interpolation algorithm
I have brief answers to the two points in your question, and encourage you to see the reference below for details. Most surface estimation algorithms estimate a point cloud P' to approximate the input set P. The point-to-point distance between each estimated point and the corresponding input point may suffice for your error metric. You are also seeking a metric for how smooth the surface estimate is; the literature has several, with popular choices being curvature and surface variation. Here is an excellent reference relevant to both points in your question: Pauly et al., Efficient Simplification of Point-Sampled Surfaces, IEEE Visualization 2002.
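Surface variation, as I understand it from that paper, is the smallest eigenvalue of a neighborhood's covariance matrix divided by the sum of all three eigenvalues: near 0 for a locally planar neighborhood, larger for rough or noisy ones. A numpy sketch with toy point sets (my paraphrase of the metric, not the authors' code):

```python
import numpy as np

def surface_variation(nbhd):
    """sigma = lambda_min / (lambda_0 + lambda_1 + lambda_2) for a 3-D point neighborhood."""
    centered = nbhd - nbhd.mean(axis=0)
    cov = centered.T @ centered / len(nbhd)
    eig = np.linalg.eigvalsh(cov)        # eigenvalues in ascending order
    return eig[0] / eig.sum()

rng = np.random.default_rng(2)
xy = rng.uniform(-1, 1, size=(50, 2))
flat = np.column_stack([xy, np.zeros(50)])              # points on the z=0 plane
noisy = np.column_stack([xy, rng.normal(0, 0.5, 50)])   # strongly non-planar

print(surface_variation(flat), surface_variation(noisy))
```

The flat patch gives a variation of essentially zero, while the noisy patch does not, which is what makes the quantity usable as a smoothness score per neighborhood.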
46,338
How to apply a soft coefficient constraint to an OLS regression?
Differentiating the objective function with respect to $b$ and setting the result to $0$ shows that the solution of the modified problem is obtained by solving $$(X'X + \lambda I)b = X'y + \lambda\tilde{\beta}.$$ If your software won't do that directly, you can get the same result with this trick: Include a column of 1's in the dataset to model the constant explicitly, and do the fitting without a constant term. For $p$ independent variables (including the constant), append $p$ fake observations: for fake case $i$, $i=1,\ldots,p$, set $X_i = \sqrt{\lambda}$, $y = \sqrt{\lambda}\tilde{\beta}_i$, and all other $X_j=0$. (Of course we require $\lambda \ge 0$.) Although you can obtain the solution $\hat{b}$ this way, I doubt any of the statistics coming out of this fit will be meaningful.
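The fake-data trick is easy to verify numerically: appending the rows $\sqrt{\lambda}\,I$ with targets $\sqrt{\lambda}\,\tilde{\beta}$ and running plain least squares minimizes $\|y-Xb\|^2 + \lambda\|b-\tilde{\beta}\|^2$, reproducing the closed-form solution. A numpy sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, lam = 50, 3, 2.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # explicit constant column
beta_tilde = np.array([1.0, 0.5, -0.5])                         # values to shrink toward
y = X @ np.array([2.0, 1.0, 0.0]) + rng.normal(size=n)

# Closed form: (X'X + lam*I) b = X'y + lam*beta_tilde
b_closed = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y + lam * beta_tilde)

# Same solution via p fake observations appended to the data
X_aug = np.vstack([X, np.sqrt(lam) * np.eye(p)])
y_aug = np.concatenate([y, np.sqrt(lam) * beta_tilde])
b_fake = np.linalg.lstsq(X_aug, y_aug, rcond=None)[0]

print(np.allclose(b_closed, b_fake))   # True
```

The normal equations of the augmented fit are $X'X + \lambda I$ on the left and $X'y + \lambda\tilde{\beta}$ on the right, which is exactly the closed form.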
46,339
How to apply a soft coefficient constraint to an OLS regression?
This looks a lot like ridge regression. The lm.ridge function in the MASS package for R does ridge regression, and the ols function in the rms package also does penalized regression. If neither of those does exactly what you want, they could be used as a starting point. You could also look at the lasso and lars algorithms (there are packages for these in R as well), which use an L1 penalty term instead of L2.
46,340
Fitting a probability distribution to zero inflated data in R
You can use the Vuong test in the pscl package to compare non-nested models. Here is an example:

> m1 <- zeroinfl(i.vec ~ 1 | 1, dist = "negbin")
> summary(m1)

Call:
zeroinfl(formula = i.vec ~ 1 | 1, dist = "negbin")

Pearson residuals:
    Min      1Q  Median      3Q     Max
-0.3730 -0.3730 -0.3730 -0.2503  7.3544

Count model coefficients (negbin with log link):
            Estimate Std. Error z value Pr(>|z|)
(Intercept)   1.1122     0.3831   2.903  0.00369 **
Log(theta)   -1.9256     0.2839  -6.784 1.17e-11 ***

Zero-inflation model coefficients (binomial with logit link):
            Estimate Std. Error z value Pr(>|z|)
(Intercept)   -9.815     96.462  -0.102    0.919
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Theta = 0.1458
Number of iterations in BFGS optimization: 579
Log-likelihood: -80.51 on 3 Df

> m2 <- zeroinfl(i.vec ~ 1 | 1, dist = "poisson")
> summary(m2)

Call:
zeroinfl(formula = i.vec ~ 1 | 1, dist = "poisson")

Pearson residuals:
    Min      1Q  Median      3Q     Max
-0.7242 -0.7242 -0.7242 -0.4860 14.2795

Count model coefficients (poisson with log link):
            Estimate Std. Error z value Pr(>|z|)
(Intercept)  2.05911    0.08205    25.1   <2e-16 ***

Zero-inflation model coefficients (binomial with logit link):
            Estimate Std. Error z value Pr(>|z|)
(Intercept)   0.4561     0.2933   1.555     0.12
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Number of iterations in BFGS optimization: 11
Log-likelihood: -233.7 on 2 Df

> vuong(m1, m2)
Vuong Non-Nested Hypothesis Test-Statistic: 1.946095
(test-statistic is asymptotically distributed N(0,1) under the
 null that the models are indistinguishible)
in this case: model1 > model2, with p-value 0.02582165

The Vuong test also suggests that the zero-inflated negative binomial provides a better fit to your data than the ordinary negative binomial (not shown here, but you can fit both models and compare them).
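The uncorrected Vuong statistic itself is simple: with per-observation log-likelihood differences $m_i = \log f_1(y_i) - \log f_2(y_i)$, the statistic is $\sqrt{n}\,\bar{m}/s_m$, compared against N(0,1). A Python sketch of just that computation, using synthetic log-likelihoods (not your data; pscl's vuong additionally reports AIC/BIC-corrected versions):

```python
import numpy as np

def vuong_stat(ll1, ll2):
    """Vuong z-statistic from per-observation log-likelihoods of two non-nested models."""
    m = np.asarray(ll1) - np.asarray(ll2)
    n = len(m)
    return np.sqrt(n) * m.mean() / m.std(ddof=1)

rng = np.random.default_rng(4)
# Fake per-observation log-likelihoods where model 1 fits slightly better
ll1 = rng.normal(-1.6, 0.3, size=60)
ll2 = ll1 - rng.normal(0.1, 0.2, size=60)

z = vuong_stat(ll1, ll2)
print(z)   # positive z favors model 1
```

A positive statistic favors the first model, a negative one the second, and values near zero mean the models are statistically indistinguishable.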
46,341
Fitting a probability distribution to zero inflated data in R
I don't think you necessarily need to inflate zeros. Your data seem quite consistent with a negative binomial:

> library(MASS)
> table(rnegbin(49, mu=3.1, theta=0.075))

 0  1  2  3  5 18 20 21 31 61
36  4  2  1  1  1  1  1  1  1

> table(i.vec)
i.vec
 0  1  2  3  4  6 11 44 63
30  8  5  1  1  1  1  1  1

So let's get some estimates:

> mean(i.vec)
[1] 3.040816
> theta.ml(i.vec, 3.041)
[1] 0.145777
attr(,"SE")
[1] 0.04136887

So let's take a look at mu = 3.041 and theta at, say, 0.14 (there's lots of uncertainty in theta here). Here are three random samples from that distribution:

> table(rnegbin(49, mu=3.041, theta=0.14))

 0  1  2  3  4  5  6  7  8 18 49
30  5  1  3  2  2  1  2  1  1  1

> table(rnegbin(49, mu=3.041, theta=0.14))

 0  1  2  3  4  7  9 15 29 31 47 56
33  2  4  1  1  1  1  2  1  1  1  1

> table(rnegbin(49, mu=3.041, theta=0.14))

 0  1  2  3  4  7  9 12 16 48 66
36  4  1  1  1  1  1  1  1  1  1

They look similar enough to your data. A negative binomial seems at least plausible. You may like to play with the functions in MASS.
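If you want the same $(\mu, \theta)$ parameterization outside R: MASS's rnegbin maps onto the standard $(n, p)$ negative binomial via $n = \theta$ and $p = \theta/(\theta+\mu)$, which gives mean $\mu$ and variance $\mu + \mu^2/\theta$. A numpy sketch (illustrative only; the helper name is my own):

```python
import numpy as np

def rnegbin(size, mu, theta, rng):
    """Negative binomial draws in MASS's (mu, theta) parameterization."""
    p = theta / (theta + mu)
    # numpy's Generator accepts a non-integer dispersion parameter n
    return rng.negative_binomial(theta, p, size=size)

rng = np.random.default_rng(5)
draws = rnegbin(100_000, mu=3.041, theta=0.14, rng=rng)
print(draws.mean())   # sits near mu = 3.041
```

With theta this small the variance is huge relative to the mean (roughly $3 + 3^2/0.14 \approx 69$), which is exactly why a handful of very large counts alongside many zeros is unsurprising.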
46,342
Fitting a probability distribution to zero inflated data in R
I'm not sure you can do much better than just plugging in the empirical measure in that case, without further information on your data (especially since you have very few observations). In that case the variance of your error should be of the order of the inverse of the number of observations (via Efron–Stein). Maybe you could use some convolution-based estimator (like the density function in R, but with an integer-supported kernel) to smooth things a little, or view the distribution as a mixture. But there's no reason to do so if you have no idea where your data come from.
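The "convolution with an integer-supported kernel" idea amounts to smoothing the empirical probability mass function with a small discrete kernel. A numpy sketch (counts and kernel weights are made up for illustration):

```python
import numpy as np

# Empirical counts for values 0..6 (made-up, zero-heavy)
counts = np.array([30, 8, 5, 2, 1, 0, 1], dtype=float)
pmf = counts / counts.sum()

# Smooth with a small integer-supported kernel (weights are arbitrary here)
kernel = np.array([0.25, 0.5, 0.25])
smoothed = np.convolve(pmf, kernel, mode="same")
smoothed /= smoothed.sum()   # renormalize: "same" mode loses mass at the edges

print(np.round(smoothed, 3))
```

The result is still a valid pmf (non-negative, sums to 1) but spreads probability into neighboring values, which fills the gaps that a small sample leaves in the raw empirical measure.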
46,343
Are Fisher's linear discriminant and logistic regression related?
Just to elaborate (perhaps redundantly) on Frank's answer. LDA is based on assumptions of multivariate normality and equality of the covariance matrices of the two groups (in the population); it is also sensitive to outliers and to unbalanced n's, and the predictors should normally be on an interval scale. None of that is required by LR, which is therefore more universally robust. The probability of classification is estimated indirectly in LDA, via formulas such as Bayes' rule, but directly in LR. Still, when all the assumptions for LDA are nicely met, LDA is somewhat superior to LR from a statistical perspective.
46,344
Are Fisher's linear discriminant and logistic regression related?
For many reasons, classification is not a good goal for most problems; prediction is. Logistic regression (LR) is a more direct probability model to use for prediction, with fewer assumptions. Linear discriminant analysis (LDA) assumes that X has a multivariate normal distribution given Y. Using Bayes' rule to get Prob(Y|X), you get a logistic model. So if the assumptions of LDA hold, the assumptions of LR automatically hold. The reverse is not true, hence LR is more robust (e.g., the X's can be dichotomous, far from normal). It is interesting that logistic regression is, in a sense, more related to the normal distribution than is probit regression.
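The Bayes-rule step can be checked directly: with equal-variance Gaussian class-conditionals for X, the posterior log-odds log P(Y=1|x)/P(Y=0|x) is exactly linear in x, i.e. a logistic model with slope $(\mu_1-\mu_0)/\sigma^2$. A numpy sketch with made-up class parameters:

```python
import numpy as np

mu0, mu1, sigma, prior1 = 0.0, 2.0, 1.0, 0.4

def log_odds(x):
    """log P(Y=1|x) - log P(Y=0|x) via Bayes' rule with Gaussian X|Y."""
    ll1 = -0.5 * ((x - mu1) / sigma) ** 2   # log-density up to a shared constant
    ll0 = -0.5 * ((x - mu0) / sigma) ** 2
    return ll1 - ll0 + np.log(prior1 / (1 - prior1))

x = np.linspace(-3, 5, 9)
lo = log_odds(x)

# Linear in x: second differences on an equally spaced grid vanish
print(np.diff(lo, 2))
```

The quadratic terms in the two log-densities cancel because the variances are equal; with unequal variances you would get quadratic discriminant boundaries instead, and the logistic form would be lost.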
46,345
Do image recognition efforts always rely on machine learning and statistics?
No, or at least I would say not necessarily explicitly. If you have an image formation model (e.g. derived from the physics of the imaging process), you can pose recognition, reconstruction, or detection as an inverse problem using parametric or implicit representations of your "pattern" or object of interest, without making any probabilistic modeling explicit. For a more practical example, backprojection is a computationally efficient algorithm that solves the inverse Radon transform and is often used to obtain tomographic pixel reconstructions (~ recognition of an image representing the scanned object). This is a situation where you have a well-posed inverse problem for a known forward model. That said, many inverse problems can be understood as Bayesian MAP or ML inference problems, where the forward model is rewritten as a probabilistic model. For example, if the inverse problem is ill-posed, it is common to use regularization methods (e.g. total variation or Tikhonov regularization) to make the numerical treatment easier. However, many regularizers can be understood in a Bayesian sense as priors acting on the parameters that the inverse problem aims to recover.
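A minimal example of the last point: Tikhonov regularization of an ill-conditioned linear inverse problem solves the same normal equations as MAP estimation under a zero-mean Gaussian prior on the parameters. A numpy sketch with a toy forward model (nothing here is a real imaging system):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy ill-conditioned forward model A: two nearly collinear columns
A = np.array([[1.0, 1.0],
              [1.0, 1.0001],
              [1.0, 0.9999]])
x_true = np.array([1.0, -1.0])
y = A @ x_true + rng.normal(0.0, 0.01, size=3)

# Plain least squares amplifies noise along the tiny singular direction
x_ls = np.linalg.lstsq(A, y, rcond=None)[0]

# Tikhonov-regularized solution; identical to the MAP estimate with
# likelihood N(Ax, sigma^2 I) and prior N(0, (sigma^2/lam) I)
lam = 0.1
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ y)

print(np.linalg.norm(x_ls), np.linalg.norm(x_tik))
```

The regularized estimate stays bounded while the unregularized one explodes, and the Bayesian reading is that the penalty weight lam is just the ratio of noise variance to prior variance.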
46,346
Do image recognition efforts always rely on machine learning and statistics?
Amusingly, there are also some image recognition research efforts that do not rely on mechanistic identification - whether by machine learning, statistics, or other automated methods. Instead, they contract the identification work out to human beings - who are fairly good at some forms of recognition - using a service like the Amazon Mechanical Turk. Clearly, this approach won't work if you need real-time image recognition, but it is an intriguing idea.
46,347
Do image recognition efforts always rely on machine learning and statistics?
Yes and no. Nothing is ideal, and uncertainty comes from everywhere. There is no exact mathematical model for everything, and even when one exists, it can take a long time to work out. The easiest route is often to fit probabilistic models. Machine learning is then the learning (training of the parameters) using the statistics of the given samples. Enes
46,348
How to visualize iterative parameter constraint?
One option would be to use color to show the progression, specifically by highlighting the final result in red - inspired by sparklines, including those on p. 51 of Beautiful Evidence. Tufte's sparklines: Translation as a probability distribution: Tufte might suggest reducing the height to make the posterior angles approach 45 degrees.
46,349
How to visualize iterative parameter constraint?
Personally, I kind of like the facet_grid() from ggplot for showing how elements change over different experiments - especially if there's a visually noticeable progression. Here's an example using some of your numbers:

library(ggplot2)
library(reshape2)  # melt() comes from reshape2
n <- 10000; set.seed(0)
x <- data.frame(theta1 = rnorm(n, 10, 3), theta2 = rnorm(n, 20, 1.5),
                theta3 = rnorm(n, 11, 0.5), theta4 = rnorm(n, 22, 1),
                theta5 = rnorm(n, 10.5, 0.3), theta6 = rnorm(n, 23, 0.8))
x <- melt(x)
x$plots <- c(rep(1, 20000), rep(2, 20000), rep(3, 20000))
ggplot(x, aes(value, fill = variable)) + geom_density() + facet_grid(~plots)
# use fill or colour, at your discretion
46,350
How to visualize iterative parameter constraint?
Another possibility is animating the graphics building one on top of the other. This is really just for shits and giggles though, not sure how it would fly for a stats heavy crowd or on paper...

library(ggplot2)
library(animation)
library(reshape2)  # melt() comes from reshape2
n <- 10000; set.seed(0)
x <- data.frame(theta1 = rnorm(n, 10, 3), theta2 = rnorm(n, 20, 1.5),
                theta3 = rnorm(n, 11, 0.5), theta4 = rnorm(n, 22, 1),
                theta5 = rnorm(n, 10.5, 0.3), theta6 = rnorm(n, 23, 0.8))
x <- melt(x)
x$plots <- c(rep(1, 20000), rep(2, 20000), rep(3, 20000))
plots <- list()
plots$p1 <- ggplot(droplevels(subset(x, variable %in% c("theta1"))),
                   aes(value, color = variable)) + geom_density() +
    scale_x_continuous(limits = c(0, 30)) + scale_y_continuous(limits = c(0, 1))
plots$p2 <- ggplot(droplevels(subset(x, variable %in% c("theta1", "theta2"))),
                   aes(value, color = variable)) + geom_density() +
    scale_x_continuous(limits = c(0, 30)) + scale_y_continuous(limits = c(0, 1))
plots$p3 <- ggplot(droplevels(subset(x, variable %in% c("theta1", "theta2", "theta3"))),
                   aes(value, color = variable)) + geom_density() +
    scale_x_continuous(limits = c(0, 30)) + scale_y_continuous(limits = c(0, 1))
plots$p4 <- ggplot(droplevels(subset(x, variable %in% c("theta1", "theta2", "theta3", "theta4"))),
                   aes(value, color = variable)) + geom_density() +
    scale_x_continuous(limits = c(0, 30)) + scale_y_continuous(limits = c(0, 1))
saveGIF(lapply(plots, print), clean = TRUE)
46,351
What distribution would lead to this highly peaked and skewed density plot?
It looks rather like an exponential distribution (assuming that the bit below 0 is an artifact of smoothing in the density estimation). I would look at a qqplot. In R, if x contains your data:

n <- length(x)
qqplot(x, qexp((1:n - 0.5)/n))

Note that in the use of density() for the non-negative case, it is best to use from=0, since you know the density is 0 below 0:

plot(density(x, from = 0))

I think also that, if $X$ follows an exponential distribution, then $e^{-X/\mu_X}$ should follow a uniform distribution, so the following could be a reasonable diagnostic:

hist(exp(-x/mean(x)), breaks = 2*sqrt(length(x)))
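The same probability-integral-transform diagnostic can be sketched outside R as well; here is an illustrative Python version (NumPy/SciPy, with simulated exponential data standing in for x):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.exponential(scale=5.0, size=5000)  # stand-in for the data

# If X ~ Exponential with mean mu, then exp(-X/mu) is Uniform(0, 1).
# Plugging in the sample mean for mu, the transformed values should
# look close to uniform when the exponential model fits.
u = np.exp(-x / x.mean())
ks = stats.kstest(u, "uniform")
print(ks.statistic)  # small when the data are exponential
```

A large KS statistic (or an obviously non-flat histogram of u) would flag a departure from the exponential model.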
46,352
What distribution would lead to this highly peaked and skewed density plot?
It's not usually possible to identify a distribution from looking at a histogram like this. As a start, plot the density on a log scale: The tail of this density (from around 40 onward) is close to linear, showing it is close to exponential. That's part of the characterization. To go further, compare the density to this characterization by forming the residuals (on a log scale, effectively taking the ratio of the density to an exponential curve): Clearly this density is not exponential: for small values it is almost four times greater than the exponential fit to the tail would indicate. We must go further with the characterization. We seek to characterize the residuals as simply as possible: this means in terms of longish straight segments or parabolic sections. (On this log scale, a straight segment is an exponential trend, whereas a parabolic section looks like a piece of a Normal distribution.) Evidently there are two parabolic-like sections: a sharp peaked one centered near 1 and a shallow, broad one centered near 25-30. The first would correspond to a healthy part of a truncated Normal distribution with small standard deviation (around 5-6) whereas the second would correspond to most of a Normal distribution with a larger standard deviation (around 10 perhaps). This indicates the density is not going to be adequately described by a simple mathematical formula, such as a Gamma or Weibull, but perhaps it can be decomposed into a mixture of two or three components. Look for each of those components to have some meaning: could these data indeed involve some combination of phenomena tending to occur near 1, near 25, and out beyond 40?
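The "straight tail on a log scale" check lends itself to a quick sketch. The following Python fragment (illustrative only; the data here are simulated, not the OP's) fits a line to the log of a histogram-estimated density over the tail, recovering the exponential rate from the slope:

```python
import numpy as np

rng = np.random.default_rng(2)
rate = 0.1
x = rng.exponential(scale=1 / rate, size=200_000)

# For an Exponential(rate) density, log f(x) = log(rate) - rate * x,
# so the log-density over the tail should be linear with slope -rate.
counts, edges = np.histogram(x, bins=50, range=(40, 80), density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
keep = counts > 0                      # drop empty bins before taking logs
slope, intercept = np.polyfit(mids[keep], np.log(counts[keep]), 1)
print(slope)  # close to -0.1
```

On real data, systematic curvature in the residuals from this line is exactly the signal that extra components (the parabolic sections described above) are present.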
46,353
What distribution would lead to this highly peaked and skewed density plot?
Assuming, as others have, that the small blip below zero is an artifact of a density smoothing process, rather than a small amount of negative data, your distribution looks like an exponential distribution. I'd start with either an exponential distribution, or the slightly more flexible Weibull distribution, and see if either one of those seems to fit well. Those two strike a decent balance between difficulty to implement, visualize, etc. and having a decent likelihood of fitting your data.
46,354
What distribution would lead to this highly peaked and skewed density plot?
This is a long-tail distribution. The GB2 (generalized beta of the second kind), with four parameters, has good flexibility for this kind of data. It is implemented in the R package GB2.
46,355
Visualizing k-nearest neighbour?
If you want to visualize KNN classification, there's a good example here taken from the book An Introduction to Statistical Learning, which can be downloaded freely from their webpage. They also have several neat examples for KNN regression, but I have not found the code for those. More to the point, the package you mentioned, kknn, has built-in functionality for plot() for many of its functions, and you should browse the vignette, which contains several examples.
46,356
Visualizing k-nearest neighbour?
kNN is just a simple interpolation of feature space, so its visualization would in fact be equivalent to just drawing the training set in some less or more funky manner, and unless the problem is simple this would be rather hard to decipher. You may do this by computing the distances between training objects the way you did in kknn, then using cmdscale to cast this onto 2D, and finally plotting directly or with some smoothed scatterplot using colours to show classes or values (the smoothed regression version would probably require some hacking with hue and intensity). However, as I wrote, this would probably be a totally useless plot.
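If the feature space is two-dimensional (or has been reduced to 2D, e.g. via cmdscale as suggested above), one common alternative is to colour a grid of points by the kNN prediction, which shows the decision regions directly. A minimal Python sketch with a hand-rolled kNN on made-up Gaussian data (everything here is illustrative, not tied to kknn):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two Gaussian classes in 2-D
n = 100
X = np.vstack([rng.normal([0, 0], 1, (n, 2)), rng.normal([3, 3], 1, (n, 2))])
y = np.repeat([0, 1], n)

def knn_predict(Xtr, ytr, Xte, k=5):
    """Plain k-nearest-neighbour majority vote with Euclidean distance."""
    d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return (ytr[nearest].mean(axis=1) > 0.5).astype(int)

# Evaluate on a grid; the resulting label matrix is what you would draw
# (e.g. with image() or geom_tile()) to show the decision regions.
gx, gy = np.meshgrid(np.linspace(-3, 6, 60), np.linspace(-3, 6, 60))
grid = np.column_stack([gx.ravel(), gy.ravel()])
labels = knn_predict(X, y, grid).reshape(gx.shape)
print(labels.shape)
```

The grid picture also makes the "interpolation of feature space" point above visible: the boundary is just a jagged Voronoi-like frontier through the training points.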
46,357
Anyone know of a simple dendrogram visualizer?
TreeView -- it is not a statistical tool, but it is very light and I have a great sentiment for it; and it is easy to produce output in Newick format, which TV eats without problems. A more powerful solution is to use R, but there you would have to invest some time in converting to the dendrogram object (basically a list-of-lists).
46,358
Anyone know of a simple dendrogram visualizer?
You can use PhyFi web server for generating dendrograms from Newick files. Sample output using your data from PhyFi:
46,359
Anyone know of a simple dendrogram visualizer?
Archaeopteryx is a Java application that you can use standalone or embed in an application. Dendroscope is also pretty good. Both can read files in Newick format, and provide many ways of manipulating the display.
46,360
Anyone know of a simple dendrogram visualizer?
While it's not a tool per se, ASCII art is a fairly safe option actually, and not as hard as it seems at first. It's not pretty, but it gets the point across.
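As a sketch of how little machinery this needs, here is a small Python fragment (illustrative; the parser handles only plain, well-formed Newick and ignores branch lengths) that prints a Newick tree as indented ASCII art:

```python
def parse_newick(s):
    """Parse a simplified Newick string into (name, children) tuples.
    Branch lengths after ':' are dropped; assumes well-formed input."""
    pos = 0
    def node():
        nonlocal pos
        children = []
        if s[pos] == "(":
            pos += 1
            children.append(node())
            while s[pos] == ",":
                pos += 1
                children.append(node())
            pos += 1  # skip the closing ")"
        name = ""
        while pos < len(s) and s[pos] not in ",();":
            name += s[pos]
            pos += 1
        return (name.split(":")[0], children)
    return node()

def ascii_tree(tree, prefix=""):
    """Render the parsed tree as indented ASCII art, one node per line."""
    name, children = tree
    lines = [prefix + "+-- " + (name or "*")]
    for child in children:
        lines.extend(ascii_tree(child, prefix + "    "))
    return lines

print("\n".join(ascii_tree(parse_newick("((A:1,B:2):0.5,C:3);"))))
```

Internal unnamed nodes print as "*"; swapping the indentation for box-drawing characters is a cosmetic change if you want something closer to a classical dendrogram.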
46,361
Possible identifiability issue in hierarchical model
Your notation is a little strange (what do you mean by "diffuse"?), but I suspect that your prior on $\sigma^2_\theta$ is leading to an improper or nearly improper posterior, for one thing. See here for a detailed exposition of just this model and appropriate prior specification. In short, yes, this model can be very useful and there probably ought to be some information about the variance parameters even in relatively small samples - but you need to be careful in how you specify and fit it. Edit: When I wrote this answer I apparently hadn't read the OP properly (see my comment to @probabilityislogic's answer). Anyway as this model is written the parameters $\sigma, \sigma_\theta$ aren't separately identifiable as @probabilityislogic points out. I suspect that if you looked at the posterior distribution of $\sigma^2 + \sigma_\theta^2$ it would be doing something much more reasonable, and if you looked at the joint posterior of $\sigma, \sigma_\theta$ there would be a strong negative correlation. You should go back to the original problem and try to reformulate this model - either it's not posed correctly in the OP or you're hosed, I think.
46,362
Possible identifiability issue in hierarchical model
Because you are dealing with a normal-normal model, it's not too hard to work out analytically what's going on. Now the standard argument for "diffuse" priors is usually $\frac{1}{\sigma}$ for scale parameters (the Jeffreys prior). But you will be able to see that if you were to use the Jeffreys prior for both parameters, you would have an improper posterior. Note that the main justification for using the Jeffreys prior is that it applies to a scale parameter; however, you can show for your model that neither parameter alone sets the scale of the problem. Consider the marginal model, with $\theta_{i}$ integrated out. It is a well-known result that if you integrate a normal with another normal, you get a normal. So we can skip the integration and just work out the expectation and variance:

$$E(y_{i}|\mu\sigma\sigma_{\theta})=E\left[E(y_{i}|\mu\sigma\sigma_{\theta}\theta_{i})\right]=E\left[\theta_{i}|\mu\sigma\sigma_{\theta}\right]=\mu$$

$$V(y_{i}|\mu\sigma\sigma_{\theta})=E\left[V(y_{i}|\mu\sigma\sigma_{\theta}\theta_{i})\right]+V\left[E(y_{i}|\mu\sigma\sigma_{\theta}\theta_{i})\right]=\sigma^{2}+\sigma_{\theta}^{2}$$

And hence we have the marginal model:

$$(y_{i}|\mu\sigma\sigma_{\theta})\sim N(\mu,\sigma^{2}+\sigma_{\theta}^{2})$$

And this does show an identifiability problem with this model: the data cannot distinguish between the two variances; they can only give information about their sum. You may be able to see this intuitively. For example, we can always take $\theta_{i}=y_{i}$ for all $i$, which sets $\sigma=0$. Alternatively, we can set $\theta_{i}=\mu$ for all $i$, which sets $\sigma_{\theta}=0$. Both scenarios are indistinguishable by the data - in the sense that if I were to generate two data sets, one from each case (but ensuring that $\sigma^{2}+\sigma_{\theta}^{2}$ was the same in both), you would not be able to tell which data set came from which case.
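The indistinguishability claim can be checked numerically. A quick sketch in Python (simulated data; any $y$ will do): two $(\sigma,\sigma_{\theta})$ pairs with the same $\sigma^{2}+\sigma_{\theta}^{2}$ give identical marginal likelihoods.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
y = rng.normal(10.0, 3.0, size=50)   # any data set will do

def marginal_loglik(y, mu, sigma, sigma_theta):
    """Log-likelihood of the marginal model y_i ~ N(mu, sigma^2 + sigma_theta^2),
    i.e. with the theta_i integrated out."""
    return stats.norm.logpdf(y, mu, np.sqrt(sigma**2 + sigma_theta**2)).sum()

# Two very different (sigma, sigma_theta) pairs with the same sum of
# squares yield identical likelihoods: the data cannot tell them apart.
a = marginal_loglik(y, 10.0, 3.0, 4.0)   # 9 + 16 = 25
b = marginal_loglik(y, 10.0, 5.0, 0.0)   # 25 + 0  = 25
print(a, b)
```

This is why the posterior for each variance alone can only be as informative as the prior allows, while their sum is well identified.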
This suggests that it is fundamentally the sum that sets the scale, so we should apply the Jeffreys prior to the parameter $\tau^{2}=\sigma^{2}+\sigma_{\theta}^{2}$. Now suppose that $\tau^{2}$ were known; a non-informative choice of prior for $\sigma^{2}$ would be uniform between $0$ and $\tau^{2}$ (for a more informative choice I would use a re-scaled beta distribution over this range). So we have the prior: $$p(\tau^{2},\sigma^{2})\propto\frac{1}{\tau^{2}}\frac{I(0<\sigma^{2}<\tau^{2})}{\tau^{2}}$$ If we make the change of variables $(\sigma^{2},\tau^{2})\to(\sigma,\sigma_{\theta})$, the Jacobian gives: $$p(\sigma_{\theta},\sigma)\propto\frac{1}{(\sigma^{2}+\sigma_{\theta}^{2})^{2}}\left|\frac{\partial\sigma^{2}}{\partial\sigma}\frac{\partial\tau^{2}}{\partial\sigma_{\theta}}-\frac{\partial\sigma^{2}}{\partial\sigma_{\theta}}\frac{\partial\tau^{2}}{\partial\sigma}\right| \propto\frac{2\sigma\sigma_{\theta}}{(\sigma^{2}+\sigma_{\theta}^{2})^{2}}$$ Note that the non-identifiability is preserved in this prior because it is symmetric in its arguments. Another, less obvious, symmetry is that if you integrate out either one of the variance parameters you are left with the Jeffreys prior for the other: $$\int_{0}^{\infty}\frac{2\sigma\sigma_{\theta}}{(\sigma^{2}+\sigma_{\theta}^{2})^{2}}d\sigma=\frac{1}{\sigma_{\theta}}$$ Hence, all you are required to input is the prior range for one of the parameters, as this will stop you from getting into trouble with improper priors. Call this $0<L_{\sigma}<\sigma<U_{\sigma}<\infty$. 
It is then easy to sample from the joint density using the inverse-CDF method, for we have: $$F_{\sigma}(x)=\frac{\log\left(\frac{x}{L_{\sigma}}\right)}{\log\left(\frac{U_{\sigma}}{L_{\sigma}}\right)}\implies F^{-1}_{\sigma}(p)=U_{\sigma}^{p}L_{\sigma}^{1-p}$$ $$F_{\sigma_{\theta}|\sigma}(y|x)=1-\frac{x^{2}}{y^{2}+x^{2}}\implies F^{-1}_{\sigma_{\theta}|\sigma}(p|x)=x\sqrt{\frac{p}{1-p}}$$ So you sample two independent uniform random variables $q_{1b},q_{2b}$; your random value of $\sigma^{(b)}=U_{\sigma}^{q_{1b}}L_{\sigma}^{1-q_{1b}}$ and your random value of $\sigma^{(b)}_{\theta}=U_{\sigma}^{q_{1b}}L_{\sigma}^{1-q_{1b}}\sqrt{\frac{q_{2b}}{1-q_{2b}}}$. Combine this with the usual flat prior on $L_{\mu}<\mu<U_{\mu}$ generated by a third uniform random variable $\mu^{(b)}=L_{\mu}+q_{3b}(U_{\mu}-L_{\mu})$, and you have all the ingredients for Monte Carlo posterior simulation. Note that this is much better than Gibbs sampling because each simulation is independent, so there is no need to wait for convergence (and less need for a large number of simulations) - and you are dealing with proper priors, so divergence is impossible (some moments may or may not exist, but all quantiles do).
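A minimal Python sketch of the prior sampler implied by these inverse CDFs (function and variable names are my own, purely illustrative):

```python
import math
import random

def sample_prior(L_sigma, U_sigma, L_mu, U_mu):
    """Draw one (mu, sigma, sigma_theta) triple from the joint prior
    described above, via the inverse-CDF method."""
    q1, q2, q3 = random.random(), random.random(), random.random()
    # sigma: truncated Jeffreys 1/sigma prior on (L_sigma, U_sigma)
    sigma = U_sigma**q1 * L_sigma**(1 - q1)
    # sigma_theta | sigma, from F^{-1}(p|x) = x * sqrt(p / (1-p))
    sigma_theta = sigma * math.sqrt(q2 / (1 - q2))
    # mu: flat on (L_mu, U_mu)
    mu = L_mu + q3 * (U_mu - L_mu)
    return mu, sigma, sigma_theta
```

Each call is an independent draw, so posterior simulation under this recipe needs no burn-in or convergence checks.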
Predicting forecasts for next 12 months using Box-Jenkins
If you are at all familiar with R (if you're building time series models, you should be), check out the forecast package. It's designed to choose parameters for ARIMA as well as exponential smoothing models, and uses a solid methodology to do so. It will probably get you a lot farther than what you are building in Excel, especially because it will also allow you to explore exponential smoothing models. The two functions you are interested in are auto.arima() and ets(). /Edit: auto.arima can also be used to fit ARMAX models, which (if properly specified) can solve many of the problems identified by IrishStat.
Predicting forecasts for next 12 months using Box-Jenkins
Time series are usually decomposed into 3 parts: trend, seasonality and irregular. (The link gives 4 parts, but cyclical and seasonality are usually lumped together.) Strictly speaking, ARIMA-type models are only used for the irregular part, and by design these models do not incorporate any trend (I am assuming that the trend is some function which varies in time). So if you simply want to estimate an AR(2) model, no software will estimate the trend for you, since if it did, it would not be fitting an AR(2) model. To forecast the trend you will first need to build some sort of model for it and test it, and only once you are confident that your model truly estimates the trend can you use it to forecast the trend. Without such a model any forecasting is impossible. Sadly, the majority of time series textbooks do not stress this when talking about forecasting.
Predicting forecasts for next 12 months using Box-Jenkins
Your approach suggests initially adjusting for the impact of seasonality in a deterministic manner. This may or may not be applicable, as the impact of seasonality may be auto-projective in form. The best way to answer this question is to evaluate alternative final models for adequacy in terms of separating the observations into signal and noise. There are a number of possible pitfalls awaiting you. One of them: does the series have one or more trends and/or one or more level shifts? Another possible issue: does the series have a constant set of monthly indicators, or have some months had a statistically significant change in their effects? In terms of a seasonal ARIMA model this question translates to: have the model parameters changed over time? My experience with Excel Solver has not been very positive.
Predicting forecasts for next 12 months using Box-Jenkins
As mentioned, use R, not Excel. My understanding of the process you are asking for: say you have a data set with a linear trend, say Y = 3t + 1, and 15 data points. Use that model and find the residuals from it, then fit your time series model to these residuals. To forecast, use the predict function in R to get the next point. Let's assume the model tells you the next error will be -2 (if you wanted to predict the next point, this would be the 16th data point). Take Y = 3*16 + 1 = 49, and now add in the -2 from the time series prediction. Your forecast is then 49 + (-2) = 47.
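The arithmetic in that last step can be sketched as follows (a toy example only, using the linear trend assumed above):

```python
def forecast_next(t, resid_pred, slope=3.0, intercept=1.0):
    """Combine the deterministic trend Y = slope*t + intercept with the
    time-series model's predicted residual for period t."""
    return slope * t + intercept + resid_pred

# 16th point: trend gives 3*16 + 1 = 49; the residual model predicts -2
print(forecast_next(16, -2.0))  # 47.0
```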
Deriving mathematical model of pLSA
I am assuming you want to derive: \begin{align*} P(w,d) = \sum_{c} P(c) P(d|c) P(w|c) &= P(d) \sum_{c} P(c|d) P(w|c) \end{align*} Further, this is similar to Probabilistic latent semantic indexing (cf. Blei, Ng, and Jordan (2003), Latent Dirichlet Allocation, JMLR, section 4.3). PLSI posits that a document label $d$ and a word $w$ are conditionally independent given an unobserved topic $z$. If this is true, your formula is a simple consequence of Bayes' theorem. Here are the steps: \begin{align*} P(w, d) &= \displaystyle \sum_z P(w, z, d)\\ & = \displaystyle \sum_z P(w, d | z) p(z)\\ &= \displaystyle \sum_z P(w | z) p(d|z) p(z), \end{align*} where the factorization into products is because of conditional independence. Now use Bayes' theorem again to get \begin{align*} \displaystyle \sum_z P(w | z) p(d|z) p(z) &= \displaystyle \sum_z P(w | z) p(z,d)\\ &= \displaystyle \sum_z P(w | z) p(z|d)p(d)\\ &= p(d)\displaystyle \sum_z P(w | z) p(z|d) \end{align*}
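A quick numerical sanity check of this identity, using small random conditional probability tables (sizes and names purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_z, n_w, n_d = 3, 5, 4
P_w_given_z = rng.dirichlet(np.ones(n_w), size=n_z)  # rows: P(w|z)
P_d_given_z = rng.dirichlet(np.ones(n_d), size=n_z)  # rows: P(d|z)
P_z = rng.dirichlet(np.ones(n_z))                    # P(z)

# Symmetric parameterisation: P(w,d) = sum_z P(z) P(d|z) P(w|z)
joint_sym = np.einsum('z,zd,zw->wd', P_z, P_d_given_z, P_w_given_z)

# Asymmetric parameterisation: P(w,d) = P(d) sum_z P(z|d) P(w|z)
P_zd = P_z[:, None] * P_d_given_z        # P(z,d)
P_d = P_zd.sum(axis=0)                   # P(d)
P_z_given_d = P_zd / P_d                 # P(z|d), columns normalised
joint_asym = P_d * np.einsum('zd,zw->wd', P_z_given_d, P_w_given_z)

print(np.allclose(joint_sym, joint_asym))  # True
```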
Deriving mathematical model of pLSA
The line $P(c|d)P(c) = P(d|c)P(c)$ (your eq 2) should be $P(c|d)P(d) = P(d|c)P(c)$. I'm not sure why you don't think Bayes' theorem and basic probability rules are useful: Eq 1 is Bayes' theorem (i.e. recognizing that $P(d|c)P(c) = P(c,d)$ and plugging it into the definition of conditional probability). Eq 2 follows immediately from eq 1. Eq 3 is just eq 2 multiplied through by $P(w|c)$. Since eq 3 holds for all $c$, the sums are equal. Then since $w$ is independent of $d$ given $c$ (an assumption of the model), $P(w|c)P(c|d) = P(w|c, d)P(c|d) = P(w,c|d)$, and so $\sum_c\ P(w,c|d) = P(w|d)$ by the law of total probability, giving you $P(w|d)P(d)$. Finally, $P(w|d)P(d)=P(w,d)$ from the definition of conditional probability. So basic probability is in fact both necessary and sufficient for the derivation!
Kruskal-Wallis test data considerations
With small, and possibly unequal, group sizes, I'd go with chl's and onestop's suggestion and do a Monte-Carlo permutation test. For the permutation test to be valid, you need exchangeability under $H_{0}$. If all distributions have the same shape (and are therefore identical under $H_{0}$), this is true. Here's a first try at looking at the case of 3 groups and no ties. First, let's compare the asymptotic $\chi^{2}$ distribution function against a MC-permutation one for given group sizes (this implementation will break for larger group sizes).

P     <- 3                     # number of groups
Nj    <- c(4, 8, 6)            # group sizes
N     <- sum(Nj)               # total number of subjects
IV    <- factor(rep(1:P, Nj))  # grouping factor
alpha <- 0.05                  # alpha-level

# there are N! permutations of ranks within the total sample, but we only want 5000
nPerms <- min(factorial(N), 5000)

# random sample of all N! permutations
# sample(1:factorial(N), nPerms) doesn't work for N! >= .Machine$integer.max
permIdx <- unique(round(runif(nPerms) * (factorial(N)-1)))
nPerms  <- length(permIdx)
H <- numeric(nPerms)           # vector to later contain the test statistics

# function to calculate test statistic from a given rank permutation
getH <- function(ranks) {
    Rj <- tapply(ranks, IV, sum)
    (12 / (N*(N+1))) * sum((1/Nj) * (Rj-(Nj*(N+1) / 2))^2)
}

# all test statistics for the random sample of rank permutations (breaks for larger N)
# numperm() internally orders all N! permutations and returns the one with a desired index
library(sna)                   # for numperm()
for(i in seq(along=permIdx)) { H[i] <- getH(numperm(N, permIdx[i]-1)) }

# cumulative relative frequencies of test statistic from random permutations
pKWH   <- cumsum(table(round(H, 4)) / nPerms)
qPerm  <- quantile(H, probs=1-alpha)  # critical value for level alpha from permutations
qAsymp <- qchisq(1-alpha, P-1)        # critical value for level alpha from chi^2

# illustration of cumRelFreq vs. chi^2 distribution function and resp. critical values
plot(names(pKWH), pKWH, main="Kruskal-Wallis: permutation vs. asymptotic", type="n",
     xlab="h", ylab="P(H <= h)", cex.lab=1.4)
points(names(pKWH), pKWH, pch=16, col="red")
curve(pchisq(x, P-1), lwd=2, n=200, add=TRUE)
abline(h=0.95, col="blue")                         # level alpha
abline(v=c(qPerm, qAsymp), col=c("red", "black"))  # critical values
legend(x="bottomright", legend=c("permutation", "asymptotic"), pch=c(16, NA),
       col=c("red", "black"), lty=c(NA, 1), lwd=c(NA, 2))

Now for an actual MC-permutation test. This compares the asymptotic $\chi^{2}$-derived p-value with the result from coin's oneway_test() and the cumulative relative frequency distribution from the MC-permutation sample above.

> DV1 <- round(rnorm(Nj[1], 100, 15), 2)  # data group 1
> DV2 <- round(rnorm(Nj[2], 110, 15), 2)  # data group 2
> DV3 <- round(rnorm(Nj[3], 120, 15), 2)  # data group 3
> DV  <- c(DV1, DV2, DV3)                 # all data
> kruskal.test(DV ~ IV)                   # asymptotic p-value
Kruskal-Wallis rank sum test
data: DV by IV
Kruskal-Wallis chi-squared = 7.6506, df = 2, p-value = 0.02181

> library(coin)                           # for oneway_test()
> oneway_test(DV ~ IV, distribution=approximate(B=9999))
Approximative K-Sample Permutation Test
data: DV by IV (1, 2, 3)
maxT = 2.5463, p-value = 0.0191

> Hobs <- getH(rank(DV))                  # observed test statistic
# proportion of test statistics at least as extreme as observed one (+1)
> (pPerm <- (sum(H >= Hobs) + 1) / (length(H) + 1))
[1] 0.0139972
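The same Monte-Carlo permutation idea can be sketched in Python (a rough analogue of the R approach above, not a translation; it assumes continuous data with no ties):

```python
import numpy as np

rng = np.random.default_rng(1)

Nj = np.array([4, 8, 6])                    # group sizes
groups = np.repeat(np.arange(len(Nj)), Nj)  # grouping labels
N = Nj.sum()

def kw_H(ranks):
    """Kruskal-Wallis H statistic from the pooled ranks."""
    Rj = np.array([ranks[groups == g].sum() for g in range(len(Nj))])
    return 12.0 / (N * (N + 1)) * np.sum((Rj - Nj * (N + 1) / 2.0) ** 2 / Nj)

# simulated data with shifted group means
y = np.concatenate([rng.normal(100, 15, Nj[0]),
                    rng.normal(110, 15, Nj[1]),
                    rng.normal(120, 15, Nj[2])])
ranks = y.argsort().argsort() + 1.0         # ranks 1..N (no ties)

H_obs = kw_H(ranks)
B = 5000
H_perm = np.array([kw_H(rng.permutation(ranks)) for _ in range(B)])
# proportion of permuted statistics at least as extreme as the observed one (+1)
p_perm = (np.sum(H_perm >= H_obs) + 1) / (B + 1)
```

As in the R version, each permutation shuffles ranks across the pooled sample, which is justified by exchangeability under $H_{0}$.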
Kruskal-Wallis test data considerations
You need not check homoscedasticity. Kruskal and Wallis stated in their original paper that the “test may be fairly insensitive to differences in variability”. If there is no exact test available, you can use the bootstrap.
Discrepancy measures for transition matrices
As long as your matrices represent conditional probabilities, I think that using a general matrix norm is a bit artificial. Using some sort of geodesic distance on the set of transition matrices might be more relevant, but I clearly prefer to come back to probabilities. I assume you want to compare $Q=(Q_{ij})$ and $P=(P_{ij})$ with $P_{ij}=P(X^P_{t}=j|X^P_{t-1}=i)$, and that for $P$ (resp. $Q$) there exists a unique stationary measure $\pi_{P}$ (resp. $\pi_{Q}$). Under these assumptions, I guess it is meaningful to compare $\pi_{P}$ and $\pi_{Q}$, for example with the $L_{1}$ distance: $\sum_{j}|\pi_{P}[j]-\pi_{Q}[j]|$, the (squared) Hellinger distance: $\sum_{j}|\pi^{1/2}_{P}[j]-\pi^{1/2}_{Q}[j]|^2$, or the Kullback divergence: $\sum_{j}\pi_{P}[j] \log(\frac{\pi_{P}[j]}{\pi_{Q}[j]})$.
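A small Python sketch of this recipe (the two transition matrices are arbitrary examples; the stationary distribution is found as the left eigenvector for eigenvalue 1):

```python
import numpy as np

def stationary(P):
    """Stationary distribution of a row-stochastic matrix P:
    left eigenvector of P for eigenvalue 1, normalised to sum to 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

P = np.array([[0.9, 0.1], [0.2, 0.8]])
Q = np.array([[0.7, 0.3], [0.4, 0.6]])
pi_P, pi_Q = stationary(P), stationary(Q)

l1        = np.abs(pi_P - pi_Q).sum()                   # L1 distance
hellinger = np.sum((np.sqrt(pi_P) - np.sqrt(pi_Q))**2)  # squared Hellinger
kl        = np.sum(pi_P * np.log(pi_P / pi_Q))          # Kullback divergence
```

This assumes each chain is ergodic so the stationary measure is unique; for reducible or periodic chains the eigenvector approach needs more care.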
Discrepancy measures for transition matrices
Why would one want the measure of discrepancy to be a true metric? There is a huge literature on axiomatic characterizations of I-divergence as a measure of distance. It is neither symmetric nor satisfies the triangle inequality. I hope by 'transition matrix' you mean 'probability transition matrix'. Never mind: as long as the entries are non-negative, I-divergence is considered to be the "best" measure of discrimination. See for example http://www.mdpi.com/1099-4300/10/3/261/. In fact, certain axioms which anyone would feel desirable lead to measures which are nonsymmetric in general.
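A brief numerical illustration of the asymmetry (the two distributions are chosen arbitrarily):

```python
import numpy as np

def I_div(a, b):
    """I-divergence (Kullback-Leibler divergence) between two
    probability vectors with the same support."""
    return float(np.sum(a * np.log(a / b)))

p = np.array([0.5, 0.4, 0.1])
q = np.array([0.3, 0.3, 0.4])
# the two directions give different values: I-divergence is not symmetric
print(I_div(p, q), I_div(q, p))
```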
46,373
How to choose number of dummy variables when encoding several categorical variables?
You would make k-1 dummy variables for each of your categorical variables. The textbook argument holds: if you were to make k dummies for any of your variables, you would introduce a collinearity (the k dummies would sum to the intercept column). You can think of the k-1 dummies as contrasts between the effects of their corresponding levels and the level whose dummy is left out.
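As an illustration (not in the original answer; a pandas sketch with two hypothetical categorical variables), dropping one level per variable yields exactly k-1 dummies for each:

```python
import pandas as pd

# Hypothetical data with two categorical predictors (3 and 2 levels).
df = pd.DataFrame({
    "color": ["red", "green", "blue", "red"],
    "size":  ["S", "L", "S", "L"],
})

# drop_first=True creates k-1 dummies per variable; the dropped level
# becomes the reference category, absorbed into the intercept.
X = pd.get_dummies(df, columns=["color", "size"], drop_first=True)
```

Each remaining dummy's coefficient is then read as a contrast against the dropped reference level.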
46,374
How to choose number of dummy variables when encoding several categorical variables?
In building a logistic regression model, you have to bear in mind that the dependent variable must assume exactly two values on the cases being processed. In your question, you did not provide enough information on your dependent variable, or on whether you are using binary or multinomial logistic regression. Nevertheless, if you are using Gender as your dependent variable, then it must assume exactly two values representing male and female, and must not include "unknown", as you pointed out.
46,375
Estimate the Kullback-Leibler divergence
Mathematica, using symbolic integration (not an approximation!), reports a value that equals 1.6534640367102553437 to 20 decimal digits. red = GammaDistribution[20/17, 17/20]; gray = InverseGaussianDistribution[1, 832/1000]; kl[pF_, qF_] := Module[{p, q}, p[x_] := PDF[pF, x]; q[x_] := PDF[qF, x]; Integrate[p[x] (Log[p[x]] - Log[q[x]]), {x, 0, \[Infinity]}] ]; kl[red, gray] In general, using a small Monte-Carlo sample is inadequate for computing these integrals. As we saw in another thread, the value of the KL divergence can be dominated by the integral over short intervals where the base PDF is nonzero and the other PDF is close to zero. A small sample can miss such intervals entirely, and it can take a large sample to hit them enough times to obtain an accurate result. Take a look at the Gamma PDF (red, dashed) and the log PDF (gray) of the Inverse Gaussian near 0: In this case, the Gamma PDF stays high near 0 while the log of the Inverse Gaussian PDF diverges there. Virtually all of the KL divergence is contributed by values in the interval [0, 0.05], which has a probability of 3.2% under the Gamma distribution. The number of elements of a sample of $N = 1000$ Gamma variates which fall in this interval therefore has a Binomial($N$, 0.032) distribution. The standard deviation of the corresponding sample proportion equals $\sqrt{.032(1 - .032)/N}$ = 0.55%. Thus, as a rough estimate, we can't expect your integral to have a relative accuracy much better than $2 \times 0.55\% / 3.2\% \approx 35\%$, because the number of times it samples this critical interval has this amount of error. That accounts for the difference between your result and Mathematica's. To get this error down to 1%--a mere two decimal places of precision--you would need to multiply your sample approximately by $35^2 \approx 1200$: that is, you would need well over a million values. Here is a histogram of the natural logarithms of 1000 independent estimates of the KL divergence. Each estimate averages 1000 values randomly obtained from the Gamma distribution. The correct value is shown as the dashed red line. Histogram[Log[Table[ Mean[Log[PDF[red, #]/PDF[gray, #]] & /@ RandomReal[red, 1000]], {i,1,1000}]]] Although on average this Monte-Carlo method is unbiased, most of the time (87% in this simulation) the estimate is too low. To make up for this, the overestimate can occasionally be gross: the largest of these estimates is 18.98. (The wide spread of values shows that the estimate of 1.286916 actually has no reliable decimal digits!) Because of this huge skewness in the distribution, the situation is actually much worse than I previously estimated with the simple binomial thought experiment. The average of these simulations (comprising 1000*1000 values total) is just 1.21, still about 25% less than the true value. For computing the KL divergence in general, you need to use adaptive quadrature or exact methods.
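For readers without Mathematica, here is a rough Python translation of the same comparison (not from the original answer; it assumes scipy's invgauss(mu, scale) parametrization, where IG with mean m and shape lambda corresponds to invgauss(mu=m/lambda, scale=lambda)), contrasting adaptive quadrature with a naive 1000-draw Monte-Carlo estimate:

```python
import numpy as np
from scipy import stats, integrate

# The two densities from the answer: Gamma(shape 20/17, scale 17/20)
# and an inverse Gaussian with mean 1 and lambda = 0.832.
p = stats.gamma(a=20/17, scale=17/20)
q = stats.invgauss(mu=1/0.832, scale=0.832)

def integrand(x):
    px = p.pdf(x)
    return px * (p.logpdf(x) - q.logpdf(x)) if px > 0 else 0.0

# Adaptive quadrature: the reliable way to compute the divergence.
kl_quad, _ = integrate.quad(integrand, 0, np.inf, limit=200)

# Naive Monte-Carlo estimate from 1000 draws: unbiased on average,
# but very noisy, because the critical interval near 0 is rarely sampled.
rng = np.random.default_rng(1)
x = p.rvs(size=1000, random_state=rng)
kl_mc = np.mean(p.logpdf(x) - q.logpdf(x))
```

The quadrature value should agree with Mathematica's 1.6535 to several digits, while repeated runs of the Monte-Carlo line scatter widely around it, illustrating the point of the answer.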
46,376
Is it possible to do meta analysis of only two studies
Yes, it is possible, but whether it is appropriate depends on the intent of your analysis. Meta-analysis is a method of combining information from different sources, so it is technically possible to do a meta-analysis of only two studies - even of multiple results within a single paper. The key concern is not whether you can do this, but whether the method is appropriate for the questions that you have and the conclusions that you want to make, and that you acknowledge the limitations of your analysis. For example, the typical use of meta-analysis is to quantitatively synthesize previous studies on a particular subject, such as the effects of some medical intervention. In this context, it is important to define your criteria for study selection before the analysis and then find all available studies that meet those criteria. These criteria might limit the scope of your search to publications in English, in a particular journal or set of journals, those that use particular methods, etc. In practice, it is necessary to be familiar with the studies you are interested in to state these criteria. However, if you non-randomly select two papers from among the many that have been published, that would introduce bias into your study. If only two studies have been published, it might be hard to justify any conclusions from a meta-analysis, but it could still be done. On the other hand, I have used the meta-analytical approach to synthesize data from a single study, for example when summary statistics are reported for subgroups but I am interested in the overall mean and variance. I don't always call this a meta-analysis in mixed company, so as not to confuse this application of the method with the more common use of meta-analysis as a comprehensive review sensu stricto.
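A minimal sketch of the quantitative synthesis step for exactly two studies (not from the original answer; made-up effect sizes, fixed-effect inverse-variance pooling):

```python
import numpy as np

# Hypothetical summary data from two studies: effect estimates and
# their standard errors (e.g. mean differences).
effects = np.array([0.40, 0.25])
se = np.array([0.10, 0.15])

# Fixed-effect (inverse-variance) pooling: weight each study by 1/se^2.
w = 1.0 / se**2
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
```

Note that with only two studies, a random-effects model is usually out of reach because the between-study heterogeneity cannot be estimated with any precision.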
46,377
Is it possible to do meta analysis of only two studies
If you compute a likelihood ratio for the effect of interest in each study, you can simply multiply them together to obtain the aggregate weight of evidence for the effect.
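A hedged sketch of that multiplication (hypothetical numbers, not from the original answer; it assumes approximately normal effect estimates and a fixed alternative effect delta versus a null of zero):

```python
from scipy import stats

# Hypothetical: each study reports an effect estimate and its standard error.
delta = 0.3                              # fixed alternative under H1
studies = [(0.35, 0.12), (0.28, 0.15)]   # (estimate, se) per study

# Likelihood ratio of H1 (effect = delta) vs H0 (effect = 0) in each
# study; independent studies multiply to the aggregate weight of evidence.
lr_total = 1.0
for est, se in studies:
    lr = stats.norm.pdf(est, loc=delta, scale=se) / stats.norm.pdf(est, loc=0.0, scale=se)
    lr_total *= lr
```

Here both studies individually favour the alternative, so the combined ratio is (much) larger than either alone.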
46,378
Threshold models and flu epidemic recognition
The CDC uses the epidemic threshold of 1.645 standard deviations above the baseline for that time of year. The definition may have multiple sorts of detection or mortality endpoints. (The one you are pointing to is pneumonia and influenza mortality. The lower black curve is not really a series, but rather a modeled seasonal mean, and the upper black curve is 1.645 sd's above that mean.) http://www.cdc.gov/mmwr/PDF/ss/ss5107.pdf http://www.cdc.gov/flu/weekly/pdf/overview.pdf > pnorm(1.645) [1] 0.950015 So it's a 95% threshold. (And it does look as though about 1 out of 20 weeks is over the threshold. You pick your thresholds not to be perfect, but to have the sensitivity you deem necessary.) The seasonal adjustment model appears to be sinusoidal. There is an R "flubase" package that should be consulted.
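The 1.645 figure can be checked in Python as well (mirroring the pnorm call above; the baseline and sd below are made-up numbers, not CDC values):

```python
from scipy import stats

# 1.645 is the one-sided 95% point of the standard normal, so
# baseline + 1.645 sd flags roughly 1 week in 20 by chance alone.
z = stats.norm.ppf(0.95)

# Hypothetical weekly seasonal baseline and its sd:
baseline, sd = 6.5, 0.4   # percent of deaths from P&I (illustrative only)
threshold = baseline + z * sd
```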
46,379
Threshold models and flu epidemic recognition
A quick rundown of how these things go. What you're seeing is called 'Serfling regression'. It is a linear regression with at least one linear term for a time trend, and several harmonics. You have a linear model with a Poisson (or negative binomial) distribution in roughly the following form: log(Counts) = b0 + b1*t + b2*cos(2*pi*w*t) + b3*sin(2*pi*w*t) where t is time, and w is 1/365 (for a yearly disease like flu; generally it's 1/n, where n is the length of your cycle). That's where you get that smooth black curve, and its standard error, from. That is the expected number of counts for time t - it goes up and down over time. Then, as the flu season occurs, the CDC watches for when the observed counts cross that threshold, and classifies that as "an epidemic". These can get way more complex - multiple harmonic functions to account for different peaks, such as the usually somewhat later peak in influenza B cases, explanatory variables for all kinds of things that will account for upswings in cases, etc. But that's it in its most basic form. The term "epidemic" is a tricky one, though. This technique works for well-understood seasonal, recurring diseases with lots of data, like influenza. In contrast, any count above 0 for, say, smallpox would be treated as an outbreak. For papers using this technique, I can refer you to several. Both of the papers below use a model like the one above, though not for declaring an epidemic, but for characterizing what a "flu season" looks like: https://doi.org/10.1007/978-3-540-72608-1_11 http://onlinelibrary.wiley.com/doi/10.1111/j.1750-2659.2010.00137.x/abstract This can be easily implemented in R using the glm function.
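A rough sketch of the idea (not the CDC's actual code; simulated counts, and OLS on log counts rather than the Poisson GLM described above, to stay dependency-light):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 365 * 3)   # three years of daily data
w = 1.0 / 365.0             # one annual cycle

# Simulated "true" baseline: trend plus one annual harmonic.
log_mu = (3.0 + 0.0005 * t
          + 0.6 * np.cos(2 * np.pi * w * t)
          + 0.2 * np.sin(2 * np.pi * w * t))
counts = rng.poisson(np.exp(log_mu))

# Serfling-style design matrix: intercept, trend, annual cosine and sine.
X = np.column_stack([
    np.ones_like(t, dtype=float),
    t,
    np.cos(2 * np.pi * w * t),
    np.sin(2 * np.pi * w * t),
])
beta, *_ = np.linalg.lstsq(X, np.log(counts + 0.5), rcond=None)

# Fitted seasonal baseline and a CDC-style 1.645-sd epidemic threshold.
baseline = X @ beta
resid_sd = np.std(np.log(counts + 0.5) - baseline)
threshold = baseline + 1.645 * resid_sd
```

Weeks (here, days) whose observed value exceeds `threshold` would be flagged; the +0.5 inside the log is just a guard against zero counts in this simplified least-squares version.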
46,380
Can the multiple linear correlation coefficient be negative?
The multiple correlation in standard linear regression cannot be negative; the maths to show it are easy, although it depends on what "multiple correlation" is taken to mean. The usual way you would calculate $R^{2}$ is: $$R^2=\frac{SSR}{TSS}$$ where $$ SSR = \sum_{i} (\hat{Y_i}-\bar{Y})^2$$ and $$ TSS = \sum_{i} (Y_i-\bar{Y})^2$$ Since sums of squares can never be negative, neither can the $R^2$ value, as long as it's calculated this way. However, $R^2$ calculated this way can be greater than 1 if you use an estimator for which the observed residuals do not sum to zero. Mathematically, $R^2$ will necessarily be bounded by 1 if $$\sum_{i} (\hat{Y_i}-Y_i)=0$$ and $$\sum_{i} (\hat{Y_i}-Y_i)\hat{Y_i}=0$$ Or in words: the average of the residuals is equal to 0, and the fitted values are uncorrelated with the residuals over the whole data set. This is because you can expand TSS as follows: $$ TSS = \sum_{i} (Y_i-\bar{Y})^2 = \sum_{i} ([Y_i-\hat{Y_i}]-[\bar{Y}-\hat{Y_i}])^2$$ $$=\sum_{i} (Y_i-\hat{Y_i})^2-2\sum_{i} [Y_i-\hat{Y_i}][\bar{Y}-\hat{Y_i}]+\sum_{i} (\bar{Y}-\hat{Y_i})^2$$ $$=\sum_{i} (Y_i-\hat{Y_i})^2-2\bar{Y}\sum_{i} [Y_i-\hat{Y_i}]+2\sum_{i} [Y_i-\hat{Y_i}]\hat{Y_i}+\sum_{i} (\bar{Y}-\hat{Y_i})^2$$ $$=\sum_{i} (Y_i-\hat{Y_i})^2+\sum_{i} (\bar{Y}-\hat{Y_i})^2$$ $$\implies TSS=SSR+\sum_{i} (Y_i-\hat{Y_i})^2 \geq SSR \geq 0$$ $$\implies 1 \geq \frac{SSR}{TSS}=R^2 \geq 0$$ The constraints listed are always satisfied by the usual OLS estimators (in fact they form part of the equations that define OLS estimation). $R^2$ can go negative if it is calculated by $1-\frac{SSE}{TSS}$, where $SSE=\sum_{i} (Y_i-\hat{Y_i})^2$, instead of the way I described. As a (silly) example of $R^2>1$, you can put as the estimate $\hat{Y_i}=\bar{Y}+TSS$, so that $SSR=n(TSS)^2$ and $R^2=n(TSS)$, which will exceed 1 for big enough $n$ or $TSS$. To make $R^2$ go negative, set $\hat{Y_i}=Y_i+TSS$, so that $SSE=n(TSS)^2$ and $R^2=1-n(TSS)$, which will be less than 0 for big enough $n$ and $TSS$.
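A quick numerical check of both claims (my own toy example, not from the original answer): under OLS the two formulas agree and lie in [0, 1], while a non-OLS predictor can push $1 - SSE/TSS$ below zero.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2 * x + rng.normal(size=50)

# OLS fit: residuals sum to zero and are uncorrelated with fitted values,
# so SSR/TSS and 1 - SSE/TSS agree and lie in [0, 1].
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
yhat = X @ beta

tss = np.sum((y - y.mean()) ** 2)
ssr = np.sum((yhat - y.mean()) ** 2)
sse = np.sum((y - yhat) ** 2)
r2_a = ssr / tss
r2_b = 1 - sse / tss

# A bad (non-OLS) predictor with a constant offset: residuals no longer
# sum to zero, and 1 - SSE/TSS goes negative.
bad = y.mean() + 10.0
r2_bad = 1 - np.sum((y - bad) ** 2) / tss
```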
46,381
Can the multiple linear correlation coefficient be negative?
$R$ can indeed be negative - if two variables are negatively related. $R^2$ can only be between 0 and 1, for the simple reason that it is the square of a real number. For example, if we correlated income and time spent in jail throughout life, I would guess we would get a negative correlation (I haven't done this, I'm just guessing).
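For example (made-up numbers in the spirit of the income/jail example): $R$ comes out negative for negatively related data while $R^2$ stays in [0, 1].

```python
import numpy as np

# Toy negatively related data (e.g. income vs years in jail).
x = np.array([10., 20., 30., 40., 50.])
y = np.array([ 8.,  7.,  5.,  3.,  1.])

r = np.corrcoef(x, y)[0, 1]   # Pearson correlation: negative here
r2 = r ** 2                   # always between 0 and 1
```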
46,382
Discerning between two different linear regression models in one sample
You need to model the observations with a mixture model. Define $p$ as the probability that a sample belongs to the first data generating process. Then the density function of $y_i$ is given by: $f(y_i|-) = p f_1(y_i|-) + (1-p) f_2(y_i|-)$ where $f_1(.)$ is the density that arises from the first data generating process and $f_2(.)$ is the density that arises from the second data generating process. You can then use either maximum likelihood (see for example the EM algorithm) or Bayesian approaches to estimate the model.
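A sketch of the maximum-likelihood route via EM for a two-component mixture of regressions (my own toy implementation on simulated data, not from the original answer; a shared noise sd is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate from two data generating processes mixed 50/50:
# y = 1 + 2x + e  and  y = -1 - x + e, with e ~ N(0, 0.3^2).
n = 400
x = rng.uniform(-2, 2, n)
z = rng.random(n) < 0.5                    # latent component labels
y = np.where(z, 1.0 + 2.0 * x, -1.0 - 1.0 * x) + 0.3 * rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

def dnorm(r, s):
    return np.exp(-0.5 * (r / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def wls(X, y, w):
    # Weighted least squares via sqrt-weighted ordinary least squares.
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta

# EM iterations with crude starting values.
p = 0.5
b1, b2 = np.array([0.5, 1.0]), np.array([-0.5, -0.5])
sigma = 1.0
for _ in range(200):
    # E step: responsibility that each point came from component 1.
    g1 = p * dnorm(y - X @ b1, sigma)
    g2 = (1 - p) * dnorm(y - X @ b2, sigma)
    r1 = g1 / (g1 + g2)
    # M step: responsibility-weighted regressions, mixing weight, noise sd.
    b1, b2 = wls(X, y, r1), wls(X, y, 1 - r1)
    p = r1.mean()
    sigma = np.sqrt(np.mean(r1 * (y - X @ b1) ** 2 + (1 - r1) * (y - X @ b2) ** 2))
```

After convergence, `b1` and `b2` recover the two regression lines (up to label switching) and `r1` gives each observation's posterior probability of belonging to the first process.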
46,383
Discerning between two different linear regression models in one sample
The first hit on Rseek with keywords "mixture regression" brings up the flexmix package, which does what you want. I seem to recall that there were other packages for this as well.
46,384
Might be an unbalanced within subjects repeated measures?
It's not imbalanced, because your repeated measures should be averaged across such subgroups within subject beforehand. The only thing imbalanced is the quality of the estimates of your means. Just as you aggregated your accuracies to get a percentage correct to do your ANOVA in the first place, you average your latencies as well. Each participant provides 6 values, therefore it is not imbalanced. Most likely, though, the ANOVA was not the best analysis in the first place. You should probably be using mixed-effects modelling. For the initial test of the accuracies you'd use mixed-effects logistic regression. For the second one you propose, it would be a 3 (level) x 2 (correctness) analysis of the latencies. Both would have subjects as a random effect. In addition, it's often best to do some sort of normality correction on the times, like a log or -1/T correction. This is less of a concern in ANOVA because you aggregate across a number of means first, which often ameliorates the skew of latencies through the central limit theorem. You could check with a Box-Cox analysis to see what fits best. On a more important note, though: what are you expecting to find? Is this just exploratory? What would it mean to have different latencies in the correct and incorrect groups, and what would it mean for them to interact? Unless you are fully modelling the relationship between accuracy and speed in your experiment, or you have a full model that you are testing, you are probably wasting your time. A latency with an incorrect response means that someone did something other than what you wanted them to - and it could be anything. That's why people almost always work only with the latencies to correct responses. (These two types of responses also often have very different distributions, with incorrect responses much flatter because they disproportionately make up both the short and long latencies.)
Might be an unbalanced within subjects repeated measures?
I just want to emphasize the importance of not analyzing accuracies on the proportion scale. While lamentably pervasive across a number of disciplines, this practice can yield frankly incorrect conclusions. See: http://dx.doi.org/10.1016/j.jml.2007.11.004 As John Christie notes, the best way to approach analysis of accuracy data is a mixed effects model using the binomial link and participants as a random effect, eg:
#R code
library(lme4)
fit = glmer(
    formula = acc ~ my_IV + (1|participant)
    , family = 'binomial'
    , data = my_data
)
print(fit)
(In current versions of lme4, generalized models are fit with glmer(); calling lmer() with a family argument is no longer supported.) Note that "my_data" should be the raw, trial-by-trial data such that "acc" is either 1 for accurate trials or 0 for inaccurate trials. That is, data should not be aggregated to proportions before analysis.
Might be an unbalanced within subjects repeated measures?
So this is a one way repeated measures Anova - with the "Y" being time till answer was given, and the first factor having 3 levels (each subject having three of them). I think the easiest way for doing this would be to take the mean response time for each subject for each of the three levels (which will result in 3 numbers per subject). And then run a Friedman test on that (there is also a post hoc Friedman test in R, in case you would want that - I assume you would) The downside of this is that this assumes, in a sense, that your estimations of the three means (a mean for each of the three levels, per subject) are equally precise, where in fact they are not. You have more variability in your estimation of level 3 than of level 1. Realistically, I would ignore that. Theoretically, I hope someone here can offer a better solution so both of us would be able to learn :)
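Once the per-subject means are in hand, the Friedman test itself is a one-liner. A hedged sketch in Python (scipy) with simulated data, since no dataset is specified in the thread; subject counts, means, and SDs are all made up:

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(1)
n_subjects = 20
# Hypothetical per-subject mean response times (ms) for the three levels;
# level 3 is made clearly slower so the test has something to detect
level1 = rng.normal(500, 50, n_subjects)
level2 = rng.normal(520, 50, n_subjects)
level3 = rng.normal(700, 50, n_subjects)

# Friedman ranks the three conditions within each subject, then compares rank sums
stat, p = friedmanchisquare(level1, level2, level3)
print(stat, p)
```

In R the equivalent is friedman.test() on a subjects-by-levels matrix.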
When should normalization never be used?
Whether one can normalize a non-normal data set depends on the application. For example, data normalization is required for many statistical tests (e.g. calculating a z-score, t-score, etc.) Some tests are more prone to failure when normalizing non-normal data, while others are more resistant ("robust" tests). One less-robust statistic is the mean, which is sensitive to outliers (i.e. non-normal data). Alternatively, the median is less sensitive to outliers (and therefore more robust). A great example of non-normal data where many statistics fail is bi-modally distributed data. Because of this, it's always good practice to visualize your data as a frequency distribution (or even better, test for normality!)
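The mean-versus-median point is easy to see numerically. A small Python illustration with made-up numbers:

```python
import numpy as np

data = np.array([9.8, 10.1, 9.9, 10.2, 10.0])
with_outlier = np.append(data, 100.0)  # one gross outlier

# The mean is dragged far from the bulk of the data by a single outlier
print(np.mean(data), np.mean(with_outlier))      # 10.0 vs 25.0
# The median barely moves
print(np.median(data), np.median(with_outlier))  # 10.0 vs 10.05
```

One bad point is enough to shift the mean by a factor of 2.5 here, while the median stays essentially where the bulk of the data is.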
When should normalization never be used?
Of course one should never try to blindly normalize data if the data does not follow a (single) normal distribution. For example one might want to rescale observables $X$ to all be normal with $(X-\mu)/\sigma$, but this can only work if the data is normal and if both $\mu$ and $\sigma$ are the same for all data points (e.g. $\sigma$ doesn't depend on $\mu$ in a particular $X$ range).
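As a concrete illustration of this caveat (a Python sketch with simulated data): the $(X-\mu)/\sigma$ rescaling behaves as intended on a single homogeneous normal sample, but pooling two groups with different $\sigma$ and rescaling jointly leaves the groups on visibly different scales:

```python
import numpy as np

rng = np.random.default_rng(2)

# One homogeneous normal sample: (X - mu) / sigma works as intended
x = rng.normal(5.0, 2.0, 100_000)
z = (x - x.mean()) / x.std()
print(z.mean(), z.std())  # ~0 and ~1 by construction

# Two pooled groups with different sigma: a single joint rescaling cannot fix this
a = rng.normal(0.0, 1.0, 50_000)
b = rng.normal(0.0, 5.0, 50_000)
pooled = np.concatenate([a, b])
zp = (pooled - pooled.mean()) / pooled.std()
print(zp[:50_000].std(), zp[50_000:].std())  # groups keep very different spreads
```

The pooled z-scores are not standard normal within either group, which is exactly the failure mode when $\sigma$ varies across the data.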
When should normalization never be used?
I thought this was too obvious, until I saw this question! When you normalise data, make sure you always have access to the raw data after normalisation. Of course, you could break this rule if you have a good reason, e.g. storage.
Intro to statistics for an MD?
This is the one I've used successfully: Statistics Without Maths for Psychology: Using Spss for Windows. I just stumbled on this too, this might be useful: Statistics Notes in the British Medical Journal. I'm sure I knew of a free pdf that some doctors I know use, but I can't seem to find it at the moment. I will try to dig it out.
Intro to statistics for an MD?
My book, Intuitive Biostatistics, is written partly from a medical point of view. It focusses on the practical parts of interpreting statistical results, with almost no math.
Intro to statistics for an MD?
I assume your friend prefers something that's biostatistics oriented. Glantz's Primer of Biostatistics is a small book, an easy and quick read, and tends to get rave reviews from a similar audience. If an online reference works, I like Gerard Dallal's Handbook of Statistical Practice, which may do the trick if he's just refreshing previous knowledge.
Intro to statistics for an MD?
Regression Methods in Biostatistics: Linear, Logistic, Survival, and Repeated Measures Models, 2nd Edition, by Eric Vittinghoff, David V. Glidden, Stephen C. Shiboski, Charles E. McCulloch
Logistic Regression: A Self-Learning Text (Statistics for Biology and Health), 3rd Edition, by David G. Kleinbaum, Mitchel Klein
Survival Analysis: A Self-Learning Text (Statistics for Biology and Health), 3rd Edition, by David G. Kleinbaum, Mitchel Klein
Categorical Data Analysis, 2nd Edition, by Alan Agresti
The assumptions here are: a. As an MD their nose is in the literature all the time, and they are encountering the types of statistics that are typically presented in NEJM, JAMA, Lancet, etc. b. They are less concerned with actually writing the code and fitting the analysis than with being a high-caliber consumer, and potential director, of information.
What is the adequate regression model for bounded, continuous but poisson-like data?
Consider ordinal regression. You have data that are ordered but, from your description, it doesn't seem that the difference between scores of 1 and 2 is the same as the difference between scores of, say, between 4 and 5 or between 8 and 9. Frank Harrell recommends this as an approach when you have this type of data, even when there are so many ordered outcome levels that you might think of the outcomes as continuous. Chapter 13 of his Regression Modeling Strategies provides much detail. The R GLMMadaptive package can fit mixed models with ordinal outcomes. This page shows how to proceed.
What 's the $(\Omega,\mathcal{F},P_{\theta} )$ those $T_{n}$ defined on?
I deem a generalized framework formalizing the concepts at work is apt here. For more details, refer to $\rm [I].$ Let $(\Omega, \boldsymbol{\mathfrak A}, \Pr)$ be a probability space. Consider the sequence of probability spaces $\langle (\mathcal X_i, \boldsymbol{\mathfrak A}_i, \mathbf P_i)\rangle_{i=1}^\infty,$ where $(\mathcal X_i, \Vert \cdot \Vert_i)$ is a normed linear space. Consider a sequence of rvs $\langle X_i\rangle_{i=1}^\infty$ and a sequence of real numbers $\langle r_i\rangle_{i=1}^\infty.$ Then $X_n = o_P(r_n)\iff \lim_{n\to\infty}\Pr[\Vert X_n \Vert_n\leq c|r_n|] = 1, ~\forall c >0.$ Now consider a sequence of measurable functions $f_n:\mathcal X_n\to \mathcal R ,~\mathcal R$ being a normed linear space with Borel $\sigma$-field. Define $T_n := f_n(X_n)$ and $T: \Omega \to \mathcal R. $ Then $T_n$ converges in probability to $T$ if and only if $\Vert T_n - T\Vert = o_P(1).$ Now consider a parametric family of distributions $\{\mathbf P_\theta\mid \theta\in \Theta \}$ on a sequence space $\mathcal X^\infty.$ Define a measurable function $g: \Theta\to \mathcal G, ~\mathcal G$ being a metric space with Borel $\sigma$-field. Take $\mathcal X_n = \mathcal X^n$ and take measurable functions $T_n: \mathcal X_n\to \mathcal G.$ Then $T_n$ is consistent for $g(\theta)$ if for each $\theta, ~T_n\overset{\mathbf P}{\to} g(\theta).$ The simplest and most common instance is taking $\Omega = \mathbb R^\infty, ~\mathcal X_n = \mathbb R^n.$ Observe how the underlying probability space is at work here based on the implications of the characterization of the convergence in probability above. 
(Also, as a footnote, one can see how convergence in probability can be generalized: Take $S\subseteq \prod_{i=1}^\infty \mathcal X_i.$ Then $S$ occurs in probability (denoted by $\mathcal P(S)$) if, for each $\varepsilon > 0,$ there exist $S_i(\varepsilon)\in\boldsymbol{\mathfrak A}_i$ such that $\prod_{i=1}^\infty S_i(\varepsilon)\subseteq S$ and $\mathbf P_i(S_i(\varepsilon))\geq 1-\varepsilon$ for each $i.$ To see how powerful it is, consider $f_n:\mathcal X_n \to \mathbb R$ and, as above, take $T_n = f_n(X_n).$ Now, define $S:= \left\{\langle x_i\rangle_{i=1}^\infty\mid\lim_{n\to\infty} f_n(x_n) = 0\right\}.$ Then $T_n = o_{\mathbf P}(1)\iff \mathcal P(S).$) Reference: $\rm [I]$ Theory of Statistics, Mark J. Schervish, Springer-Verlag, $1995,$ sec. $7.1.2,$ pp. $395-398.$
What 's the $(\Omega,\mathcal{F},P_{\theta} )$ those $T_{n}$ defined on?
It is customary in probability or mathematical statistics to encounter statements such as Let $X$ be an absolutely continuous random variable with density $f$ with no reference to underlying probability space. However, we can always supply an appropriate space as follows. Take $\Omega = \mathbb{R}$, $\mathcal{F} =$ Borel sets, $P(B) = \int_B f(x)\,dx$ for all $B\in \mathcal{F}$. If $X(\omega) = \omega$, $\omega \in\Omega$, then $X$ is absolutely continuous and has density $f$. In a sense, it does not make any difference how we arrive at $\Omega$ and $P$; we may equally use a different $\Omega$ and different $P$ and a different $X$, as long as $X$ is absolutely continuous with density $f$. No matter what construction we use, we get the same essential result, that is $$ P(X\in B) = \int_B f(x)\,dx. $$ Therefore, questions about probabilities of events involving $X$ are answered completely by knowledge of the density $f$. This implies that probabilities of events of $T(X_1,\ldots,X_n)$ are also defined by the density of $T$.
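The closing identity is easy to verify numerically. A Python sketch for a standard normal $X$ (any density would do), comparing a direct quadrature of the density over $B = [a, b]$ with the cdf difference:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# For standard normal X, P(X in [a, b]) is fully determined by the density f
a, b = -1.0, 2.0
integral, _ = quad(norm.pdf, a, b)       # int_B f(x) dx by quadrature
via_cdf = norm.cdf(b) - norm.cdf(a)      # the same probability via the cdf
print(integral, via_cdf)
```

Both routes give the same number, reflecting that the probability of any such event is pinned down by $f$ alone, regardless of how the underlying space was constructed.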
What 's the $(\Omega,\mathcal{F},P_{\theta} )$ those $T_{n}$ defined on?
Let's start by setting up each individual $X_i$ as a function $X_i: \Omega_i \to S_i$, with $S_i$ being a set and $\mathcal{F}_i$ and $P_{i, \theta}$ defined appropriately. Now $T_n = T_n(X_1, X_2, ..., X_n)$ is a small abuse of notation as random variables are functions from $\Omega$, while the right hand side is a "deterministic" function $t_n$ from $S_1\times S_2\times ...\times S_n$. So I would rewrite $$ T_n := t_n(X_1, X_2, ..., X_n) $$ with $ \omega_i \in \Omega_i $ and with this we can consider $$ t_n(X_1(\omega_1), X_2(\omega_2),..., X_n(\omega_n)) = T_n(\omega_1, \omega_2, ..., \omega_n)$$ which tells us that $T_n$ is a function from $\Omega_1\times \Omega_2\times ...\times \Omega_n$, which therefore can be used to define $\Omega$. Now $\mathcal{F}$ and $P_\theta$ can be anything as long as $\mathcal{F}_i$ and $P_{i, \theta}$ are their projections down to the individual $\Omega_i$. To answer the questions in your comments I will be a little bit more explicit: If $X_i$ are iid with $(\Omega_x, P_{x, \theta}, \mathcal{F}_x)$, then $\mathcal{F}$ is actually not $\mathcal{F}_x^n$ but instead the $\sigma$-algebra generated by $\mathcal{F}_x^n$ through intersections, complements and countable unions. $P_\theta$ is a probability measure, that means a function from $\mathcal{F}$ to $[0, 1]$ with certain properties. Now if $A \in \mathcal{F}_x^n$ then $A = A_1 \times ... \times A_n, A_i \in \mathcal{F}_x$ and $P_\theta(A) = P_{x, \theta}(A_1) \cdot ...\cdot P_{x, \theta}(A_n)$. If $A\in\mathcal{F}\setminus\mathcal{F}_x^n $ then $P_{\theta}(A)$ is determined by how it was generated from the elements of $\mathcal{F}_x^n$. When you write $P_\theta(|T_n - \theta| < \varepsilon)$ it really means $P_\theta(\{\omega \in \Omega: |T_n(\omega) - \theta| < \varepsilon\})$, with $\{\omega \in \Omega: |T_n(\omega) - \theta| < \varepsilon\} \in \mathcal{F} $
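The product rule $P_\theta(A) = P_{x,\theta}(A_1)\cdots P_{x,\theta}(A_n)$ for rectangles can be checked by simulation. A purely illustrative Python sketch with two independent uniform coordinates:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
x1 = rng.uniform(0, 1, n)  # coordinate on Omega_1
x2 = rng.uniform(0, 1, n)  # coordinate on Omega_2, drawn independently

# Rectangle A = A1 x A2 in the product sigma-algebra, with A1 = [0, 0.3), A2 = [0, 0.6)
in_a1 = x1 < 0.3
in_a2 = x2 < 0.6
joint = np.mean(in_a1 & in_a2)            # Monte Carlo estimate of P(A)
product = np.mean(in_a1) * np.mean(in_a2)  # product of the marginal estimates
print(joint, product)  # both close to 0.3 * 0.6 = 0.18
```

For independent coordinates the joint frequency of the rectangle matches the product of the marginal frequencies, which is exactly the defining property of the product measure on $\mathcal{F}_x^n$.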
Equivalent definition of stochastic dominance
The integration by parts formula still holds for general distribution functions (under appropriate technical conditions). For example, Theorem 18.4 in Probability and Measure by Patrick Billingsley (do not confuse $F, G$ in this theorem with $F, G$ in your question): Let $F$ and $G$ be two nondecreasing, right-continuous functions on an interval $[a, b]$. If $F$ and $G$ have no common points of discontinuity in $(a, b]$, then \begin{align} \int_{(a, b]}G(x)dF(x) = F(b)G(b) - F(a)G(a) - \int_{(a, b]}F(x)dG(x). \tag{1} \end{align} Equation $(1)$ is a good start for proving the result of your interest -- if we can extend the integral region $(a, b]$ to $\mathbb{R}$. To this end, we would need to impose integrability conditions of $u$ (of course, for $(1)$ to hold, we need to also assume that $u$ and $F$ / $G$ have no common discontinuity in $\mathbb{R}$ and $u$ is right-continuous): \begin{align} \int_{\mathbb{R}}|u(x)|dF(x) < \infty, \; \int_{\mathbb{R}}|u(x)|dG(x) < \infty. \tag{2} \end{align} By $(1)$, for every $n$: \begin{align} & \int_{(-n, n]}u(x)dF(x) = F(n)u(n) - F(-n)u(-n) - \int_{(-n, n]}F(x)du(x). \tag{3} \\ & \int_{(-n, n]}u(x)dG(x) = G(n)u(n) - G(-n)u(-n) - \int_{(-n, n]}G(x)du(x). \tag{4} \end{align} It then follows by $(3), (4)$ and $F(x) \geq G(x)$ for all $x \in \mathbb{R}$ that \begin{align} & \int_{(-n, n]}u(x)dF(x) - \int_{(-n, n]}u(x)dG(x) \\ =& u(n)(F(n) - G(n)) - u(-n)(F(-n) - G(-n)) - \int_{(-n, n]}[F(x) - G(x)]du(x) \\ \leq & u(n)(F(n) - G(n)) - u(-n)(F(-n) - G(-n)). \tag{5} \end{align} If $u$ is non-negative, then the right-hand side of $(5)$ is bounded by $u(n)(F(n) - G(n))$, which can be rewritten as $u(n)(1 - G(n)) - u(n)(1 - F(n))$, which converges to $0$ as $n \to \infty$ by the integrability condition $(2)$ and the monotonicity of $u$. Similarly, if $u$ is non-positive, then the right-hand side of $(5)$ is bounded by $-u(-n)(F(-n) - G(-n))$ and converges to $0$ as $n \to \infty$ (see the next paragraph for a more detailed derivation). 
If $u$ takes both negative and positive values, it follows by the monotonicity of $u$ that for sufficiently large $N$, $u(N) > 0$ and $u(-N) < 0$, whence for all $n > N$, again by monotonicity of $u$: \begin{align} 0 \leq u(n)(1 - F(n)) \leq \int_n^\infty u(x)dF(x), \; \int_{-\infty}^{-n}u(x)dF(x) \leq u(-n)F(-n) \leq 0. \tag{6} \end{align} By condition $(2)$ and Lebesgue's dominated convergence theorem (DCT), $(6)$ implies that $u(n)(1 - F(n)) \to 0$ and $u(-n)F(-n) \to 0$ as $n \to \infty$. Similarly, $u(n)(1 - G(n)) \to 0$ and $u(-n)G(-n) \to 0$ as $n \to \infty$. Therefore, the right-hand side of $(5)$ always converges to $0$ as $n \to \infty$ for $u$ that is nondecreasing and integrable. Now the result follows by passing $n \to \infty$ on both sides of $(5)$ (note that condition $(2)$ and DCT imply the left-hand side of $(5)$ converges to $E_F[u] - E_G[u]$).
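The argument above can be sanity-checked numerically. The sketch below (plain Python; grid-based Riemann-Stieltjes sums; all names are mine, and the choice $F = \Phi$, $G = \Phi(\cdot - 1)$, $u = \arctan$ is only an illustrative example satisfying $F \geq G$ pointwise and condition $(2)$) verifies both the integration-by-parts identity $(1)$ and the conclusion $E_F[u] \leq E_G[u]$:

```python
import math

def Phi(x):  # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

F = lambda x: Phi(x)        # CDF of N(0, 1)
G = lambda x: Phi(x - 1.0)  # CDF of N(1, 1), so G(x) <= F(x) for all x
u = math.atan               # nondecreasing and bounded, hence condition (2) holds

a, b, n = -10.0, 10.0, 40_000
xs = [a + (b - a) * k / n for k in range(n + 1)]
Fv = [F(x) for x in xs]
Gv = [G(x) for x in xs]
uv = [u(x) for x in xs]

def rs_int(h, d):  # Riemann-Stieltjes sum of int_(a,b] h dD over the grid
    return sum(h[k] * (d[k] - d[k - 1]) for k in range(1, n + 1))

# Identity (1): int G dF = F(b)G(b) - F(a)G(a) - int F dG
lhs = rs_int(Gv, Fv)
rhs = Fv[n] * Gv[n] - Fv[0] * Gv[0] - rs_int(Fv, Gv)

# Conclusion: F >= G pointwise implies E_F[u] <= E_G[u]
E_F_u = rs_int(uv, Fv)
E_G_u = rs_int(uv, Gv)
```

Since $u = \arctan$ is odd and $F$ is the symmetric standard normal CDF, $E_F[u]$ should be near $0$, while $E_G[u]$ is strictly larger.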
Equivalent definition of stochastic dominance
This is not really a theorem about stochastic dominance: it's a property of areas. It comes down to this lemma, which will be applied in the last two paragraphs:

When $f:\mathbb R\to\mathbb R$ is an integrable function with non-zero norm $|f|=\int |f(x)|\,\mathrm dx \lt \infty$ and $\mathcal A$ is a set of positive measure $|\mathcal A| = \int_{\mathcal A}\mathrm dx \gt 0$ on which the values of $f$ all exceed some positive number $\epsilon \gt 0,$ then there exists an increasing (measurable) function $u$ for which the transformed function $f\circ u$ has a positive integral, $$\int_\mathbb{R}f(u(x))\,\mathrm dx \gt 0.$$

The idea is to make the image of $u$ focus on $\mathcal A$ while practically skipping over everything else: the integral is then at least $\epsilon$ (the minimum value of $f$ on $\mathcal A$) times the measure of $\mathcal A$ -- plus any negative contributions elsewhere. By limiting the latter we wind up with a positive integral. In this illustration, the set $\mathcal A$ is highlighted in orange along the horizontal axis and the area under $f$ over the region $\mathcal A$ is shaded.

One such function $u$ is obtained by inverting the (strictly) increasing function $$v(y) = \int_{-\infty}^y \mathcal{I}_\mathcal{A}(x) + \delta(1-\mathcal{I}_\mathcal{A}(x))\,\mathrm dx$$ for a positive $\delta$ to be determined. ($\mathcal I$ is the indicator function.) This illustration graphs $v$ for $\delta = 0.05.$ Its slopes are $1$ (orange) and $0.05$ (gray). The Fundamental Theorem of Calculus and the rule for differentiating inverse functions show the inverse $u=v^{-1}$ is (a) differentiable with (b) derivative equal to $1$ on $v(\mathcal A)$ and $1/\delta$ elsewhere. 
Writing $v(\mathcal A)^\prime$ for the complement of $v(\mathcal A)$ within the image of $v$ (which is $\mathbb R$ itself), use the standard integral inequalities (Hölder's, for instance) and the change of variables formula for integrals to deduce $$\begin{aligned} \int f(u(x))\,\mathrm dx &= \int_{v(\mathcal A)} f(u(x))\,\mathrm dx + \int_{v(\mathcal A)^\prime} f(u(x))\frac{|u^\prime(x)|}{|u^\prime(x)|}\,\mathrm dx\\ &\ge \int_{v(\mathcal A)} f(u(x))\,\mathrm dx - \left(\sup_{x\in v(\mathcal A)^\prime} \frac{1}{|u^\prime(x)|}\right)\left|\int f(u(x))|u^\prime(x)|\,\mathrm dx\right|\\ &\ge |\mathcal A|\epsilon - \delta|f|. \end{aligned}$$ Taking $\delta = |\mathcal A|\epsilon / (2|f|)$ produces a strictly positive value, proving the lemma.

This illustration of the graph of $f\circ u$ shows how the horizontal axis has been squeezed at all places where $f\lt \epsilon,$ thereby giving the entire integral a positive value. Making $\delta$ sufficiently close to zero will effectively eliminate the dips in the graph below $\epsilon.$

As a corollary, applying the lemma to $-f$ shows that when there is a set of positive measure on which $f$ has negative values below $-\epsilon \lt 0,$ then there is an increasing function $u$ for which $f\circ u$ has a negative integral. Consequently, if for all increasing (measurable) functions $u$ the integral in the lemma is positive, it follows that the set of places where $f$ has a negative value has measure zero. That's the heart of the matter.

Let's pause to notice two things. The first is technical: in this construction of $u,$ $u^{-1}$ is also almost everywhere differentiable and therefore continuous and measurable, allowing us to focus on such "nice" functions. 
The second is probabilistic: when $F_X$ is the distribution function of a random variable $X$ -- that is, $F_X(x)=\Pr(X\le x)$ -- and $u$ is an increasing (measurable) function with an increasing (measurable) inverse $u^{-1},$ then the distribution function of $u^{-1}(X)$ is $$F_{u^{-1}(X)}(y) = \Pr(u^{-1}(X)\lt y) = \Pr(X \le u(y)) = F_X(u(y)).$$ That is, $F_{u^{-1}(X)} = F_X\circ u.$

Now observe that when $F$ and $G$ are distinct distribution functions for a random variable $X$ and $u$ is an increasing (measurable) function, $$E_G[u^{-1}(X)] - E_F[u^{-1}(X)] = \int F(u(x)) - G(u(x))\,\mathrm dx = \int (F-G)(u(x))\,\mathrm dx.$$ (For the elementary proof see Expectation of a function of a random variable from CDF for instance. It's just an integration by parts.)

Proof of the theorem

Applying the corollary to the function $f = F-G$ (which has a nonzero norm since $F$ and $G$ are distinct), under the assumption $f$ has finite norm, shows that when all such integrals are positive, the set on which $F-G$ is negative has measure zero: that is, $G$ stochastically dominates $F,$ QED.

We can eliminate the finite-norm assumption by noting that $F-G$ can have an infinite norm only by diverging at infinity: it cannot have vertical asymptotes. (The values are differences of probabilities, whence they are bounded by $\pm 1.$) Consequently we can approximate $F-G$ on an expanding sequence of compact sets, such as the intervals $(-n,n)$ for $n=1,2,3,\cdots,$ and apply a limiting argument. But that should be viewed as a technicality, because the underlying idea remains the same, as expressed in the lemma.
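The lemma can also be illustrated numerically with a concrete, hypothetical piecewise-constant $f$ of my own choosing: $f = 1$ on $\mathcal A = [0,1]$ (so $\epsilon = 1$, $|\mathcal A| = 1$), $f = -0.3$ on $(1, 5]$, and $0$ elsewhere, giving $\int f = -0.2 \lt 0$. Compressing the complement of $\mathcal A$ with the slope $\delta = |\mathcal A|\epsilon/(2|f|)$ nonetheless makes the transformed integral positive; by the change of variables $x = v(y)$, $\int f(u(x))\,\mathrm dx = \int f(y)\,v'(y)\,\mathrm dy$, which this plain-Python sketch evaluates:

```python
# Hypothetical f: +1 on A = [0, 1], -0.3 on (1, 5], 0 elsewhere.
eps, meas_A = 1.0, 1.0
norm_f = 1.0 * 1.0 + 0.3 * 4.0           # |f| = 2.2
delta = meas_A * eps / (2.0 * norm_f)    # the delta chosen in the proof

def f(y):
    if 0.0 <= y <= 1.0:
        return 1.0
    if 1.0 < y <= 5.0:
        return -0.3
    return 0.0

def v_prime(y):  # slope of v: 1 on A, delta elsewhere
    return 1.0 if 0.0 <= y <= 1.0 else delta

# Change of variables x = v(y): int f(u(x)) dx = int f(y) v'(y) dy.
n = 100_000
h = 5.0 / n
ys = [h * (k + 0.5) for k in range(n)]   # midpoint rule on [0, 5]
int_f = h * sum(f(y) for y in ys)                      # about -0.2
int_f_after_u = h * sum(f(y) * v_prime(y) for y in ys) # positive
```

With this choice, $\int f(u(x))\,\mathrm dx = 1 - 1.2\delta \approx 0.73$, comfortably above the lemma's guaranteed lower bound $|\mathcal A|\epsilon - \delta|f| = 0.5$.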
How to compute a prediction interval from ordinary least squares regression output alone?
$S_{xx},$ the sum of squares of the explanatory variable, is easy to obtain from the formula $$\operatorname{se}(\hat\beta_1) = \sqrt{\frac{MS_{Res}}{S_{xx}}}$$ where the left hand side is the standard error of the slope, given as $1373$ in the question, and $MS_{Res}$ is the mean squared residual, whose square root (the "residual standard error") is given as $36600$ in the question.

The mean of the explanatory variable can almost be recovered from the formula for the estimated sampling variance of the intercept, $$\widehat{\operatorname{Var}}(\hat\beta_0) = MS_{Res}\left(\frac{1}{n} + \frac{\bar x^2}{S_{xx}}\right).$$ In the question, the left hand side is the square of the standard error, $\widehat{\operatorname{Var}}(\hat\beta_0) = 8004^2,$ and $n = 98 + 2$ is found by adding the number of estimated coefficients to the "degrees of freedom" reported for the $F$ ratio statistic. Solving this for $\bar x$ usually gives two possible values. Unless you have some sense of what the value should be (one solution is positive and the other is negative), you're stuck (because, as you clearly are aware, the prediction interval at any value $x_0$ depends on its distance from $\bar x,$ and the only value where that distance does not depend on the solution is $x_0=0$).

As an example of the problem, here is R code to manufacture two different datasets with differing values of $\bar x$ and identical ordinary least squares output.

x <- seq(2, 10, length.out = 30)
y <- x + rnorm(length(x))
fit <- lm(y ~ x)
b <- coefficients(fit)
x.bar <- mean(x)
x <- x - 2 * x.bar
y <- y - 2 * b[2] * x.bar
all.equal(summary(fit), summary(lm(y ~ x)))

It alters the initial data by subtracting $2\bar x$ from all $x$ values, subtracting $2\hat\beta_1 \bar x$ from all $y$ values to keep the coefficient estimates the same, and comparing their summaries. 
Its output is

Component “cov.unscaled”: Mean relative difference: 0.4387476

That is, the only difference between the two datasets lies in the estimated covariance between $\hat\beta_0$ and $\hat\beta_1$ -- but that is not part of your regression output. (The sign of this covariance will differ in the two datasets.) If it could be recovered from the output then some other numbers in the output would have to differ, too, but that's not the case. Here is a plot of the original data (blue; $\bar x = 6$) and their transformed version (red; $\bar x = -6$). The line is the common least squares fit.
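For completeness, the recoverable quantities can be computed directly from the quoted output by inverting the two formulas above. This small Python sketch (variable names are mine; the numbers are the ones cited from the question) yields $S_{xx}$ and the two candidate values of $\bar x$:

```python
import math

# Output quoted in the question: se(beta_1) = 1373, residual standard
# error = 36600, se(beta_0) = 8004, residual df = 98, hence n = 98 + 2.
se_b1, rse, se_b0, n = 1373.0, 36600.0, 8004.0, 100

MS_res = rse ** 2
# Invert se(beta_1)^2 = MS_res / S_xx:
S_xx = MS_res / se_b1 ** 2
# Invert Var-hat(beta_0) = MS_res * (1/n + x_bar^2 / S_xx):
x_bar_sq = S_xx * (se_b0 ** 2 / MS_res - 1.0 / n)
x_bar_candidates = (-math.sqrt(x_bar_sq), math.sqrt(x_bar_sq))
```

The output alone cannot distinguish the two candidates, which is exactly the ambiguity the answer describes.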