12,001
How to tell if my data distribution is symmetric?
No doubt you have been told otherwise, but mean $=$ median does not imply symmetry. There's a measure of skewness based on mean minus median (the second Pearson skewness), but it can be 0 when the distribution is not symmetric (like any of the common skewness measures). Similarly, the relationship between mean and median doesn't necessarily imply a similar relationship between the midhinge ($(Q_1+Q_3)/2$) and median; they can suggest opposite skewness, or one may equal the median while the other doesn't.

One way to investigate symmetry is via a symmetry plot*. If $Y_{(1)}, Y_{(2)}, \ldots, Y_{(n)}$ are the ordered observations from smallest to largest (the order statistics), and $M$ is the median, then a symmetry plot plots $Y_{(n)}-M$ vs $M-Y_{(1)}$, $Y_{(n-1)}-M$ vs $M-Y_{(2)}$, and so on.

* Minitab can do those. Indeed, I raise this plot as a possibility because I've seen them done in Minitab.

Here are four examples:

[Figure: Symmetry plots, four panels]

(The actual distributions were, left to right, top row first: Laplace, Gamma(shape=0.8), beta(2,2) and beta(5,2). The code is Ross Ihaka's, from here.)

With heavy-tailed symmetric examples, it's often the case that the most extreme points can be very far from the line; you would pay less attention to the distance from the line of one or two points as you near the top right of the figure.

There are, of course, other plots (I mentioned the symmetry plot not from any particular sense of advocacy of that one, but because I knew it was already implemented in Minitab). So let's explore some others. Here are the corresponding skewness plots that Nick Cox suggested in comments:

[Figure: Skewness plots, four panels]

In these plots, a trend up would indicate a typically heavier right tail than left, a trend down would indicate a typically heavier left tail than right, and symmetry would be suggested by a relatively flat (though perhaps fairly noisy) plot.

Nick suggests that this plot is better (specifically "more direct"). I am inclined to agree; the interpretation of the plot seems consequently a little easier, though the information in the corresponding plots is often quite similar (after you subtract the unit slope in the first set, you get something very like the second set).

[Of course, none of these things will tell us that the distribution the data were drawn from is actually symmetric; we get an indication of how near-to-symmetric the sample is, and to that extent we can judge whether the data are reasonably consistent with being drawn from a near-symmetrical population.]
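The answer uses Minitab (and R code by Ross Ihaka) for the plots; as a rough sketch of the same idea, here is how the symmetry-plot coordinates can be computed in Python with NumPy. For a sample symmetric about its median, the points fall on the 45-degree line.

```python
import numpy as np

def symmetry_plot_points(y):
    """Coordinates for a symmetry plot: distance of the i-th smallest
    point below the median vs distance of the i-th largest point above
    it. Points near the 45-degree line suggest symmetry."""
    y = np.sort(np.asarray(y, dtype=float))
    m = np.median(y)
    half = len(y) // 2
    below = m - y[:half]          # M - Y_(1), M - Y_(2), ...
    above = y[::-1][:half] - m    # Y_(n) - M, Y_(n-1) - M, ...
    return below, above

# Example with a roughly symmetric sample
rng = np.random.default_rng(0)
b, a = symmetry_plot_points(rng.normal(size=1001))
# To draw it: plt.scatter(b, a); plt.axline((0, 0), slope=1)
```

(The plotting call in the last comment assumes matplotlib; the computation itself needs only NumPy.)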
12,002
How to tell if my data distribution is symmetric?
The easiest thing is to compute the sample skewness. There's a function in Minitab for that. A symmetric distribution has zero skewness. Zero skewness doesn't necessarily mean symmetric, but in most practical cases it would. As @NickCox noted, there's more than one definition of skewness. I use the one that's compatible with Excel, but you can use any other.
12,003
How to tell if my data distribution is symmetric?
Center your data around zero by subtracting off the sample mean. Now split your data into two parts, the negative and the positive. Take the absolute value of the negative data points. Now do a two-sample Kolmogorov-Smirnov test by comparing the two partitions to each other. Make your conclusion based on the p-value.
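The procedure is straightforward to sketch with SciPy's two-sample KS test. (One caveat the answer glosses over: the two halves come from the same centered sample, so they are not fully independent; treat the p-value as a rough guide.)

```python
import numpy as np
from scipy import stats

def symmetry_ks_test(x):
    """The procedure from the answer: center at the sample mean,
    split into negative and positive parts, take absolute values of
    the negatives, and compare the two parts with a two-sample KS test.
    A small p-value suggests asymmetry."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    neg = -x[x < 0]
    pos = x[x > 0]
    return stats.ks_2samp(neg, pos)

rng = np.random.default_rng(2)
print(symmetry_ks_test(rng.normal(size=2000)).pvalue)       # typically large
print(symmetry_ks_test(rng.exponential(size=2000)).pvalue)  # tiny: asymmetric
```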
12,004
How to tell if my data distribution is symmetric?
Put your observations sorted in increasing order in one column, then put them sorted in decreasing order in another column. Then compute the correlation coefficient (call it Rm) between these two columns, and compute the chiral index CHI = (1 + Rm)/2. CHI takes values in the interval [0, 1]. CHI is null if and only if your sample is symmetrically distributed. No need for the third moment. Theory: http://petitjeanmichel.free.fr/itoweb.petitjean.skewness.html http://petitjeanmichel.free.fr/itoweb.petitjean.html (most papers cited in these two pages are downloadable there as pdf). Hope it helps, even belatedly.
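The two-column construction maps directly to a few lines of NumPy; a small sketch:

```python
import numpy as np

def chiral_index(x):
    """CHI = (1 + Rm) / 2, where Rm is the correlation between the
    sample sorted ascending and the same sample sorted descending.
    CHI lies in [0, 1]; it is 0 exactly when the sorted values pair
    up symmetrically about their midpoint."""
    asc = np.sort(np.asarray(x, dtype=float))
    desc = asc[::-1]
    rm = np.corrcoef(asc, desc)[0, 1]
    return (1.0 + rm) / 2.0

print(chiral_index([-2, -1, 0, 1, 2]))  # 0: perfectly symmetric sample
print(chiral_index([0, 0, 0, 1, 10]))   # > 0: skewed sample
```

For a perfectly symmetric sample the descending column is a linear function of the ascending one with slope -1, so Rm = -1 and CHI = 0.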
12,005
difference between R square and rmse in linear regression [duplicate]
Assume that you have $n$ observations $y_i$ and an estimator that produces estimates $\hat{y}_i$. The mean squared error is $MSE=\frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)^2$, and the root mean squared error is its square root, $RMSE=\sqrt{MSE}$.

The $R^2$ is $R^2=1-\frac{SSE}{TSS}$, where $SSE$ is the sum of squared errors, $SSE=\sum_{i=1}^n (y_i - \hat{y}_i)^2$, which by definition equals $n \times MSE$. The $TSS$ is the total sum of squares, $TSS=\sum_{i=1}^n (y_i - \bar{y})^2$, where $\bar{y}=\frac{1}{n}\sum_{i=1}^n y_i$. So $R^2=1-\frac{n \times MSE}{\sum_{i=1}^n (y_i - \bar{y})^2}$.

For a regression with an intercept, $R^2$ is between 0 and 1, and from its definition $R^2=1-\frac{SSE}{TSS}$ we can find an interpretation: $\frac{SSE}{TSS}$ is the sum of squared errors divided by the total sum of squares, i.e. the fraction of the total sum of squares that is contained in the error term. One minus this is the fraction of the total sum of squares that is not in the error, so $R^2$ is the fraction of the total sum of squares that is 'explained by' the regression.

The RMSE is a measure of the average deviation of the estimates from the observed values (this is what @user3796494 also said).

For $R^2$ you can also take a look at Can the coefficient of determination $R^2$ be more than one? What is its upper bound?
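The definitions above translate directly into code; a short NumPy sketch that computes all three quantities and makes the identity $R^2 = 1 - n \cdot MSE / TSS$ explicit:

```python
import numpy as np

def mse_rmse_r2(y, yhat):
    """MSE, RMSE and R^2 exactly as defined in the answer:
    R^2 = 1 - SSE/TSS = 1 - n*MSE / sum((y_i - ybar)^2)."""
    y = np.asarray(y, dtype=float)
    yhat = np.asarray(yhat, dtype=float)
    n = len(y)
    mse = np.mean((y - yhat) ** 2)
    rmse = np.sqrt(mse)
    tss = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - n * mse / tss
    return mse, rmse, r2

# Perfect predictions give MSE = 0 and R^2 = 1; predicting the mean
# of y for every observation gives R^2 = 0.
print(mse_rmse_r2([1, 2, 3], [1, 2, 3]))
print(mse_rmse_r2([1, 2, 3], [2, 2, 2]))
```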
12,006
difference between R square and rmse in linear regression [duplicate]
Both indicate the goodness of the fit. R-squared is conveniently scaled between 0 and 1, whereas RMSE is not scaled to any particular values. This can be good or bad; obviously R-squared can be more easily interpreted, but with RMSE we explicitly know how much our predictions deviate, on average, from the actual values in the dataset. So in a way, RMSE tells you more. I also found this video really helpful.
12,007
How to setup xreg argument in auto.arima() in R? [closed]
The main problem is that your xreg is not a matrix. I think the following code does what you want. I've used some artificial data to check that it works.

    library(forecast)

    # Create some artificial data
    modelfitsample <- data.frame(Customer_Visit=rpois(49,3000),
                                 Weekday=rep(1:7,7),
                                 Christmas=c(rep(0,40),1,rep(0,8)),
                                 Day=1:49)

    # Create matrix of numeric predictors
    xreg <- cbind(Weekday=model.matrix(~as.factor(modelfitsample$Weekday)),
                  Day=modelfitsample$Day,
                  Christmas=modelfitsample$Christmas)

    # Remove intercept
    xreg <- xreg[,-1]

    # Rename columns
    colnames(xreg) <- c("Mon","Tue","Wed","Thu","Fri","Sat","Day","Christmas")

    # Variable to be modelled
    visits <- ts(modelfitsample$Customer_Visit, frequency=7)

    # Find ARIMAX model
    modArima <- auto.arima(visits, xreg=xreg)
12,008
Low classification accuracy, what to do next?
First of all, if your classifier doesn't do better than a random choice, there is a risk that there simply is no connection between features and class. A good question to ask yourself in such a position is whether you or a domain expert could infer the class (with accuracy greater than a random classifier) based on the given features. If not, then getting more data rows or changing the classifier won't help; what you need is more data with different features.

If, on the other hand, you think the information needed to infer the class is already in the features, you should check whether your classifier suffers from a high-bias or a high-variance problem. To do this, graph the validation error and the training error as functions of the number of training examples.

If the lines seem to converge to the same value and are close at the end, then your classifier has high bias and adding more data won't help. A good idea in this case is to either change to a classifier with higher variance, or simply lower the regularization parameter of your current one.

If, on the other hand, the lines are quite far apart, and you have a low training error but a high validation error, then your classifier has too high variance. In this case getting more data is very likely to help. If after getting more data the variance is still too high, you can increase the regularization parameter.

These are the general rules I would use when faced with a problem like yours. Cheers.
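The learning-curve diagnosis can be sketched in a few lines; here polynomial regression stands in for whatever classifier you are using (an illustrative assumption, not part of the original answer), with error measured by MSE:

```python
import numpy as np

def learning_curve(x, y, x_val, y_val, degree, sizes):
    """Training and validation MSE of a degree-`degree` polynomial fit
    as a function of training-set size. Converging, similar errors
    suggest high bias; a persistent gap suggests high variance."""
    train_err, val_err = [], []
    for m in sizes:
        coef = np.polyfit(x[:m], y[:m], degree)
        train_err.append(np.mean((np.polyval(coef, x[:m]) - y[:m]) ** 2))
        val_err.append(np.mean((np.polyval(coef, x_val) - y_val) ** 2))
    return np.array(train_err), np.array(val_err)

rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x) + rng.normal(scale=0.1, size=200)
xv = rng.uniform(-1, 1, 200)
yv = np.sin(3 * xv) + rng.normal(scale=0.1, size=200)
# A degree-1 model is too rigid for sin(3x): both errors converge
# to a similar, large value -- the high-bias signature.
tr, va = learning_curve(x, y, xv, yv, degree=1, sizes=[10, 50, 200])
```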
12,009
Low classification accuracy, what to do next?
I would suggest taking a step back and doing some exploratory data analysis before attempting classification. It is worth examining your features on an individual basis to see if there is any relationship with the outcome of interest: it may be that the features you have do not have any association with the class labels. How do you know if the features you have will be any use? You could start with hypothesis testing or correlation analysis to test for relationships. Generating class-specific histograms for features (i.e. plotting histograms of the data for each class, for a given feature, on the same axis) can also be a good way to show whether a feature discriminates well between the two classes.

It is important, though, not to let the results of your exploratory analysis influence your choices for classification. Choosing features for classification based on a prior exploratory analysis on the same data can lead to overfitting and biased performance estimates (see discussion here), but an exploratory analysis will at least give you an idea of whether the task you are trying to do is even possible.
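As one possible shape for such a screen (the test choice here, Welch's t-test, is an illustrative assumption; what is appropriate depends on your data):

```python
import numpy as np
from scipy import stats

def feature_screen(x, labels):
    """Per-feature screen for a binary problem: two-sample t-test of a
    numeric feature between the classes, plus per-class histogram counts
    on shared bins (the class-specific histograms from the answer)."""
    a, b = x[labels == 0], x[labels == 1]
    t, p = stats.ttest_ind(a, b, equal_var=False)
    bins = np.histogram_bin_edges(x, bins=20)
    hist_a, _ = np.histogram(a, bins=bins)
    hist_b, _ = np.histogram(b, bins=bins)
    return p, hist_a, hist_b

rng = np.random.default_rng(5)
labels = np.repeat([0, 1], 500)
informative = np.concatenate([rng.normal(0, 1, 500), rng.normal(1, 1, 500)])
noise = rng.normal(size=1000)
p_info, *_ = feature_screen(informative, labels)   # tiny p: discriminative
p_noise, *_ = feature_screen(noise, labels)        # large p: likely useless
```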
12,010
Low classification accuracy, what to do next?
Why not follow the principle "look at plots of the data first"? One thing you can do is a 2-D scatterplot of the two class conditional densities for two covariates. If you look at these and see practically no separation, that could indicate a lack of predictability; you can do this with all the pairs of covariates. That gives you some ideas about the ability to use these covariates to predict. If you see some hope that these variables can separate a little then start thinking about linear discriminants, quadratic discriminants, kernel discrimination, regularization, tree classification, SVM etc.
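Alongside looking at the scatterplots themselves, a crude numeric companion can flag which covariate pairs show any separation at all (this score is an illustrative assumption, not part of the original answer, and is no substitute for the plots):

```python
import numpy as np

def pairwise_separation(X, labels):
    """For every pair of covariates, a crude separation score: distance
    between the two class means in the pair's 2-D space, divided by the
    average within-class spread. Low scores for all pairs hint that the
    covariates may have little predictive value."""
    scores = {}
    for i in range(X.shape[1]):
        for j in range(i + 1, X.shape[1]):
            P = X[:, [i, j]]
            a, b = P[labels == 0], P[labels == 1]
            d = np.linalg.norm(a.mean(axis=0) - b.mean(axis=0))
            spread = 0.5 * (a.std(axis=0).mean() + b.std(axis=0).mean())
            scores[(i, j)] = d / spread
    return scores

rng = np.random.default_rng(6)
labels = np.repeat([0, 1], 300)
X = rng.normal(size=(600, 3))
X[labels == 1, 0] += 2.0   # only covariate 0 separates the classes
scores = pairwise_separation(X, labels)
# Pairs involving covariate 0 score high; the (1, 2) pair scores near 0
```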
12,011
Low classification accuracy, what to do next?
It's good that you separated your data into the training data and test data. Did your training error go down when you trained? If not, then you may have a bug in your training algorithm. You expect the error on your test set to be greater than the error on your training set, so if you have an unacceptably high error on your training set there is little hope of success. Getting rid of features can avoid some types of overfitting. However, it should not improve the error on your training set. A low error on your training set and a high error on your test set might be an indication that you overfit using an overly flexible feature set. However, it is safer to check this through cross-validation than on your test set. Once you select your feature set based on your test set, it is no longer valid as a test set.
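A minimal sketch of the k-fold cross-validation the answer recommends for comparing feature sets, with polynomial degree standing in for "feature-set flexibility" (an illustrative assumption):

```python
import numpy as np

def kfold_mse(x, y, degree, k=5):
    """k-fold cross-validated MSE of a polynomial fit, for comparing
    model/feature choices without ever touching the held-out test set."""
    idx = np.arange(len(x))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        coef = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(coef, x[fold]) - y[fold]) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(7)
x = np.linspace(-1, 1, 100)
y = x ** 2 + rng.normal(scale=0.1, size=100)
perm = rng.permutation(100)          # shuffle so folds are random
x, y = x[perm], y[perm]
# Compare flexibilities by CV; the test set stays untouched throughout
cv2 = kfold_mse(x, y, degree=2)
cv15 = kfold_mse(x, y, degree=15)
```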
12,012
Is AR(1) a Markov process?
The following result holds: If $\epsilon_1, \epsilon_2, \ldots$ are independent taking values in $E$ and $f_1, f_2, \ldots $ are functions $f_n: F \times E \to F$ then with $X_n$ defined recursively as $$X_n = f_n(X_{n-1}, \epsilon_n), \quad X_0 = x_0 \in F$$ the process $(X_n)_{n \geq 0}$ in $F$ is a Markov process starting at $x_0$. The process is time-homogeneous if the $\epsilon$'s are identically distributed and all the $f$-functions are identical. The AR(1) and VAR(1) are both processes given in this form with $$f_n(x, \epsilon) = \rho x + \epsilon.$$ Thus they are homogeneous Markov processes if the $\epsilon$'s are i.i.d. Technically, the spaces $E$ and $F$ need a measurable structure and the $f$-functions must be measurable. It is quite interesting that a converse result holds if the space $F$ is a Borel space. For any Markov process $(X_n)_{n \geq 0}$ on a Borel space $F$ there are i.i.d. uniform random variables $\epsilon_1, \epsilon_2, \ldots$ in $[0,1]$ and functions $f_n : F \times [0, 1] \to F$ such that with probability one $$X_n = f_n(X_{n-1}, \epsilon_n).$$ See Proposition 8.6 in Kallenberg, Foundations of Modern Probability.
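The recursive construction is easy to simulate; a small NumPy sketch with $f(x, \epsilon) = \rho x + \epsilon$, i.e. the AR(1) case:

```python
import numpy as np

def recursive_process(f, eps, x0):
    """X_n = f(X_{n-1}, eps_n): the recursive construction from the
    answer, a (homogeneous) Markov process when the eps are i.i.d."""
    x = [x0]
    for e in eps:
        x.append(f(x[-1], e))
    return np.array(x)

rho = 0.7
rng = np.random.default_rng(8)
eps = rng.normal(size=100_000)
x = recursive_process(lambda x, e: rho * x + e, eps, x0=0.0)  # AR(1)
# The stationary variance of this AR(1) is 1/(1 - rho^2) ~ 1.96,
# which the long-run sample variance approaches:
print(x[1000:].var())
```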
12,013
Is AR(1) a Markov process?
A process $X_{t}$ is an AR(1) process if $$X_{t} = c + \varphi X_{t-1} + \varepsilon_{t}$$ where the errors $\varepsilon_{t}$ are iid. A process has the Markov property if $$P(X_{t} = x_t \mid \text{entire history of the process}) = P(X_{t}=x_t \mid X_{t-1}=x_{t-1}).$$ From the first equation, the probability distribution of $X_{t}$ clearly depends only on $X_{t-1}$, so, yes, an AR(1) process is a Markov process.
12,014
Is AR(1) a Markov process?
What is a Markov process? (loosely speaking) A stochastic process is a first order Markov process if the condition $$P\left [ X\left ( t \right )= x\left ( t \right ) | X\left ( 0 \right )= x\left ( 0 \right ),...,X\left ( t-1 \right )= x\left ( t-1 \right )\right ]=P\left [ X\left ( t \right )= x\left ( t \right ) | X\left ( t-1 \right )= x\left ( t-1 \right )\right ]$$ holds. Since the next value (i.e. the distribution of the next value) of an $AR(1)$ process depends only on the current process value and not on the rest of the history, it is a Markov process. When we observe the state of an autoregressive process, the past history (or observations) supply no additional information. This implies that the probability distribution of the next value is not affected by (is independent of) our information about the past. The same holds for VAR(1), which is a first-order multivariate Markov process.
12,015
How can I calculate the confidence interval of a mean in a non-normally distributed sample?
First of all, I would check whether the mean is an appropriate index for the task at hand. If you are looking for a "typical" or central value of a skewed distribution, the mean might point you to a rather non-representative value. Consider the log-normal distribution: x <- rlnorm(1000) plot(density(x), xlim=c(0, 10)) abline(v=mean(x), col="red") abline(v=mean(x, trim=.20), col="darkgreen") abline(v=median(x), col="blue") The mean (red line) is rather far away from the bulk of the data. The 20% trimmed mean (green) and the median (blue) are closer to the "typical" value. The results depend on the type of your "non-normal" distribution (a histogram of your actual data would be helpful). If it is not skewed, but has heavy tails, your CIs will be very wide. In any case, I think that bootstrapping is indeed a good approach, as it can also give you asymmetric CIs. The R package simpleboot is a good start: library(simpleboot) # 20% trimmed mean bootstrap b1 <- one.boot(x, mean, R=2000, trim=.2) boot.ci(b1, type=c("perc", "bca")) ... gives you the following result: # The bootstrap trimmed mean: > b1$t0 [1] 1.144648 BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS Based on 2000 bootstrap replicates Intervals : Level Percentile BCa 95% ( 1.062, 1.228 ) ( 1.065, 1.229 ) Calculations and Intervals on Original Scale
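If R is not at hand, the same percentile-bootstrap idea can be sketched with only the Python standard library. The 20% trimming and 2000 replicates mirror the simpleboot call above; the helper names are made up for this example, and this is only the plain percentile interval (no BCa correction):

```python
import random
from math import exp

def trimmed_mean(xs, trim=0.2):
    """Mean after discarding the lowest and highest `trim` fraction of the sample."""
    xs = sorted(xs)
    k = int(len(xs) * trim)
    kept = xs[k:len(xs) - k]
    return sum(kept) / len(kept)

def percentile_ci(xs, stat, reps=2000, alpha=0.05, seed=1):
    """Percentile bootstrap interval for an arbitrary statistic."""
    rng = random.Random(seed)
    n = len(xs)
    boots = sorted(stat([rng.choice(xs) for _ in range(n)]) for _ in range(reps))
    return boots[int(reps * alpha / 2)], boots[int(reps * (1 - alpha / 2)) - 1]

rng = random.Random(42)
x = [exp(rng.gauss(0.0, 1.0)) for _ in range(1000)]  # a log-normal sample, as in the R demo
lo, hi = percentile_ci(x, trimmed_mean)
```

Because the resampling distribution of the trimmed mean is itself skewed here, the resulting interval need not be symmetric about the point estimate.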
12,016
How can I calculate the confidence interval of a mean in a non-normally distributed sample?
If you are open to a semi-parametric solution, here's one: Johnson, N. (1978) Modified t Tests and Confidence Intervals for Asymmetrical Populations, JASA. The center of the confidence interval is shifted by $\hat\kappa/(6s^2n)$, where $\hat\kappa$ is the estimate of the population third moment, and the width stays the same. Given that the width of the confidence interval is $O(n^{-1/2})$, and the correction for the mean is $O(n^{-1})$, you need to have a really sizable skewness (of the order $n^{1/2}>20$) for it to matter with $n>400$. The bootstrap should give you an asymptotically equivalent interval, but you would also have the simulation noise added to the picture. (The bootstrap CI corrects for the same first order term automatically, according to the general Bootstrap and Edgeworth Expansion (Hall 1995) theory.) From what I can recall of the simulation evidence, though, the bootstrap CIs are somewhat fatter than the CIs based on the analytic expressions. Having the analytic form of the mean correction would give you an immediate idea of whether the skewness really needs to be taken into account in your mean estimation problem. In a way, this is a diagnostic tool of how bad the situation is. In the example of the lognormal distribution given by Felix, the normalized skewness of the population distribution is $(\exp(1)+2)*\sqrt{ \exp(1) - 1}$, which is kappa = (exp(1)+2)*sqrt( exp(1) - 1) = 6.184877. The width of the CI (using the standard deviation of the population distribution, s = sqrt( (exp(1)-1)*exp(1) ) = 2.161197) is 2*s*qnorm(0.975)/sqrt(n) = 0.2678999, while the correction for the mean is kappa*s/(6*n) = 0.00222779 (the standard deviation migrated to the numerator since kappa is the scale-free skewness, while Johnson's formula deals with the unscaled population third central moment), i.e., about 1/100th of the width of the CI. Should you bother? I'd say, no.
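The back-of-the-envelope comparison in the last paragraph is easy to reproduce. A Python sketch, assuming $n = 1000$ (the sample size implied by the quoted numbers) and $z_{0.975} \approx 1.959964$:

```python
from math import exp, sqrt

n = 1000
z = 1.959964                                 # ~ qnorm(0.975)
kappa = (exp(1) + 2) * sqrt(exp(1) - 1)      # skewness of the standard log-normal
s = sqrt((exp(1) - 1) * exp(1))              # its standard deviation
width = 2 * s * z / sqrt(n)                  # width of the normal-theory CI
correction = kappa * s / (6 * n)             # Johnson's shift of the CI centre
```

The shift is under one percent of the interval width, which is the quantitative backing for the "should you bother? no" conclusion.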
12,017
How can I calculate the confidence interval of a mean in a non-normally distributed sample?
Try a log-normal distribution, calculating: (1) the logarithm of the data; (2) the mean and standard deviation of (1); (3) the confidence interval corresponding to (2); (4) the exponential of (3). You'll end up with an asymmetric confidence interval around the geometric mean of the data (which is not the arithmetic mean of the raw data).
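A minimal Python sketch of the four steps (note that back-transforming the log-scale interval brackets the geometric mean of the data, not the arithmetic mean; the helper name is made up for the example):

```python
from math import exp, log, sqrt

def lognormal_ci(data, z=1.959964):
    """CI computed on the log scale and back-transformed to the raw scale."""
    logs = [log(v) for v in data]                          # (1) logarithm of the data
    n = len(logs)
    m = sum(logs) / n
    sd = sqrt(sum((v - m) ** 2 for v in logs) / (n - 1))   # (2) mean and sd of the logs
    half = z * sd / sqrt(n)                                # (3) CI on the log scale
    return exp(m - half), exp(m + half)                    # (4) exponentiate the endpoints

lo, hi = lognormal_ci([exp(-1.0), exp(0.0), exp(1.0)])
```

The interval is symmetric on the log scale, so on the raw scale the two endpoints multiply to the square of the back-transformed centre, giving the asymmetry described above.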
12,018
James-Stein estimator: How did Efron and Morris calculate $\sigma^2$ in shrinkage factor for their baseball example?
The parameter $\sigma^2$ is the (unknown) common variance of the vector components, each of which we assume are normally distributed. For the baseball data we have $45 \cdot Y_i \sim \mathsf{binom}(45,p_i)$, so the normal approximation to the binomial distribution gives (taking $ \hat{p_{i}} = Y_{i}$) $$ \hat{p}_{i}\approx \mathsf{norm}(\mathtt{mean}=p_{i},\mathtt{var} = p_{i}(1-p_{i})/45). $$ Obviously in this case the variances are not equal, yet if they had been equal to a common value then we could estimate it with the pooled estimator $$ \hat{\sigma}^2 = \frac{\hat{p}(1 - \hat{p})}{45}, $$ where $\hat{p}$ is the grand mean $$ \hat{p} = \frac{1}{18\cdot 45}\sum_{i = 1}^{18}45\cdot{Y_{i}}=\overline{Y}. $$ It looks as though this is what Efron and Morris have done (in the 1977 paper). You can check this with the following R code. Here are the data: y <- c(0.4, 0.378, 0.356, 0.333, 0.311, 0.311, 0.289, 0.267, 0.244, 0.244, 0.222, 0.222, 0.222, 0.222, 0.222, 0.2, 0.178, 0.156) and here is the estimate for $\sigma^2$: s2 <- mean(y)*(1 - mean(y))/45 which is $\hat{\sigma}^2 \approx 0.004332392$. The shrinkage factor in the paper is then 1 - 15*s2/(17*var(y)) which gives $c \approx 0.2123905$. Note that in the second paper they made a transformation to sidestep the variance problem (as @Wolfgang said). Also note in the 1975 paper they used $k - 2$ while in the 1977 paper they used $k - 3$.
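The pooled-variance calculation above can be reproduced outside R as well; this is a Python sketch of the same arithmetic, using the 18 batting averages quoted in the answer:

```python
y = [0.400, 0.378, 0.356, 0.333, 0.311, 0.311, 0.289, 0.267, 0.244, 0.244,
     0.222, 0.222, 0.222, 0.222, 0.222, 0.200, 0.178, 0.156]
k = len(y)                              # 18 players
ybar = sum(y) / k                       # grand mean of the batting averages
s2 = ybar * (1 - ybar) / 45             # pooled binomial variance estimate
ss = sum((v - ybar) ** 2 for v in y)    # = (k - 1) * var(y) in R
c = 1 - (k - 3) * s2 / ss               # shrinkage factor with k - 3 = 15
```

This recovers the same $\hat\sigma^2$ and shrinkage factor $c$ as the R snippet.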
12,019
James-Stein estimator: How did Efron and Morris calculate $\sigma^2$ in shrinkage factor for their baseball example?
I am not quite sure about the $c = 0.212$, but the following article provides a much more detailed description of these data: Efron, B., & Morris, C. (1975). Data analysis using Stein's estimator and its generalizations. Journal of the American Statistical Association, 70(350), 311-319 (link to pdf) or more detailed Efron, B., & Morris, C. (1974). Data analysis using Stein's estimator and its generalizations. R-1394-OEO, The RAND Corporation, March 1974 (link to pdf). On page 312, you will see that Efron & Morris use an arc-sin transformation of these data, so that the variance of the batting averages is approximately unity: > dat <- read.table("data.txt", header=T, sep=",") > yi <- dat$avg45 > k <- length(yi) > yi <- sqrt(45) * asin(2*yi-1) > c <- 1 - (k-3)*1 / sum((yi - mean(yi))^2) > c [1] 0.2091971 Then they use c=.209 for the computation of the $z$ values, which we can easily back-transform: > zi <- mean(yi) + c * (yi - mean(yi)) > round((sin(zi/sqrt(45)) + 1)/2,3) ### back-transformation [1] 0.290 0.286 0.282 0.277 0.273 0.273 0.268 0.264 0.259 [10] 0.259 0.254 0.254 0.254 0.254 0.254 0.249 0.244 0.239 So these are the values of the Stein estimator. For Clemente, we get .290, which is quite close to the .294 from the 1977 article.
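The arc-sine route can likewise be replayed in Python (same 18 batting averages as the R snippet, so no data file is needed):

```python
from math import asin, sin, sqrt

y = [0.400, 0.378, 0.356, 0.333, 0.311, 0.311, 0.289, 0.267, 0.244, 0.244,
     0.222, 0.222, 0.222, 0.222, 0.222, 0.200, 0.178, 0.156]
k = len(y)
yi = [sqrt(45) * asin(2 * v - 1) for v in y]      # variance-stabilising arc-sine transform
m = sum(yi) / k
ss = sum((v - m) ** 2 for v in yi)
c = 1 - (k - 3) / ss                              # shrinkage factor, unit variance assumed
zi = [m + c * (v - m) for v in yi]                # Stein estimates on the transformed scale
back = [(sin(z / sqrt(45)) + 1) / 2 for z in zi]  # back-transform to batting averages
```

The first back-transformed value is Clemente's Stein estimate.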
12,020
Is a decision stump a linear model?
No, unless you transform the data. It is a linear model if you transform $x$ using indicator function: $$ x' = \mathbb I \left(\{x>2\}\right) = \begin{cases}\begin{align} 0 \quad &x\leq 2\\ 1 \quad &x>2 \end{align}\end{cases} $$ Then $f(x) = 2x' + 3 = \left(\matrix{3 \\2}\right)^T \left(\matrix{1 \\x'}\right)$ Edit: this was mentioned in the comments but I want to emphasize it here as well. Any function that partitions the data into two pieces can be transformed into a linear model of this form, with an intercept and a single input (an indicator of which "side" of the partition the data point is on). It is important to take note of the difference between a decision function and a decision boundary.
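The transformation can be spelled out in a few lines of Python: the stump $f(x) = 2\,\mathbb I(x>2) + 3$ agrees everywhere with the inner product $(3,2)^T(1,x')$ (function names here are made up for the illustration):

```python
def stump(x):
    """The decision stump from the question: 3 if x <= 2, else 5."""
    return 5 if x > 2 else 3

def linear_in_indicator(x):
    beta = (3, 2)                   # intercept and coefficient on the indicator
    phi = (1, 1 if x > 2 else 0)    # features (1, x') with x' = I(x > 2)
    return sum(b * f for b, f in zip(beta, phi))
```

The model is linear in the transformed feature $x'$, but not in the raw input $x$.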
12,021
Is a decision stump a linear model?
Answers to your questions: A decision stump is not a linear model. The decision boundary can be a line, even if the model is not linear. Logistic regression is an example. The boosted model does not have to be the same kind of model as the base learner. If you think about it, your example of boosting, plus the question you linked to, proves that the decision stump is not a linear model.
12,022
Is a decision stump a linear model?
This answer is more verbose than is needed to just answer the question. I hope to provoke some comments from real experts. I once was in a courtroom and the judge asked (for good reason in context), if we call a dog's tail a leg, does that mean a dog has 5 legs? So what is a linear model? In the context of statistics I've been told by an expert that a linear model means a statistical model constructed from a set of functions $ f_1, f_2, \ldots, f_n$ of the form $ y = \sum a_i f_i $ with the important constraint that the error terms are independent and normally distributed. With that definition, one can't say if your model is linear because you have given no information about the error term. If one drops the error term constraint, then it is tautologically linear in the function you give or in the function ssdecontrol gives. However naively, in the context of this question, that may be unsatisfactory. Any function can be considered as the basis of a linear model in that sense. That is because any space of functions can be turned into a vector space of functions. If you are asking on the nose, that is mathematically, whether your function is linear, then the answer is no. A linear function is one whose graph is a straight line, while clearly your function doesn't have that property. In answer to the question you pose at the end, that is, whether one can find $\beta$ so that $ f(x) = \beta^{T} x $, the answer is again no. Any function of the form $f(x) = \beta^T x$ would satisfy $f(x+y) = f(x) + f(y) $ for any (real) numbers $x$ and $y$. Notice that your function satisfies $ f(1.5) = 3$ and $f(3) = 5$, so $ f(3) \neq f(1.5) + f(1.5)$, as would be required if your function were of the form $f(x) = \beta^T x$. Notice that the class you propose for linear functions is a sub-class of what are usually called linear functions.
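The additivity check in the last paragraph takes two lines of Python, using the stump $f(x)=3$ for $x \le 2$ and $f(x)=5$ otherwise from the question:

```python
def stump(x):
    """The stump from the question: f(1.5) = 3 and f(3) = 5."""
    return 5.0 if x > 2 else 3.0

# A function of the form f(x) = beta^T x must satisfy f(x + y) = f(x) + f(y);
# the stump fails this at x = y = 1.5.
violates_additivity = stump(1.5) + stump(1.5) != stump(1.5 + 1.5)
```

Since $3 + 3 \neq 5$, no $\beta$ can represent the stump as $\beta^T x$.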
12,023
Constructing a discrete r.v. having as support all the rationals in $[0,1]$
Consider the discrete distribution $F$ with support on the set $\{(p,q)\,|\, q \ge p \ge 1\}\subset \mathbb{N}^2$ with probability masses $$F(p,q) = \frac{3}{2^{1+p+q}}.$$ This is easily summed (all series involved are geometric) to demonstrate it really is a distribution (the total probability is unity). For any nonzero rational number $x$ let $a/b=x$ be its representation in lowest terms: that is, $b\gt 0$ and $\gcd(a,b)=1$. $F$ induces a discrete distribution $G$ on $[0,1]\cap \mathbb{Q}$ via the rules $$G(x) = G\left(\frac{a}{b}\right) = \sum_{n=1}^\infty F\left(an, bn\right)=\frac{3}{2^{1+a+b}-2}.$$ (and $G(0)=0$). Every rational number in $(0,1]$ has nonzero probability. (If you must include $0$ among the values with positive probability, just take some of the probability away from another number--like $1$--and assign it to $0$.) To understand this construction, look at this depiction of $F$: $F$ gives probability masses at all points $p,q$ with positive integral coordinates. Values of $F$ are represented by the colored areas of circular symbols. The lines have slopes $p/q$ for all possible combinations of coordinates $p$ and $q$ appearing in the plot. They are colored in the same way the circular symbols are: according to their slopes. Thus, slope (which clearly ranges from $0$ through $1$) and color correspond to the argument of $G$ and the values of $G$ are obtained by summing the areas of all circles lying on each line. For instance, $G(1)$ is obtained by summing the areas of all the (red) circles along the main diagonal of slope $1$, given by $F(1,1)+F(2,2)+F(3,3)+\cdots$ = $3/8 + 3/32 + 3/128 + \cdots = 1/2$. This figure shows an approximation to $G$ achieved by limiting $q\le 100$: it plots its values at $3044$ rational numbers ranging from $1/100$ through $1$. The largest probability masses are $\frac{1}{2},\frac{3}{14},\frac{1}{10},\frac{3}{62},\frac{3}{62},\frac{1}{42},\ldots$. Here is the full CDF of $G$ (accurate to the resolution of the image). 
The six numbers just listed give the sizes of the visible jumps, but every part of the CDF consists of jumps, without exception.
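A quick numerical check of the construction in Python (truncating the infinite sums, which converge geometrically): the closed form for $G$, the diagonal series giving $G(1)=1/2$, and the fact that the masses $F(p,q)$ sum to one.

```python
def G(a, b):
    """Mass at the reduced fraction a/b, from the closed form 3 / (2^(1+a+b) - 2)."""
    return 3.0 / (2 ** (1 + a + b) - 2)

# mass of F summed over the triangle q >= p >= 1 (truncated; the tail is negligible)
total = sum(3.0 / 2 ** (1 + p + q) for q in range(1, 60) for p in range(1, q + 1))

# G(1) as the series F(1,1) + F(2,2) + ... along the main diagonal
diag = sum(3.0 / 2 ** (1 + 2 * n) for n in range(1, 40))
```

The diagonal series reproduces the closed form, and the total mass is 1 up to the truncation error.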
12,024
Constructing a discrete r.v. having as support all the rationals in $[0,1]$
I'll put my comments together and post them as an answer just for clarity. I expect you won't be very satisfied, however, as all I do is reduce your problem to another problem. My notation: $Q$ is a RV whose support is $\mathbb{Q}\cap\left[0,1\right]$ -- my $Q$ is not the same as the $Q$ the OP constructs from his $\frac{X}{Y}$. We'll define this $Q$ using $Y$ and $f$, which I introduce below. $Y$ is any RV whose support is $\mathbb{N}\equiv\left\{1, 2, \ldots\right\}$ -- the $Y$ given by the OP would work, for example. $f$ is any one-to-one correspondence $f:\mathbb{N}\rightarrow\mathbb{Q}\cap\left[0,1\right]$ and $f^{-1}$ is its inverse. We know these exist. Now I claim I can reduce your problem to just finding an $f$ and its $f^{-1}$: Just let $Q=f\left(Y\right)$ and you are done. The PMF of $Q$ is $\Pr[Q =q] = \Pr[Y = f^{-1}(q)]$. Edit: Here is a function g that plays the role of $f$, despite not being a one-to-one correspondence (because of duplicates): g <- function(y) { y <- as.integer(y) stopifnot(y >= 1) b <- 0 a <- 0 for (unused_index in seq(1, y)) { if (a >= b) { b <- b+1 a <- 0 } else { a <- a+1 } } return(sprintf("q = %s / %s", a, b)) ## return(a / b) }
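A direct Python port of the R function above (kept deliberately literal, duplicates such as 0/2 and all), returning the pair $(a, b)$ rather than a formatted string:

```python
def g(y):
    """Walk the sequence 0/1, 1/1, 0/2, 1/2, 2/2, 0/3, ... for y steps (duplicates included)."""
    assert y >= 1
    a, b = 0, 0
    for _ in range(int(y)):
        if a >= b:
            a, b = 0, b + 1   # start a new denominator block
        else:
            a += 1            # advance the numerator within the block
    return a, b
```

Because every pair with $0 \le a \le b$ is eventually visited, pushing any distribution on $\mathbb{N}$ through g puts positive mass on every rational in $[0,1]$, exactly as the reduction described above requires.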
Constructing a discrete r.v. having as support all the rationals in $[0,1]$
One obvious way to construct discrete distributions on the set $\mathbb{Q}_* \equiv \mathbb{Q} \cap [0,1]$ is to do so via sequences of values on that set that cover that set. Suppose you have some arbitrary surjective function $H: \mathbb{N} \rightarrow \mathbb{Q}_*$, which can be interpreted as a sequence on $\mathbb{Q}_*$ that fully covers $\mathbb{Q}_*$ (due to the fact it is surjective). Then you can form a distribution with support $\mathbb{Q}_*$ by taking any probability distribution with support $\mathbb{N}$ and then transforming over to $\mathbb{Q}_*$ using the mapping/sequence $H$.

To see exactly how this is done, suppose we have a distribution with support on the natural numbers with probability mass function $g$. Let $\mathcal{N}_q \equiv \{ n \in \mathbb{N} \,|\, H(n)=q \}$ denote the preimage of the rational number $q \in \mathbb{Q}_*$. We can then obtain a probability mass function on the support $\mathbb{Q}_*$ by taking: $$f(q) = \sum_{n \in \mathcal{N}_q} g(n) \quad \quad \quad \text{for all } q \in \mathbb{Q}_*.$$

There are lots of well-known surjective functions from the natural numbers to various subsets of the rational numbers, and these can easily be used to obtain a mapping/sequence $H$ of the above type. This will always give a valid probability distribution on the desired set, but it won't always be possible to write the probability mass function in a simple form. In general, if you can write the set $\mathcal{N}_q$ in a simple way then you will be able to write the probability mass $f$ in a simple way.

Example (Construction using the Calkin-Wilf tree): Consider the Calkin-Wilf sequence, which is defined recursively by: $$\bar{H}(n+1) = \frac{1}{2 \lfloor \bar{H}(n) \rfloor - \bar{H}(n)+1} \quad \quad \quad \bar{H}(1) = 1.$$ This is a surjective function that maps the natural numbers onto the set of all positive rational numbers.

We can obtain a surjective mapping $H: \mathbb{N} \rightarrow \mathbb{Q}_*$ from this sequence by taking: $$H(n+1) = \min \bigg( \bar{H}(n), \frac{1}{\bar{H}(n)} \bigg) \quad \quad \quad H(1) = 0.$$ The sequence $\bar{H}$ has a quasi-symmetry property whereby rational numbers and their inverses appear in a regular pattern. This allows you to simplify the above form using the $\text{fusc}$ function, giving the alternative form: $$H(n+1) = \frac{\min(\text{fusc}(n), \text{fusc}(n+1))}{\max(\text{fusc}(n), \text{fusc}(n+1))}.$$ The sequence $H$ runs through all the numbers in $\mathbb{Q}_*$ twice, except for zero and one, which each appear only once (as the first two elements of the sequence).
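The $\text{fusc}$ form of $H$ is easy to compute, since $\text{fusc}$ (Stern's diatomic sequence) has a simple recursion. A sketch in Python (illustrative port, not from the original answer):

```python
from fractions import Fraction
from functools import lru_cache

# fusc is Stern's diatomic sequence: fusc(0)=0, fusc(1)=1,
# fusc(2n)=fusc(n), fusc(2n+1)=fusc(n)+fusc(n+1).
@lru_cache(maxsize=None)
def fusc(n):
    if n < 2:
        return n
    if n % 2 == 0:
        return fusc(n // 2)
    return fusc(n // 2) + fusc(n // 2 + 1)

# H(1) = 0; H(n+1) = min(fusc(n), fusc(n+1)) / max(fusc(n), fusc(n+1)),
# following the closed form above.
def H(n):
    if n == 1:
        return Fraction(0)
    a, b = fusc(n - 1), fusc(n)
    return Fraction(min(a, b), max(a, b))

# First few terms: 0, 1, 1/2, 1/2, 1/3, 2/3, 2/3, 1/3, 1/4, ...
# Each rational strictly between 0 and 1 shows up twice, as claimed.
print([str(H(n)) for n in range(1, 10)])
```

Note how $1/2$, $2/3$, etc. each appear twice in the output, while $0$ and $1$ appear once, matching the description above.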
How to calculate cumulative distribution in R?
The ecdf function applied to a data sample returns a function representing the empirical cumulative distribution function. For example:

> X = rnorm(100)  # X is a sample of 100 normally distributed random variables
> P = ecdf(X)     # P is a function giving the empirical CDF of X
> P(0.0)          # This returns the empirical CDF at zero (should be close to 0.5)
[1] 0.52
> plot(P)         # Draws a plot of the empirical CDF (see below)

If you want to have an object representing the empirical CDF evaluated at specific values (rather than as a function object) then you can do

> z = seq(-3, 3, by=0.01)  # The values at which we want to evaluate the empirical CDF
> p = P(z)                 # p now stores the empirical CDF evaluated at the values in z

Note that p contains at most the same amount of information as P (and possibly it contains less) which in turn contains the same amount of information as X.
How to calculate cumulative distribution in R?
What you appear to need is the accumulated distribution (the probability of getting a value <= x in a sample). ecdf returns you a function, but it appears to be made for plotting, and so the argument of that function, if it were a stair, would be the index of the tread. You can use this:

acumulated.distrib = function(sample, x) {
  minors = 0
  for (n in sample) {
    if (n <= x) {
      minors = minors + 1
    }
  }
  return(minors / length(sample))
}

mysample = rnorm(100)
acumulated.distrib(mysample, 1.21)  # 1.21 or any other value you want.

Sadly the use of this function is not very fast. I don't know if R has a function that does this returning you a function, which would be more efficient.
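The loop above just counts how many sample values fall at or below x and divides by the sample size, so the same computation can be written as a one-liner in any vectorized or comprehension-friendly language. A sketch in Python (hypothetical port, not the poster's code; the function name `ecdf_at` is mine):

```python
import random

# The empirical CDF at x is simply the fraction of sample values <= x,
# mirroring the counting loop in the R function above.
def ecdf_at(sample, x):
    return sum(v <= x for v in sample) / len(sample)

random.seed(0)
mysample = [random.gauss(0, 1) for _ in range(100)]
print(ecdf_at(mysample, 1.21))  # some value between 0 and 1
```

In R itself the equivalent one-liner would be `mean(sample <= x)`, which avoids the explicit loop entirely.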
How to calculate cumulative distribution in R?
I always found ecdf() to be a little confusing. Plus I think it only works in the univariate case. Ended up rolling my own function for this instead. First install data.table. Then install my package, mltools (or just copy the empirical_cdf() method into your R environment.) Then it's as easy as

# load packages
library(data.table)
library(mltools)

# Make some data
dt <- data.table(x=c(0.3, 1.3, 1.4, 3.6), y=c(1.2, 1.2, 3.8, 3.9))
dt
     x   y
1: 0.3 1.2
2: 1.3 1.2
3: 1.4 3.8
4: 3.6 3.9

CDF of a vector

empirical_cdf(dt$x, ubounds=seq(1, 4, by=1.0))
   UpperBound N.cum  CDF
1:          1     1 0.25
2:          2     3 0.75
3:          3     3 0.75
4:          4     4 1.00

CDF of column 'x' of dt

empirical_cdf(dt, ubounds=list(x=seq(1, 4, by=1.0)))
   x N.cum  CDF
1: 1     1 0.25
2: 2     3 0.75
3: 3     3 0.75
4: 4     4 1.00

CDF of columns 'x' and 'y' of dt

empirical_cdf(dt, ubounds=list(x=seq(1, 4, by=1.0), y=seq(1, 4, by=1.0)))
     x y N.cum  CDF
 1:  1 1     0 0.00
 2:  1 2     1 0.25
 3:  1 3     1 0.25
 4:  1 4     1 0.25
 5:  2 1     0 0.00
 6:  2 2     2 0.50
 7:  2 3     2 0.50
 8:  2 4     3 0.75
 9:  3 1     0 0.00
10:  3 2     2 0.50
11:  3 3     2 0.50
12:  3 4     3 0.75
13:  4 1     0 0.00
14:  4 2     2 0.50
15:  4 3     2 0.50
16:  4 4     4 1.00
How to calculate cumulative distribution in R?
Friend, you can read the code on this blog.

sample.data = read.table('data.txt', header = TRUE, sep = "\t")
cdf <- ggplot(data = sample.data, aes(x = Delay, group = Type, color = Type)) + stat_ecdf()
cdf

More details can be found at the following link: r cdf and histogram
Poisson regression vs. log-count least-squares regression?
On the one hand, in a Poisson regression, the left-hand side of the model equation is the logarithm of the expected count: $\log(E[Y|x])$. On the other hand, in a "standard" linear model, the left-hand side is the expected value of the normal response variable: $E[Y|x]$. In particular, the link function is the identity function. Now, let us say $Y$ is a Poisson variable and that you intend to normalise it by taking the log: $Y' = \log(Y)$. Because $Y'$ is supposed to be normal you plan to fit the standard linear model for which the left-hand side is $E[Y'|x] = E[\log(Y)|x]$. But, in general, $E[\log(Y) | x] \neq \log(E[Y|x])$. As a consequence, these two modelling approaches are different.
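The gap between $E[\log(Y)|x]$ and $\log(E[Y|x])$ is just Jensen's inequality, and it is easy to see in a simulation. A sketch in Python (illustrative only; an exponential variable stands in for the skewed positive response, since a Poisson count can be zero and its log would be undefined):

```python
import math
import random

# By Jensen's inequality, E[log Y] <= log(E[Y]) for any positive Y,
# with strict inequality unless Y is degenerate. So a least-squares fit
# to log-counts and a Poisson regression target different quantities.
random.seed(1)
y = [random.expovariate(1.0) for _ in range(100_000)]

mean_log = sum(math.log(v) for v in y) / len(y)  # estimates E[log Y]
log_mean = math.log(sum(y) / len(y))             # estimates log(E[Y])

# For Exp(1): E[log Y] = -Euler-Mascheroni constant (about -0.577),
# while log(E[Y]) = log(1) = 0.
print(mean_log, log_mean)
```

The two quantities differ by more than half a unit on the log scale here, which is exactly why the two modelling approaches are not interchangeable.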
Poisson regression vs. log-count least-squares regression?
I see two important differences. First, the predicted values (on the original scale) behave differently: in loglinear least-squares they represent conditional geometric means; in the log-Poisson model they represent conditional means. Since data in this type of analysis are often skewed right, the conditional geometric mean will underestimate the conditional mean. A second difference is the implied distribution: lognormal versus Poisson. This relates to the heteroskedasticity structure of the residuals: residual variance proportional to the squared expected value (lognormal) versus residual variance proportional to the expected value (Poisson).
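The first point, that the geometric mean sits below the arithmetic mean for right-skewed data, can be demonstrated with a small simulation. A sketch in Python (illustrative only, using a lognormal sample as the skewed data):

```python
import math
import random

# For right-skewed data, the geometric mean (what back-transforming a
# log-scale least-squares fit recovers) underestimates the arithmetic
# mean (what a Poisson/log-link model targets).
random.seed(2)
y = [math.exp(random.gauss(0, 1)) for _ in range(100_000)]

arith_mean = sum(y) / len(y)
geo_mean = math.exp(sum(math.log(v) for v in y) / len(y))

# For lognormal(0, 1): geometric mean -> exp(0) = 1,
# arithmetic mean -> exp(0.5) ~ 1.65.
print(geo_mean, arith_mean)
```

The ratio of the two means grows with the skewness of the data, so the bias from back-transforming a log-scale fit is worst exactly where count data are most dispersed.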
C++ libraries for statistical computing
We have spent some time making the wrapping from C++ into R (and back, for that matter) a lot easier via our Rcpp package. And because linear algebra is already such a well-understood and coded-for field, Armadillo, a current, modern, pleasant, well-documented, small, templated, ... library was a very natural fit for our first extended wrapper: RcppArmadillo. This has caught the attention of other MCMC users as well. I gave a one-day workshop at the U of Rochester business school last summer, and have helped another researcher in the Midwest with similar explorations. Give RcppArmadillo a try -- it works well, is actively maintained (new Armadillo release 1.1.4 today; I will make a new RcppArmadillo later) and supported. And because I just luuv this example so much, here is a quick "fast" version of lm() returning coefficients and std. errors:

extern "C" SEXP fastLm(SEXP ys, SEXP Xs) {
  try {
    Rcpp::NumericVector yr(ys);            // creates Rcpp vector
    Rcpp::NumericMatrix Xr(Xs);            // creates Rcpp matrix
    int n = Xr.nrow(), k = Xr.ncol();
    arma::mat X(Xr.begin(), n, k, false);  // avoids extra copy
    arma::colvec y(yr.begin(), yr.size(), false);

    arma::colvec coef = arma::solve(X, y); // fit model y ~ X
    arma::colvec res = y - X*coef;         // residuals
    double s2 = std::inner_product(res.begin(), res.end(),
                                   res.begin(), double())/(n - k);
    // std.errors of coefficients
    arma::colvec std_err = arma::sqrt(s2 * arma::diagvec(
        arma::pinv(arma::trans(X)*X) ));

    return Rcpp::List::create(Rcpp::Named("coefficients") = coef,
                              Rcpp::Named("stderr")       = std_err,
                              Rcpp::Named("df")           = n - k);
  } catch( std::exception &ex ) {
    forward_exception_to_r( ex );
  } catch(...) {
    ::Rf_error( "c++ exception (unknown reason)" );
  }
  return R_NilValue; // -Wall
}

Lastly, you also get immediate prototyping via inline which may make 'time to code' faster.
C++ libraries for statistical computing
I would strongly suggest that you have a look at the Rcpp and RcppArmadillo packages for R. Basically, you would not need to worry about the wrappers as they are already "included". Furthermore the syntactic sugar is really sweet (pun intended). As a side remark, I would recommend that you have a look at JAGS, which does MCMC and whose source code is in C++.
C++ libraries for statistical computing
Boost Random from the Boost C++ libraries could be a good fit for you. In addition to many types of RNGs, it offers a variety of different distributions to draw from, such as

Uniform (real)
Uniform (unit sphere or arbitrary dimension)
Bernoulli
Binomial
Cauchy
Gamma
Poisson
Geometric
Triangle
Exponential
Normal
Lognormal

In addition, Boost Math complements the above distributions you can sample from with numerous density functions of many distributions. It also has several neat helper functions; just to give you an idea:

students_t dist(5);
cout << "CDF at t = 1 is " << cdf(dist, 1.0) << endl;
cout << "Complement of CDF at t = 1 is " << cdf(complement(dist, 1.0)) << endl;

for(double i = 10; i < 1e10; i *= 10)
{
   // Calculate the quantile for a 1 in i chance:
   double t = quantile(complement(dist, 1/i));
   // Print it out:
   cout << "Quantile of students-t with 5 degrees of freedom\n"
           "for a 1 in " << i << " chance is " << t << endl;
}

If you decided to use Boost, you also get to use its UBLAS library that features a variety of different matrix types and operations.
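The cdf()/quantile() pair in the Boost example are mutual inverses, which is the relationship the snippet exploits. The same round-trip can be sanity-checked without Boost using Python's standard library (NormalDist stands in for Boost's distribution objects here, since the stdlib has no Student-t):

```python
from statistics import NormalDist

# quantile(p) and cdf(x) are inverse operations, mirroring Boost Math's
# cdf()/quantile() pair (shown here for the normal distribution).
d = NormalDist(mu=0.0, sigma=1.0)
for p in (0.5, 0.9, 0.99, 0.999):
    x = d.inv_cdf(p)       # the quantile, i.e. Boost's quantile(dist, p)
    print(p, x, d.cdf(x))  # the round-trip recovers p
```

Boost's `complement(...)` wrapper in the original snippet serves the same purpose as computing `1 - p`, but avoids catastrophic cancellation when `p` is very close to 1.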
C++ libraries for statistical computing
There are numerous C/C++ libraries out there, most focusing on a particular problem domain of (e.g. PDE solvers). There are two comprehensive libraries I can think of that you may find especially useful because they are written in C but have excellent Python wrappers already written. 1) IMSL C and PyIMSL 2) trilinos and pytrilinos I have never used trilinos as the functionality is primarily on numerical analysis methods, but I use PyIMSL a lot for statistical work (and in a previous work life I developed the software too). With respect to RNGs, here are the ones in C and Python in IMSL DISCRETE random_binomial: Generates pseudorandom binomial numbers from a binomial distribution. random_geometric: Generates pseudorandom numbers from a geometric distribution. random_hypergeometric: Generates pseudorandom numbers from a hypergeometric distribution. random_logarithmic: Generates pseudorandom numbers from a logarithmic distribution. random_neg_binomial: Generates pseudorandom numbers from a negative binomial distribution. random_poisson: Generates pseudorandom numbers from a Poisson distribution. random_uniform_discrete: Generates pseudorandom numbers from a discrete uniform distribution. random_general_discrete: Generates pseudorandom numbers from a general discrete distribution using an alias method or optionally a table lookup method. UNIVARIATE CONTINUOUS DISTRIBUTIONS random_beta: Generates pseudorandom numbers from a beta distribution. random_cauchy: Generates pseudorandom numbers from a Cauchy distribution. random_chi_squared: Generates pseudorandom numbers from a chi-squared distribution. random_exponential: Generates pseudorandom numbers from a standard exponential distribution. random_exponential_mix: Generates pseudorandom mixed numbers from a standard exponential distribution. random_gamma: Generates pseudorandom numbers from a standard gamma distribution. random_lognormal: Generates pseudorandom numbers from a lognormal distribution. 
random_normal: Generates pseudorandom numbers from a standard normal distribution.
random_stable: Generates pseudorandom numbers from a stable distribution.
random_student_t: Generates pseudorandom numbers from a Student's t distribution.
random_triangular: Generates pseudorandom numbers from a triangular distribution.
random_uniform: Generates pseudorandom numbers from a uniform (0, 1) distribution.
random_von_mises: Generates pseudorandom numbers from a von Mises distribution.
random_weibull: Generates pseudorandom numbers from a Weibull distribution.
random_general_continuous: Generates pseudorandom numbers from a general continuous distribution.

MULTIVARIATE CONTINUOUS DISTRIBUTIONS
random_normal_multivariate: Generates pseudorandom numbers from a multivariate normal distribution.
random_orthogonal_matrix: Generates a pseudorandom orthogonal matrix or a correlation matrix.
random_mvar_from_data: Generates pseudorandom numbers from a multivariate distribution determined from a given sample.
random_multinomial: Generates pseudorandom numbers from a multinomial distribution.
random_sphere: Generates pseudorandom points on a unit circle or K-dimensional sphere.
random_table_twoway: Generates a pseudorandom two-way table.

ORDER STATISTICS
random_order_normal: Generates pseudorandom order statistics from a standard normal distribution.
random_order_uniform: Generates pseudorandom order statistics from a uniform (0, 1) distribution.

STOCHASTIC PROCESSES
random_arma: Generates pseudorandom ARMA process numbers.
random_npp: Generates pseudorandom numbers from a nonhomogeneous Poisson process.

SAMPLES AND PERMUTATIONS
random_permutation: Generates a pseudorandom permutation.
random_sample_indices: Generates a simple pseudorandom sample of indices.
random_sample: Generates a simple pseudorandom sample from a finite population.

UTILITY FUNCTIONS
random_option: Selects the uniform (0, 1) multiplicative congruential pseudorandom number generator.
random_option_get: Retrieves the uniform (0, 1) multiplicative congruential pseudorandom number generator.
random_seed_get: Retrieves the current value of the seed used in the IMSL random number generators.
random_substream_seed_get: Retrieves a seed for the congruential generators that do not do shuffling that will generate random numbers beginning 100,000 numbers farther along.
random_seed_set: Initializes a random seed for use in the IMSL random number generators.
random_table_set: Sets the current table used in the shuffled generator.
random_table_get: Retrieves the current table used in the shuffled generator.
random_GFSR_table_set: Sets the current table used in the GFSR generator.
random_GFSR_table_get: Retrieves the current table used in the GFSR generator.
random_MT32_init: Initializes the 32-bit Mersenne Twister generator using an array.
random_MT32_table_get: Retrieves the current table used in the 32-bit Mersenne Twister generator.
random_MT32_table_set: Sets the current table used in the 32-bit Mersenne Twister generator.
random_MT64_init: Initializes the 64-bit Mersenne Twister generator using an array.
random_MT64_table_get: Retrieves the current table used in the 64-bit Mersenne Twister generator.
random_MT64_table_set: Sets the current table used in the 64-bit Mersenne Twister generator.

LOW-DISCREPANCY SEQUENCE
faure_next_point: Computes a shuffled Faure sequence.
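PyIMSL itself is a commercial product, so as a freely runnable sketch of the two seeding ideas in the utility list above (random_seed_set for reproducibility and random_substream_seed_get for independent substreams), here is the analogous workflow using NumPy's generators. NumPy is my substitution; the IMSL calls themselves are not shown.

```python
# Sketch: reproducible seeding and independent substreams, using NumPy as a
# free stand-in for the commercial PyIMSL bindings described above.
import numpy as np

# Seeding makes a stream reproducible (the role of random_seed_set).
rng = np.random.default_rng(12345)
draws_a = rng.standard_normal(5)

rng = np.random.default_rng(12345)   # re-seed with the same value...
draws_b = rng.standard_normal(5)     # ...and the stream repeats exactly
assert np.allclose(draws_a, draws_b)

# Spawning child SeedSequences yields statistically independent substreams,
# the job random_substream_seed_get does for IMSL's congruential generators.
parent = np.random.SeedSequence(12345)
streams = [np.random.default_rng(s) for s in parent.spawn(4)]
samples = [g.standard_normal(1000) for g in streams]
```

The same pattern (seed once at the top, spawn one substream per worker) is the usual way to keep parallel simulations reproducible without overlapping streams.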
12,036
Minimum sample size per cluster in a random effect model
TL;DR: The minimum sample size per cluster in a mixed-effects model is 1, provided that the number of clusters is adequate and the proportion of singleton clusters is not "too high". Longer version: In general, the number of clusters is more important than the number of observations per cluster. With 700, clearly you have no problem there. Small cluster sizes are quite common, especially in social science surveys that follow stratified sampling designs, and there is a body of research that has investigated cluster-level sample size. While increasing the cluster size increases statistical power to estimate the random effects (Austin & Leckie, 2018), small cluster sizes do not lead to serious bias (Bell et al., 2008; Clarke, 2008; Clarke & Wheaton, 2007; Maas & Hox, 2005). Thus, the minimum sample size per cluster is 1. In particular, Bell et al. (2008) performed a Monte Carlo simulation study with proportions of singleton clusters (clusters containing only a single observation) ranging from 0% to 70%, and found that, provided the number of clusters was large (~500), the small cluster sizes had almost no impact on bias and Type I error control. They also reported very few problems with model convergence under any of their modelling scenarios. For the particular scenario in the OP, I would suggest running the model with 700 clusters in the first instance. Unless there was a clear problem with this, I would be disinclined to merge clusters. I ran a simple simulation in R: here we create a clustered dataset with a residual variance of 1, a single fixed effect also of 1, and 700 clusters, of which 690 are singletons and 10 have just 2 observations. We run the simulation 1000 times and observe the histograms of the estimated fixed and residual random effects.
library(lme4)  # lmer(), fixef(), ranef() and VarCorr() come from lme4

set.seed(15)
dtB <- expand.grid(Subject = 1:700, measure = c(1))
dtB <- rbind(dtB, dtB[691:700, ])  # give 10 of the 700 subjects a second observation
fixef.v <- numeric(1000)
ranef.v <- numeric(1000)
for (i in 1:1000) {
  dtB$x <- rnorm(nrow(dtB), 0, 1)
  dtB$y <- dtB$Subject/100 + rnorm(nrow(dtB), 0, 1) + dtB$x * 1
  fm0B <- lmer(y ~ x + (1|Subject), data = dtB)
  fixef.v[i] <- fixef(fm0B)[[2]]
  ranef.v[i] <- attr(VarCorr(fm0B), "sc")
}
hist(fixef.v, breaks = 15)
hist(ranef.v, breaks = 15)

As you can see, the fixed effects are very well estimated, while the residual random effects appear to be a little downward-biased, but not drastically so:

> summary(fixef.v)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
 0.6479  0.9439  0.9992  1.0005  1.0578  1.2544 
> summary(ranef.v)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
 0.2796  0.7745  0.9004  0.8993  1.0212  1.4837 

The OP specifically mentions the estimation of cluster-level random effects. In the simulation above, the random effects were created simply as the value of each Subject's ID (scaled down by a factor of 100). Obviously these are not normally distributed, which is the assumption of linear mixed effects models; however, we can extract the (conditional modes of the) cluster-level effects and plot them against the actual Subject IDs:

re <- ranef(fm0B)[[1]][, 1]
dtB$re <- append(re, re[691:700])
hist(dtB$re)
plot(dtB$re, dtB$Subject)

The histogram departs from normality somewhat, but this is due to the way we simulated the data. There is still a reasonable relationship between the estimated and actual random effects.

References:
Austin, P. C., & Leckie, G. (2018). The effect of number of clusters and cluster size on statistical power and Type I error rates when testing random effects variance components in multilevel linear and logistic regression models. Journal of Statistical Computation and Simulation, 88(16), 3151-3163. DOI: 10.1080/00949655.2018.1504945
Bell, B. A., Ferron, J. M., & Kromrey, J. D. (2008). Cluster size in multilevel models: the impact of sparse data structures on point and interval estimates in two-level models. JSM Proceedings, Section on Survey Research Methods, 1122-1129.
Clarke, P. (2008). When can group level clustering be ignored? Multilevel models versus single-level models with sparse data. Journal of Epidemiology and Community Health, 62(8), 752-758.
Clarke, P., & Wheaton, B. (2007). Addressing data sparseness in contextual population research using cluster analysis to create synthetic neighborhoods. Sociological Methods & Research, 35(3), 311-351.
Maas, C. J., & Hox, J. J. (2005). Sufficient sample sizes for multilevel modeling. Methodology, 1(3), 86-92.
12,037
Minimum sample size per cluster in a random effect model
In mixed models the random effects are most often estimated using empirical Bayes methodology. A feature of this methodology is shrinkage: the estimated random effects are shrunk towards the overall mean of the model described by the fixed-effects part. The degree of shrinkage depends on two components:

1. The magnitude of the variance of the random effects compared to the magnitude of the variance of the error terms. The larger the variance of the random effects in relation to the variance of the error terms, the smaller the degree of shrinkage.

2. The number of repeated measurements in the clusters. Random effects estimates of clusters with more repeated measurements are shrunk less towards the overall mean compared to clusters with fewer measurements.

In your case, the second point is more relevant. However, note that your suggested solution of merging clusters may impact the first point as well.
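Both points can be made concrete with a small numeric sketch (Python; the closed-form shrinkage factor below is for a simple random-intercept model, so treat it as an illustration rather than what any particular mixed-model software reports):

```python
# Sketch of empirical-Bayes shrinkage in a random-intercept model: the BLUP of
# a cluster effect weights the cluster's own mean by
#   n_j * tau2 / (n_j * tau2 + sigma2)
# where tau2 is the random-effect variance and sigma2 the error variance.
def shrinkage_factor(n_j, tau2, sigma2):
    """Weight on the cluster's own mean; 1 - weight goes to the overall mean."""
    return n_j * tau2 / (n_j * tau2 + sigma2)

# Point 2: more repeated measurements -> less shrinkage toward the overall mean.
print([round(shrinkage_factor(n, tau2=1.0, sigma2=1.0), 3) for n in (1, 2, 10, 100)])
# -> [0.5, 0.667, 0.909, 0.99]

# Point 1: larger random-effect variance relative to error variance -> less shrinkage.
print([round(shrinkage_factor(2, tau2, sigma2=1.0), 3) for tau2 in (0.1, 1.0, 10.0)])
# -> [0.167, 0.667, 0.952]
```

Note how a singleton cluster (n_j = 1) with equal variances keeps only half the weight on its own mean, which is why merging clusters changes both the cluster sizes and, potentially, the estimated variance ratio.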
12,038
What is the difference between univariate and multivariate time series?
Univariate time series: Only one variable is varying over time. For example, data collected from a sensor measuring the temperature of a room every second. Therefore, each second you will only have a one-dimensional value, which is the temperature. Multivariate time series: Multiple variables are varying over time. For example, a tri-axial accelerometer: there are three accelerations, one for each axis (x, y, z), and they vary simultaneously over time. Considering the data you showed in the question, you are dealing with a multivariate time series, where value_1, value_2 and value_3 are three variables changing simultaneously over time.
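The distinction is essentially one of array shape. A minimal sketch with made-up sensor data (the variable names are illustrative only):

```python
# Sketch: shape of a univariate vs a multivariate time series.
import numpy as np

T = 100  # number of time steps

# Univariate: one value per time step, e.g. room temperature each second.
temperature = np.random.default_rng(0).normal(21.0, 0.5, size=T)
assert temperature.shape == (T,)        # one-dimensional

# Multivariate: several values per time step, e.g. a tri-axial accelerometer,
# or the question's value_1, value_2, value_3 columns.
acceleration = np.random.default_rng(1).normal(0.0, 1.0, size=(T, 3))
assert acceleration.shape == (T, 3)     # T rows, one column per variable (x, y, z)
```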
12,039
Texas sharpshooter fallacy in exploratory data analysis
If one views the role of EDA strictly as generating hypotheses, then no, the sharpshooter fallacy does not apply. However, it is very important that subsequent confirmatory trials are indeed independent. Many researchers attempt to "reconcile differences" with things like pooled analyses, meta-analyses, and Bayesian methods. This means that at least some of the evidence presented in such an analysis includes "the circle around the random bullet holes".
12,040
Texas sharpshooter fallacy in exploratory data analysis
This paints a very negative view of exploratory data analysis. While the argument is not wrong, it's really saying "what can go wrong when I use a very important tool in the wrong manner?" Accepting unadjusted p-values from EDA methods will lead to vastly inflated type I error rates. But I think Tukey would not be happy with anyone doing this. The point of EDA is not to make definitive conclusions about relations in the data, but rather to look for potential novel relations in the data to follow up on. Leaving out this step in the larger scientific process is essentially hamstringing science, leaving it unable to find new interesting aspects of our data outside of pure logical deduction. Ever try to logically deduce how over-expression of a set of genes will affect survival of a cell? Hint: it's not very easy (one of our favorite jokes among the bioinformatics staff at my work was when a physicist asked "Why don't you just simulate the physical properties of different gene interactions? It's a finite parameter space.") Personally, I think confusion about this can lead to a great slowdown in scientific progress. I know too many non-statistical researchers who will state that they do not want to do EDA procedures on preliminary data, because they "know that EDA can be bad". In conclusion, it's absolutely true that using EDA methods and treating them as confirmatory data analysis methods will lead to invalid results. However, the lack of proper use of EDA can lead to almost no results.
12,041
Texas sharpshooter fallacy in exploratory data analysis
It looks like any exploratory process performed without having a hypothesis beforehand is prone to generate spurious hypotheses. I would temper this statement and express it a little differently: Choosing a hypothesis to test based on the data undermines the test if one doesn't use the correct null hypothesis. The thrust of the Nature article is, essentially, that it's easy for analysts to kid themselves into ignoring all of the multiple comparisons they're implicitly making during exploration. Nature quotes Andrew Gelman, but doesn't mention his paper with Eric Loken about just this topic. An excerpt: When criticisms of multiple comparisons have come up in regards to some of the papers we discuss here, the researchers never respond that they had chosen all the details of their data processing and data analysis ahead of time; rather, they claim that they picked only one analysis for the particular data they saw. Intuitive as this defense may seem, it does not address the fundamental frequentist concern of multiple comparisons. Another: It’s not that the researchers performed hundreds of different comparisons and picked ones that were statistically significant. Rather, they start with a somewhat-formed idea in their mind of what comparison to perform, and they refine that idea in light of the data. They saw a pattern in red and pink, and they combined the colors. Succinctly: There is a one-to-many mapping from scientific to statistical hypotheses. And one more, emphasis mine: In all the cases we have discussed, the published analysis has a story that is consistent with the scientific hypotheses that motivated the work, but other data patterns (which, given the sample sizes, could easily have occurred by chance) would naturally have led to different data analyses (for example, a focus on main effects rather than interactions, or a different choice of data subsets to compare) which equally could have been used to support the research hypotheses. 
The result remains, as we have written elsewhere, a sort of machine for producing and publicizing random patterns. In short, it's not that EDA leads to a "spurious hypothesis"; it's that testing a hypothesis with the same dataset that prompted the hypothesis can lead to spurious conclusions. If you're interested in conquering this obstacle, Gelman has another paper arguing that many of these problems disappear in a Bayesian framework, and the paper with Loken references "pre-publication replication" as anecdotally described in the first section of this paper.
12,042
Texas sharpshooter fallacy in exploratory data analysis
Almost by definition, yes, of course EDA without CDA attracts Texas sharpshooters. The difficulty when CDA is not possible (perhaps no further data can be obtained) is in being honest with yourself about how many tests you've really performed, and thus in assigning some kind of $p$-value to your discovery. Even in cases when the search space could in principle be counted, the $p$-value calculation is either done wrongly or not at all: see Wikipedia for a notorious example.
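The arithmetic behind that honesty is easy to simulate. Here is a minimal Python sketch (the helper name and numbers are mine, not from the example) showing how quickly the chance of at least one spurious "hit" grows with the number of comparisons you implicitly performed:

```python
import random

def prob_spurious_hit(m_tests, alpha=0.05, n_sims=20_000, seed=0):
    """Estimate P(at least one 'significant' result among m_tests
    independent tests) when every null hypothesis is actually true."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        # Under the null, each p-value is Uniform(0, 1)
        if any(rng.random() < alpha for _ in range(m_tests)):
            hits += 1
    return hits / n_sims

# One test keeps the nominal 5% rate; twenty implicit comparisons do not.
print(prob_spurious_hit(1))   # ~0.05
print(prob_spurious_hit(20))  # ~0.64, close to 1 - 0.95**20
```

The analytic value $1 - (1-\alpha)^m$ is what an honest $p$-value calculation would have to account for.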
12,043
Texas sharpshooter fallacy in exploratory data analysis
Just to add to the already great answers: there is a middle ground between a full CDA and just accepting your EDA results at face value. Once you've found a possible feature of interest (or hypothesis), you can get a sense of its robustness by performing cross-validation (CV) or bootstrap simulations. If your findings depend on only a few key observations, then CV or the bootstrap will show that many of the folds (CV) or bootstrap samples fail to reproduce the observed feature. This is not a foolproof method, but it's a good intermediate check before going for a full CDA (or purposefully holding out a "validation set" from your initial data pool).
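A sketch of that intermediate check, with made-up data and a hypothetical EDA finding (both are illustrative assumptions, not from the answer):

```python
import random

def bootstrap_support(data, feature, n_boot=2000, seed=0):
    """Fraction of bootstrap resamples in which feature(sample) holds.
    A finding that hinges on a few key observations will fail often."""
    rng = random.Random(seed)
    n = len(data)
    hits = 0
    for _ in range(n_boot):
        sample = [data[rng.randrange(n)] for _ in range(n)]
        if feature(sample):
            hits += 1
    return hits / n_boot

# Hypothetical EDA finding: "the sample mean is positive".
data = [0.3, -0.1, 0.5, 0.2, -0.2, 0.4, 0.1, 0.6, -0.3, 0.2]
support = bootstrap_support(data, lambda s: sum(s) / len(s) > 0)
print(support)  # high support suggests the feature is not driven by a few points
```

A support fraction near 1 does not validate the hypothesis, but a low one is a cheap warning before investing in a full CDA.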
12,044
Texas sharpshooter fallacy in exploratory data analysis
The most rigorous criterion for data model selection is the degree to which it approximates the Kolmogorov complexity of the data -- which is to say the degree to which it losslessly compresses the data. This can, in theory, result from exploratory data analysis alone. See "Causal deconvolution by algorithmic generative models".
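Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a crude computable stand-in for the idea. A small sketch (the sequences are my own toy examples):

```python
import random
import zlib

random.seed(0)
# A sequence with exploitable structure vs. one without.
structured = bytes(i % 16 for i in range(4096))
noise = bytes(random.randrange(256) for _ in range(4096))

# Compressed length approximates "shortest description": a model that
# captures the data's regularities is, in effect, a short program for it.
print(len(zlib.compress(structured)))  # small: the repeating pattern is captured
print(len(zlib.compress(noise)))       # near 4096: nothing to exploit
```

The gap between the two lengths is the sense in which the structured data "has a model" and the noise does not.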
12,045
Neyman-Pearson lemma
I think you understood the lemma well. Why it does not work for a composite alternative? As you can see in the likelihood ratio, we need to plug in the parameter(s) for the alternative hypothesis. If the alternative is composite, which parameter are you going to plug in?
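A small sketch of the point, assuming one observation from $N(\mu, 1)$ (my own example, not from the question): the Neyman-Pearson region depends on which alternative value $\mu_1$ you plug into the likelihood ratio, so for a composite alternative such as $\mu \neq 0$ there is no single region that the lemma hands you.

```python
from statistics import NormalDist

def rejection_region(mu1, alpha=0.05):
    """Most powerful test of H0: X ~ N(0,1) vs H1: X ~ N(mu1, 1), one observation.
    The likelihood ratio is exp(mu1*x - mu1**2 / 2): increasing in x when
    mu1 > 0, decreasing when mu1 < 0, so the region's direction flips with
    the sign of the plugged-in alternative."""
    z = NormalDist().inv_cdf(1 - alpha)  # upper-alpha standard normal quantile
    if mu1 > 0:
        return f"reject when x > {z:.3f}"
    return f"reject when x < {-z:.3f}"

print(rejection_region(+1.0))  # reject when x > 1.645
print(rejection_region(-1.0))  # reject when x < -1.645
```

For a one-sided alternative ($\mu > 0$) every plug-in gives the same region, which is why a uniformly most powerful test exists there; for $\mu \neq 0$ the two regions above disagree.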
12,046
Neyman-Pearson lemma
I recently wrote an entry in a LinkedIn blog stating the Neyman-Pearson lemma in plain words and providing an example. I found the example eye-opening in the sense of providing a clear intuition for the lemma. As is often the case in probability, it is based on a discrete probability mass function, so it is easier to understand than when working with PDFs. Also, take into account that I define the likelihood ratio as the likelihood of the alternative hypothesis over the null hypothesis, contrary to your lemma statement. The explanation is the same, but rather than "less than" it is now "greater than". I hope it helps... Those of you who work in data analysis and have been through some statistics courses may have come across the Neyman-Pearson lemma (NP lemma). The message is simple, the demonstration not so much, but what I always found difficult was to get a common-sense feeling for what it was about. Reading a book named "Common Errors in Statistics" by P. I. Good and J. W. Hardin, I got to an explanation and example that gave me the gut feeling about the NP lemma I had always missed. In not 100% mathematically perfect language, what Neyman-Pearson tells us is that the most powerful test one can come up with to validate a given hypothesis within a certain significance level is given by a rejection region made of all possible observations from this test with a likelihood ratio above a certain threshold... woahhh! Who said it was easy! Keep calm and deconstruct the lemma: Hypothesis. In statistics one always works with two hypotheses that a statistical test should reject or not reject. There is the null hypothesis, which will not be rejected until sample evidence against it is strong enough. There is also the alternative hypothesis, the one we will take if the null seems to be false. Power of a test (a.k.a. sensitivity) tells us the proportion of times we will correctly reject the null hypothesis when it is wrong.
We want powerful tests, so that most of the time we reject the null hypothesis we are right! Significance level of a test (a.k.a. false positive rate) tells us the proportion of times we will wrongly reject the null hypothesis when it is true. We want a small significance level, so that most of the time we reject the null hypothesis we are not wrong! Rejection region: given all possible outcomes of the test, the rejection region includes those outcomes that will make us reject the null hypothesis in favor of its alternative. Likelihood is the probability of having seen the observed outcome of the test given that the null hypothesis (likelihood of the null hypothesis) or the alternative one (likelihood of the alternative hypothesis) were true. Likelihood ratio is the likelihood of the alternative hypothesis divided by the likelihood of the null hypothesis. If the test outcome was very much expected under the null hypothesis versus the alternative one, the likelihood ratio should be small. Enough definitions! (Although if you look at them carefully, you will realize they are very insightful!) Let's go to what Neyman and Pearson tell us: if you want the best possible statistical test from the point of view of its power, just define the rejection region by including those test results that have the highest likelihood ratio, and keep adding more test results until you reach a certain value for the proportion of times your test will reject the null hypothesis when it is true (the significance level). Let's see an example where hopefully everything will come together. The example is based on the book mentioned above. It is completely made up by myself, so it should not be viewed as reflecting any reality or personal opinion. Imagine one wants to determine whether somebody is in favor of setting immigration quotas (null hypothesis) or not (alternative hypothesis) by asking about his/her feelings toward the European Union.
Imagine we knew the actual probability distribution for both types of people regarding the answer to our question. Let's imagine we are willing to accept a false positive error of 30%; that is, 30% of the time we will reject the null hypothesis and assume the interviewed person is against quotas when he/she is really for them. How would we construct the test? According to Neyman and Pearson, we would first take the result with the highest likelihood ratio. This is the answer "really like the EU", with a ratio of 3. With this result, if we assume somebody is against quotas when he/she said he "really likes the EU", 10% of the time we would be classifying for-quota people as against (significance). However, we would only be correctly classifying against-quota people 30% of the time (power), as not everybody in this group has the same opinion about the EU. This seems a poor result as far as power is concerned; however, the test does not make many mistakes misclassifying for-quota people (significance). As we can be more flexible regarding significance, let's look for the next test result to add to the bag of answers that reject the null hypothesis (the rejection region). The next answer with the highest likelihood ratio is "like the EU". If we use the answers "really like" and "like" the EU as test results that allow us to reject the null hypothesis of somebody being for quotas, we would be misclassifying for-quota people 30% of the time (10% from "really like" and 20% from "like"), and we would be correctly classifying against-quota people 65% of the time (30% from "really like" and 35% from "like"). In statistical jargon: our significance increased from 10% to 30% (bad!) while the power of our test increased from 30% to 65% (good!). This is a situation all statistical tests face. There is no such thing as a free lunch, even in statistics!
If you want to increase the power of your test, you do it at the expense of increasing the significance level. Or in simpler terms: if you want to better classify the good guys, you will do so at the expense of having more bad guys looking good! Basically, now we are done! We created the most powerful test we could with the given data and a significance level of 30%, by using the "really like" and "like" labels to determine whether somebody is against quotas... are we sure? What would have happened if, in the second step after the "really like" answer was chosen, we had included the answer "indifferent" instead of "like"? The significance of the test would have been the same as before, at 30%: 10% of for-quota people answer "really like" and 20% answer "indifferent". Both tests would be equally bad at misclassifying for-quota individuals. However, the power would get worse! With the new test we would have a power of 50% instead of the 65% we had before: 30% from "really like" and 20% from "indifferent". With the new test we would be less precise at identifying against-quota individuals! Who helped out here? Neyman-Pearson's remarkable likelihood-ratio idea! Taking at each step the answer with the highest likelihood ratio ensures that we include in the test as much power as possible (large numerator) while keeping the significance under control (small denominator)!
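The greedy construction in the example can be sketched in a few lines. The probabilities for "really like", "like" and "indifferent" are the ones quoted above; the original table is not reproduced here, so the "dislike" row (which just absorbs the remaining probability mass) is my own assumption.

```python
# P(answer | for quotas) is the null, P(answer | against quotas) the alternative.
# The first three entries are the figures quoted in the example; "dislike"
# is an assumed catch-all for the remaining mass.
p_null = {"really like": 0.10, "like": 0.20, "indifferent": 0.20, "dislike": 0.50}
p_alt  = {"really like": 0.30, "like": 0.35, "indifferent": 0.20, "dislike": 0.15}

def np_rejection_region(alpha):
    """Neyman-Pearson recipe: add answers in decreasing likelihood-ratio
    order while the accumulated significance stays within alpha."""
    by_ratio = sorted(p_null, key=lambda a: p_alt[a] / p_null[a], reverse=True)
    region, significance, power = [], 0.0, 0.0
    for answer in by_ratio:
        if significance + p_null[answer] > alpha + 1e-9:
            break
        region.append(answer)
        significance += p_null[answer]
        power += p_alt[answer]
    return region, significance, power

region, sig, pwr = np_rejection_region(alpha=0.30)
print(region, round(sig, 2), round(pwr, 2))  # ['really like', 'like'] 0.3 0.65
```

Reordering the loop to take "indifferent" before "like" would reproduce the cautionary tale above: the same 30% significance, but only 50% power.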
12,047
Neyman-Pearson lemma
The Context (In this section I'm just going to explain hypothesis testing, type one and two errors, etc, in my own style. If you're comfortable with this material, skip to the next section) The Neyman-Pearson lemma comes up in the problem of simple hypothesis testing. We have two different probability distributions on a common space $\Omega$: $P_0$ and $P_1$, called the null and the alternative hypotheses. Based on a single observation $\omega\in\Omega$, we have to come up with a guess for which of the two probability distributions is in effect. A test is therefore a function which to each $\omega$ assigns a guess of either "null hypothesis" or "alternative hypothesis". A test can obviously be identified with the region on which it returns "alternative", so we're just looking for subsets (events) of the probability space. Typically in applications, the null hypothesis corresponds to some kind of status quo, whereas the alternative hypothesis is some new phenomenon which you're trying to prove or disprove is real. For example, you may be testing someone for psychic powers. You run the standard test with the cards with squiggly lines or what not, and get them to guess a certain number of times. The null hypothesis is that they'll get no more than one in five right (since there's five cards), the alternative hypothesis is that they're psychic and may get more right. What we'd like to do is minimize the probability of making a mistake. Unfortunately, that's a meaningless notion. There are two ways you could make a mistake. Either the null hypothesis is true, and you sample an $\omega$ in your test's "alternative" region, or the alternative hypothesis is true, and you sample the "null" region. 
Now, if you fix a region $A$ of the probability space (a test), then the numbers $P_0(A)$ and $P_1(A^{c})$, the probabilities of making those two kinds of errors, are completely well-defined, but since you have no prior notion of "probability that the null/alternative hypothesis is true", you can't get a meaningful "probability of either kind of mistake". So this is a fairly typical situation in mathematics where we want the "best" of some class of objects, but when you look closely, there is no "best". In fact, what we're trying to do is minimize $P_0(A)$ while maximizing $P_1(A)$, which are clearly opposing goals. Keeping in mind the example of the psychic abilities test, I like to refer to the type of mistake in which the null is true but you conclude the alternative as true as "delusion" (you believe the guy's psychic but he's not), and the other kind of mistake as "obliviousness". The Lemma The approach of the Neyman-Pearson lemma is the following: let's just pick some maximal probability of delusion $\alpha$ that we're willing to tolerate, and then find the test that has minimal probability of obliviousness while satisfying that upper bound. The result is that such tests always have the form of a likelihood-ratio test: Proposition (Neyman-Pearson lemma) If $L_0, L_1$ are the likelihood functions (PDFs) of the null and alternative hypotheses, and $\alpha > 0$, then the region $A\subseteq \Omega$ which maximizes $P_1(A)$ while maintaining $P_0(A)\leq \alpha$ is of the form $$A=\{\omega\in \Omega \mid \frac{L_1(\omega)}{L_0(\omega)} \geq K \}$$ for some constant $K>0$. Conversely, for any $K$, the above test has $P_1(A)\geq P_1(B)$ for any $B$ such that $P_0(B)\leq P_0(A)$. Thus, all we have to do is find the constant $K$ such that $P_0(A)=\alpha$. The proof on Wikipedia at time of writing is a pretty typically oracular mathematical proof that just consists in conjecturing that form and then verifying that it is indeed optimal. 
Of course the real mystery is where this idea of taking a ratio of the likelihoods even came from, and the answer is: the likelihood ratio is simply the density of $P_1$ with respect to $P_0$. If you've learned probability via the modern approach with Lebesgue integrals and what not, then you know that under fairly unrestrictive conditions, it's always possible to express one probability measure as being given by a density function with respect to another. In the conditions of the Neyman-Pearson lemma, we have two probability measures $P_0$, $P_1$ which both have densities with respect to some underlying measure, usually the counting measure on a discrete space, or the Lebesgue measure on $\mathbb R^n$. It turns out that since the quantity that we're interested in controlling is $P_0(A)$, we should be taking $P_0$ as our underlying measure, and viewing $P_1$ in terms of how it relates to $P_0$; thus, we consider $P_1$ to be given by a density function with respect to $P_0$. Buying land The heart of the lemma is therefore the following: Let $\mu$ be a measure on some space $\Omega$, and let $f$ be a positive, integrable function on $\Omega$. Let $\alpha > 0$. Then the set $A$ with $\mu(A)\leq\alpha$ which maximizes $\int_A f\,d\mu$ is of the form $$\{\omega\in\Omega\mid f(\omega)\geq K\}$$ for some constant $K>0$, and conversely, any such set maximizes $\int f$ over all sets $B$ smaller than itself in measure. Suppose you're buying land. You can only afford $\alpha$ acres, but there's a utility function $f$ over the land, quantifying, say, potential for growing crops, and so you want a region maximizing $\int f$. Then the above proposition says that your best bet is to basically order the land from most useful to least useful, and buy it up in order of best to worst until you reach the maximum area $\alpha$. In hypothesis testing, $\mu$ is $P_0$, and $f$ is the density of $P_1$ with respect to $P_0$ (which, as already stated, is $L_1/L_0$).
Here's a quick heuristic proof: out of a given region of land $A$, consider some small one meter by one meter square tile, $B$. If you can find another tile $B'$ of the same area somewhere outside of $A$, but such that the utility of $B'$ is greater than that of $B$, then clearly $A$ is not optimal, since it could be improved by swapping $B$ for $B'$. Thus an optimal region must be "closed upwards", meaning if $x\in A$ and $f(y)>f(x)$, then $y$ must be in $A$, otherwise we could do better by swapping $x$ and $y$. This is equivalent to saying that $A$ is simply $f^{-1}([K, +\infty))$ for some $K$.
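The land-buying argument can be made concrete with a toy discrete space (all plot names and utility values are made up for illustration): buying the best plots first yields exactly an upper level set $\{f \geq K\}$.

```python
# Plots of equal (unit) area with a utility density f; the optimal
# budget-alpha region should be {f >= K} for some threshold K.
f = {"A": 9.0, "B": 1.0, "C": 7.0, "D": 4.0, "E": 2.0}
AREA = 1  # every plot has unit area

def best_region(budget):
    """Greedily buy plots in decreasing order of utility until the budget
    (total area alpha) is exhausted."""
    chosen, used = set(), 0
    for plot in sorted(f, key=f.get, reverse=True):
        if used + AREA > budget:
            break
        chosen.add(plot)
        used += AREA
    return chosen

region = best_region(budget=3)
print(sorted(region))                 # the three best plots: A, C, D
K = min(f[p] for p in region)         # implied threshold K = 4.0
print(all((f[p] >= K) == (p in region) for p in f))  # True: an upper level set
```

Swapping any chosen plot for an unchosen one would strictly lower $\int_A f\,d\mu$, which is the heuristic proof in code form.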
12,048
How to calculate purity?
Within the context of cluster analysis, Purity is an external evaluation criterion of cluster quality. It is the fraction of the total number of objects (data points) that were classified correctly, in the unit range $[0, 1]$. $$\text{Purity} = \frac{1}{N} \sum_{i=1}^k \max_j | c_i \cap t_j | $$ where $N$ = number of objects (data points), $k$ = number of clusters, $c_i$ is a cluster in $C$, and $t_j$ is the classification which has the max count for cluster $c_i$. When we say "correctly" that implies that each cluster $c_i$ has identified a group of objects as the same class that the ground truth has indicated. We use the ground truth classification $t_j$ of those objects as the measure of assignment correctness; however, to do so we must know which cluster $c_i$ maps to which ground truth classification $t_j$. If it were 100% accurate then each $c_i$ would map to exactly one $t_j$, but in reality our $c_i$ contains some points whose ground truth classified them as several other classifications. Naturally then we can see that the highest clustering quality will be obtained by using the $c_i$ to $t_j$ mapping which has the greatest number of correct classifications, i.e. the largest $| c_i \cap t_j |$. That is where the $\max$ comes from in the equation. To calculate Purity, first create your confusion matrix. This can be done by looping through each cluster $c_i$ and counting how many objects were classified as each class $t_j$. | T1 | T2 | T3 --------------------- C1 | 0 | 53 | 10 C2 | 0 | 1 | 60 C3 | 0 | 16 | 0 Then for each cluster $c_i$, select the maximum value from its row, sum them together and finally divide by the total number of data points. Purity = (53 + 60 + 16) / 140 ≈ 0.9214
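The worked example above can be reproduced in a few lines of Python (same confusion matrix; rows are clusters, columns are ground-truth classes):

```python
# Confusion matrix from the example: rows are clusters C1..C3,
# columns are ground-truth classes T1..T3.
confusion = [
    [0, 53, 10],  # C1
    [0,  1, 60],  # C2
    [0, 16,  0],  # C3
]

N = sum(sum(row) for row in confusion)            # total number of objects: 140
purity = sum(max(row) for row in confusion) / N   # (53 + 60 + 16) / 140
print(round(purity, 4))                           # 0.9214
```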
How to calculate purity?
Within the context of cluster analysis, Purity is an external evaluation criterion of cluster quality. It is the percent of the total number of objects(data points) that were classified correctly, in
How to calculate purity? Within the context of cluster analysis, Purity is an external evaluation criterion of cluster quality. It is the fraction of the total number of objects (data points) that were classified correctly, in the unit range $[0, 1]$. $$\text{Purity} = \frac{1}{N} \sum_{i=1}^k \max_j | c_i \cap t_j | $$ where $N$ = number of objects (data points), $k$ = number of clusters, $c_i$ is a cluster in $C$, and $t_j$ is the classification which has the max count for cluster $c_i$. When we say "correctly" that implies that each cluster $c_i$ has identified a group of objects as the same class that the ground truth has indicated. We use the ground truth classification $t_j$ of those objects as the measure of assignment correctness; however, to do so we must know which cluster $c_i$ maps to which ground truth classification $t_j$. If it were 100% accurate then each $c_i$ would map to exactly one $t_j$, but in reality our $c_i$ contains some points whose ground truth classified them as several other classifications. Naturally then we can see that the highest clustering quality will be obtained by using the $c_i$ to $t_j$ mapping which has the greatest number of correct classifications, i.e. the largest $| c_i \cap t_j |$. That is where the $\max$ comes from in the equation. To calculate Purity, first create your confusion matrix. This can be done by looping through each cluster $c_i$ and counting how many objects were classified as each class $t_j$. | T1 | T2 | T3 --------------------- C1 | 0 | 53 | 10 C2 | 0 | 1 | 60 C3 | 0 | 16 | 0 Then for each cluster $c_i$, select the maximum value from its row, sum them together and finally divide by the total number of data points. Purity = (53 + 60 + 16) / 140 ≈ 0.9214
How to calculate purity? Within the context of cluster analysis, Purity is an external evaluation criterion of cluster quality. It is the percent of the total number of objects(data points) that were classified correctly, in
12,049
What is the difference between "margin of error" and "standard error"?
Short answer: they differ by a quantile of the reference (usually, the standard normal) distribution. Long answer: you are estimating a certain population parameter (say, proportion of people with red hair; it may be something far more complicated, from say a logistic regression parameter to the 75th percentile of the gain in achievement scores to whatever). You collect your data, you run your estimation procedure, and the very first thing you look at is the point estimate, the quantity that approximates what you want to learn about your population (the sample proportion of redheads is 7%). Since this is a sample statistic, it is a random variable. As a random variable, it has a (sampling) distribution that can be characterized by mean, variance, distribution function, etc. While the point estimate is your best guess regarding the population parameter, the standard error is your best guess regarding the standard deviation of your estimator (or, in some cases, the square root of the mean squared error, MSE = bias$^2$ + variance). For a sample of size $n=1000$, the standard error of your proportion estimate is $\sqrt{0.07\cdot0.93/1000}$ $=0.0081$. The margin of error is the half-width of the associated confidence interval, so for the 95% confidence level, you would have $z_{0.975}=1.96$ resulting in a margin of error $0.0081\cdot1.96=0.0158$.
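The redhead numbers above can be checked directly (a sketch of the two formulas, with the 1.96 multiplier hard-coded as in the answer):

```python
from math import sqrt

p, n = 0.07, 1000            # sample proportion and sample size
se = sqrt(p * (1 - p) / n)   # standard error of the proportion estimate
me = 1.96 * se               # margin of error: half-width of the 95% CI

print(round(se, 4), round(me, 4))  # 0.0081 0.0158
```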
What is the difference between "margin of error" and "standard error"?
Short answer: they differ by a quantile of the reference (usually, the standard normal) distribution. Long answer: you are estimating a certain population parameter (say, proportion of people with red
What is the difference between "margin of error" and "standard error"? Short answer: they differ by a quantile of the reference (usually, the standard normal) distribution. Long answer: you are estimating a certain population parameter (say, proportion of people with red hair; it may be something far more complicated, from say a logistic regression parameter to the 75th percentile of the gain in achievement scores to whatever). You collect your data, you run your estimation procedure, and the very first thing you look at is the point estimate, the quantity that approximates what you want to learn about your population (the sample proportion of redheads is 7%). Since this is a sample statistic, it is a random variable. As a random variable, it has a (sampling) distribution that can be characterized by mean, variance, distribution function, etc. While the point estimate is your best guess regarding the population parameter, the standard error is your best guess regarding the standard deviation of your estimator (or, in some cases, the square root of the mean squared error, MSE = bias$^2$ + variance). For a sample of size $n=1000$, the standard error of your proportion estimate is $\sqrt{0.07\cdot0.93/1000}$ $=0.0081$. The margin of error is the half-width of the associated confidence interval, so for the 95% confidence level, you would have $z_{0.975}=1.96$ resulting in a margin of error $0.0081\cdot1.96=0.0158$.
What is the difference between "margin of error" and "standard error"? Short answer: they differ by a quantile of the reference (usually, the standard normal) distribution. Long answer: you are estimating a certain population parameter (say, proportion of people with red
12,050
What is the difference between "margin of error" and "standard error"?
This is an expanded (or exegetical expansion of @StasK's answer) attempt at the question focusing on proportions. Standard Error: The standard error (SE) of the sampling distribution of a proportion $p$ is defined as: $\text{SE}_p=\sqrt{\frac{p\,(1-p)}{n}}$. This can be contrasted to the standard deviation (SD) of the sampling distribution of a proportion $\pi$: $\sigma_p=\sqrt{\frac{\pi\,(1-\pi)}{n}}$. Confidence Interval: The confidence interval estimates the population parameter $\pi$ based on the sampling distribution and the central limit theorem (CLT), which allows a normal approximation. Hence, given the SE and a proportion, the $95\%$ confidence interval will be calculated as: $$p\,\pm\,Z_{\alpha/2}\,\text{SE}$$ Given that $Z_{\alpha/2}=Z_{0.975}=1.959964\approx1.96$, the CI will be: $$p\,\pm\,1.96\,\sqrt{\frac{p\,(1-p)}{n}}$$ This raises a question regarding the utilization of the normal distribution even if we really don't know the population SD: when estimating confidence intervals for means, if the SE is used in lieu of the SD, the $t$ distribution is typically felt to be a better choice due to its fatter tails. However, in the case of a proportion, there is only one parameter, $p$, being estimated, since the formula for the Bernoulli variance is entirely dependent on $p$ as $p\,(1-p)$. This is very nicely explained here. Margin of Error: The margin of error is simply the "radius" (or half the width) of a confidence interval for a particular statistic, in this case the sample proportion: $\text{ME}_{\text{@ 95% CI}}=1.96\,\sqrt{\frac{p\,(1-p)}{n}}$. Graphically,
What is the difference between "margin of error" and "standard error"?
This is an expanded (or exegetical expansion of @StasK answer) attempt at the question focusing on proportions. Standard Error: The standard error (SE) of the sampling distribution a proportion $p$ is
What is the difference between "margin of error" and "standard error"? This is an expanded (or exegetical expansion of @StasK's answer) attempt at the question focusing on proportions. Standard Error: The standard error (SE) of the sampling distribution of a proportion $p$ is defined as: $\text{SE}_p=\sqrt{\frac{p\,(1-p)}{n}}$. This can be contrasted to the standard deviation (SD) of the sampling distribution of a proportion $\pi$: $\sigma_p=\sqrt{\frac{\pi\,(1-\pi)}{n}}$. Confidence Interval: The confidence interval estimates the population parameter $\pi$ based on the sampling distribution and the central limit theorem (CLT), which allows a normal approximation. Hence, given the SE and a proportion, the $95\%$ confidence interval will be calculated as: $$p\,\pm\,Z_{\alpha/2}\,\text{SE}$$ Given that $Z_{\alpha/2}=Z_{0.975}=1.959964\approx1.96$, the CI will be: $$p\,\pm\,1.96\,\sqrt{\frac{p\,(1-p)}{n}}$$ This raises a question regarding the utilization of the normal distribution even if we really don't know the population SD: when estimating confidence intervals for means, if the SE is used in lieu of the SD, the $t$ distribution is typically felt to be a better choice due to its fatter tails. However, in the case of a proportion, there is only one parameter, $p$, being estimated, since the formula for the Bernoulli variance is entirely dependent on $p$ as $p\,(1-p)$. This is very nicely explained here. Margin of Error: The margin of error is simply the "radius" (or half the width) of a confidence interval for a particular statistic, in this case the sample proportion: $\text{ME}_{\text{@ 95% CI}}=1.96\,\sqrt{\frac{p\,(1-p)}{n}}$. Graphically,
What is the difference between "margin of error" and "standard error"? This is an expanded (or exegetical expansion of @StasK answer) attempt at the question focusing on proportions. Standard Error: The standard error (SE) of the sampling distribution a proportion $p$ is
12,051
What is the difference between "margin of error" and "standard error"?
The margin of error is the amount added and subtracted in a confidence interval. The standard error is the standard deviation of the sample statistics if we could take many samples of the same size.
What is the difference between "margin of error" and "standard error"?
The margin of error is the amount added and subtracted in a confidence interval. The standard error is the standard deviation of the sample statistics if we could take many samples of the same size.
What is the difference between "margin of error" and "standard error"? The margin of error is the amount added and subtracted in a confidence interval. The standard error is the standard deviation of the sample statistics if we could take many samples of the same size.
What is the difference between "margin of error" and "standard error"? The margin of error is the amount added and subtracted in a confidence interval. The standard error is the standard deviation of the sample statistics if we could take many samples of the same size.
12,052
What is the difference between "margin of error" and "standard error"?
Sampling error measures the extent to which a sample statistic differs from the parameter being estimated; the standard error, on the other hand, tries to quantify the variation among sample statistics drawn from the same population.
What is the difference between "margin of error" and "standard error"?
sampling error measures the extent to which a sample statistic differs with the parameter being estimated on the other hand standard error try to quantify the variation among sample statistics drawn f
What is the difference between "margin of error" and "standard error"? Sampling error measures the extent to which a sample statistic differs from the parameter being estimated; the standard error, on the other hand, tries to quantify the variation among sample statistics drawn from the same population.
What is the difference between "margin of error" and "standard error"? sampling error measures the extent to which a sample statistic differs with the parameter being estimated on the other hand standard error try to quantify the variation among sample statistics drawn f
12,053
What is the difference between "margin of error" and "standard error"?
Using @Antoni Parellada's example of a sample proportion, $\hat{p}$: The standard error of the sample is defined as: $$ \hat{SE} = \sqrt{\frac{\hat{p}(1-\hat{p})}{n}} $$ The margin of error multiplies this by the z-score at a specified significance level $\alpha$ (e.g., $\alpha =$ 0.05 corresponds to a 95% CI): $$ M = z_{\alpha/2} \cdot \hat{SE} $$ The margin of error is the half-width of the confidence interval [of the sample proportion in this case]: $$ \hat{p} \pm M $$
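A small sketch of these formulas (the helper name `margin_of_error` is mine; `statistics.NormalDist`, in the Python standard library since 3.8, supplies $z_{\alpha/2}$):

```python
from statistics import NormalDist

def margin_of_error(p_hat, n, alpha=0.05):
    """Half-width of the (1 - alpha) normal-approximation CI for a proportion."""
    z = NormalDist().inv_cdf(1 - alpha / 2)  # z_{alpha/2}; about 1.96 for alpha = 0.05
    se = (p_hat * (1 - p_hat) / n) ** 0.5    # SE-hat from the formula above
    return z * se

m = margin_of_error(0.07, 1000)  # the redhead example from the other answer
ci = (0.07 - m, 0.07 + m)        # p_hat +/- M
```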
What is the difference between "margin of error" and "standard error"?
Using @Antoni Parellada's example of sampling proportion, $\hat{p}$, The standard error of the sample is defined as: $$ \hat{SE} = \sqrt{\frac{\hat{p}(1-\hat{p})}{n}} $$ The margin of error utilizes
What is the difference between "margin of error" and "standard error"? Using @Antoni Parellada's example of a sample proportion, $\hat{p}$: The standard error of the sample is defined as: $$ \hat{SE} = \sqrt{\frac{\hat{p}(1-\hat{p})}{n}} $$ The margin of error multiplies this by the z-score at a specified significance level $\alpha$ (e.g., $\alpha =$ 0.05 corresponds to a 95% CI): $$ M = z_{\alpha/2} \cdot \hat{SE} $$ The margin of error is the half-width of the confidence interval [of the sample proportion in this case]: $$ \hat{p} \pm M $$
What is the difference between "margin of error" and "standard error"? Using @Antoni Parellada's example of sampling proportion, $\hat{p}$, The standard error of the sample is defined as: $$ \hat{SE} = \sqrt{\frac{\hat{p}(1-\hat{p})}{n}} $$ The margin of error utilizes
12,054
Are mediation analyses inherently causal?
A. "Mediation" conceptually means causation (as the Kenny quote indicates). Path models that treat a variable as a mediator thus mean to convey that some treatment is influencing an outcome variable through its effect on the mediator, variance in which in turn causes the outcome to vary. But modeling something as a "mediator" doesn't mean it really is a mediator--this is the causation issue. Your post & comment in response to Macro suggest that you have in mind a path analysis in which a variable is modeled as a mediator but isn't viewed as "causal"; I'm not quite seeing why, though. Are you positing that the relationship is spurious--that there is some 3rd variable that is causing both the "independent variable" and the "mediator"? And maybe that both the "independent variable" & the "mediator" in your analysis are in fact mediators of the 3rd variable's influence on the outcome variable? If so, then a reviewer (or any thoughtful person) will want to know what the 3rd variable is & what evidence you have that it is responsible for spurious relationships between what are in fact mediators. This will get you into issues posed by Macro's answer. B. To extend Macro's post, this is a notorious thicket, overgrown with dogma and scholasticism. But here are some highlights: Some people think that you can only "prove" mediation if you experimentally manipulate the mediator as well as the influence that is hypothesized to exert the causal effect. Accordingly, if you did an experiment that manipulated only the causal influence & observed that its impact on the outcome variable was mirrored by changes in the mediator, they'd say "nope! Not good enough!" Basically, though, they just don't think observational methods ever support causal inferences & unmanipulated mediators in experiments are just a special case for them.
Other people, who don't exclude causal inferences from observational studies out of hand, nevertheless believe that if you use really really really complicated statistical methods (including but not limited to structural equation models that compare the covariance matrix for the posited mediating relationship with those for various alternatives), you can effectively silence the critics I just mentioned. Basically this is Baron & Kenny, but on steroids. Empirically speaking, they haven't silenced them; logically, I don't see how they could. Still others, most notably, Judea Pearl, say that the soundness of causal inferences in either experimental or observational studies can never be proven w/ statistics; the strength of the inference inheres in the validity of the design. Statistics only confirm the effect causal inference contemplates or depends on. Some readings (all of which are good, not dogmatic or scholastic): Green, D.P., Ha, S.E. & Bullock, J.G. Enough Already about “Black Box” Experiments: Studying Mediation Is More Difficult than Most Scholars Suppose. The ANNALS of the American Academy of Political and Social Science 628, 200-208 (2010). Sobel, M.E. Identification of Causal Parameters in Randomized Studies With Mediating Variables. Journal of Educational and Behavioral Statistics 33, 230-251 (2008). Pearl, J. An Introduction to Causal Inference. The International Journal of Biostatistics 6, Article 7 (2010). Last but by no means least, part of a cool exchange between Gelman & Pearl on causal inference in which mediation was central focus: http://andrewgelman.com/2007/07/identification/
Are mediation analyses inherently causal?
A. "Mediation" conceptually means causation (as Kenny quote indicates). Path models that treat a variable as a mediator thus mean to convey that some treatment is influencing an outcome variable throu
Are mediation analyses inherently causal? A. "Mediation" conceptually means causation (as the Kenny quote indicates). Path models that treat a variable as a mediator thus mean to convey that some treatment is influencing an outcome variable through its effect on the mediator, variance in which in turn causes the outcome to vary. But modeling something as a "mediator" doesn't mean it really is a mediator--this is the causation issue. Your post & comment in response to Macro suggest that you have in mind a path analysis in which a variable is modeled as a mediator but isn't viewed as "causal"; I'm not quite seeing why, though. Are you positing that the relationship is spurious--that there is some 3rd variable that is causing both the "independent variable" and the "mediator"? And maybe that both the "independent variable" & the "mediator" in your analysis are in fact mediators of the 3rd variable's influence on the outcome variable? If so, then a reviewer (or any thoughtful person) will want to know what the 3rd variable is & what evidence you have that it is responsible for spurious relationships between what are in fact mediators. This will get you into issues posed by Macro's answer. B. To extend Macro's post, this is a notorious thicket, overgrown with dogma and scholasticism. But here are some highlights: Some people think that you can only "prove" mediation if you experimentally manipulate the mediator as well as the influence that is hypothesized to exert the causal effect. Accordingly, if you did an experiment that manipulated only the causal influence & observed that its impact on the outcome variable was mirrored by changes in the mediator, they'd say "nope! Not good enough!" Basically, though, they just don't think observational methods ever support causal inferences & unmanipulated mediators in experiments are just a special case for them.
Other people, who don't exclude causal inferences from observational studies out of hand, nevertheless believe that if you use really really really complicated statistical methods (including but not limited to structural equation models that compare the covariance matrix for the posited mediating relationship with those for various alternatives), you can effectively silence the critics I just mentioned. Basically this is Baron & Kenny, but on steroids. Empirically speaking, they haven't silenced them; logically, I don't see how they could. Still others, most notably, Judea Pearl, say that the soundness of causal inferences in either experimental or observational studies can never be proven w/ statistics; the strength of the inference inheres in the validity of the design. Statistics only confirm the effect causal inference contemplates or depends on. Some readings (all of which are good, not dogmatic or scholastic): Green, D.P., Ha, S.E. & Bullock, J.G. Enough Already about “Black Box” Experiments: Studying Mediation Is More Difficult than Most Scholars Suppose. The ANNALS of the American Academy of Political and Social Science 628, 200-208 (2010). Sobel, M.E. Identification of Causal Parameters in Randomized Studies With Mediating Variables. Journal of Educational and Behavioral Statistics 33, 230-251 (2008). Pearl, J. An Introduction to Causal Inference. The International Journal of Biostatistics 6, Article 7 (2010). Last but by no means least, part of a cool exchange between Gelman & Pearl on causal inference in which mediation was central focus: http://andrewgelman.com/2007/07/identification/
Are mediation analyses inherently causal? A. "Mediation" conceptually means causation (as Kenny quote indicates). Path models that treat a variable as a mediator thus mean to convey that some treatment is influencing an outcome variable throu
12,055
Are mediation analyses inherently causal?
Causality and Mediation A mediation model makes theoretical claims about causality. The model proposes that the IV causes the DV and that this effect is totally or partially explained by a chain of causality whereby the IV causes the MEDIATOR which in turn causes the DV. Support for a mediational model does not prove the proposed causal pathway. Statistical tests of mediation are typically based on observational studies. The range of alternative causal interpretations is large (e.g., third variables, alternative directions, reciprocity, etc.). I am typically not persuaded by arguments (if any) presented by researchers who propose causal claims implied in mediation models. Support for a mediational model may provide evidence to supplement other sources of evidence when building an argument for a causal claim. In summary, correlation does not prove causation, but it can provide supplementary evidence. Despite the limitations of tests of mediation in observational studies, (a) mediation models are good for getting researchers thinking about causal pathways, and (b) there are better and worse ways to write up mediation models, where better ways acknowledge nuances in interpretation and provide thorough theoretical discussion of the evidence both for the proposed causal pathway and alternative causal pathways (see this page of tips that I prepared). @dmk38 has provided some excellent references and additional discussion. Showing that a variable explains the prediction of another variable Based on your description, mediation does NOT appear to be aligned with your research question. As such I would avoid using the language of mediation in your analyses. As I understand it, your research question is concerned with whether the predictive effect of one variable (let's call it X1 instead of IV) on the DV is explained by a second variable (let's call it X2 instead of MEDIATOR).
You may also be making causal claims like: X2 causes DV, but X1 is only correlated with X2 and does not cause DV. There are several statistical tests that might be suitable for testing this research question: Compare zero-order (X1 with DV) with semi-partial correlations (X1 partialling out X2, with DV). I imagine the interesting element would be the degree of reduction and not so much the statistical significance (although of course you would want to get some confidence intervals on that reduction). Or similarly, compare the incremental R-square of a hierarchical regression where you add X2 in block 1 and X1 in block 2 with the R-square of a model with just X1 predicting DV. I imagine you could also draw a path diagram that aligned with your causal assumptions (e.g., double-headed arrows between X1 and X2 and a single-headed arrow between X2 and DV).
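The first suggestion (zero-order versus semi-partial correlation) can be sketched with simulated data; the data-generating setup below is invented purely for illustration, with X2 causing the DV and X1 merely correlated with X2, so partialling X2 out of X1 should collapse the correlation toward zero:

```python
import random
import statistics

random.seed(42)
n = 5000
x2 = [random.gauss(0, 1) for _ in range(n)]
x1 = [0.7 * a + random.gauss(0, 1) for a in x2]  # X1 correlated with X2 only
dv = [0.8 * a + random.gauss(0, 1) for a in x2]  # only X2 causes the DV

def corr(a, b):
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va ** 0.5 * vb ** 0.5)

# Residualize X1 on X2 (OLS slope), then correlate the residual with the DV.
b = corr(x1, x2) * statistics.stdev(x1) / statistics.stdev(x2)
resid = [x - b * z for x, z in zip(x1, x2)]

r_zero = corr(x1, dv)     # zero-order correlation: clearly nonzero
r_semi = corr(resid, dv)  # semi-partial correlation: near zero in this setup
```

The degree of reduction from r_zero to r_semi is the quantity of interest, as the answer notes.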
Are mediation analyses inherently causal?
Causality and Mediation A mediation model makes theoretical claims about causality. The model proposes that the IV causes the DV and that this effect is totally or partially explained by a chain of
Are mediation analyses inherently causal? Causality and Mediation A mediation model makes theoretical claims about causality. The model proposes that the IV causes the DV and that this effect is totally or partially explained by a chain of causality whereby the IV causes the MEDIATOR which in turn causes the DV. Support for a mediational model does not prove the proposed causal pathway. Statistical tests of mediation are typically based on observational studies. The range of alternative causal interpretations is large (e.g., third variables, alternative directions, reciprocity, etc.). I am typically not persuaded by arguments (if any) presented by researchers who propose causal claims implied in mediation models. Support for a mediational model may provide evidence to supplement other sources of evidence when building an argument for a causal claim. In summary, correlation does not prove causation, but it can provide supplementary evidence. Despite the limitations of tests of mediation in observational studies, (a) mediation models are good for getting researchers thinking about causal pathways, and (b) there are better and worse ways to write up mediation models, where better ways acknowledge nuances in interpretation and provide thorough theoretical discussion of the evidence both for the proposed causal pathway and alternative causal pathways (see this page of tips that I prepared). @dmk38 has provided some excellent references and additional discussion. Showing that a variable explains the prediction of another variable Based on your description, mediation does NOT appear to be aligned with your research question. As such I would avoid using the language of mediation in your analyses. As I understand it, your research question is concerned with whether the predictive effect of one variable (let's call it X1 instead of IV) on the DV is explained by a second variable (let's call it X2 instead of MEDIATOR).
You may also be making causal claims like: X2 causes DV, but X1 is only correlated with X2 and does not cause DV. There are several statistical tests that might be suitable for testing this research question: Compare zero-order (X1 with DV) with semi-partial correlations (X1 partialling out X2, with DV). I imagine the interesting element would be the degree of reduction and not so much the statistical significance (although of course you would want to get some confidence intervals on that reduction). Or similarly, compare the incremental R-square of a hierarchical regression where you add X2 in block 1 and X1 in block 2 with the R-square of a model with just X1 predicting DV. I imagine you could also draw a path diagram that aligned with your causal assumptions (e.g., double-headed arrows between X1 and X2 and a single-headed arrow between X2 and DV).
Are mediation analyses inherently causal? Causality and Mediation A mediation model makes theoretical claims about causality. The model proposes that the IV causes the DV and that this effect is totally or partially explained by a chain of
12,056
Are mediation analyses inherently causal?
I believe that those variables you are talking about should perhaps be considered 'control' variables if the IV doesn't cause them, or moderators if you expect an interaction effect. Try it out on paper and work it over in your mind a couple of times, or draw the hypothesized effects.
Are mediation analyses inherently causal?
I believe that those variables you are talking about, should perhaps be considered 'control' variables if the IV doesn't cause them or moderators if you expect an interaction effect. Try it out on pap
Are mediation analyses inherently causal? I believe that those variables you are talking about should perhaps be considered 'control' variables if the IV doesn't cause them, or moderators if you expect an interaction effect. Try it out on paper and work it over in your mind a couple of times, or draw the hypothesized effects.
Are mediation analyses inherently causal? I believe that those variables you are talking about, should perhaps be considered 'control' variables if the IV doesn't cause them or moderators if you expect an interaction effect. Try it out on pap
12,057
Are mediation analyses inherently causal?
Perhaps better language, or at least a lot less confusing, is spurious correlation. A typical example of this is that ice-cream consumption correlates with drowning. Therefore, someone might think, ice-cream consumption causes drowning. Spurious correlation occurs when a third "lurking" (confounding) variable is actually causal with respect to the first two. In our example, we looked at ice-cream sales and drownings over time, and forgot about seasonal effects driven by temperature, and, sure enough, more ice-cream is eaten when it is hot, and more people drown, because more seek relief from heat by swimming and eating ice-cream. Some humorous examples. The question, then, boils down to: what would one use a spurious correlation for? And, it turns out, they are used because people do not test their theories. For example, kidney function is often "normalized" to estimated body surface, as estimated by a formula of weight and height. Now, body surface area does not cause urine to form, and in the weight-and-height formula, the weight is causal via Kleiber's law and the height actually makes the formula less predictive.
Are mediation analyses inherently causal?
Perhaps better language, or at least a lot less confusing is spurious correlation. A typical example for this is that ice-cream consumption correlates with drowning. Therefore, someone might think, ic
Are mediation analyses inherently causal? Perhaps better language, or at least a lot less confusing, is spurious correlation. A typical example of this is that ice-cream consumption correlates with drowning. Therefore, someone might think, ice-cream consumption causes drowning. Spurious correlation occurs when a third "lurking" (confounding) variable is actually causal with respect to the first two. In our example, we looked at ice-cream sales and drownings over time, and forgot about seasonal effects driven by temperature, and, sure enough, more ice-cream is eaten when it is hot, and more people drown, because more seek relief from heat by swimming and eating ice-cream. Some humorous examples. The question, then, boils down to: what would one use a spurious correlation for? And, it turns out, they are used because people do not test their theories. For example, kidney function is often "normalized" to estimated body surface, as estimated by a formula of weight and height. Now, body surface area does not cause urine to form, and in the weight-and-height formula, the weight is causal via Kleiber's law and the height actually makes the formula less predictive.
Are mediation analyses inherently causal? Perhaps better language, or at least a lot less confusing is spurious correlation. A typical example for this is that ice-cream consumption correlates with drowning. Therefore, someone might think, ic
12,058
Are mediation analyses inherently causal?
I came across this post in my own research relating to causal inference in the context of genomics. The attempt at discerning causality in this domain often stems from playing with how a person's genetic code can be thought of as randomized (due to how sex cells are formed and ultimately pair up). Coupling this with known mutations associated with both a "mediator" and an ultimate response, one can reason a causal effect of a mediator on that response under certain definitions of causality (which I'm sure could spark a lengthy debate here). In the case where you use a mediation model and don't claim causality, I couldn't think of why the reviewer would argue. Although you would likely have to rule out whether the mediation effect you observed is confounded by a third variable. If you're interested in causality explicitly, you may want to look into methods from epidemiology like Mendelian Randomization or the "Causal Inference Test". Or start with Instrumental Variable Analysis.
12,059
Assessing the significance of differences in distributions
I believe that this calls for a two-sample Kolmogorov–Smirnov test, or the like. The two-sample Kolmogorov–Smirnov test is based on comparing differences in the empirical distribution functions (ECDF) of two samples, meaning it is sensitive to both the location and shape of the two samples. It also generalizes out to a multivariate form. This test is found in various forms in different packages in R, so if you are basically proficient, all you have to do is install one of them (e.g. fBasics), and run it on your sample data.
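As a quick illustration (sample sizes and distributions here are made up), the SciPy implementation can be used like this; the test picks up a difference in both location and spread:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Two hypothetical samples that differ in location and spread.
a = rng.normal(loc=0.0, scale=1.0, size=500)
b = rng.normal(loc=0.5, scale=1.5, size=500)

# Two-sample KS test: max distance between the two ECDFs.
stat, pvalue = ks_2samp(a, b)
print(f"KS statistic = {stat:.3f}, p-value = {pvalue:.2g}")

# For contrast, a second sample from the same distribution as a.
c = rng.normal(loc=0.0, scale=1.0, size=500)
stat_same, p_same = ks_2samp(a, c)
print(f"same-distribution comparison: p = {p_same:.2g}")
```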
12,060
Assessing the significance of differences in distributions
I'm going to ask the consultant's dumb question. Why do you want to know if these distributions are different in a statistically significant way? Is it that the data that you are using are representative samples from populations or processes, and you want to assess the evidence that those populations or processes differ? If so, then a statistical test is right for you. But this seems like a strange question to me. Or, are you interested in whether you really need to behave as though those populations or processes are different, regardless of the truth? Then you will be better off determining a loss function, ideally one that returns units that are meaningful to you, and predicting the expected loss when you (a) treat the populations as different, and (b) treat them as the same. Or you can choose some quantile of the loss distribution if you want to adopt a more or less conservative position.
12,061
Assessing the significance of differences in distributions
You might be interested in applying relative distribution methods. Call one group the reference group, and the other the comparison group. In a way similar to constructing a probability-probability plot, you can construct a relative CDF/PDF, which is a ratio of the densities. This relative density can be used for inference. If the distributions are identical, you expect a uniform relative distribution. There are tools, graphical and statistical, to explore and examine departures from uniformity. A good starting point to get a better sense is Applying Relative Distribution Methods in R and the reldist package in R. For details, you'll need to refer to the book, Relative Distribution Methods in the Social Sciences by Handcock and Morris. There's also a paper by the authors covering the relevant techniques.
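A minimal sketch of the core idea (hypothetical data, using the reference ECDF as the grade transformation): the "relative data" are the reference ECDF evaluated at the comparison values, and they are approximately Uniform(0, 1) exactly when the two distributions coincide:

```python
import numpy as np

rng = np.random.default_rng(1)

reference = rng.normal(0.0, 1.0, 5000)
comparison = rng.normal(0.3, 1.2, 5000)   # shifted and more spread out

# "Relative data": the reference ECDF evaluated at each comparison value.
# If the two distributions were identical, these would be ~Uniform(0, 1).
ref_sorted = np.sort(reference)
relative = np.searchsorted(ref_sorted, comparison) / len(ref_sorted)

# Crude check of uniformity: bin proportions of the relative data.
counts, _ = np.histogram(relative, bins=10, range=(0, 1))
print(counts / len(relative))   # ~0.1 in every bin only if the groups match
```

Here the heavier upper tail of the comparison group shows up as inflated proportions in the upper bins, which is the departure from uniformity the relative-distribution tools formalize.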
12,062
Assessing the significance of differences in distributions
One measure of the difference between two distributions is the "maximum mean discrepancy" criterion, which basically measures the difference between the empirical means of the samples from the two distributions in a Reproducing Kernel Hilbert Space (RKHS). See this paper "A kernel method for the two sample problem".
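A rough sketch of the biased empirical estimate with an RBF kernel (bandwidth, sample sizes, and distributions are arbitrary choices here, not from the paper):

```python
import numpy as np

def mmd_rbf(x, y, sigma=1.0):
    """Biased empirical MMD^2 between samples x and y with an RBF kernel."""
    def k(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    # MMD^2 = mean k(x,x') + mean k(y,y') - 2 mean k(x,y)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
x = rng.normal(0, 1, (300, 1))
y_same = rng.normal(0, 1, (300, 1))   # same distribution as x
y_diff = rng.normal(1, 1, (300, 1))   # shifted distribution

print(f"MMD^2 (same dist.):      {mmd_rbf(x, y_same):.4f}")
print(f"MMD^2 (different dist.): {mmd_rbf(x, y_diff):.4f}")
```

The statistic is near zero when the samples come from the same distribution and clearly positive otherwise; the paper adds the null-distribution theory needed to turn this into a formal test.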
12,063
Assessing the significance of differences in distributions
I don't know how to use SAS/R/Orange, but it sounds like the kind of test you need is a chi-square test.
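For example (binning scheme and data entirely hypothetical), one can bin both samples and run a chi-square test of independence on the resulting contingency table:

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

# Two hypothetical samples, binned into a 2 x 5 contingency table.
a = rng.normal(0.0, 1.0, 1000)
b = rng.normal(0.5, 1.0, 1000)

# Bin edges from quantiles of the pooled data, so no bin is empty.
edges = np.quantile(np.concatenate([a, b]), np.linspace(0, 1, 6))
table = np.vstack([np.histogram(a, bins=edges)[0],
                   np.histogram(b, bins=edges)[0]])

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")
```

Note the chi-square approach requires binning continuous data, so it is less sensitive than the ECDF-based tests mentioned in the other answers.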
12,064
Why is the James-Stein estimator called a "shrinkage" estimator?
A picture is sometimes worth a thousand words, so let me share one with you. Below you can see an illustration that comes from Bradley Efron's (1977) paper Stein's paradox in statistics. As you can see, what Stein's estimator does is move each of the values closer to the grand average. It makes values greater than the grand average smaller, and values smaller than the grand average, greater. By shrinkage we mean moving the values towards the average, or towards zero in some cases - like regularized regression - that shrinks the parameters towards zero. Of course, it is not only about shrinking itself, but what Stein (1956) and James and Stein (1961) have proved, is that Stein's estimator dominates the maximum likelihood estimator in terms of total squared error, $$ E_\mu(\| \boldsymbol{\hat\mu}^{JS} - \boldsymbol{\mu} \|^2) < E_\mu(\| \boldsymbol{\hat\mu}^{MLE} - \boldsymbol{\mu} \|^2) $$ where $\boldsymbol{\mu} = (\mu_1,\mu_2,\dots,\mu_p)'$, $\hat\mu^{JS}_i$ is the Stein's estimator and $\hat\mu^{MLE}_i = x_i$, where both estimators are estimated on the $x_1,x_2,\dots,x_p$ sample. The proofs are given in the original papers and the appendix of the paper you refer to. In plain English, what they have shown is that if you simultaneously make $p > 2$ guesses, then in terms of total squared error, you'd do better by shrinking them, as compared to sticking to your initial guesses. Finally, Stein's estimator is certainly not the only estimator that gives the shrinkage effect. For other examples, you can check this blog entry, or the referred Bayesian data analysis book by Gelman et al. You can also check the threads about regularized regression, e.g. What problem do shrinkage methods solve?, or When to use regularization methods for regression?, for other practical applications of this effect.
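The dominance result is easy to verify by simulation. The sketch below uses the positive-part James-Stein estimator shrinking towards zero (rather than towards the grand average shown in Efron's figure), with unit variance and hypothetical true means:

```python
import numpy as np

rng = np.random.default_rng(0)

p = 10                        # number of means to estimate (p > 2)
mu = rng.uniform(-2, 2, p)    # hypothetical true means
trials = 2000

mle_err = js_err = 0.0
for _ in range(trials):
    x = rng.normal(mu, 1.0)   # one observation per mean, sigma = 1
    # Positive-part James-Stein shrinkage factor towards zero.
    shrink = max(0.0, 1 - (p - 2) / np.sum(x ** 2))
    js = shrink * x
    mle_err += np.sum((x - mu) ** 2)    # MLE: just use x itself
    js_err += np.sum((js - mu) ** 2)

print(f"avg total squared error, MLE: {mle_err / trials:.2f}")
print(f"avg total squared error, JS:  {js_err / trials:.2f}")
```

The MLE's average total squared error comes out near $p$, as theory predicts, while the shrunken estimates do strictly better, even though every individual guess was pulled away from its own observation.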
12,065
Importance of the bias node in neural networks
Removing the bias will definitely affect the performance and here is why... Each neuron is like a simple logistic regression and you have $y=\sigma(W x + b)$. The input values are multiplied with the weights and the bias affects the initial level of squashing in the sigmoid function (tanh etc.), which produces the desired non-linearity. For example, assume that you want a neuron to fire $y\approx1$ when all the input pixels are black, $x\approx0$. If there is no bias then, no matter what weights $W$ you have, given the equation $y=\sigma(W x)$ the neuron will always fire $y\approx0.5$. Therefore, by removing the bias terms you would substantially decrease your neural network's performance.
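This failure mode is trivial to demonstrate numerically (input size and weights below are arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.zeros(784)                 # an all-black input, as in the example
rng = np.random.default_rng(0)

# Without a bias, the output is sigmoid(0) = 0.5 for ANY weights.
for _ in range(3):
    w = rng.normal(size=784)
    print(sigmoid(w @ x))         # always exactly 0.5

# A bias shifts the operating point: b = 4 lets the neuron fire ~1
# on the all-black input, as desired.
b = 4.0
print(sigmoid(w @ x + b))         # ~0.98
```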
12,066
Importance of the bias node in neural networks
I disagree with the other answer in the particular context of your question. Yes, a bias node matters in a small network. However, in a large model, removing the bias inputs makes very little difference because each node can make a bias node out of the average activation of all of its inputs, which by the law of large numbers will be roughly normal. At the first layer, the ability for this to happen depends on your input distribution. For MNIST for example, the input's average activation is roughly constant. On a small network, of course you need a bias input, but on a large network, removing it makes almost no difference. (But, why would you remove it?)
12,067
Importance of the bias node in neural networks
I'd comment on @NeilG's answer if I had enough reputation, but alas... I disagree with you, Neil, on this. You say: ... the average activation of all of its inputs, which by the law of large numbers will be roughly normal. I'd argue against that, and say that the law of large numbers requires that all observations are independent of each other. This is very much not the case in something like neural nets. Even if each activation is normally distributed, if you observe one input value as being exceptionally high, it changes the probability of all the other inputs. Thus, the "observations", in this case, inputs, are not independent, and the law of large numbers does not apply. Unless I'm not understanding your answer.
12,068
What are efficient ways to organize R code and output? [closed]
You are not the first person to ask this question. Managing a statistical analysis project – guidelines and best practices A workflow for R R Workflow: Slides from a Talk at Melbourne R Users by Jeromy Anglim (including another much longer list of webpages dedicated to R Workflow) My own stuff: Dynamic documents with R and LaTeX as an important part of reproducible research More links to project organization: How to efficiently manage a statistical analysis project?
12,069
What are efficient ways to organize R code and output? [closed]
I for one organize everything into 4 folders for every project or analysis. (1) 'code' Where I store text files of R functions. (2) 'sql' Where I keep the queries used to gather my data. (3) 'dat' Where I keep copies (usually csv) of my raw and processed data. (4) 'rpt' Where I store the reports I've distributed. ALL of my files are named using very verbose names such as 'analysis_of_network_abc_for_research_on_modified_buffer_19May2011' I also write detailed documentation up front where I organize the hypothesis, any assumptions, inclusion and exclusion criteria, and steps I intend to take to reach my deliverable. All of this is invaluable for repeatable research and makes my annual goal setting process easier.
12,070
What are efficient ways to organize R code and output? [closed]
Now that I've made the switch to Sweave I never want to go back. Especially if you have plots as output, it's so much easier to keep track of the code used to create each plot. It also makes it much easier to correct one minor thing at the beginning and have it ripple through the output without having to rerun anything manually.
12,071
What are efficient ways to organize R code and output? [closed]
For structuring single .R code files, you can also use strcode, an RStudio add-in I created to insert code separators (with optional titles) and, based on them, obtain summaries of code files. I explain the usage of it in more detail in this blog post.
12,072
A Measure Theoretic Formulation of Bayes' Theorem
One precise formulation of Bayes' Theorem is the following, taken verbatim from Schervish's Theory of Statistics (1995). The conditional distribution of $\Theta$ given $X=x$ is called the posterior distribution of $\Theta$. The next theorem shows us how to calculate the posterior distribution of a parameter in the case in which there is a measure $\nu$ such that each $P_\theta \ll \nu$. Theorem 1.31 (Bayes' theorem). Suppose that $X$ has a parametric family $\mathcal{P}_0$ of distributions with parameter space $\Omega$. Suppose that $P_\theta \ll \nu$ for all $\theta \in \Omega$, and let $f_{X\mid\Theta}(x\mid\theta)$ be the conditional density (with respect to $\nu$) of $X$ given $\Theta = \theta$. Let $\mu_\Theta$ be the prior distribution of $\Theta$. Let $\mu_{\Theta\mid X}(\cdot \mid x)$ denote the conditional distribution of $\Theta$ given $X = x$. Then $\mu_{\Theta\mid X} \ll \mu_\Theta$, a.s. with respect to the marginal of $X$, and the Radon-Nikodym derivative is $$ \tag{1} \label{1} \frac{d\mu_{\Theta\mid X}}{d\mu_\Theta}(\theta \mid x) = \frac{f_{X\mid \Theta}(x\mid \theta)}{\int_\Omega f_{X\mid\Theta}(x\mid t) \, d\mu_\Theta(t)} $$ for those $x$ such that the denominator is neither $0$ nor infinite. The prior predictive probability of the set of $x$ values such that the denominator is $0$ or infinite is $0$, hence the posterior can be defined arbitrarily for such $x$ values. Edit 1. The setup for this theorem is as follows: There is some underlying probability space $(S, \mathcal{S}, \Pr)$ with respect to which all probabilities are computed. There is a standard Borel space $(\mathcal{X}, \mathcal{B})$ (the sample space) and a measurable map $X : S \to \mathcal{X}$ (the sample or data). There is a standard Borel space $(\Omega, \tau)$ (the parameter space) and a measurable map $\Theta : S \to \Omega$ (the parameter). 
A Measure Theoretic Formulation of Bayes' Theorem One precise formulation of Bayes' Theorem is the following, taken verbatim from Schervish's Theory of Statistics (1995). The conditional distribution of $\Theta$ given $X=x$ is called the posterior distribution of $\Theta$. The next theorem shows us how to calculate the posterior distribution of a parameter in the case in which there is a measure $\nu$ such that each $P_\theta \ll \nu$. Theorem 1.31 (Bayes' theorem). Suppose that $X$ has a parametric family $\mathcal{P}_0$ of distributions with parameter space $\Omega$. Suppose that $P_\theta \ll \nu$ for all $\theta \in \Omega$, and let $f_{X\mid\Theta}(x\mid\theta)$ be the conditional density (with respect to $\nu$) of $X$ given $\Theta = \theta$. Let $\mu_\Theta$ be the prior distribution of $\Theta$. Let $\mu_{\Theta\mid X}(\cdot \mid x)$ denote the conditional distribution of $\Theta$ given $X = x$. Then $\mu_{\Theta\mid X} \ll \mu_\Theta$, a.s. with respect to the marginal of $X$, and the Radon-Nikodym derivative is $$ \tag{1} \label{1} \frac{d\mu_{\Theta\mid X}}{d\mu_\Theta}(\theta \mid x) = \frac{f_{X\mid \Theta}(x\mid \theta)}{\int_\Omega f_{X\mid\Theta}(x\mid t) \, d\mu_\Theta(t)} $$ for those $x$ such that the denominator is neither $0$ nor infinite. The prior predictive probability of the set of $x$ values such that the denominator is $0$ or infinite is $0$, hence the posterior can be defined arbitrarily for such $x$ values. Edit 1. The setup for this theorem is as follows: There is some underlying probability space $(S, \mathcal{S}, \Pr)$ with respect to which all probabilities are computed. There is a standard Borel space $(\mathcal{X}, \mathcal{B})$ (the sample space) and a measurable map $X : S \to \mathcal{X}$ (the sample or data). There is a standard Borel space $(\Omega, \tau)$ (the parameter space) and a measurable map $\Theta : S \to \Omega$ (the parameter). 
The distribution of $\Theta$ is $\mu_\Theta$ (the prior distribution); this is the probability measure on $(\Omega, \tau)$ given by $\mu_\Theta(A) = \Pr(\Theta \in A)$ for all $A \in \tau$. The distribution of $X$ is $\mu_X$ (the marginal distribution mentioned in the theorem); this is the probability measure on $(\mathcal{X}, \mathcal{B})$ given by $\mu_X(B) = \Pr(X \in B)$ for all $B \in \mathcal{B}$. There is a probability kernel $P : \Omega \times \mathcal{B} \to [0, 1]$, denoted $(\theta, B) \mapsto P_\theta(B)$ which represents the conditional distribution of $X$ given $\Theta$. This means that for each $B \in \mathcal{B}$, the map $\theta \mapsto P_\theta(B)$ from $\Omega$ into $[0, 1]$ is measurable, $P_\theta$ is a probability measure on $(\mathcal{X}, \mathcal{B})$ for each $\theta \in \Omega$, and for all $A \in \tau$ and $B \in \mathcal{B}$, $$ \Pr(\Theta \in A, X \in B) = \int_A P_\theta(B) \, d\mu_\Theta(\theta). $$ This is the parametric family of distributions of $X$ given $\Theta$. We assume that there exists a measure $\nu$ on $(\mathcal{X}, \mathcal{B})$ such that $P_\theta \ll \nu$ for all $\theta \in \Omega$, and we choose a version $f_{X\mid\Theta}(\cdot\mid\theta)$ of the Radon-Nikodym derivative $d P_\theta / d \nu$ (strictly speaking, the guaranteed existence of this Radon-Nikodym derivative might require $\nu$ to be $\sigma$-finite). This means that $$ P_\theta(B) = \int_B f_{X\mid\Theta}(x \mid \theta) \, d\nu(x) $$ for all $B \in \mathcal{B}$. It follows that $$ \Pr(\Theta \in A, X \in B) = \int_A \int_B f_{X \mid \Theta}(x \mid \theta) \, d\nu(x) \, d\mu_\Theta(\theta) $$ for all $A \in \tau$ and $B \in \mathcal{B}$. We may assume without loss of generality (e.g., see exercise 9 in Chapter 1 of Schervish's book) that the map $(x, \theta) \mapsto f_{X\mid \Theta}(x\mid\theta)$ of $\mathcal{X}\times\Omega$ into $[0, \infty]$ is measurable. 
Then by Tonelli's theorem we can change the order of integration: $$ \Pr(\Theta \in A, X \in B) = \int_B \int_A f_{X \mid \Theta}(x \mid \theta) \, d\mu_\Theta(\theta) \, d\nu(x) $$ for all $A \in \tau$ and $B \in \mathcal{B}$. In particular, the marginal probability of a set $B \in \mathcal{B}$ is $$ \mu_X(B) = \Pr(X \in B) = \int_B \int_\Omega f_{X \mid \Theta}(x \mid \theta) \, d\mu_\Theta(\theta) \, d\nu(x), $$ which shows that $\mu_X \ll \nu$, with Radon-Nikodym derivative $$ \frac{d\mu_X}{d\nu} = \int_\Omega f_{X \mid \Theta}(x \mid \theta) \, d\mu_\Theta(\theta). $$ There exists a probability kernel $\mu_{\Theta \mid X} : \mathcal{X} \times \tau \to [0, 1]$, denoted $(x, A) \mapsto \mu_{\Theta \mid X}(A \mid x)$, which represents the conditional distribution of $\Theta$ given $X$ (i.e., the posterior distribution). This means that for each $A \in \tau$, the map $x \mapsto \mu_{\Theta \mid X}(A \mid x)$ from $\mathcal{X}$ into $[0, 1]$ is measurable, $\mu_{\Theta \mid X}(\cdot \mid x)$ is a probability measure on $(\Omega, \tau)$ for each $x \in \mathcal{X}$, and for all $A \in \tau$ and $B \in \mathcal{B}$, $$ \Pr(\Theta \in A, X \in B) = \int_B \mu_{\Theta \mid X}(A \mid x) \, d\mu_X(x) $$ Edit 2. Given the setup above, the proof of Bayes' theorem is relatively straightforward. Proof. Following Schervish, let $$ C_0 = \left\{x \in \mathcal{X} : \int_\Omega f_{X \mid \Theta}(x \mid t) \, d\mu_\Theta(t) = 0\right\} $$ and $$ C_\infty = \left\{x \in \mathcal{X} : \int_\Omega f_{X \mid \Theta}(x \mid t) \, d\mu_\Theta(t) = \infty\right\} $$ (these are the sets of potentially problematic $x$ values for the denominator of the right-hand-side of \eqref{1}). 
We have $$ \mu_X(C_0) = \Pr(X \in C_0) = \int_{C_0} \int_\Omega f_{X \mid \Theta}(x \mid t) \, d\mu_\Theta(t) \, d\nu(x) = 0, $$ and $$ \mu_X(C_\infty) = \int_{C_\infty} \int_\Omega f_{X \mid \Theta}(x \mid t) \, d\mu_\Theta(t) \, d\nu(x) = \begin{cases} \infty, & \text{if $\nu(C_\infty) > 0$,} \\ 0, & \text{if $\nu(C_\infty) = 0$.} \end{cases} $$ Since $\mu_X(C_\infty) = \infty$ is impossible ($\mu_X$ is a probability measure), it follows that $\nu(C_\infty) = 0$, whence $\mu_X(C_\infty) = 0$ as well. Thus, $\mu_X(C_0 \cup C_\infty) = 0$, so the set of all $x \in \mathcal{X}$ such that the denominator of the right-hand-side of \eqref{1} is zero or infinite has zero marginal probability. Next, consider that, if $A \in \tau$ and $B \in \mathcal{B}$, then $$ \Pr(\Theta \in A, X \in B) = \int_B \int_A f_{X \mid \Theta}(x \mid \theta) \, d\mu_\Theta(\theta) \, d\nu(x) $$ and simultaneously $$ \begin{aligned} \Pr(\Theta \in A, X \in B) &= \int_B \mu_{\Theta \mid X}(A \mid x) \, d\mu_X(x) \\ &= \int_B \left( \mu_{\Theta \mid X}(A \mid x) \int_\Omega f_{X \mid \Theta}(x \mid t) \, d\mu_\Theta(t) \right) \, d\nu(x). \end{aligned} $$ It follows that $$ \mu_{\Theta \mid X}(A \mid x) \int_\Omega f_{X \mid \Theta}(x \mid t) \, d\mu_\Theta(t) = \int_A f_{X \mid \Theta}(x \mid \theta) \, d\mu_\Theta(\theta) $$ for all $A \in \tau$ and $\nu$-a.e. $x \in \mathcal{X}$, and hence $$ \mu_{\Theta \mid X}(A \mid x) = \int_A \frac{f_{X \mid \Theta}(x \mid \theta)}{\int_\Omega f_{X \mid \Theta}(x \mid t) \, d\mu_\Theta(t)} \, d\mu_\Theta(\theta) $$ for all $A \in \tau$ and $\mu_X$-a.e. $x \in \mathcal{X}$. Thus, for $\mu_X$-a.e. $x \in \mathcal{X}$, $\mu_{\Theta\mid X}(\cdot \mid x) \ll \mu_\Theta$, and the Radon-Nikodym derivative is $$ \frac{d\mu_{\Theta \mid X}}{d \mu_\Theta}(\theta \mid x) = \frac{f_{X \mid \Theta}(x \mid \theta)}{\int_\Omega f_{X \mid \Theta}(x \mid t) \, d\mu_\Theta(t)}, $$ as claimed, completing the proof. 
Lastly, how do we reconcile the colloquial version of Bayes' theorem found so commonly in statistics/machine learning literature, namely, $$ \tag{2} \label{2} p(\theta \mid x) = \frac{p(\theta) p(x \mid \theta)}{p(x)}, $$ with \eqref{1}? On the one hand, the left-hand-side of \eqref{2} is supposed to represent a density of the conditional distribution of $\Theta$ given $X$ with respect to some unspecified dominating measure on the parameter space. In fact, none of the dominating measures for the four different densities in \eqref{2} (all named $p$) are explicitly mentioned. On the other hand, the left-hand-side of \eqref{1} is the density of the conditional distribution of $\Theta$ given $X$ with respect to the prior distribution. If, in addition, the prior distribution $\mu_\Theta$ has a density $f_\Theta$ with respect to some (let's say $\sigma$-finite) measure $\lambda$ on the parameter space $\Omega$, then $\mu_{\Theta \mid X}(\cdot\mid x)$ is also absolutely continuous with respect to $\lambda$ for $\mu_X$-a.e. $x \in \mathcal{X}$, and if $f_{\Theta \mid X}$ represents a version of the Radon-Nikodym derivative $d\mu_{\Theta\mid X}/d\lambda$, then \eqref{1} yields $$ \begin{aligned} f_{\Theta \mid X}(\theta \mid x) &= \frac{d \mu_{\Theta \mid X}}{d\lambda}(\theta \mid x) \\ &= \frac{d \mu_{\Theta \mid X}}{d\mu_\Theta}(\theta \mid x) \frac{d \mu_{\Theta}}{d\lambda}(\theta) \\ &= \frac{d \mu_{\Theta \mid X}}{d\mu_\Theta}(\theta \mid x) f_\Theta(\theta) \\ &= \frac{f_\Theta(\theta) f_{X\mid \Theta}(x\mid \theta)}{\int_\Omega f_{X\mid\Theta}(x\mid t) \, d\mu_\Theta(t)} \\ &= \frac{f_\Theta(\theta) f_{X\mid \Theta}(x\mid \theta)}{\int_\Omega f_\Theta(t) f_{X\mid\Theta}(x\mid t) \, d\lambda(t)}. 
\end{aligned} $$ The translation between this new form and \eqref{2} is $$ \begin{aligned} p(\theta \mid x) &= f_{\Theta \mid X}(\theta \mid x) = \frac{d \mu_{\Theta \mid X}}{d\lambda}(\theta \mid x), &&\text{(posterior)}\\ p(\theta) &= f_\Theta(\theta) = \frac{d \mu_\Theta}{d\lambda}(\theta), &&\text{(prior)} \\ p(x \mid \theta) &= f_{X\mid\Theta}(x\mid\theta) = \frac{d P_\theta}{d\nu}(x), &&\text{(likelihood)} \\ p(x) &= \int_\Omega f_\Theta(t) f_{X\mid\Theta}(x\mid t) \, d\lambda(t). &&\text{(evidence)} \end{aligned} $$
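This correspondence can be sanity-checked on a finite example in which $\lambda$ and $\nu$ are both counting measures, so every Radon-Nikodym derivative above reduces to an ordinary probability mass function. The prior and likelihood values below are invented purely for illustration; exact rational arithmetic makes the check exact.

```python
from fractions import Fraction as F

# Finite parameter space Omega = {0, 1, 2}; prior density p(theta) with
# respect to counting measure, i.e., an ordinary pmf.
prior = {0: F(1, 2), 1: F(1, 3), 2: F(1, 6)}

# Likelihood f(x | theta) for a binary observation x in {0, 1};
# each row is a valid pmf over x.
likelihood = {
    0: {0: F(3, 4), 1: F(1, 4)},
    1: {0: F(1, 2), 1: F(1, 2)},
    2: {0: F(1, 5), 1: F(4, 5)},
}

def posterior(x):
    # Evidence p(x) = sum_t f(x|t) p(t): the denominator of (1).
    evidence = sum(likelihood[t][x] * prior[t] for t in prior)
    # Bayes' theorem in the form (2): p(theta|x) = p(theta) f(x|theta) / p(x).
    return {t: prior[t] * likelihood[t][x] / evidence for t in prior}

post = posterior(1)
# The posterior is a probability measure on Omega ...
assert sum(post.values()) == 1
# ... and agrees with conditioning the joint measure directly.
joint = {t: prior[t] * likelihood[t][1] for t in prior}
marginal = sum(joint.values())
assert all(post[t] == joint[t] / marginal for t in prior)
```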
A Measure Theoretic Formulation of Bayes' Theorem One precise formulation of Bayes' Theorem is the following, taken verbatim from Schervish's Theory of Statistics (1995). The conditional distribution of $\Theta$ given $X=x$ is called the posterior d
12,073
Is decision threshold a hyperparameter in logistic regression?
The decision threshold creates a trade-off between the number of positives that you predict and the number of negatives that you predict -- because, tautologically, increasing the decision threshold will decrease the number of positives that you predict and increase the number of negatives that you predict. The decision threshold is not a hyper-parameter in the sense of model tuning because it doesn't change the flexibility of the model. The way you're thinking about the word "tune" in the context of the decision threshold is different from how hyper-parameters are tuned. Changing $C$ and other model hyper-parameters changes the model (e.g., the logistic regression coefficients will be different), while adjusting the threshold can only do two things: trade off TP for FN, and FP for TN. However, the model remains the same, because this doesn't change the coefficients. (The same is true for models which do not have coefficients, such as random forests: changing the threshold doesn't change anything about the trees.) So in a narrow sense, you're correct that finding the best trade-off among errors is "tuning," but you're wrong in thinking that changing the threshold is linked to other model hyper-parameters in a way that is optimized by GridSearchCV. Stated another way, changing the decision threshold reflects a choice on your part about how many False Positives and False Negatives that you want to have. Consider the hypothetical that you set the decision threshold to a completely implausible value like -1. All probabilities are non-negative, so with this threshold you will predict "positive" for every observation. From a certain perspective, this is great, because your false negative rate is 0.0. However, your false positive rate is also at the extreme of 1.0, so in that sense your choice of threshold at -1 is terrible. The ideal, of course, is to have a TPR of 1.0 and a FPR of 0.0 and a FNR of 0.0. 
But this is usually impossible in real-world applications, so the question then becomes "how much FPR am I willing to accept for how much TPR?" And this is the motivation for ROC curves.
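The trade-off can be made concrete by sweeping the threshold over a set of scores and counting errors at each setting. The labels and predicted probabilities below are invented for illustration.

```python
# Hypothetical true labels and model-predicted probabilities.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1]

def rates(threshold):
    # Apply the decision rule "positive iff score >= threshold" and
    # return (TPR, FPR).
    pred = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, y_true))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, y_true))
    pos = sum(y_true)
    neg = len(y_true) - pos
    return tp / pos, fp / neg

# At an "implausible" threshold like -1 everything is predicted positive:
assert rates(-1) == (1.0, 1.0)
# Raising the threshold can only trade TPR away for a lower FPR:
pairs = [rates(t) for t in (0.0, 0.25, 0.45, 0.65, 0.85, 1.01)]
tprs, fprs = zip(*pairs)
assert all(a >= b for a, b in zip(tprs, tprs[1:]))  # TPR nonincreasing
assert all(a >= b for a, b in zip(fprs, fprs[1:]))  # FPR nonincreasing
```

Plotting the `(FPR, TPR)` pairs against each other is exactly the ROC curve.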
Is decision threshold a hyperparameter in logistic regression?
But varying the threshold will change the predicted classifications. Does this mean the threshold is a hyperparameter? Yup, it does, sorta. It's a hyperparameter of your decision rule, but not the underlying regression. If so, why is it (for example) not possible to easily search over a grid of thresholds using scikit-learn's GridSearchCV method (as you would do for the regularisation parameter C)? This is a design error in sklearn. The best practice for most classification scenarios is to fit the underlying model (which predicts probabilities) using some measure of the quality of these probabilities (like the log-loss in a logistic regression). Afterwards, a decision threshold on these probabilities should be tuned to optimize some business objective of your classification rule. The library should make it easy to optimize the decision threshold based on some measure of quality, but I don't believe it does that well. I think this is one of the places sklearn got it wrong. The library includes a method, predict, on all classification models that thresholds at 0.5. This method is useless, and I strongly advocate for not ever invoking it. It's unfortunate that sklearn is not encouraging a better workflow.
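A minimal sketch of that two-step workflow, with the model-fitting step replaced by made-up held-out probabilities: the probabilistic model is assumed to be fit already, and only the threshold is searched afterwards against the objective of interest (here F1, standing in for whatever business metric applies).

```python
# Hypothetical held-out labels and predicted probabilities from an
# already-fitted probabilistic classifier (e.g. a logistic regression).
y_val = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
p_val = [0.95, 0.85, 0.6, 0.55, 0.45, 0.4, 0.35, 0.2, 0.1, 0.05]

def f1_at(threshold):
    # F1 of the rule "predict positive iff p >= threshold".
    pred = [p >= threshold for p in p_val]
    tp = sum(p and t for p, t in zip(pred, y_val))
    fp = sum(p and not t for p, t in zip(pred, y_val))
    fn = sum((not p) and t for p, t in zip(pred, y_val))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# The predictions (and hence F1) can only change at the observed
# probabilities, so those are the only thresholds worth trying.
best = max(sorted(set(p_val)), key=f1_at)
# The search touches only the decision rule; the fitted model and its
# predicted probabilities are left unchanged.
```

The same sweep works with any scalar objective in place of `f1_at`, which is the sense in which the threshold belongs to the decision rule rather than to the model.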
Is the median a type of mean, for some generalization of "mean"?
Here's one way that you might regard a median as a "general sort of mean" -- first, carefully define your ordinary arithmetic mean in terms of order statistics: $$\bar{x} = \sum_i w_i x_{(i)},\qquad w_i=\frac{_1}{^n}\,.$$ Then by replacing that ordinary average of order statistics with some other weight function, we get a notion of "generalized mean" that accounts for order. In that case, a host of potential measures of center become "generalized sorts of means". In the case of the median, for odd $n$, $w_{(n+1)/2}=1$ and all others are 0, and for even $n$, $w_{\frac{n}{2}}=w_{\frac{n}{2}+1}=\frac{1}{2}$. Similarly, if we look at M-estimation, location estimates might also be thought of as a generalization of the arithmetic mean (where for the mean, $\rho$ is quadratic, $\psi$ is linear, or the weight-function is flat), and the median also falls into this class of generalizations. This is a somewhat different generalization than the previous one. There are a variety of other ways we might extend the notion of 'mean' that could include the median.
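The order-statistic weighting can be written out directly; choosing the weights described above reproduces the usual sample median, which a quick check against the standard library's `statistics.median` confirms. The data values are arbitrary.

```python
import statistics

def weighted_order_stat_mean(xs, weights):
    # Generalized mean: a weighted average of the sorted data
    # (the order statistics), sum_i w_i * x_(i).
    ys = sorted(xs)
    return sum(w * y for w, y in zip(weights, ys))

def median_weights(n):
    # Weights that pick out the median: all mass on the middle order
    # statistic for odd n, split evenly over the two middle ones for even n.
    w = [0.0] * n
    if n % 2 == 1:
        w[n // 2] = 1.0
    else:
        w[n // 2 - 1] = w[n // 2] = 0.5
    return w

for data in ([3, 1, 4, 1, 5], [3, 1, 4, 1, 5, 9]):
    w = median_weights(len(data))
    assert weighted_order_stat_mean(data, w) == statistics.median(data)

# The ordinary arithmetic mean uses the flat weights w_i = 1/n:
data = [3, 1, 4, 1, 5]
flat = [1 / len(data)] * len(data)
assert abs(weighted_order_stat_mean(data, flat) - statistics.mean(data)) < 1e-9
```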
Is the median a type of mean, for some generalization of "mean"?
If you think of the mean as the point minimizing the quadratic loss function SSE, then the median is the point minimizing the linear loss function MAD, and the mode is the point minimizing some 0-1 loss function. No transformations required. So the median is an example of a Fréchet mean.
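A brute-force check over a fine grid of candidate centers, with made-up data, shows the correspondence: the minimizer of the sum of squared deviations lands on the arithmetic mean, and the minimizer of the sum of absolute deviations lands on the median.

```python
import statistics

data = [1.0, 2.0, 2.0, 3.0, 10.0]

def argmin_loss(loss):
    # Minimize sum_i loss(c - x_i) over a fine grid of candidate centers c.
    grid = [i / 100 for i in range(0, 1101)]  # 0.00, 0.01, ..., 11.00
    return min(grid, key=lambda c: sum(loss(c - x) for x in data))

sse_center = argmin_loss(lambda d: d * d)  # quadratic loss -> mean
mad_center = argmin_loss(abs)              # absolute loss  -> median

assert abs(sse_center - statistics.mean(data)) < 0.01
assert abs(mad_center - statistics.median(data)) < 0.01
```

Note how the outlier at 10 pulls the quadratic-loss minimizer (the mean, 3.6) well away from the absolute-loss minimizer (the median, 2.0).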
Is the median a type of mean, for some generalization of "mean"?
The question invites us to characterize the concept of "mean" in a sufficiently broad sense to encompass all the usual means--power means, $L^p$ means, medians, trimmed means--but not so broadly that it becomes almost useless for data analysis. This reply discusses some of the axiomatic properties that any reasonably useful definition of "mean" should have. Basic Axioms A usefully broad definition of "mean" for the purpose of data analysis would be any sequence of well-defined, deterministic functions $f_n:A^n\to A$ for $A\subset\mathbb{R}$ and $n=1, 2, \ldots$ such that (1) $\newcommand{\x}{\mathrm{x}} \newcommand{\min}{\text{min}}\min (\x)\le f_n(\x)\le \max(\x)$ for all $\x = (x_1, x_2, \ldots, x_n)\in A^n$ (a mean lies between the extremes), (2) $f_n$ is invariant under permutations of its arguments (means do not care about the order of the data), and (3) each $f_n$ is nondecreasing in each of its arguments (as the numbers increase, their mean cannot decrease). We must allow for $A$ to be a proper subset of real numbers (such as all positive numbers) because plenty of means, such as geometric means, are defined only on such subsets. We might also want to add that (1') there exists at least some $\x\in A^n$ for which $\min(\x)\ne f_n(\x)\ne \max(\x)$ (means are not extremes). (We cannot require that this always hold. For instance, the median of $(0,0,\ldots,0,1)$ equals $0$, which is the minimum.) These properties seem to capture the idea behind a "mean" being some kind of "middle value" of a set of (unordered) data. Consistency axioms I am further tempted to stipulate the rather less obvious consistency criterion (4.a) The range of $f_{n+1}(t, x_1, x_2, \ldots, x_n)$ as $t$ varies throughout the interval $[\min(\x), \max(\x)]$ includes $f_n(\x)$. In other words, it is always possible to leave the mean unchanged by adjoining an appropriate value $t$ to a dataset.
In conjunction with (3), it implies that adjoining extreme values to a dataset will pull the mean towards those extremes. If we wish to apply the concept of mean to a distribution or "infinite population", then one way would be to obtain it in the limit of arbitrarily large random samples. Of course the limit might not always exist (it does not exist for the arithmetic mean when the distribution has no expectation, for instance). Therefore I do not want to impose any additional axioms to guarantee the existence of such limits, but the following seems natural and useful: (4.b) Whenever $A$ is bounded and $\x_n$ is a sequence of samples from a distribution $F$ supported on $A$, then the limit of $f_n(\x_n)$ almost surely exists. This prevents the mean from forever "bouncing around" within $A$ even as sample sizes get larger and larger. Along the same lines, we could further narrow the idea of a mean to insist that it become a better estimator of "location" as sample sizes increase: (4.c) Whenever $A$ is bounded, then the variance of the sampling distribution of $f_n(X^{(n)})$ for a random sample $X^{(n)} = (X_1, X_2, \ldots, X_n)$ of $F$ is nonincreasing in $n$. Continuity axiom We might consider asking means to vary "nicely" with the data: (5) $f_n$ is separately continuous in each argument (a small change in the data values should not induce a sudden jump in their mean). This requirement might eliminate some strange generalizations, but it does not rule out any well-known mean. It will rule out some aggregation functions. An invariance axiom We can conceive of means as applying to either interval or ratio data (in Stevens' well-known sense). We cannot demand they be invariant under shifts of location (the geometric mean is not), but we can require (6) $f_n(\lambda \x) = \lambda f_n(\x)$ for all $\x \in A^n$ and all $\lambda \gt 0$ for which $\lambda \x \in A^n$. This says only that we are free to compute $f_n$ using any units of measurement we like.
All the means mentioned in the question satisfy this axiom except for some aggregation functions. Discussion General aggregation functions $f_2$, as described in the question, do not necessarily satisfy axioms (1'), (2), (3), (5), or (6). Whether they satisfy any consistency axioms may depend on how they are extended to $n\gt 2$. The usual sample median enjoys all these axiomatic properties. We could augment the consistency axioms to include (4.d) $f_{2n}(\x;\x) = f_n(\x)$ for all $\x \in A^n.$ This implies that when all elements of a dataset are repeated equally often, the mean does not change. This may be too strong, though: the Winsorized mean does not have this property (except asymptotically). The purpose of Winsorizing at the $100\alpha\%$ level is to provide resistance against changes in at least $100\alpha\%$ of the data at either extreme. For instance, the 10% Winsorized mean of $(1,2,3,6)$ is the arithmetic mean of $(2,2,3,3)$, equal to $2.5$, but the 10% Winsorized mean of $(1,1,2,2,3,3,6,6)$ is $3.5$. I do not know which of the consistency axioms (4.a), (4.b), or (4.c) would be most desirable or useful. They appear to be independent: I don't think any two of them imply the third.
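Several of the axioms are easy to spot-check numerically for the sample median; the data below is arbitrary, and the checks are of course examples rather than proofs.

```python
import itertools
import statistics

data = [0.5, 2.0, 2.0, 7.0]
m = statistics.median(data)

# (1) A mean lies between the extremes.
assert min(data) <= m <= max(data)

# (2) Invariance under permutations of the arguments.
assert all(statistics.median(p) == m for p in itertools.permutations(data))

# (3) Nondecreasing in each argument: increasing one value cannot
# decrease the median.
for i in range(len(data)):
    bumped = data[:i] + [data[i] + 1.0] + data[i + 1:]
    assert statistics.median(bumped) >= m

# (6) Scale invariance: f_n(lambda * x) = lambda * f_n(x) for lambda > 0.
lam = 3.0
assert statistics.median([lam * x for x in data]) == lam * m
```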
Is the median a type of mean, for some generalization of "mean"?
The question invites us to characterize the concept of "mean" in a sufficiently broad sense to encompass all the usual means--power means, $L^p$ means, medians, trimmed means--but not so broadly that
Is the median a type of mean, for some generalization of "mean"? The question invites us to characterize the concept of "mean" in a sufficiently broad sense to encompass all the usual means--power means, $L^p$ means, medians, trimmed means--but not so broadly that it becomes almost useless for data analysis. This reply discusses some of the axiomatic properties that any reasonably useful definition of "mean" should have. Basic Axioms A usefully broad definition of "mean" for the purpose of data analysis would be any sequence of well-defined, deterministic functions $f_n:A^n\to A$ for $A\subset\mathbb{R}$ and $n=1, 2, \ldots$ such that (1) $\newcommand{\x}{\mathrm{x}} \newcommand{\min}{\text{min}}\min (\x)\le f_n(\x)\le \max(\x)$ for all $\x = (x_1, x_2, \ldots, x_n)\in A^n$ (a mean lies between the extremes), (2) $f_n$ is invariant under permutations of its arguments (means do not care about the order of the data), and (3) each $f_n$ is nondecreasing in each of its arguments (as the numbers increase, their mean cannot decrease). We must allow for $A$ to be a proper subset of real numbers (such as all positive numbers) because plenty of means, such as geometric means, are defined only on such subsets. We might also want to add that (1') there exists at least some $\x\in A$ for which $\min(\x)\ne f_n(\x)\ne \max(\x)$ (means are not extremes). (We cannot require that this always hold. For instance, the median of $(0,0,\ldots,0,1)$ equals $0$, which is the minimum.) These properties seem to capture the idea behind a "mean" being some kind of "middle value" of a set of (unordered) data. Consistency axioms I am further tempted to stipulate the rather less obvious consistency criterion (4.a) The range of $f_{n+1}(t, x_1, x_2, \ldots, x_n)$ as $t$ varies throughout the interval $[\min(\x), \max(\x)]$ includes $f_n(\x)$. In other words, it is always possible to leave the mean unchanged by adjoining an appropriate value $t$ to a dataset. 
In conjunction with (3), it implies that adjoining extreme values to a dataset will pull the mean towards those extremes.

If we wish to apply the concept of mean to a distribution or "infinite population", then one way would be to obtain it in the limit of arbitrarily large random samples. Of course the limit might not always exist (it does not exist for the arithmetic mean when the distribution has no expectation, for instance). Therefore I do not want to impose any additional axioms to guarantee the existence of such limits, but the following seems natural and useful:

(4.b) Whenever $A$ is bounded and $\x_n$ is a sequence of samples from a distribution $F$ supported on $A$, then the limit of $f_n(\x_n)$ almost surely exists.

This prevents the mean from forever "bouncing around" within $A$ even as sample sizes get larger and larger. Along the same lines, we could further narrow the idea of a mean to insist that it become a better estimator of "location" as sample sizes increase:

(4.c) Whenever $A$ is bounded, then the variance of the sampling distribution of $f_n(X^{(n)})$ for a random sample $X^{(n)} = (X_1, X_2, \ldots, X_n)$ of $F$ is nonincreasing in $n$.

Continuity axiom

We might consider asking means to vary "nicely" with the data:

(5) $f_n$ is separately continuous in each argument (a small change in the data values should not induce a sudden jump in their mean).

This requirement might eliminate some strange generalizations, but it does not rule out any well-known mean. It will rule out some aggregation functions.

An invariance axiom

We can conceive of means as applying to either interval or ratio data (in Stevens' well-known sense). We cannot demand they be invariant under shifts of location (the geometric mean is not), but we can require

(6) $f_n(\lambda \x) = \lambda f_n(\x)$ for all $\x \in A^n$ and all $\lambda \gt 0$ for which $\lambda \x \in A^n$.

This says only that we are free to compute $f_n$ using any units of measurement we like.
All the means mentioned in the question satisfy this axiom except for some aggregation functions.

Discussion

General aggregation functions $f_2$, as described in the question, do not necessarily satisfy axioms (1'), (2), (3), (5), or (6). Whether they satisfy any consistency axioms may depend on how they are extended to $n\gt 2$. The usual sample median enjoys all these axiomatic properties.

We could augment the consistency axioms to include

(4.d) $f_{2n}(\x;\x) = f_n(\x)$ for all $\x \in A^n.$

This implies that when all elements of a dataset are repeated equally often, the mean does not change. This may be too strong, though: the Winsorized mean does not have this property (except asymptotically). The purpose of Winsorizing at the $100\alpha\%$ level is to provide resistance against changes in at least $100\alpha\%$ of the data at either extreme. For instance, the 10% Winsorized mean of $(1,2,3,6)$ is the arithmetic mean of $(2,2,3,3)$, equal to $2.5$, but the 10% Winsorized mean of $(1,1,2,2,3,3,6,6)$ is $3.5$.

I do not know which of the consistency axioms (4.a), (4.b), or (4.c) would be most desirable or useful. They appear to be independent: I don't think any two of them imply the third.
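As a quick numerical sanity check, axioms (1), (2), and (6) can be verified for several familiar means. This is an illustrative sketch, not part of the answer; the helper name `check_mean_axioms` is mine:

```python
import numpy as np

# Illustrative check (helper name is mine): verify axioms (1), (2),
# and (6) numerically for a few familiar means on one positive sample.
def check_mean_axioms(f, x, rng):
    x = np.asarray(x, dtype=float)
    ok_bounds = x.min() <= f(x) <= x.max()             # (1) between the extremes
    ok_perm = np.isclose(f(x), f(rng.permutation(x)))  # (2) order-invariant
    lam = 3.7
    ok_scale = np.isclose(f(lam * x), lam * f(x))      # (6) unit-invariant
    return bool(ok_bounds and ok_perm and ok_scale)

rng = np.random.default_rng(0)
x = rng.uniform(1, 10, size=25)  # positive, so the geometric mean is defined

candidates = {
    "arithmetic": np.mean,
    "median": np.median,
    "geometric": lambda v: np.exp(np.mean(np.log(v))),
    "10% trimmed": lambda v: np.mean(np.sort(v)[2:-2]),
}
for name, f in candidates.items():
    print(name, check_mean_axioms(f, x, rng))
```

A numerical check like this can of course only falsify an axiom, not prove it; a failing case (e.g., a weighted mean whose weights are tied to argument position, violating (2)) shows up immediately.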
12,078
Is the median a type of mean, for some generalization of "mean"?
One easy but fruitful generalization is to weighted means, $\sum_{i=1}^n w_i x_i / \sum_{i=1}^n w_i,$ where $\sum_{i=1}^n w_i = 1$. Clearly the common or garden mean is the simplest special case with equal weights $w_i = 1/n$. Letting the weights depend on the order of values in magnitude, from smallest to largest, points to various other special cases, notably the idea of a trimmed mean, which is known by other names too. To avoid excessive use of notation where it is not needed or especially helpful, imagine for example ignoring the smallest and largest values and taking the (equally weighted) mean of the others. Or imagine ignoring the two smallest and two largest and taking the mean of the others; and so forth. The most vigorous trimming would ignore all but the one or two middle values in order, depending on whether the number of values was odd or even, which is naturally just the familiar median. Nothing in the idea of trimming commits you to ignoring equal numbers in each tail of a sample, but saying more about asymmetric trimming would take us further away from the main idea in this thread. In short, means (unqualified) and medians are extreme limiting cases of the family of (symmetric) trimmed means. The overall idea is to allow compromises between one ideal of using all the information in the data and another ideal of protecting oneself from extreme data points, which may be unreliable outliers. See the reference here for one fairly recent review.
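The family described above can be sketched in a few lines: with a symmetric trim of $k$ observations per tail, $k=0$ recovers the ordinary mean and maximal trimming recovers the median (the function name is mine, for illustration):

```python
import numpy as np

# Sketch of the symmetric trimmed-mean family: drop the k smallest and
# k largest values, then average the rest. k = 0 is the ordinary mean;
# maximal trimming leaves only the middle value(s), i.e., the median.
def trimmed_mean(x, k):
    s = np.sort(np.asarray(x, dtype=float))
    if k == 0:
        return s.mean()
    return s[k:-k].mean()

x = [1, 2, 3, 4, 100]          # one wild value
print(trimmed_mean(x, 0))      # 22.0, the ordinary mean
print(trimmed_mean(x, 1))      # 3.0, mean of (2, 3, 4)
print(trimmed_mean(x, 2))      # 3.0, the median here
print(np.median(x))            # 3.0
```

The example shows the compromise at work: one wild value drags the untrimmed mean to 22, while any amount of trimming here lands near the median.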
12,079
Is the median a type of mean, for some generalization of "mean"?
I think the median can be considered a type of a generalization of the arithmetic mean. Specifically, the arithmetic mean and the median (among others) can be unified as special cases of the Chisini mean. If you are going to perform some operation over a set of values, the Chisini mean is a number that you can substitute for all of the original values in the set and still get the same result. For example, if you want to sum your values, replacing all the values with the arithmetic mean will yield the same sum. The idea is that a certain value is representative of the numbers in the set in the context of a certain operation over those numbers. (An interesting implication of this way of thinking is that a given value—the arithmetic mean—can only be considered representative under the assumption that you are doing certain things with those numbers.) This is less obvious for the median (and I note that the median is not listed as one of the Chisini means on Wolfram or Wikipedia), but if you were to allow operations over ranks, the median could fit within the same idea.
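The Chisini idea can be made concrete with a small sketch: for each operation, substituting the corresponding mean for every value leaves the operation's result unchanged (the variable names here are mine, for illustration):

```python
import math

x = [2.0, 8.0, 4.0]
n = len(x)

# Chisini mean for summation: the arithmetic mean preserves the total.
m_sum = sum(x) / n
assert math.isclose(sum(x), n * m_sum)

# Chisini mean for multiplication: the geometric mean preserves the product.
m_prod = math.prod(x) ** (1 / n)
assert math.isclose(math.prod(x), m_prod ** n)

# Chisini mean for summing reciprocals: the harmonic mean.
m_recip = n / sum(1 / v for v in x)
assert math.isclose(sum(1 / v for v in x), n / m_recip)

print(m_sum, m_prod, m_recip)
```

Each assertion is exactly the Chisini substitution property: replace all the values with the mean, perform the operation, get the same answer.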
12,080
Is the median a type of mean, for some generalization of "mean"?
The question is not well defined. If we agree on the common "street" definition of mean as the sum of n numbers divided by n, then we have a stake in the ground. Further, if we look at measures of central tendency, we could say both the mean and the median are generalizations of that idea, but not of each other. Part of my background is in nonparametrics, so I like the median and the robustness it provides: invariance to monotonic transformations, and more. But each measure has its place, depending on the objective.
12,081
CHAID vs CRT (or CART)
I will list some properties and later give you my appraisal for what it's worth:

CHAID uses multiway splits by default (multiway splits mean that the current node is split into more than two nodes). This may or may not be desired (it can lead to better segments or easier interpretation). What it definitely does, though, is thin out the sample size in the nodes and thus lead to less deep trees. When used for segmentation purposes this can backfire, as CHAID needs large sample sizes to work well. CART does binary splits (each node is split into two daughter nodes) by default.

CHAID is intended to work with categorical/discretized targets (XAID was for regression, but perhaps they have been merged since then). CART can definitely do regression and classification.

CHAID uses a pre-pruning idea. A node is only split if a significance criterion is fulfilled. This ties in with the above problem of needing large sample sizes, as the Chi-Square test has only little power in small samples (which is effectively reduced even further by a Bonferroni correction for multiple testing). CART, on the other hand, grows a large tree and then post-prunes the tree back to a smaller version. Thus CHAID tries to prevent overfitting right from the start (only split if there is a significant association), whereas CART may easily overfit unless the tree is pruned back. On the other hand, this allows CART to perform better than CHAID in- and out-of-sample (for a given tuning parameter combination).

The most important difference, in my opinion, is that split variable and split point selection in CHAID is less strongly confounded than in CART. This is largely irrelevant when the trees are used for prediction, but it is an important issue when trees are used for interpretation: a tree that has those two parts of the algorithm highly confounded is said to be "biased in variable selection" (an unfortunate name).
This means that split variable selection prefers variables with many possible splits (say, metric predictors). CART is highly "biased" in that sense, CHAID not so much.

With surrogate splits, CART knows how to handle missing values (surrogate splits mean that with missing values (NAs) for predictor variables, the algorithm uses other predictor variables that are not as "good" as the primary split variable but mimic the splits produced by the primary splitter). CHAID has no such thing, as far as I know.

So depending on what you need it for, I'd suggest using CHAID if the sample is of some size and the aspects of interpretation are more important. Also, if multiway splits or smaller trees are desired, CHAID is better. CART, on the other hand, is a well-working prediction machine, so if prediction is your aim, I'd go for CART.
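CHAID itself is scarce in mainstream open-source libraries, but the CART side of the contrast (grow a large binary tree, then post-prune) can be sketched with scikit-learn's cost-complexity pruning. The dataset and the `ccp_alpha` value below are arbitrary choices for illustration:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# CART-style workflow: grow a large binary tree, then post-prune it via
# cost-complexity pruning (a larger ccp_alpha yields a smaller tree).
full = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.01).fit(X_tr, y_tr)

print("unpruned leaves:", full.get_n_leaves())
print("pruned leaves:  ", pruned.get_n_leaves())
print("test accuracy:  ", full.score(X_te, y_te), pruned.score(X_te, y_te))
```

In practice `ccp_alpha` would be tuned by cross-validation (scikit-learn exposes the candidate values via `cost_complexity_pruning_path`), which is exactly the "grow large, then prune back" strategy described above.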
12,082
CHAID vs CRT (or CART)
All single-tree methods involve a staggering number of multiple comparisons that bring great instability to the result. That is why to achieve satisfactory predictive discrimination some form of tree averaging (bagging, boosting, random forests) is necessary (except that you lose the advantage of trees - interpretability). The simplicity of single trees is largely an illusion. They are simple because they are wrong, in the sense that fitting the tree to multiple large subsets of the data will reveal great disagreement between tree structures. I haven't looked at any recent CHAID methodology but CHAID in its original incarnation was a great exercise in overinterpretation of data.
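The instability claim is easy to illustrate with a quick bootstrap experiment, sketched here in Python with scikit-learn (the setup is mine, not from the answer): refit the same tree on resampled versions of one dataset and record which feature wins the root split each time.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

# Refit a small tree on bootstrap resamples of one dataset and record the
# feature chosen for the root split each time; with correlated predictors,
# the winning feature often changes from resample to resample.
X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)

root_features = []
for _ in range(30):
    idx = rng.integers(0, len(y), size=len(y))
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[idx], y[idx])
    root_features.append(tree.tree_.feature[0])  # feature index used at the root

print("distinct root-split features over 30 resamples:", len(set(root_features)))
```

When the printed count exceeds one, even the very first split of the "simple" tree is not stable under resampling, which is the disagreement between tree structures the answer describes.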
12,083
Whether to use structural equation modelling to analyse observational studies in psychology
My disclaimer: I realize this question has lain dormant for some time, but it seems to be an important one, and one that you intended to elicit multiple responses. I am a Social Psychologist, and from the sounds of it, probably a bit more comfortable with such designs than Henrik (though his concerns about causal interpretations are totally legitimate).

Under What Conditions Is SEM An Appropriate Data Analysis Technique?

To me, this question actually gets at two distinct sub-questions:

1. Why use SEM in the first place?
2. If a researcher has decided to use SEM, what are the data-related requirements for using SEM?

Why use SEM in the first place?

SEM is a more nuanced and complicated--and therefore less accessible--approach to data analysis than other, more typical, general linear modelling approaches (e.g., ANOVAs, correlations, regression, and their extensions, etc.). Anything you can think of doing with those approaches, you can do with SEM. As such, I think would-be users should first strongly evaluate why they are compelled to use SEM in the first place. To be sure, SEM offers some powerful benefits to its users, but I have reviewed papers in which none of these benefits are utilized, and the end-product is a data analysis section in a paper that is needlessly more difficult for typical readers to understand. It's simply not worth the trouble--for the researcher, or the reader--if the benefits of SEM vs. other data analysis approaches are not being reaped.

So what do I see as the primary benefits of an SEM approach? The big ones, in my opinion, are:

(1) Modeling latent variables: SEM allows users to examine structural relations (variances, covariances/correlations, regressions, group mean differences) among unobserved latent variables, which are essentially the shared covariance between a group of variables (e.g., items from an anxiety measure your students might use). The big selling point for analyzing latent variables (e.g., latent anxiety) vs.
an observed score of the construct (e.g., an average of the anxiety items) is that latent variables are error-free--latent variables are formed of shared covariance, and error is theorized to covary with nothing. This translates to increased statistical power, as users no longer have to worry about measurement unreliability attenuating the effects they are trying to model. Another, more understated, reason to consider using SEM is that in some cases it is a more construct-valid way of testing our theories about constructs. If your students, for example, were using three different measures of anxiety, wouldn't it be better to understand the causes/consequences of what those three measures have in common--presumably anxiety--in an SEM framework, instead of privileging any particular one measure as the measure of anxiety?

(2) Modeling multiple dependent variables: Even if someone isn't going to use SEM to model latent variables, it can still be quite useful as a framework for simultaneously analyzing multiple outcome variables in one model. For example, perhaps your students are interested in exploring how the same predictors are associated with a number of different clinically relevant outcomes (e.g., anxiety, depression, loneliness, self-esteem, etc.). Why run four separate models (increasing the Type I error rate), when you can just run one model for all four outcomes that you are interested in? This is also a reason to use SEM when dealing with certain types of dependent data, where multiple, dependent respondents might both yield predictor and outcome responses (e.g., dyadic data; see Kenny, Kashy, and Cook, 2006, for a description of the SEM approach to using the Actor-Partner Interdependence Model [APIM]).

(3) Modeling assumptions, instead of making them: With many other approaches to data analysis (e.g., ANOVA, correlation, regression), we make a ton of assumptions about the properties of the data we are dealing with--such as homogeneity of variance/homoskedasticity.
SEM (usually combined with a latent variable approach) enables users to actually model variance parameters simultaneously alongside means and/or correlations/regressive pathways. This means that users can begin theorizing about and testing hypotheses about variability, in addition to mean differences/covariability, instead of just treating variability as an annoying assumption-related afterthought.

Another testable assumption, when comparing group mean levels on some variable, is whether that variable actually means the same thing to each group--referred to as measurement invariance in the SEM literature (see Vandenberg & Lance, 2000, for a review of this process). If so, then comparisons on mean levels of that variable are valid, but if groups have a significantly different understanding of what something is, comparing mean levels between groups is questionable. We make this particular assumption implicitly all the time in research using group-comparisons.

And then there is the assumption, when you average or sum item scores (e.g., on an anxiety measure) to create an aggregate index, that each item is an equally good measure of the underlying construct (because each item is weighted equally in the averaging/summing). SEM eliminates this assumption when latent variables are used, by estimating different factor loading values (the association between the item and the latent variable) for each item. Lastly, other assumptions about the data (e.g., normality), though still important for SEM, can be managed (e.g., through the use of "robust" estimators; see Finney & DiStefano, 2008) when the data fail to meet certain criteria (low levels of skewness and kurtosis).
(4) Specifying model constraints: The last big reason, in my opinion, to consider using SEM is that it makes it very easy to test particular hypotheses you might have about your model of data, by forcing ("constraining" in SEM terms) certain paths in your model to take on particular values, and examining how that impacts the fit of your model to your data. Some examples include: (A) constraining a regression pathway to zero, to test whether it's necessary in the model; (B) constraining multiple regression pathways to be equal in magnitude (e.g., is the associative strength for some predictor roughly equal for anxiety and depression?); (C) constraining the measurement parameters necessary to evaluate measurement invariance (described above); (D) constraining a regression pathway to be equal in strength between two different groups, in order to test moderation by group.

What are the data-related requirements for SEM?

The data-related requirements for SEM are pretty modest; you need an adequate sample size, and for your data to meet the assumptions of the model estimator you have selected (Maximum Likelihood is typical). It is difficult to give a one-size-fits-all recommendation for sample size. Based on some straightforward simulations, Little (2013) suggests that for very simple models, 100-150 observations might be enough, but sample size needs will increase as models become more complex, and/or as the reliability/validity of the variables used in the model decreases. If model complexity is a concern, you could consider parcelling the indicators of your latent variables, but not all are onboard with this approach (Little, Cunningham, Shahar, & Widaman, 2002). But generally speaking, all else being equal, bigger samples (I strive for 200 minimum in my own research) are better. As for meeting the assumptions of a selected estimator, usually this is pretty easy to assess (e.g., look at skewness and kurtosis values for a maximum likelihood estimator).
And even if data depart from assumed properties, a researcher could consider the use of a "robust" estimator (Finney & DiStefano, 2008), or an estimator that assumes a different kind of data (e.g., a categorical estimator, like diagonally weighted least squares).

Alternatives To SEM for Data Analysis?

If a researcher isn't going to take advantage of the benefits provided by an SEM approach that I've highlighted above, I'd recommend sticking to the more straightforward and accessible version of that particular analysis (e.g., t-tests, ANOVAs, correlation analysis, regression models [including mediation, moderation, and conditional process models]). Readers are more familiar with them, and will therefore more easily understand them. It's just not worth confusing readers with the minutiae of SEM if you're essentially using SEM to the same effect as a simpler analytic approach.

Advice to Researchers Considering The Use Of SEM?

For those brand new to SEM:

Get a comprehensive, accessibly-written foundation SEM text. I like Beaujean (2014), Brown (2015; the earlier edition is solid too), and Little (2013; a good overall introduction, even though it later focuses specifically on longitudinal models).

Learn how to use the lavaan package for R (Rosseel, 2012). Its syntax is as easy as SEM syntax can get, its functionality is broad enough for many folks' SEM needs (definitely for beginners), and it's free. The Beaujean book gives a great simultaneous introduction to SEM and the lavaan package.

Consult/use CrossValidated and Stack Overflow regularly. Unexpected things can happen when fitting SEM models, and chances are, many of the weird things you might experience have already been described and troubleshot on Stacks.

As Henrik points out, note that just because you are specifying a model that implies causal associations, it does not mean that SEM helps to establish causality in a cross-sectional/non-experimental study.
Also, it's totally worth considering the use of SEM to analyze data from longitudinal and/or experimental designs.

And for those who are beginning to actually use SEM:

You will, at some point, be tempted to specify correlated residuals willy-nilly, in an effort to improve the fit of your model. Don't. At least not without a good a priori reason. More often than not, a larger sample, or a simpler model, is the cure.

Avoid the use of the marker-variable method of identification for latent variables (i.e., fixing the first factor loading to 1). It privileges that indicator as the "gold-standard" indicator of your latent variable, when in most cases, there is no reason to assume this is the case. Be aware that this is the default identification setting in most programs.

References

Beaujean, A. A. (2014). Latent variable modeling using R: A step-by-step guide. New York, NY: Routledge.
Brown, T. A. (2015). Confirmatory factor analysis for applied researchers (2nd edition). New York, NY: Guilford Press.
Finney, S. J., & DiStefano, C. (2008). Non-normal and categorical data in structural equation modeling. In G. R. Hancock & R. D. Mueller (Eds.), Structural equation modeling: A second course (pp. 269-314). Information Age Publishing.
Kenny, D. A., Kashy, D. A., & Cook, W. L. (2006). Dyadic data analysis. New York, NY: Guilford Press.
Little, T. D. (2013). Longitudinal structural equation modeling. New York, NY: Guilford Press.
Little, T. D., Cunningham, W. A., Shahar, G., & Widaman, K. F. (2002). To parcel or not to parcel: Exploring the question, weighing the merits. Structural Equation Modeling, 9, 151-173.
Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2), 1-36.
Vandenberg, R. J., & Lance, C. E. (2000). A review and synthesis of the measurement invariance literature: Suggestions, practices, and recommendations for organizational researchers. Organizational Research Methods, 3, 4-70.
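Postscript: the point under (2) above--that running several separate models inflates the Type I error rate--is easy to verify with a small simulation. This sketch in Python is mine, purely illustrative, and not part of an SEM workflow:

```python
import numpy as np
from scipy import stats

# Under a true null, testing one predictor against four independent
# outcomes at alpha = .05 inflates the familywise Type I error rate
# toward 1 - 0.95**4 (about 0.185).
rng = np.random.default_rng(42)
alpha, n, n_sims = 0.05, 100, 4000

false_positive_any = 0
for _ in range(n_sims):
    x = rng.normal(size=n)  # predictor unrelated to every outcome
    pvals = [stats.pearsonr(x, rng.normal(size=n))[1] for _ in range(4)]
    false_positive_any += any(p < alpha for p in pvals)

print("familywise error rate:", false_positive_any / n_sims)
print("theoretical:", 1 - (1 - alpha) ** 4)
```

The simulated rate lands near 18.5% rather than the nominal 5%, which is one motivation for fitting all four outcomes in a single model.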
12,084
Whether to use structural equation modelling to analyse observational studies in psychology
Disclaimer: I consider myself an experimental psychologist with an emphasis on experimental. Hence, I have a natural unease with designs like this.

To answer your first and second question: I think for a design like this an SEM or, depending on the number of variables involved, mediation or moderation analyses are the natural way of dealing with the data. I have no good idea what else to recommend.

For your third question: I think the main advantage of a design like this is its main disadvantage, namely that you (given enough variables) will find significant results. The question is how you interpret these results. That is, you can look at so many hypotheses (some more, some less inspired by the relevant literature) that you will probably find something significant (not in the literal sense of rejecting an SEM) that will be interpretable in a psychological sense. Therefore, my advice to anyone doing this would be twofold:

Stress the problem with causal interpretation of these designs. I am not an expert in this, but I know that a fully cross-sectional design can hardly be interpreted causally, independent of how intuitively plausible that may sound. More advanced designs, like cross-lagged panel designs or the like, are needed for causal interpretations. I think the work by Shadish, Cook & Campbell (or at least some of it) is a good resource for further discussion of these topics.

Stress individual responsibility and scientific ethics. If you see that your initial idea is not supported by the data, it is the natural next step to inspect the data further. However, you should never rely on HARKing (Hypothesizing After the Results are Known; Kerr, 1998; see also Maxwell, 2004). That is, you should stress that there is a thin line between a reasonable adaptation of your hypotheses given the data and cherry-picking of significant results.
12,085
What is/are the implicit priors in frequentist statistics?
In frequentist decision theory, there exist complete class results that characterise admissible procedures as Bayes procedures or as limits of Bayes procedures. For instance, Stein's necessary and sufficient condition (Stein, 1955; Farrell, 1968b) states that, under the following assumptions:

- the sampling density $f(x|\theta)$ is continuous in $\theta$ and strictly positive on $\Theta$; and
- the loss function $L$ is strictly convex, continuous and, if $E\subset\Theta$ is compact, $$ \lim_{\|\delta\|\rightarrow +\infty} \inf_{\theta\in E}L(\theta,\delta) =+\infty, $$

an estimator $\delta$ is admissible if, and only if, there exist

- a sequence $(F_n)$ of increasing compact sets such that $\Theta=\bigcup_n F_n$,
- a sequence $(\pi_n)$ of finite measures with support $F_n$, and
- a sequence $(\delta_n)$ of Bayes estimators associated with $\pi_n$

such that

1. there exists a compact set $E_0\subset \Theta$ such that $\inf_n \pi_n(E_0) \ge 1$;
2. if $E\subset \Theta$ is compact, $\sup_n \pi_n(E) <+\infty$;
3. $\lim_n r(\pi_n,\delta)-r(\pi_n) = 0$; and
4. $\lim_n R(\theta,\delta_n)= R(\theta,\delta)$.

[reproduced from my book, Bayesian Choice, Theorem 8.3.0, p. 407]

In this restricted sense, the frequentist property of admissibility is endowed with a Bayesian background, hence associating an implicit prior (or a sequence thereof) with each admissible estimator.

Sidenote: In a sad coincidence, Charles Stein passed away on November 25 in Palo Alto, California. He was 96.

There is a similar (if mathematically involved) result for invariant or equivariant estimation, namely that the best equivariant estimator is a Bayes estimator for every transitive group acting on a statistical model, associated with the right Haar measure, $\pi^*$, induced on $\Theta$ by this group and the corresponding invariant loss. See Pitman (1939), Stein (1964), or Zidek (1969) for the involved details.
This is most likely what Jaynes had in mind, as he argued forcefully for the resolution of the marginalisation paradoxes by invariance principles.

Furthermore, as detailed in civilstat's answer, another frequentist notion of optimality, namely minimaxity, is also connected to Bayesian procedures, in that the minimax procedure that minimises the maximal error (over the parameter space) is often the maximin procedure that maximises the minimal error (over all prior distributions), hence is a Bayes or limit-of-Bayes procedure.

Q.: Is there a pithy takeaway I can use to transfer my Bayesian intuition to frequentist models?

First, I would avoid using the term "frequentist model", as there are sampling models (the data $x$ is a realisation of $X\sim f(x|\theta)$ for a parameter value $\theta$) and frequentist procedures (best unbiased estimator, minimum variance confidence interval, etc.).

Second, I do not see a compelling methodological or theoretical reason for considering frequentist methods as borderline or limiting Bayesian methods. The justification for a frequentist procedure, when it exists, is to satisfy some optimality property in the sampling space, that is, when repeating the observations. The primary justification for Bayesian procedures is to be optimal [under a specific criterion or loss function] given a prior distribution and one realisation from the sampling model. Sometimes the resulting procedure satisfies some frequentist property (the $95$% credible region is a $95$% confidence region), but this is happenstance, in that this optimality does not transfer to all procedures associated with the Bayesian model.
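As a concrete illustration of the "limit of Bayes procedures" idea (a standard textbook example, not part of the original answer): for a normal mean, the MLE arises as the limit of conjugate-Bayes estimators under an increasingly diffuse sequence of priors.

```latex
% Model: X ~ N(theta, 1), prior: theta ~ N(0, tau^2).
% The Bayes estimator (posterior mean) is a shrinkage estimator that
% converges to the MLE as the prior flattens:
\[
  \delta_\tau(x) \;=\; \mathbb{E}[\theta \mid x]
  \;=\; \frac{\tau^2}{1+\tau^2}\,x
  \;\xrightarrow[\;\tau\to\infty\;]{}\; x \;=\; \hat\theta_{\mathrm{MLE}},
\]
% so the admissible frequentist estimator x is a limit of Bayes
% estimators, in line with the complete class results above.
```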
12,086
What is/are the implicit priors in frequentist statistics?
@Xi'an's answer is more complete. But since you also asked for a pithy take-away, here's one. (The concepts I mention are not exactly the same as the admissibility setting above.)

Frequentists often (but not always) like to use estimators that are "minimax": if I want to estimate $\theta$, my estimator $\hat{\theta}$'s worst-case risk should be better than any other estimator's worst-case risk. It turns out that MLEs are often (approximately) minimax. See details, e.g., here or here.

In order to find the minimax estimator for a problem, one way is to think Bayesian for a moment and find the "least favorable prior" $\pi$. This is the prior whose Bayes estimator has higher average risk than any other prior's Bayes estimator. If you can find it, then it turns out $\pi$'s Bayes estimator is minimax. In this sense, you could pithily say:

A (minimax-using) Frequentist is like a Bayesian who chose (the point estimate based on) a least-favorable prior.

Maybe you could stretch this to say: such a Frequentist is a conservative Bayesian, choosing not subjective priors or even uninformative priors but (in this specific sense) worst-case priors.

Finally, as others have said, it's a stretch to compare Frequentists and Bayesians in this way. Being a Frequentist doesn't necessarily imply that you use a certain estimator. It just means that you ask questions about your estimator's sampling properties, whereas these questions are not a Bayesian's top priority. (So any Bayesian who hopes for good sampling properties, e.g. "calibrated Bayes," is also a Frequentist.) Even if you define a Frequentist as one whose estimators always have optimal sampling properties, there are many such properties and you can't always meet them all at once. So it's hard to speak generally about "all Frequentist models."
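To make the least-favorable-prior idea concrete, here is a numerical sketch (not from the original answer; the binomial/Beta setup and squared-error loss are my choices for illustration). For $X \sim \mathrm{Binomial}(n, p)$, the least favorable prior is $\mathrm{Beta}(\sqrt{n}/2, \sqrt{n}/2)$; its Bayes estimator has a flat risk curve, which is exactly what makes it minimax, and its worst-case risk beats the MLE's:

```python
import numpy as np
from math import comb

# X ~ Binomial(n, p), squared-error loss. The least favorable prior is
# Beta(a, a) with a = sqrt(n)/2; its Bayes estimator (x + a)/(n + 2a)
# has constant frequentist risk, hence is minimax.
n = 20
a = np.sqrt(n) / 2

def exact_risk(p, estimator):
    """Exact risk E[(delta(X) - p)^2], summing over the binomial pmf."""
    xs = np.arange(n + 1)
    pmf = np.array([comb(n, int(x)) for x in xs]) * p**xs * (1 - p)**(n - xs)
    return float(np.sum(pmf * (estimator(xs) - p) ** 2))

mle = lambda x: x / n                    # the usual frequentist estimator
bayes = lambda x: (x + a) / (n + 2 * a)  # Bayes under the least favorable prior

ps = np.linspace(0.01, 0.99, 99)
risk_mle = np.array([exact_risk(p, mle) for p in ps])
risk_bayes = np.array([exact_risk(p, bayes) for p in ps])

# Flat risk curve, equal to n / (4 (n + sqrt(n))^2) for every p ...
assert np.allclose(risk_bayes, n / (4 * (n + np.sqrt(n)) ** 2))
# ... and a smaller worst-case risk than the MLE: the minimax property.
assert risk_bayes.max() < risk_mle.max()
```

The "conservative Bayesian" reading is visible here: the Beta(√n/2, √n/2) prior is chosen not for subjective belief but to make the risk worst-case-optimal.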
12,087
r glmer warnings: model fails to converge & model is nearly unidentifiable
There is a nice description of how to troubleshoot this issue here: https://rstudio-pubs-static.s3.amazonaws.com/33653_57fc7b8e5d484c909b615d8633c01d51.html Basically, the recommendations are to rescale and center your variables, check for singularity, double-check gradient calculations, add more iterations by restarting from previous fit, and try different optimizers. The last recommendation (i.e., optimizers) has worked for me in the past: e.g., add control=glmerControl(optimizer="bobyqa",optCtrl=list(maxfun=2e5)) to your glmer call. Edit: This chapter may also be helpful: https://www.learn-mlms.com/07-module-7.html
12,088
r glmer warnings: model fails to converge & model is nearly unidentifiable
The correlation of fixed effects in your last output suggests that there is a problem of multicollinearity. Some of the fixed effects are almost perfectly correlated (r = 1 or r = -1). In particular, group1 and its interactions seem to be problematic. You could check some descriptive statistics and plots of your fixed-effect variables and the interactions. Maybe it's just a simple coding error in constructing the group categories.
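As a small illustrative sketch (the variable names and numbers are hypothetical, not taken from the question's data): an uncentered covariate can make a group × covariate interaction column nearly a copy of the group dummy itself, producing fixed-effect correlations near ±1, which is one reason the standard advice to center predictors helps:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
group1 = rng.integers(0, 2, n).astype(float)  # hypothetical dummy-coded group
x = rng.normal(size=n)                        # hypothetical covariate

# Uncentered covariate (all values near 10): the interaction column is
# almost 10 * group1, so the two columns are nearly perfectly correlated.
x_raw = x + 10
r_raw = np.corrcoef(group1, group1 * x_raw)[0, 1]

# Centering the covariate breaks the artificial collinearity.
r_centered = np.corrcoef(group1, group1 * x)[0, 1]

assert r_raw > 0.9            # near-perfect correlation, as in the output
assert abs(r_centered) < 0.5  # essentially gone after centering
```

If centering does not resolve it, the near-perfect correlations may instead reflect the coding error in the group variable suggested above.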
12,089
Interpreting estimates of cloglog regression
With a complementary-log-log link function, it's not logistic regression -- the term "logistic" implies a logit link. It's still a binomial regression, of course.

"the estimate of time is 0.015. Is it correct to say the odds of mortality per unit time is multiplied by exp(0.015) = 1.015113 (~1.5% increase per unit time)"

No, because it doesn't model in terms of log-odds. That's what you'd have with a logit link; if you want a model that works in terms of log-odds, use a logit link.

The complementary-log-log link function says that $\eta(x) = \log(-\log(1-\pi_x))=\mathbf{x}\beta$ where $\pi_x=P(Y=1|X=\mathbf{x})$. So $\exp(\eta)$ is not the odds ratio; indeed $\exp(\eta)=-\log(1-\pi_x)$. Hence $\exp(-\exp(\eta))=1-\pi_x$ and $1-\exp(-\exp(\eta))=\pi_x$. As a result, if you need an odds ratio for some specific $\mathbf{x}$, you can compute one, but the parameters don't have a direct, simple interpretation in terms of contribution to log-odds. Instead (unsurprisingly) a parameter shows (for a unit change in $x$) the contribution to the complementary log-log.

As Ben gently hinted in his question in comments:

"is it true to say that the probability of mortality per unit time (i.e. the hazard) is increased by 1.5%?"

Parameters in the complementary log-log model do have a neat interpretation in terms of hazard ratios. We have that $e^{\eta(x)}=-\log(1-\pi_x) = -\log(S_x)$, where $S$ is the survival function. (So $-\log(S)$, the cumulative hazard, will grow by about 1.5% per unit of time in the example.) Now the hazard is $h(x)=-\frac{d}{dx}\log(S_x)=\frac{d}{dx}e^{\eta(x)}$, so indeed it seems that in the example given in the question, the probability of mortality* per unit of time is increased by about 1.5%.

* (or, for binomial models with a cloglog link more generally, $P(Y=1)$)
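A quick numeric check of the algebra above (the baseline value of the linear predictor is an arbitrary assumption, chosen only for illustration): a unit increase in time multiplies $-\log(S)$ -- not the odds -- by $\exp(0.015)$.

```python
import numpy as np

beta_time = 0.015   # coefficient from the question, on the cloglog scale
eta0 = -2.0         # arbitrary baseline linear predictor (illustration only)
eta1 = eta0 + beta_time

# Inverse cloglog link: pi = 1 - exp(-exp(eta)), so -log(1 - pi) = exp(eta)
p0 = 1 - np.exp(-np.exp(eta0))
p1 = 1 - np.exp(-np.exp(eta1))

# Ratio of -log(S) values (the cumulative hazards) vs. ratio of odds
ratio_cumhaz = np.log(1 - p1) / np.log(1 - p0)
odds_ratio = (p1 / (1 - p1)) / (p0 / (1 - p0))

assert np.isclose(ratio_cumhaz, np.exp(beta_time))   # ~1.0151: the ~1.5% increase
assert not np.isclose(odds_ratio, np.exp(beta_time)) # exp(beta) is NOT an odds ratio
```

Changing `eta0` changes the odds ratio but never the $-\log(S)$ ratio, which is the sense in which the cloglog coefficient has a hazard-ratio interpretation.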
12,090
Can we use leave one out mean and standard deviation to reveal the outliers?
It might seem counter-intuitive, but using the approach you describe doesn't make sense (to take your wording, I would rather write "can lead to outcomes very different from those intended") and one should never do it: the risks of it not working are large and besides, there exists a simpler, much safer and better established alternative available at no extra cost.

First, it is true that if there is a single outlier, then you will eventually find it using the procedure you suggest. But, in general (when there may be more than a single outlier in the data), the algorithm you suggest completely breaks down, in the sense of potentially leading you to reject a good data point as an outlier or keep outliers as good data points with potentially catastrophic consequences.

Below, I give a simple numerical example where the rule you propose breaks down and then I propose a much safer and more established alternative, but before this I will explain a) what is wrong with the method you propose and b) what the usually preferred alternative to it is.

In essence, you cannot use the distance of an observation from the leave one out mean and standard deviation of your data to reliably detect outliers because the estimates you use (leave one out mean and standard deviation) are still liable to being pulled towards the remaining outliers: this is called the masking effect.

In a nutshell, one simple way to reliably detect outliers is to use the general idea you suggested (distance from estimate of location and scale) but replacing the estimators you used (leave one out mean, sd) by robust ones--i.e., estimates designed to be much less susceptible to being swayed by outliers.
Consider this example, where I add 3 outliers to 47 genuine observations drawn from a Normal(0,1):

n <- 50
set.seed(123)  # for reproducibility
x <- round(rnorm(n, 0, 1), 1)
x[1] <- x[1] + 1000
x[2] <- x[2] + 10
x[3] <- x[3] + 10

The code below computes the outlyingness index based on the leave one out mean and standard deviation (i.e. the approach you suggest):

out_1 <- rep(NA, n)
for(i in 1:n){
  out_1[i] <- abs( x[i] - mean(x[-i]) ) / sd(x[-i])
}

and this code produces the plot you see below:

plot(x, out_1, ylim=c(0,1), xlim=c(-3,20))
points(x[1:3], out_1[1:3], col="red", pch=16)

Image 1 depicts the value of your outlyingness index as a function of the value of the observations (the furthest of the outliers is outside the range of this plot, but the other two are shown as red dots). As you can see, except for the most extreme one, an outlyingness index constructed as you suggest would fail to reveal the outliers: indeed the second and third (milder) outliers now even have a value (on your outlyingness index) smaller than all the genuine observations! Under the approach you suggest, one would keep these two extreme outliers in the set of genuine observations, leading you to use the 49 remaining observations as if they were coming from the same homogeneous process, giving you a final estimate of the mean and sd based on these 49 data points of 0.45 and 2.32, a very poor description of either part of your sample!

Contrast this outcome with the results you would have obtained using an outlier detection rule based on the median and the MAD, where the outlyingness of point $x_i$ with respect to a data vector $X$ is $$O(x_i,X)=\frac{|x_i-\mbox{med}(X)|}{\mbox{mad}(X)}$$ where $\mbox{med}(X)$ is the median of the entries of $X$ (all of them, without exclusion) and $\mbox{mad}(X)$ is their median absolute deviation times 1.4826 (I defer to the linked wiki article for an explanation of where this number comes from, since it is orthogonal to the main issue here).
In R, this second outlyingness index can be computed as:

out_2 <- abs( x - median(x) ) / mad(x)

and plotted (as before) using:

plot(x, out_2, ylim=c(0,15), xlim=c(-3,20))
points(x[1:3], out_2[1:3], col="red", pch=16)

Image 2 plots the value of this alternative outlyingness index for the same data set. As you can see, now all three outliers are clearly revealed as such. Furthermore, this outlier detection rule has some established statistical properties. This leads, among other things, to usable cut-off rules. For example, if the genuine part of the data can be assumed to be drawn from a symmetric distribution with finite second moment, you can reject all data points for which $$\frac{|x_i-\mbox{med}(X)|}{\mbox{mad}(X)}>3.5$$ as outliers. In the example above, application of this rule would lead you to correctly flag observations 1, 2 and 3. Rejecting these, the mean and sd of the remaining observations are 0.021 and 0.93 respectively, a much better description of the genuine part of the sample!
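For readers not working in R, the same median/MAD rule is a few lines in plain Python (an illustrative re-implementation, not part of the original answer; the small data set below is made up for the demonstration):

```python
import statistics

def mad_outliers(xs, cutoff=3.5):
    # flag x as an outlier when |x - median| / (1.4826 * MAD) > cutoff,
    # mirroring R's mad(), whose default constant is 1.4826
    med = statistics.median(xs)
    mad = 1.4826 * statistics.median([abs(x - med) for x in xs])
    return [abs(x - med) / mad > cutoff for x in xs]

data = [0.1, -0.2, 0.4, 0.0, -0.5, 0.3, -0.1, 0.2, 10.0, 1000.0]
print(mad_outliers(data))  # only the last two points are flagged
```

Note that the median and MAD are computed on all the data at once; no leave-one-out loop is needed, because these estimators are not pulled around by the outliers in the first place.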
12,091
Appropriateness of ANOVA after k-means cluster analysis
No! You must not use the same data to 1) perform clustering and 2) hunt for significant differences between the points in the clusters. Even if there's no actual structure in the data, the clustering will impose one by grouping together points which are nearby. This shrinks the within-group variance and grows the across-group variance, which biases you towards false positives.

This effect is surprisingly strong. Here are the results of a simulation that draws 1000 data points from a standard normal distribution. If we assign the points to one of five groups at random before running the ANOVA, we find that the p-values are uniformly distributed: 5% of the runs are significant at the (uncorrected) 0.05 level, 1% at the 0.01 level, etc. In other words, there is no effect. However, if $k$-means is used to cluster the data into 5 groups, we find a significant effect virtually every time, even though the data has no actual structure.

There is nothing special about k-means or ANOVA here--you would see similar effects using non-parametric tests, or logistic regression and a decision tree, or even just taking the min/max. After you impose some kind of structure on the data, you can no longer test whether that structure exists, since it obviously does!

As a result, validating clustering algorithms' performance is tricky, particularly if the data are not labelled. However, there are a few approaches to "internal validation", or measuring the clusters' quality without using external data sources. They generally focus on the compactness and separability of the clusters. This review by Liu et al. (2010) might be a good place to start.
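The effect can be reproduced in miniature without any libraries (a sketch in Python rather than the answer's original simulation code; the hand-rolled one-dimensional k-means and F statistic below are illustrative, not the answer's implementation):

```python
import random
import statistics

random.seed(0)
data = [random.gauss(0, 1) for _ in range(300)]  # pure noise, no structure

def f_stat(groups):
    # one-way ANOVA F statistic, computed by hand
    pts = [x for g in groups for x in g]
    grand = statistics.mean(pts)
    k, n = len(groups), len(pts)
    ssb = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    ssw = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

def kmeans_1d(xs, k, iters=30):
    # tiny Lloyd's algorithm in one dimension
    centers = sorted(random.sample(xs, k))
    groups = []
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda j: abs(x - centers[j]))].append(x)
        centers = [statistics.mean(g) if g else c for g, c in zip(groups, centers)]
    return [g for g in groups if g]

f_random = f_stat([data[i::5] for i in range(5)])  # arbitrary grouping: F near 1
f_kmeans = f_stat(kmeans_1d(data, 5))             # k-means grouping: F is huge
print(f_random, f_kmeans)
```

The arbitrary grouping gives an unremarkable F; grouping by k-means first inflates F by orders of magnitude on the very same noise.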
12,092
Appropriateness of ANOVA after k-means cluster analysis
Your real problem is data snooping. You can't apply ANOVA or KW if the observations were assigned to groups (clusters) based on the input data set itself. What you can do is to use something like Gap statistic to estimate the number of clusters. On the other hand, the snooped p-values are biased downward, so if ANOVA or KW test result is insignificant, then the "true" p-value is even larger and you may decide to merge the clusters.
12,093
Appropriateness of ANOVA after k-means cluster analysis
I think you could apply such an approach (i.e. using the statistics, such as F-statistics or t-statistics or whatever), if you toss out the usual null distributions. What you'd need to do is simulate from the situation in which your null is true, apply the whole procedure (clustering, etc), and then calculate whichever statistic each time. Applied over many simulations, you would get a distribution for the statistic under the null against which your sample value could be compared. By incorporating the data-snooping into the calculation you account for its effect. [Alternatively one could perhaps develop a resampling-based test (whether based on permutation/randomization or bootstrapping).]
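A minimal sketch of that recipe in Python (the one-dimensional k-means, the hand-computed F statistic, and the choice of a standard normal null are all illustrative assumptions, not part of the answer):

```python
import random
import statistics

def f_stat(groups):
    # one-way ANOVA F statistic, computed by hand
    pts = [x for g in groups for x in g]
    grand = statistics.mean(pts)
    k, n = len(groups), len(pts)
    ssb = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    ssw = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

def cluster_then_f(xs, k=3, iters=20):
    # the *whole procedure*: cluster first (tiny 1-d Lloyd's), then compute F
    centers = sorted(random.sample(xs, k))
    groups = []
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda j: abs(x - centers[j]))].append(x)
        centers = [statistics.mean(g) if g else c for g, c in zip(groups, centers)]
    return f_stat([g for g in groups if g])

random.seed(1)
observed = [random.gauss(0, 1) for _ in range(100)]   # stand-in for real data
f_obs = cluster_then_f(observed)

# simulate the whole procedure under the null to build a reference distribution
f_null = [cluster_then_f([random.gauss(0, 1) for _ in range(100)])
          for _ in range(200)]
p_value = sum(f >= f_obs for f in f_null) / len(f_null)
print(p_value)  # calibrated p-value; the usual F tables would be wildly wrong here
```

Because the null simulations repeat the clustering step, the data-snooping is baked into the reference distribution, which is exactly the point of the answer.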
12,094
Appropriateness of ANOVA after k-means cluster analysis
Not exactly an answer, but a proposal on how one would find the solution. I was thinking about that cluster problem. The test would require sampling from the full dataset, deriving k-means, and seeing if the same k-means occur within a distribution (example with clustergram) from various samples -- normally k-means itself produces different results depending on its starting point, so an algorithm that homes in on the same k-means over multiple iterations, like clustergram, might be more apt -- just as a mean is derived from samples in statistics. K-means has various proportion sizes for its clusters, but the point is whether the same means appear within a distribution.

But how would one compare the distributions of the various variables in the cluster? Normally coefficients are derived from a covariance matrix (or predictor matrix?) which is based on a given y. This has no y. So I'm wondering if each cluster could be whitened using ZCA (or even PCA) -- something with eigenvalues -- and use this to derive some type of meaningful means or coefficients. Otherwise one just has a set of means.

Then one needs to derive the standard error. The standard error is definitely based on the covariance matrix (again substituting PCA or ZCA)? But I'd have to brush up on standard error. I believe standard error is a function of standard deviation, but instead of the variance of a sample it's the variance of a mean.

Edit: For statistical significance, use the gap statistic method as discussed here: http://www.datanovia.com/en/lessons/determining-the-optimal-number-of-clusters-3-must-know-methods/ I also recommend this article for a discussion of other related measures: https://medium.com/@haataa/how-to-measure-clustering-performances-when-there-are-no-ground-truth-db027e9a871c
12,095
Test for IID sampling
What you conclude about whether data are IID comes from outside information, not the data itself. You as the scientist need to determine whether it is reasonable to assume the data are IID, based on how the data were collected and other outside information. Consider some examples.

Scenario 1: We generate a set of data independently from a single distribution that happens to be a mixture of 2 normals.

Scenario 2: We first generate a gender variable from a binomial distribution, then within males and females we independently generate data from a normal distribution (but the normals are different for males and females), then we delete or lose the gender information.

In scenario 1 the data are IID and in scenario 2 the data are clearly not identically distributed (different distributions for males and females), but the distributions of the data in the 2 scenarios are indistinguishable: you have to know how the data were generated to determine the difference.

Scenario 3: I take a simple random sample of people living in my city, administer a survey, and analyze the results to make inferences about all people in the city.

Scenario 4: I take a simple random sample of people living in my city, administer a survey, and analyze the results to make inferences about all people in the country.

In scenario 3 the subjects would be considered independent (a simple random sample of the population of interest), but in scenario 4 they would not be, because they were selected from a small subset of the population of interest and the geographic closeness would likely impose dependence. Yet the 2 datasets are identical; it is the way we intend to use the data that determines whether they are independent or dependent in this case.

So there is no way, using only the data, to show that data are IID; plots and other diagnostics can show some types of non-IID behavior, but their absence does not guarantee that the data are IID.
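Scenarios 1 and 2 can be made concrete with a tiny simulation (a Python sketch; the particular mixture, N(0,1) versus N(3,1) with equal weights, is an arbitrary illustration):

```python
import random
import statistics

random.seed(1)

def scenario1():
    # IID draws from one distribution: a 50/50 mixture of N(0,1) and N(3,1)
    return random.gauss(0, 1) if random.random() < 0.5 else random.gauss(3, 1)

def scenario2():
    # draw a group label first, then a group-specific normal, then lose the label
    in_group_a = random.random() < 0.5
    return random.gauss(0, 1) if in_group_a else random.gauss(3, 1)

a = [scenario1() for _ in range(20000)]
b = [scenario2() for _ in range(20000)]

# no statistic computed from the samples alone can separate the two schemes
print(statistics.mean(a), statistics.mean(b))
print(statistics.stdev(a), statistics.stdev(b))
```

The two sampling schemes produce samples with matching moments (and matching empirical distributions); only knowledge of the generating process distinguishes them.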
You can also compare to specific assumptions (IID normal is easier to disprove than IID alone). Any such test is still just a rule-out; failure to reject never proves that the data are IID. Decisions about whether you are willing to assume that IID conditions hold need to be made based on the science of how the data were collected, how they relate to other information, and how they will be used.

Edit: Here is another set of examples, for non-identical distributions.

Scenario 5: the data are residuals from a regression where there is heteroscedasticity (the variances are not equal).

Scenario 6: the data are from a mixture of normals with mean 0 but different variances.

In scenario 5 we can clearly see that the residuals are not identically distributed if we plot the residuals against fitted values or other variables (predictors, or potential predictors), but the residuals themselves (without the outside information) would be indistinguishable from scenario 6.
12,096
Test for IID sampling
If the data have an index ordering you can use white noise tests from time series. Essentially that means testing that the autocorrelations at all non-zero lags are 0. This handles the independence part. I think your approach is mainly trying to address the identically-distributed part of the assumption.

I think there are some problems with your approach. You need a lot of splits to get enough p-values to test for uniformity, but then each K-S test loses power. If you are using splits that overlap on parts of the data set, the tests will be correlated. With a small number of splits the test of uniformity lacks power; with many splits the uniformity test may be powerful, but the K-S tests would not be. Also, it seems that this approach won't help detect dependence between variables.

@gu11aume I am not sure what you are asking for with a general test for non-time series. Spatial data provide one form of non-time-series data; there, the function called the variogram might be looked at. For one-dimensional sequences I don't see much difference between sequences ordered by time and those ordered any other way. An autocorrelation function can still be defined and tested. When you say that you want to test independence in sampling, I think you have an order in which the samples are collected. So I think all the one-dimensional cases work the same way.
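As a concrete sketch of such a white-noise check, here is a hand-rolled Ljung-Box portmanteau statistic in Python (the choice of 10 lags and the simulated series are arbitrary illustrations; in practice one would reach for a library implementation):

```python
import random

random.seed(2)
x = [random.gauss(0, 1) for _ in range(500)]  # stand-in for an indexed sample

def acf(xs, lag):
    # sample autocorrelation at the given lag
    n = len(xs)
    m = sum(xs) / n
    num = sum((xs[t] - m) * (xs[t - lag] - m) for t in range(lag, n))
    den = sum((v - m) ** 2 for v in xs)
    return num / den

# Ljung-Box statistic over the first 10 lags; under white noise it is
# approximately chi-square with 10 degrees of freedom (so mean about 10)
n = len(x)
Q = n * (n + 2) * sum(acf(x, k) ** 2 / (n - k) for k in range(1, 11))
print(Q)
```

A large Q relative to the chi-square reference would reject the hypothesis that all non-zero-lag autocorrelations are 0; for this white-noise series Q stays unremarkable.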
Test for IID sampling
If the data have an index ordering you can use white noise tests for time series. Essentially that means testing that the autocorrelations at all non zero lags are 0. This handles the independence p
Test for IID sampling If the data have an index ordering you can use white noise tests for time series. Essentially that means testing that the autocorrelations at all non zero lags are 0. This handles the independence part. I think your approach is trying to mainly address the identically distributed part of the assumption. I think there are some problems with your approach. I think you need a lot of splits to get enough p-values to test for uniformity. Then each K-S test loses power. If you are using splits that overlap on parts of the data set the tests will be correlated. With a small number of splits the test of uniformity lacks power. But with many splits the uniformity test may be powerful but the K-S tests would not. Also it seems that this approach won't help detect dependence between variables. @gu11aume I am not sure what you are asking for with a general test for non-time series. Spatial data provide one form of non-time series data. There the function called the variogram might be looked at. For one-dimensional sequences I don't see much difference between sequences ordered by time versus any other way of ordering the data. An autocorrelation function can still be defined and tested. When you say that you want to test independence in sampling, I think you have an order in which the samples are collected. So I think all the 1-dimensional cases work the same way.
Test for IID sampling If the data have an index ordering you can use white noise tests for time series. Essentially that means testing that the autocorrelations at all non zero lags are 0. This handles the independence p
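As an illustrative sketch of the white-noise idea (my own example, not the poster's code), one can compute the sample autocorrelations and the Ljung-Box statistic by hand; under white noise the statistic is approximately chi-square with `max_lag` degrees of freedom. The series, seed, and lag choice below are arbitrary.

```python
import numpy as np
from scipy import stats

def ljung_box(x, max_lag=10):
    """Test that autocorrelations at lags 1..max_lag are all zero (white noise)."""
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    acf = np.array([np.sum(xc[k:] * xc[:-k]) / denom
                    for k in range(1, max_lag + 1)])
    # Ljung-Box statistic; approximately chi-square(max_lag) under the null
    q = n * (n + 2) * np.sum(acf ** 2 / (n - np.arange(1, max_lag + 1)))
    return q, stats.chi2.sf(q, df=max_lag)

rng = np.random.default_rng(0)
x = rng.normal(size=500)            # i.i.d. draws: should look like white noise
q, p = ljung_box(x)
print(f"Q = {q:.2f}, p = {p:.3f}")  # under H0, p is uniform; a small p flags autocorrelation
```

Applied to an autocorrelated series (e.g. a random walk built from the same draws), the same test rejects decisively.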
12,097
When mathematical statistics outsmarts probability theory
I do not find this any more surprising than saying that if $Y \sim \mathcal N(0,1)$ then $\mathbb E\left[\frac1Y\right]$ is undefined, even though $\frac1Y$ has a distribution symmetric about $0$. So let's use this to construct an example with $Y\sim \mathcal N(0,1)$:

- let $X=\pm\frac1Y$ with equal probability (or $0$ in the zero-probability case $Y=0$);
- clearly $\mathbb E\left[X \mid Y\right]=0$, so $\mathbb E\big[\mathbb E\left[X \mid Y\right]\big]=0$;
- but $X$ and $\frac1Y$ have the same heavy-tailed symmetric distribution, so $\mathbb E\left[X\right]$ is undefined.
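A quick Monte Carlo sketch of this construction (my own illustration, with arbitrary seed and sample size): $X=\pm 1/Y$ has a $1/t$ tail, like a Cauchy variable, which is exactly why $\mathbb E[X]$ fails to exist.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = rng.normal(size=1_000_000)
x = rng.choice([-1.0, 1.0], size=y.size) / y   # X = ±1/Y with equal probability

# P(|X| > t) = P(|Y| < 1/t) = 2*Phi(1/t) - 1 ~ sqrt(2/pi)/t for large t:
# a 1/t (Cauchy-like) tail, so E|X| = integral of P(|X| > t) dt diverges
# and E[X] is undefined, even though the distribution is symmetric about 0.
t = 100.0
emp = np.mean(np.abs(x) > t)
theo = 2 * stats.norm.cdf(1 / t) - 1
print(emp, theo, np.sqrt(2 / np.pi) / t)
```

The empirical tail frequency matches the exact value $2\Phi(1/t)-1$, which for large $t$ is essentially $\sqrt{2/\pi}/t$.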
12,098
When mathematical statistics outsmarts probability theory
While it is a pleasant remark, I do not find this occurrence that surprising or paradoxical, and this for several reasons:

(i) $\mathbb E^X[0]=0$ remains true, where $\mathbb E^X[\cdot]$ denotes the expectation under the distribution of $X$;

(ii) conditional distributions, and therefore conditional expectations, are only defined with respect to (or in terms of) a joint distribution, meaning that logically we start from the joint and derive the conditional, rather than the opposite; so logically we do not "encounter first a conditional expectation";

(iii) conditional distributions are usually equipped with lighter tails than marginal ones, hence it is not surprising that the conditional expectation may exist for all realisations of $Y$ while the marginal expectation does not exist;

(iv) there is no "defeat of probability theory" there (and even less of a connection with "mathematical statistics"): the law of total expectation states the existence of $\mathbb E[X]$ as its main assumption.
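For point (iv), the statement being referenced can be written out explicitly; note that integrability of $X$ is the hypothesis of the theorem, not one of its conclusions:

```latex
\textbf{Law of total expectation.} Let $X$ and $Y$ be random variables on a
common probability space with $\mathbb{E}\lvert X \rvert < \infty$. Then
\[
  \mathbb{E}\bigl[\mathbb{E}[X \mid Y]\bigr] = \mathbb{E}[X].
\]
```

The hypothesis cannot be dropped: $\mathbb E[X\mid Y]=0$ almost surely does not by itself imply that $\mathbb E[X]$ exists, as the example in this thread shows.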
12,099
When mathematical statistics outsmarts probability theory
Note that we have $$E(X \mid Y)=0.$$ At the same time, $E(X \mid y) = E(X \mid 1) \cdot y$, which as $y\to \infty$ looks like a case of the undefined product $0 \times \infty$. In a way, the example targets the naive claim that a distribution symmetric about $m$ must have $E[X] = m$, and it does so via the expression $E(X) = E[E(X \mid Y)]$, which hides the fact that an undefined term is involved.
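One way to make the "$0 \times \infty$" tension concrete (my own numerical sketch, not from the answer): for $X=\pm 1/Y$ with $Y\sim\mathcal N(0,1)$, the truncated absolute mean $E[\,|X|\,; |X|\le T\,]$ grows like $\sqrt{2/\pi}\,\log T$, so the mean never stabilizes as the truncation is relaxed.

```python
import numpy as np
from scipy import integrate, stats

def trunc_abs_mean(T):
    """E[|X| ; |X| <= T] for X = ±1/Y, Y ~ N(0,1).

    Equals E[1/|Y| ; |Y| >= 1/T] = 2 * integral_{1/T}^{inf} phi(y)/y dy.
    Split at 1 so quad handles the 1/y spike and the infinite tail separately.
    """
    f = lambda y: stats.norm.pdf(y) / y
    lo, _ = integrate.quad(f, 1.0 / T, 1.0, limit=200)
    hi, _ = integrate.quad(f, 1.0, np.inf)
    return 2 * (lo + hi)

for T in [1e2, 1e4, 1e6]:
    print(T, trunc_abs_mean(T))
# each factor of 100 in T adds about sqrt(2/pi)*ln(100) ≈ 3.67:
# logarithmic divergence, so E|X| = ∞ and E[X] is undefined
```

This is the quantitative content of the heavy-tail claim: every conditional mean is $0$, yet the marginal absolute mean diverges.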
12,100
The limit of "unit-variance" ridge regression estimator when $\lambda\to\infty$
# A geometrical interpretation

The estimator described in the question is the Lagrange-multiplier equivalent of the following optimization problem:

$$\text{minimize $f(\beta)$ subject to $g(\beta) \leq t$ and $h(\beta) = 1$} $$

$$\begin{align} f(\beta) &= \lVert y-X\beta \rVert^2 \\ g(\beta) &= \lVert \beta \rVert^2\\ h(\beta) &= \lVert X\beta \rVert^2 \end{align}$$

which can be viewed, geometrically, as finding the smallest ellipsoid $f(\beta)=\text{RSS}$ that touches the intersection of the sphere $g(\beta) = t$ and the ellipsoid $h(\beta)=1$.

## Comparison to the standard ridge regression view

In terms of a geometrical view, this changes the old view (for standard ridge regression) of the point where a spheroid (errors) and a sphere ($\|\beta\|^2=t$) touch into a new view where we look for the point where the spheroid (errors) touches a curve (the norm of $\beta$ constrained by $\|X\beta\|^2=1$). The one sphere (blue in the left image) changes into a lower-dimensional figure due to the intersection with the $\|X\beta\|^2=1$ constraint. In the two-dimensional case this is simple to view.

When we tune the parameter $t$ we change the relative length of the blue/red spheres, that is, the relative sizes of $f(\beta)$ and $g(\beta)$. (In the theory of Lagrangian multipliers there is probably a neat way to show formally that $t$ as a function of $\lambda$, or the reverse, is a monotonic function. But I imagine that you can see intuitively that the sum of squared residuals can only increase when we decrease $\|\beta\|$.)

- The solution $\beta_\lambda$ for $\lambda=0$ is, as you argued, on a line between $0$ and $\beta_{LS}$.
- The solution $\beta_\lambda$ for $\lambda \to \infty$ is (indeed, as you commented) in the loadings of the first principal component. This is the point where $\lVert \beta \rVert^2$ is smallest subject to $\lVert X\beta \rVert^2 = 1$. It is the point where the circle $\lVert \beta \rVert^2=t$ touches the ellipse $\lVert X\beta\rVert^2=1$ in a single point.

In this 2-d view the edges of the intersection of the sphere $\lVert \beta \rVert^2 =t$ and the spheroid $\lVert X\beta \rVert^2 = 1$ are points. In multiple dimensions these will be curves. (I imagined at first that these curves would be ellipses, but they are more complicated. You could imagine the ellipsoid $\lVert X \beta \rVert^2 = 1$ being intersected by the ball $\lVert \beta \rVert^2 \leq t$ as some sort of ellipsoid frustum, but with edges that are not simple ellipses.)

## Regarding the limit $\lambda \to \infty$

At first (in previous edits) I wrote that there would be some limiting $\lambda_{lim}$ above which all the solutions are the same (and reside in the point $\beta^*_\infty$). But this is not the case.

Consider the optimization as a LARS algorithm or gradient descent. If for any point $\beta$ there is a direction in which we can change $\beta$ such that the penalty term $\|\beta\|^2$ increases less than the SSR term $\|y-X\beta\|^2$ decreases, then you are not at a minimum.

- In normal ridge regression the slope of $\|\beta\|^2$ is zero (in all directions) at the point $\beta=0$. So for all finite $\lambda$ the solution cannot be $\beta = 0$ (since an infinitesimal step can be made to reduce the sum of squared residuals without increasing the penalty).
- For LASSO this is not the case, since the penalty is $\lvert \beta \rvert_1$ (it is not quadratic with zero slope). Because of that, LASSO has some limiting value $\lambda_{lim}$ above which all the solutions are zero, because the penalty term (multiplied by $\lambda$) increases more than the residual sum of squares decreases.
- For the constrained ridge you get the same behaviour as in regular ridge regression. If you change $\beta$ starting from $\beta^*_\infty$, this change will be perpendicular to $\beta$ (since $\beta^*_\infty$ is perpendicular to the surface of the ellipse $\|X\beta\|^2=1$), so $\beta$ can be changed by an infinitesimal step without changing the penalty term while decreasing the sum of squared residuals. Thus for any finite $\lambda$ the point $\beta^*_\infty$ cannot be the solution.

## Further notes regarding the limit $\lambda \to \infty$

The usual ridge regression limit for $\lambda$ to infinity corresponds to a different point in the constrained ridge regression. This 'old' limit corresponds to the point where $\mu$ equals $-1$. Then the derivative of the Lagrange function of the normalized problem,

$$2 (1+\mu) X^{T}X \beta - 2 X^T y + 2 \lambda \beta,$$

corresponds to a solution of the derivative of the Lagrange function of the standard problem,

$$2 X^{T}X \beta^\prime - 2 X^T y + 2 \frac{\lambda}{(1+\mu)} \beta^\prime \qquad \text{with } \beta^\prime = (1+\mu)\beta.$$
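The $\lambda\to\infty$ claim (smallest $\|\beta\|$ subject to $\|X\beta\|^2=1$ lands on the first-principal-component loadings) is easy to check numerically. A sketch with my own toy data (names and sizes arbitrary): by the SVD $X = USV^T$, the minimizer should be $v_1/\sigma_1$, and any other feasible point has a larger norm.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))

# Candidate minimizer: first right singular vector scaled so ||X beta|| = 1.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
b_star = Vt[0] / s[0]
print(np.linalg.norm(X @ b_star))      # 1.0 -- b_star is feasible

# Compare against many other feasible points: rescale random directions
# onto the ellipsoid ||X beta||^2 = 1 and record their norms.
norms = []
for _ in range(2000):
    b = rng.normal(size=3)
    b /= np.linalg.norm(X @ b)
    norms.append(np.linalg.norm(b))
print(min(norms) >= np.linalg.norm(b_star))  # True: b_star has minimal norm
```

Writing $\beta = Vc$ makes the claim transparent: $\|X\beta\|^2 = \sum_i \sigma_i^2 c_i^2 = 1$ while $\|\beta\|^2 = \sum_i c_i^2$, which is minimized by putting all weight on the largest singular value, giving $\|\beta\| \ge 1/\sigma_1$ with equality at $v_1/\sigma_1$.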