Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
11037 | 2 | null | 11019 | 1 | null | I find this interesting: you say that you somehow "know" that it does not belong to the classes given, yet you do not then describe what precisely it is about that item that makes you think this. As soon as you suspect a class coming from "something else" (called SE from now on) you have basically begun to describe a model for predicting that class - if only intuitively. The task then is to tease out the model that is hiding in your intuition.
This is always done by describing what distinguishes SE from the rest of your data (such a distinction always exists, or else you wouldn't suspect anything). The main problem with SE is that this distinguishing feature must be based on the prior information alone - no data can possibly be used to train a classifier for SE. For if it could, then SE would effectively not exist as you describe it; it would just be one of the classes.
The best way to incorporate SE is to make a prediction about what kind of document you would see if it came from SE. Given that the situation comes up pretty frequently, this should not be too difficult to do (as you would have some idea of the type of features that you would predict).
If you are using a Bayesian posterior probability based approach this is fairly simple to incorporate into a standard analysis. We just add one extra likelihood into the denominator, so you have:
$$P(C_{i}|D_{new}D_{train}I)=\frac{P(C_{i}|D_{train}I)P(D_{new}|C_{i}D_{train}I)}{P(D_{new}|D_{train}I)}$$
$$=\frac{P(C_{i}|D_{train}I)P(D_{new}|C_{i}D_{train}I)}{\left[\sum_{j=1}^{r}P(C_{j}|D_{train}I)P(D_{new}|C_{j}D_{train}I)\right]+P(C_{SE}|D_{train}I)P(D_{new}|C_{SE}D_{train}I)}$$
Where $C_{j}$ is the jth class (plus something else), $D_{new}$ is the data from the document you are trying to classify, $D_{train}$ is the training data, and $I$ is your prior information. Note that the general procedure is not any different in principle from adding any of the other classes. But it is different in practice because the phrase "something else" is vague and does not make any obvious predictions about future data.
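As a toy numerical sketch of the formula above (all priors and likelihoods here are invented, purely for illustration), the SE term simply enters the normalizing sum like any other class:

```r
# Hypothetical numbers: three known classes plus the SE ("something else") class.
# priors: P(C_j | D_train, I); likelihoods: P(D_new | C_j, D_train, I)
priors      <- c(A = 0.50, B = 0.30, C = 0.15, SE = 0.05)
likelihoods <- c(A = 0.020, B = 0.001, C = 0.002, SE = 0.010)

# Posterior: each numerator divided by the full sum, with the SE term included
posterior <- priors * likelihoods / sum(priors * likelihoods)
round(posterior, 3)
# SE only receives appreciable mass when its predicted likelihood is high
```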
| null | CC BY-SA 3.0 | null | 2011-05-20T13:42:11.990 | 2011-05-20T13:42:11.990 | null | null | 2392 | null |
11039 | 1 | null | null | 2 | 1733 | I want to run a multiple regression analysis measuring the effect of various independent variables on a continuous dependent variable measuring the strength of a political institution. The problem is that the data is measured in different years. Thus I might have data for countries A and B only for the year 2009 and for countries C and D only for the year 2010. In order to increase my sample, I would like to use all countries. How can I do that, ie which statistical method is most adequate? Thank you sooo much!
| How to use cross-section country samples of different years in multiple regression analysis? | CC BY-SA 3.0 | null | 2011-05-20T10:58:57.617 | 2011-05-21T05:08:42.717 | null | null | null | [
"r",
"regression",
"stata"
] |
11040 | 2 | null | 11009 | 44 | null | You ask whether it's ever valid. Let me provide a common example, whose elucidation may suggest additional analytical approaches for you.
The simplest example of an interaction is a model with one dependent variable $Z$ and two independent variables $X$, $Y$ in the form
$$Z = \alpha + \beta' X + \gamma' Y + \delta' X Y + \varepsilon,$$
with $\varepsilon$ a random error term having zero expectation, and using parameters $\alpha, \beta', \gamma',$ and $\delta'$. It's often worthwhile checking whether $\delta'$ approximates $\beta' \gamma' / \alpha$, because an algebraically equivalent expression of the same model is
$$Z = \alpha \left(1 + \beta X + \gamma Y + \delta X Y \right) + \varepsilon$$
$$= \alpha \left(1 + \beta X \right) \left(1 + \gamma Y \right) + \alpha \left( \delta - \beta \gamma \right) X Y + \varepsilon$$
(where $\beta' = \alpha \beta$, etc).
Whence, if there's a reason to suppose $\left( \delta - \beta \gamma \right) \sim 0$, we can absorb the term $\alpha \left( \delta - \beta \gamma \right) X Y$ into the error term $\varepsilon$. Not only does this give a "pure interaction", it does so without a constant term. This in turn strongly suggests taking logarithms. Some heteroscedasticity in the residuals--that is, a tendency for residuals associated with larger values of $Z$ to be larger in absolute value than average--would also point in this direction. We would then want to explore an alternative formulation
$$\log(Z) = \log(\alpha) + \log(1 + \beta X) + \log(1 + \gamma Y) + \tau$$
with iid random error $\tau$. Furthermore, if we expect $\beta X$ and $\gamma Y$ to be large compared to $1$, we would instead just propose the model
$$\log(Z) = \left(\log(\alpha) + \log(\beta) + \log(\gamma)\right) + \log(X) + \log(Y) + \tau$$
$$= \eta + \log(X) + \log(Y) + \tau.$$
This new model has just a single parameter $\eta$ instead of four parameters ($\alpha$, $\beta'$, etc.) subject to a quadratic relation ($\alpha \delta' = \beta' \gamma'$), a considerable simplification.
I am not saying that this is a necessary or even the only step to take, but I am suggesting that this kind of algebraic rearrangement of the model is usually worth considering whenever interactions alone appear to be significant.
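This rearrangement is easy to see in a quick simulation (purely illustrative; the coefficient values below are made up): data generated from the multiplicative form are recovered by the one-parameter log model.

```r
set.seed(1)
n <- 500
X <- runif(n, 1, 10)
Y <- runif(n, 1, 10)

# Simulate from the simplified model: log(Z) = eta + log(X) + log(Y) + tau
eta <- 0.7
Z <- exp(eta + log(X) + log(Y) + rnorm(n, sd = 0.1))

# On the raw scale, the signal is carried almost entirely by the interaction
fit.raw <- lm(Z ~ X * Y)

# The one-parameter log model: offset() fixes unit slopes on log(X) and log(Y)
fit.log <- lm(log(Z) ~ 1 + offset(log(X) + log(Y)))
coef(fit.log)  # the intercept recovers eta (about 0.7)
```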
Some excellent ways to explore models with interaction, especially with just two and three independent variables, appear in chapters 10 - 13 of Tukey's [EDA](http://rads.stackoverflow.com/amzn/click/B0007347RW).
| null | CC BY-SA 3.0 | null | 2011-05-20T14:07:16.690 | 2011-05-20T14:07:16.690 | null | null | 919 | null |
11041 | 2 | null | 7385 | 5 | null | I had the same issue in my dissertation. In Stata, I just built myself a custom .ado program with two calls to xtgee.
For this, I found the ["Modeling Health Care Costs and Counts"](http://harrisschool.uchicago.edu/faculty/web-pages/willard-manning.asp) slides/programs by Partha Deb, Willard Manning, and Edward Norton to be useful. They don't talk about longitudinal data, but it's a useful starting point.
| null | CC BY-SA 3.0 | null | 2011-05-20T14:15:43.860 | 2011-05-20T14:15:43.860 | null | null | 4691 | null |
11042 | 2 | null | 10985 | 3 | null | For a general exponential family, we have the variance in the following form:
$$Var(Y_{i})=\phi h(E[Y_{i}])$$
for some function $h(.)$. Using the [wikipedia definition of negative binomial](http://en.wikipedia.org/wiki/Negative_binomial_distribution) we have a pmf of:
$$p(Y_{i}=y|r,p)={r+y-1 \choose y}p^{y}(1-p)^{r}\;\;\;\;\;\;y=0,1,2,\dots$$
And this has expectation $E[Y_{i}]=\frac{pr}{1-p}$ and a variance equal to $Var[Y_{i}]=\frac{pr}{(1-p)^{2}}$. Note that this cannot be written in the usual form for a generalised linear model, but has the form:
$$Var[Y_{i}]=E[Y_{i}]+\frac{1}{r}E[Y_{i}]^{2}$$
And as such it can be seen as taking the function $h(x)=x+\frac{1}{r}x^{2}$, with dispersion $\phi$ equal to $1$. Note that the negative binomial is technically not a member of the exponential family when $r$ is unknown: it arises as a mixture of exponential-family distributions (a gamma mixture of Poissons), much as the Student $t$ distribution arises as a mixture of normals.
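A quick sanity check by simulation (illustrative numbers only): in R's `rnbinom()`, the `size` argument plays the role of $r$ in the $(\mu, r)$ parameterization, so the sample variance should be close to $\mu + \mu^{2}/r$.

```r
set.seed(42)
mu <- 4
r  <- 2
y  <- rnbinom(1e5, size = r, mu = mu)

# Theoretical variance mu + mu^2/r = 12 vs. the empirical variance
c(theoretical = mu + mu^2 / r, empirical = var(y))  # both near 12
```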
| null | CC BY-SA 3.0 | null | 2011-05-20T14:27:28.050 | 2011-05-20T14:27:28.050 | null | null | 2392 | null |
11044 | 2 | null | 11009 | 32 | null | While it is often stated in textbooks that one should never include an interaction in a model without the corresponding main effects, there are certainly examples where this would make perfect sense. I'll give you the simplest example I can imagine.
Suppose subjects randomly assigned to two groups are measured twice, once at baseline (i.e., right after the randomization) and once after group T received some kind of treatment, while group C did not. Then a repeated-measures model for these data would include a main effect for measurement occasion (a dummy variable that is 0 for baseline and 1 for the follow-up) and an interaction term between the group dummy (0 for C, 1 for T) and the time dummy.
The model intercept then estimates the average score of the subjects at baseline (regardless of the group they are in). The coefficient for the measurement occasion dummy indicates the change in the control group between baseline and the follow-up. And the coefficient for the interaction term indicates how much bigger/smaller the change was in the treatment group compared to the control group.
Here, it is not necessary to include the main effect for group, because at baseline, the groups are equivalent by definition due to the randomization.
One could of course argue that the main effect for group should still be included, so that, in case the randomization failed, this will be revealed by the analysis. However, that is equivalent to testing the baseline means of the two groups against each other. And there are plenty of people who frown upon testing for baseline differences in randomized studies (of course, there are also plenty who find it useful, but this is another issue).
| null | CC BY-SA 3.0 | null | 2011-05-20T15:07:08.627 | 2011-05-20T15:07:08.627 | null | null | 1934 | null |
11045 | 1 | null | null | 2 | 139 | I am trying to study the concept of public opinion as it relates to quantitative measurement of opinion distributions. I am trying to define the major parameters and methods that would help me start from a certain finite number of people replying to a survey and generalize the results of that survey, so that the result can be considered to represent public opinion.
Are there any statistical methods and parameters linked to the country that I should integrate to define the concept of public opinion? What is the minimum number of people that must reply to a survey? Are there any intrinsic parameters linked to the survey itself?
| How to define specific population for Public Opinion | CC BY-SA 3.0 | null | 2011-05-20T15:19:52.550 | 2011-05-20T15:53:13.237 | null | null | 4531 | [
"mathematical-statistics",
"finite-population"
] |
11046 | 2 | null | 11019 | 6 | null | It sounds to me that the problem is one of "novelty detection": you want to identify test patterns of a type not seen in the training data. This can be achieved using the one-class support vector machine, which IIRC tries to construct a small volume in a kernel-induced feature space that contains all (or a large fraction) of the training set, so any novel patterns encountered in operation are likely to fall outside this boundary. There are loads of papers on uses of the one-class SVM indexed on Google Scholar, so it should be easy to find something relevant to your application.
Another approach would be to build a classifier by constructing a density estimator for each class, and then combine using Bayes rule for the classification. If the likelihood of a test observation for the winning class is low, you can then reject the pattern as a possible novelty.
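For instance, with the e1071 package in R (the data and the value of `nu` below are arbitrary, just for illustration), a one-class SVM can be trained on the known classes and then used to flag novel patterns:

```r
library(e1071)
set.seed(1)
train <- matrix(rnorm(200 * 2), ncol = 2)           # "normal" training data
novel <- matrix(rnorm(20 * 2, mean = 5), ncol = 2)  # points far from training

# nu bounds the fraction of training points allowed outside the boundary
fit <- svm(train, type = "one-classification", nu = 0.05, kernel = "radial")

predict(fit, novel)  # mostly FALSE: flagged as novelties
```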
| null | CC BY-SA 3.0 | null | 2011-05-20T15:25:44.807 | 2011-05-20T15:25:44.807 | null | null | 887 | null |
11047 | 2 | null | 11045 | 1 | null | The population is the sampling frame. If you're using some non-random sampling scheme like convenience sampling, the population is harder to define.
Random samples of the same size from populations of different sizes have different margins of error; samples from smaller populations have smaller margins of error. Statistical methods often assume infinite population sizes. Read about "finite population correction" to see how to adjust for the population size.
When you have a large, heterogeneous population, you'll need to be concerned as to whether everyone will interpret the questionnaire the way you expect them to, so you might want to put more time into questionnaire development and pilot testing.
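As a rough sketch (numbers hypothetical), the finite population correction multiplies the usual standard error by $\sqrt{(N-n)/(N-1)}$:

```r
# Margin of error for a sample proportion, with and without the
# finite population correction (fpc)
p <- 0.5    # assumed proportion
n <- 400    # sample size
N <- 2000   # population size

se  <- sqrt(p * (1 - p) / n)
fpc <- sqrt((N - n) / (N - 1))
c(moe_infinite = 1.96 * se, moe_fpc = 1.96 * se * fpc)
```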
| null | CC BY-SA 3.0 | null | 2011-05-20T15:53:13.237 | 2011-05-20T15:53:13.237 | null | null | 3874 | null |
11048 | 1 | null | null | 0 | 131 | I want to determine the amount of food stamp fraud a retailer perpetrated based on food stamp sales in other stores. The retailer had both legitimate food stamp sales and illegitimate sales. The illegitimate sales consisted of food stamps used to buy ineligible items or where food stamps were illegally redeemed for cash.
I know the total amount of food stamps sales, but that figure reflects the total of all legitimate and illegitimate sales. Can an average of food stamp sales from other stores be compared to the retailer in a way that is statistically accurate? If so how big must that sample of other stores be?
Thanks,
John
| Market share based on comparison of competitors' average sales | CC BY-SA 3.0 | null | 2011-05-20T17:48:11.040 | 2011-05-20T18:36:29.503 | null | null | 4694 | [
"sample-size"
] |
11049 | 1 | null | null | 4 | 979 | I have a dataset of 380 samples of 6 variables. These variables are counts of different types of events in each of the 380 defined regions. These counts are per month, which means that I have several of these datasets (for now, I only have four months).
When looking at the data, I can clearly see that there is some missing (or incomplete) data. For instance, the counts for one given region are about the same for all months, except one (where it's close to zero). However, it does not seem likely that there were actually that few events during this period.
What I would like is to be able, given data for a few months, to detect missing or incomplete values, and possibly to correct/complete them. Detecting missing values is not as obvious as looking for zeros, because it may happen that no event occurred in some regions.
I read a few things about matrix factorization, but I'm not sure it would apply to my case. It seems suited for the cases where you know what data is missing.
I assume this kind of problem is quite common, for instance in biology for population estimation.
| How to detect (and possibly estimate/interpolate) missing or incomplete data? | CC BY-SA 3.0 | null | 2011-05-20T17:53:41.003 | 2016-05-24T02:42:53.250 | null | null | 3699 | [
"missing-data",
"count-data"
] |
11050 | 1 | null | null | 18 | 1439 | I find that simple data analysis exercises can often help to illustrate and clarify statistical concepts. What data analysis exercises do you use to teach statistical concepts?
| Learning statistical concepts through data analysis exercises | CC BY-SA 3.0 | null | 2011-05-20T18:18:51.617 | 2011-05-21T06:17:49.747 | 2011-05-20T18:22:22.097 | 930 | 485 | [
"teaching"
] |
11051 | 2 | null | 11039 | 0 | null | I think that a Bayesian approach may solve your problem, but it depends on how much data you have.
I have never done it myself, but I guess it's possible to estimate a hierarchical model in which data are nested by country or time. For example, assume you have $i = 1, 2, ..., n$ countries and $t = 1, 2, ..., T$ years. For country $j$ you don't have information for year $l$ and for country $k$ you don't have information for year $m$. So you would have a model like this:
$y_{it} \sim N(a + b_{t}x_{it}, \sigma^{2}_{y})$
$b_{t} \sim N(0, \sigma^{2})$
$\sigma^{2} \sim \text{Unif}(0, 100)$
$x_{jl} \sim N(0, 100)$
$x_{km} \sim N(0, 100)$
The details of the priors will depend on your theory and your data. It is fairly simple to estimate this model using WinBUGS or JAGS (so you don't have to worry about the MCMC machinery).
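Such a model could be sketched for JAGS roughly as follows (the variable names and the vague priors here are illustrative, not a tested analysis; note that BUGS/JAGS parameterize the normal by its precision, hence the `tau` nodes):

```r
# Hypothetical JAGS model specification; missing entries of x and y
# are imputed automatically from their distributions.
model_string <- "
model {
  for (i in 1:n) {
    for (t in 1:T) {
      y[i, t] ~ dnorm(a + b[t] * x[i, t], tau.y)
      x[i, t] ~ dnorm(0, 0.01)   # vague prior imputes missing covariates
    }
  }
  for (t in 1:T) { b[t] ~ dnorm(0, tau.b) }
  a ~ dnorm(0, 0.01)
  tau.y <- pow(sigma.y, -2);  sigma.y ~ dunif(0, 100)
  tau.b <- pow(sigma.b, -2);  sigma.b ~ dunif(0, 100)
}"
# Write to file and fit with the rjags or R2jags package
```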
| null | CC BY-SA 3.0 | null | 2011-05-20T18:32:50.783 | 2011-05-20T18:32:50.783 | null | null | 3058 | null |
11052 | 2 | null | 11048 | 2 | null | If the proportion of illegitimate sales were known to be the same in all stores, then your problem would be strictly a statistical one. As it is, it's first and foremost a matter of content knowledge: on what bases are you able to estimate the proportion in this store as compared to in others? Do you have any other evidence to bring to bear on this situation? Others more skilled in Bayesian methods will be able to give you specific steps to take so that you can create a credible interval for the proportion in the store in question.
| null | CC BY-SA 3.0 | null | 2011-05-20T18:36:29.503 | 2011-05-20T18:36:29.503 | null | null | 2669 | null |
11053 | 2 | null | 11050 | 5 | null | Multiple Regression Coefficients and the Expected Sign Fallacy
One of my favorite illustrations of a statistical concept through a data analysis exercise is the deconstruction of a multiple regression into multiple bivariate regressions.
Objectives
- To clarify the meaning of regression coefficients in the presence of multiple predictors.
- To illustrate why it is incorrect to "expect" a multiple regression coefficient to have a particular sign based on its bivariate relationship with Y when the predictors are correlated.
Concept
The regression coefficients in a multiple regression model represent the relationship between (a) the part of a given predictor variable (x1) that is not related to all of the other predictor variables (x2...xN) in the model and (b) the part of the response variable (Y) that is not related to all of the other predictor variables (x2...xN) in the model. When there is correlation among the predictors, the signs associated with the predictor coefficients represent the relationships among those residuals.
Exercise
- Generate some random data for two predictors (x1, x2) and a response (y).
- Regress y on x2 and store the residuals (r1).
- Regress x1 on x2 and store the residuals (r2).
- Regress the residuals of step 2 (r1) on the residuals of step 3 (r2).
The coefficient for r2 in step 4 will be the coefficient of x1 in the multiple regression model with x1 and x2. You could do the same for x2 by partialing out x1 from both y and x2.
Here's some R code for this exercise.
```
set.seed(3338)
x1 <- rnorm(100)
x2 <- rnorm(100)
y <- 0 + 2*x1 + 5*x2 + rnorm(100)
lm(y ~ x1 + x2) # Multiple regression Model
ry1 <- residuals( lm( y ~ x2) ) # The part of y not related to x2
rx1 <- residuals( lm(x1 ~ x2) ) # The part of x1 not related to x2
lm( ry1 ~ rx1)
ry2 <- residuals( lm( y ~ x1) ) # The part of y not related to x1
rx2 <- residuals( lm(x2 ~ x1) ) # The part of x2 not related to x1
lm( ry2 ~ rx2)
```
Here are the relevant outputs and results.
```
Call:
lm(formula = y ~ x1 + x2)
Coefficients:
(Intercept)           x1           x2
   -0.02410      1.89527      5.07549
Call:
lm(formula = ry1 ~ rx1)
Coefficients:
(Intercept)          rx1
 -2.854e-17    1.895e+00
Call:
lm(formula = ry2 ~ rx2)
Coefficients:
(Intercept)          rx2
  3.406e-17    5.075e+00
```
| null | CC BY-SA 3.0 | null | 2011-05-20T18:39:55.037 | 2011-05-21T06:17:49.747 | 2011-05-21T06:17:49.747 | 2116 | 485 | null |
11054 | 1 | 11057 | null | 7 | 7223 | For my classification problem, I am trying to classify an object as Good or Bad. I have been able to create a good first classification step that separates the data into 2 groups using SVM.
After tuning the parameters for the SVM using a training/holdout set (75% training, 25% holdout), I obtained the following results from the holdout set: Group 1 (model classified as Bad) consisted of 99% Bad objects, and Group 2 (model classified as Good) consisted of about 45% Good objects and 55% Bad objects. I verified the performance of the model using k-fold CV (k=5) and found the model to be stable and perform relatively consistently in terms of misclassification rates.
Now, I want to pass these objects through another round of classification by training another model (may or may not be SVM) on my group 2 of maybe good/maybe bad objects to try and correctly classify this second group now that I have gotten rid of the obviously bad objects.
I had a couple of thoughts, but am unsure of how to proceed.
(1) My first idea was to use the data from the classified objects from the Holdout set to train another model. I was able to train another classification model from the results of the holdout set. The problem is I am using less than 25% of the original data, and I am worried of overfitting on a very small subset of my data.
(2) My second idea was to gather the results of the 5-fold CV to create another dataset. My reasoning is that since the data is partitioned into 5 parts, and each part is classified into two groups from a model trained by the other 4 parts, I thought that I could aggregate the predicted results of the 5 parts to obtain a classified version of my original dataset and continue from there.
The only problem is, I have a sinking feeling that both methods are no good. Could CV shed some light on some possible next steps?
EDIT
Sorry, my question was badly worded. Let me try to clarify what I am trying to do. It can be thought of like a tree...
- Let me call the original dataset Node 0.
- I used classification method 1 to split Node 0 into Node 1 and Node 2.
Node 1 has low misclassification rate (Mostly consists of bad objects)
Node 2 has high misclassification rate (Roughly even mix of good and bad objects)
- I now want to use classification method 2 to split Node 2 into Node 3 and 4
The "classification method" can be anything (LDA, QDA, SVM, CART, Random Forest, etc). So I guess what I am trying to achieve here is a "classification" tree (not CART), where each node is subjected to a different classification method to obtain an overall high "class purity". Basically, I want to use a mix of different classification methods to obtain reasonable results.
My problem lies in the loss of training data after the first split. I run out of usable data after I run it through "classification method 1", which was SVM in my case.
| Training multiple models for classification using the same dataset | CC BY-SA 4.0 | null | 2011-05-20T18:43:03.960 | 2019-03-12T17:41:00.700 | 2019-03-12T17:41:00.700 | 128677 | 2252 | [
"machine-learning",
"classification"
] |
11055 | 2 | null | 11049 | 3 | null | I think you are really looking for outliers in your data: small values for which most 'similar' values (i.e. values whose covariates hold the same values) are considerably bigger.
You could look at [outliers in MV data](https://stats.stackexchange.com/questions/213/what-is-the-best-way-to-identify-outliers-in-multivariate-data) for this for starters, although you may be able to utilize some particularities of your situation (you are only interested in outliers in one variable, and only in one direction).
If your sample size is big enough for that, you can simply use the outlier definition as used in most boxplots (1.5 IQR away from the outer quartiles). If not, you should apply some truly parametric way of detecting outliers (e.g. residuals in a regression).
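As a quick sketch (toy data only), the boxplot rule can be applied one-sided to catch only the suspiciously small counts:

```r
# Toy monthly counts for one region, with one suspicious low value
x <- c(98, 102, 101, 97, 100, 103, 99, 5)

q   <- quantile(x, c(0.25, 0.75))
iqr <- q[2] - q[1]

# One-sided rule: flag only values below Q1 - 1.5 * IQR
low_outliers <- x[x < q[1] - 1.5 * iqr]
low_outliers  # 5
```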
| null | CC BY-SA 3.0 | null | 2011-05-20T19:34:15.570 | 2011-05-20T19:34:15.570 | 2017-04-13T12:44:24.667 | -1 | 4257 | null |
11056 | 2 | null | 11054 | 6 | null | I would use the same training dataset for both models, and use the same CV-folds for tuning. Don't use ANY of the 25% hold-out for training or tuning. Once you've fit your 2 models on the 75% training sample, evaluate your performance using the holdout.
If you are using R, the caret package has functions for creating folds on a dataset that you can re-use to tune multiple models and then evaluate their predictive accuracy. If you would like I can help you with some example code.
Edit:
Here is the promised code, modified from the [vignette](http://cran.r-project.org/web/packages/caret/vignettes/caretTrain.pdf) for the package [caret](http://cran.r-project.org/web/packages/caret/index.html):
```
#Setup
rm(list = ls(all = TRUE)) #CLEAR WORKSPACE
set.seed(123)
#Pretend we only care about virginica
Data <- iris
virginica <- Data$Species=='virginica'
Data$Species <- NULL
#Look at the variable relationships
library(PerformanceAnalytics)
chart.Correlation(Data,col=ifelse(virginica,1,2))
#Create cross-validation folds to use for multiple models
#Use 10-fold CV, repeat 5 times
library(caret)
MyFolds <- createMultiFolds(virginica, k = 10, times = 5)
MyControl <- trainControl(method = "repeatedCV", index = MyFolds,
summaryFunction = twoClassSummary,
classProbs = TRUE)
#Define Equation for Models
fmla <- as.formula(paste("virginica ~ ", paste(names(Data), collapse= "+")))
#Fit some models
Data$virginica <- as.factor(ifelse(virginica,'Yes','No'))
svmModel <- train(fmla,Data,method='svmRadial',
tuneLength=3,metric='ROC',trControl=MyControl)
rfModel <- train(fmla,Data,method='rf',
tuneLength=3,metric='ROC',trControl=MyControl)
#Compare Models
resamps <- resamples(list(
SVM = svmModel,
RandomForest = rfModel
))
summary(resamps)
densityplot(resamps,auto.key = TRUE, metric='ROC')
```
| null | CC BY-SA 3.0 | null | 2011-05-20T19:38:54.947 | 2011-05-24T18:00:35.007 | 2011-05-24T18:00:35.007 | 2817 | 2817 | null |
11057 | 2 | null | 11054 | 3 | null | Just to make sure that we are on the same page, I take it from your description that you consider a supervised learning problem where you know the Good/Bad status of your objects and where you have a vector of features for each object that you want to use to classify the object as either Good or Bad. Moreover, the result of training an SVM is to give a classifier, which, on the holdout data, gives almost no false Bad predictions, but 55% false Good predictions. I have not personally worked with problems with such a huge difference in error rates on the two groups. It suggests to me that the distribution of features in the two groups overlap, but that the distribution of features in the Bad group is more spread out. Like two Gaussian distributions with almost the same mean but larger variance for the group of Bad objects. If that is the case, I would imagine that it will be difficult, if not impossible, to improve much on the error rate for the Good predictions. There may be other explanations that I am not aware of.
Having said that, I think it is a sensible strategy to combine classification procedures in a hierarchical way as you suggest. First, one classifier splits the full training set into two groups, and then other classifiers split each of the groups into two groups etc. In fact, that is what classification trees do, but typically using very simple splits in each step. I see no formal problem in training whatever model you like on the training data that is classified as being Good by the SVM. You don't need to use the holdout data. In fact, you shouldn't, if you need the holdout data for assessment of the model.
Your second suggestion is closely related to just using the group classified as Good from your training data to train a second model. I don't see any particular reason to use CV-based classifications to obtain this group. Just remember that if you are going to use CV, the entire training procedure must be carried out each time.
My suggestion is to first get a better understanding of what the feature distributions look like in the two groups from low-dimensional projections and exploratory visualizations. It might shed some light on why the error rate on the Good classifications is so large.
| null | CC BY-SA 3.0 | null | 2011-05-20T19:51:32.450 | 2011-05-20T20:03:08.883 | 2011-05-20T20:03:08.883 | 4376 | 4376 | null |
11058 | 2 | null | 3104 | 4 | null | At risk of sounding too simplistic, I think the best problem to introduce depends on who you are talking to.
For example, my arts friends freak out when I talk about math and stats, but then I tell them they shouldn't be afraid because they speak math all the time. So I give them examples such as "What are the odds it will rain today?": you don't acknowledge you're doing the computation, but you are assessing some probability in your mind. So for them I like to pick very relatable problems dealing with weather and emotions (for example, "Given that you are depressed, how likely is it to be raining outside?") and show them the math behind how we might answer that. Then later, after they have discovered an intuition for mathematical problem solving, I tell them what the terminology for it is. AND yes, I have gotten my arts friends to sit willingly through that!
I personally learned statistics better when I had a problem in my domain I understood very well. I find when you understand a problem very well it becomes easier to understand the math. I think too often people just learn by rote and look to fit problems they've already seen onto new ones rather than trying to understand each problem.
| null | CC BY-SA 3.0 | null | 2011-05-20T19:55:14.197 | 2011-05-20T19:55:14.197 | null | null | 4673 | null |
11059 | 2 | null | 11050 | 9 | null | As I have to explain variable selection methods quite often, not in a teaching context, but for non-statisticians requesting aid with their research, I love this extremely simple example that illustrates why single variable selection is not necessarily a good idea.
If you have this dataset:
```
y X1 X2
1 1 1
1 0 0
0 1 0
0 0 1
```
It doesn't take long to realize that both X1 and X2 individually are completely noninformative for y (when X1 and X2 are equal, y is 'certain' to be 1 - I'm ignoring sample size issues here; just assume these four observations are the whole universe). However, the combination of the two variables is completely informative. As such, it becomes easier for people to understand why it is not a good idea to (e.g.) only check the p-value for models with each individual variable as a regressor.
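The point is easy to verify in R: individually each predictor has a slope of exactly zero, while together (with their interaction) they determine y perfectly.

```r
y  <- c(1, 1, 0, 0)
x1 <- c(1, 0, 1, 0)
x2 <- c(1, 0, 0, 1)

coef(lm(y ~ x1))["x1"]  # 0: x1 alone says nothing about y
coef(lm(y ~ x2))["x2"]  # 0: likewise for x2
resid(lm(y ~ x1 * x2))  # all 0: together they fit y exactly
```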
In my experience, this really gets the message through.
| null | CC BY-SA 3.0 | null | 2011-05-20T20:15:32.913 | 2011-05-20T20:15:32.913 | null | null | 4257 | null |
11060 | 1 | 11063 | null | 6 | 4430 | I'm hoping someone can help me sort out how the proportion comparisons using a GLM works in R.
I'm comparing hatch success among multiple years (and later, sites). I've used a GLM to compare among years by making a txt file where the success column contains the number of chicks that hatched, and failures are the number of eggs in each nest that didn't hatch. Each row corresponds to a nest. (I did this according to descriptions in "The R Book" by Michael Crawley)
The code looks like this:
```
y<-cbind(success,fail)
hsmodel1<-glm(y~year,binomial)
```
This tests for differences in hatch success among years as #of chicks hatched/eggs laid, correct? Not # of chicks hatched /nest?
Secondly, if my species lays up to 2 eggs, is using this proportional method still valid, since there could be 0, 1 or 2 successes or failures? I'm pretty sure it is, but I am starting to doubt myself because of a question a colleague asked today.
Thanks!
Mog
| GLM for proportional data | CC BY-SA 3.0 | null | 2011-05-20T20:57:49.180 | 2011-05-21T01:06:56.623 | 2011-05-21T01:06:56.623 | 4238 | 4238 | [
"r",
"generalized-linear-model",
"proportion"
] |
11061 | 2 | null | 3104 | 1 | null | For a gentle introduction, I like examples using 2x2 contingency tables. The diagnostic testing example as mentioned above, where the Probability of a positive test result given disease is not equal to the Probability of disease given a positive test result. Also, one can use designs with different sampling schemes, such as the cohort study vs. the case-control study, to illustrate how that affects what probabilities can be estimated.
| null | CC BY-SA 3.0 | null | 2011-05-20T21:19:25.467 | 2011-05-20T21:19:25.467 | null | null | 2312 | null |
11063 | 2 | null | 11060 | 2 | null | Logistic regression, like the model here, assumes a binomial distribution - or, as I prefer to think of it, a Bernoulli distribution per event. I know of no case or reason where this cannot safely be assumed (either an egg hatches or it doesn't, and in a population you can always assign a probability to this). There is no reason the upper limit on the number of events per nest should influence this.
That distribution, by linearity, is assumed conditionally on the year, with the log odds linear in year. This could be faulty, but that has nothing to do with the possible number of events, just with the fact that any model can be wrong.
You can (with `predict(type="response")`) get the probability of an egg hatching, conditional on the year from this type of model (technically that is not exactly the same as a rate, but for most practical purposes, it is).
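For instance, a minimal sketch with simulated data (all names hypothetical) of getting the year-conditional hatching probability this way:

```r
## Sketch with simulated data: probability of an egg hatching, by year.
set.seed(42)
year    <- factor(rep(2008:2010, each = 30))
eggs    <- rep(2, 90)                      # clutch size of up to 2 eggs
p       <- c(0.6, 0.7, 0.8)[as.integer(year)]
hatched <- rbinom(90, size = eggs, prob = p)
fit <- glm(cbind(hatched, eggs - hatched) ~ year, family = binomial)
predict(fit, newdata = data.frame(year = factor(2008:2010)),
        type = "response")                 # one estimated probability per year
```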
| null | CC BY-SA 3.0 | null | 2011-05-20T22:08:44.360 | 2011-05-20T22:08:44.360 | null | null | 4257 | null |
11064 | 1 | 11065 | null | 21 | 10303 | In a number of statistical packages including SAS, SPSS and maybe more, there is an option to "suppress the intercept". Why would you want to do that?
| Why would one suppress the intercept in linear regression? | CC BY-SA 3.0 | null | 2011-05-20T22:18:14.793 | 2019-11-04T01:23:45.863 | 2019-11-04T01:23:45.863 | 11887 | 333 | [
"regression",
"intercept"
] |
11065 | 2 | null | 11064 | 16 | null | If for some reason you know the intercept (particularly if it is zero), you can avoid wasting the variance in your data for estimating something you already know, and have more confidence in the values you do have to estimate.
A somewhat oversimplified example is if you already know (from domain knowledge) that one variable is (on average) a multiple of another, and you are trying to find that multiple.
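A hedged R sketch of that oversimplified example (simulated data; the true multiple here is 3):

```r
## If theory says y = beta * x with no intercept, fit through the origin.
set.seed(1)
x <- runif(50, 1, 10)
y <- 3 * x + rnorm(50)
coef(lm(y ~ x - 1))   # "- 1" (equivalently "+ 0") suppresses the intercept
```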
| null | CC BY-SA 3.0 | null | 2011-05-20T22:30:41.580 | 2011-05-20T22:30:41.580 | null | null | 4257 | null |
11066 | 1 | 11067 | null | 0 | 10575 | I'm estimating a GLM with a bunch of parameters in R.
When I run this:
```
M <- glm( Y ~ factor(X1) + factor(X2) )
summary(M)
```
R only gives me part of the table, then cuts out with the message:
```
[ reached getOption("max.print") -- omitted 621 rows ]
```
The summary table will be big, but I want the whole thing. How do I change the max.print option? I've tried several different versions and nothing works.
Edit: Here's another attempt which gives me a different error.
```
summary(M, options(max.print=100000))
Error in dispersion * covmat.unscaled :
non-numeric argument to binary operator
```
| How do I change the max.print option in R's summary? | CC BY-SA 3.0 | null | 2011-05-20T22:31:13.453 | 2011-05-21T06:14:51.543 | 2011-05-20T22:46:20.803 | 4110 | 4110 | [
"r",
"generalized-linear-model",
"descriptive-statistics"
] |
11067 | 2 | null | 11066 | 2 | null | I just googled "R getOption("max.print")", and found: `options(max.print=5.5E5)`...
| null | CC BY-SA 3.0 | null | 2011-05-20T22:38:56.957 | 2011-05-20T22:38:56.957 | null | null | 4257 | null |
11068 | 2 | null | 11064 | 14 | null | Consider the case of a 3-level categorical covariate. If one has an intercept, that would require 2 indicator variables. Using the usual coding for indicator variables, the coefficient for either indicator variable is the mean difference compared to the reference group. By suppressing the intercept, you would have 3 variables representing the categorical covariate, instead of just 2. A coefficient is then the mean estimate for that group. A more concrete example of where to do this is in political science where one may be studying the 50 states of the United States. Instead of having an intercept and 49 indicator variables for the states, it is often preferable to suppress the intercept and instead have 50 variables.
| null | CC BY-SA 3.0 | null | 2011-05-20T23:48:34.010 | 2011-05-20T23:48:34.010 | null | null | 2312 | null |
11070 | 1 | null | null | 11 | 754 | The "Linear Ballistic Accumulator" model (LBA) is a rather successful model for human behaviour in speeded simple decision tasks. [Donkin et al](http://www.ncbi.nlm.nih.gov/pubmed/19897817) (2009, [PDF](http://mypage.iu.edu/~cdonkin/pubs/brm09b.pdf)) provide code that permits estimating the parameters of the model given human behavioural data, and I've copied this code (with some minor formatting changes) to a gist [here](https://gist.github.com/941314). However, I'd like to make a seemingly minor modification to the model but I'm not sure how to achieve this modification in the code.
To start with the canonical model, LBA represents each response alternative as a competitor in a rather strange race such that the competitors can differ in the following characteristics:
- Starting position: this varies from race to race according to a uniform distribution bounded by U(0,X1).
- Speed: this is kept constant within a given race (no acceleration) but varies from race to race according to a Gaussian distribution defined by N(X2,X3)
- Finish line position (X4)
Thus, each competitor has its own set of values for X1, X2, X3 and X4.
The race is repeated many times, with the winner and their time recorded after each race. A constant of X5 is added to every winning time.
Now, the modification I want to make is to swap the variability in the starting point to the finish line. That is, I want the start point to be zero for all competitors and all races, thereby eliminating X1, but I want to add a parameter, X6, that specifies the size of the range of a uniform distribution centered on X4 from which each competitor's finish line is sampled for each race. In this model, then, each competitor will have values for X2, X3, X4, and X6, and we still have the across-competitor value for X5.
I'd be very grateful if anyone is willing to help with this.
Oh, and to provide a mapping from the "X" named parameters described above to the variable names used by the LBA code I linked: X1 = x0max; X2 = driftrate; X3 = sddrift; X4 = chi; X5 = Ter.
| Modifying linear ballistic accumulator (LBA) simulation in R | CC BY-SA 3.0 | null | 2011-05-21T03:05:59.953 | 2012-07-05T08:40:06.213 | 2012-04-06T07:39:37.327 | 1766 | 364 | [
"r",
"stochastic-processes"
] |
11072 | 1 | null | null | 1 | 449 | This is perhaps basic but I couldn't find a suitable reference.
I have a regression model with a rather complicated link function.
So $\vec{x}$ is a vector of continuous predictors, and $z$ is a binary variable such
that according to the model: $Pr(z=1) = f(\vec{x})$ for some (known) function $f$.
I observe data of the form $(\vec{x}^{(1)}, z^{(1)}), (\vec{x}^{(2)}, z^{(2)}), \ldots, (\vec{x}^{(n)}, z^{(n)})$ and want to test the null hypothesis that the above model is the one generating the data, that is, compute a statistic and reject the model if the statistic is too extreme. What would be a good goodness-of-fit test for this case? Is there a 'standard' way to test for this?
One possibility is binning the data points by the value of $f(\vec{x})$ (say into $10$ bins: $[0,0.1), \ldots, [0.9,1]$) and performing a chi-square test for expected vs. observed proportions of $z$'s in each bin. Another is to bin the multidimensional space of the $\vec{x}$'s (say, if $\vec{x}$ is two-dimensional, we can divide $R^2$ into $100$ squares and compute a chi-square statistic for observed vs. expected counts in each square). Yet another is not binning at all but just computing $\sum_i (z^{(i)} - f(\vec{x}^{(i)}))^2/f(\vec{x}^{(i)})$,
but this seems to cause numerical issues since sometimes $f(\vec{x}^{(i)})$ is very small.
Are there other known approaches? which test would be the most appropriate?
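For what it's worth, the first (binned) idea above might look roughly like this in R, with data simulated under the null (the names and the form of $f$ are just placeholders):

```r
set.seed(1)
n <- 500
x <- rnorm(n)
f <- plogis(0.5 * x)             # stand-in for the known f(x)
z <- rbinom(n, 1, f)             # data generated under the null
bin  <- cut(f, breaks = seq(0, 1, 0.1), include.lowest = TRUE)
obs  <- tapply(z, bin, sum)      # observed z = 1 counts per bin
expd <- tapply(f, bin, sum)      # expected counts per bin
nb   <- tapply(f, bin, length)   # bin sizes
keep <- !is.na(nb) & nb > 0
## chi-square over successes and failures in each non-empty bin
X2 <- sum((obs[keep] - expd[keep])^2 / expd[keep] +
          (obs[keep] - expd[keep])^2 / (nb[keep] - expd[keep]))
```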
| Goodness of fit for a regression with multiple predictors | CC BY-SA 3.0 | null | 2011-05-21T03:22:11.233 | 2011-05-26T18:25:22.063 | 2011-05-21T05:25:53.687 | 3036 | 3036 | [
"regression",
"goodness-of-fit"
] |
11073 | 2 | null | 11039 | 1 | null | You originally asked this in reference to R (on stackoverflow), so I'll answer with reference to R.
I'm not sure exactly what your goal is, but I'd guess that you want to estimate a panel data model. In that context, you have an "unbalanced panel" and if you want to stick with R, I'd recommend the package plm. The package documentation describes how it deals with unbalanced panels.
You will need to sort out how to model panel data and interpret the results--I'd suggest looking at an econometrics textbook (whichever one is at the appropriate level for you; these models are covered in almost all econometrics texts). The book "Applied Econometrics with R" also gives some examples and a short, fairly accessible (if terse) introduction to these and other models.
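A hedged sketch of a fixed-effects fit with plm, using its bundled Grunfeld data; an unbalanced panel is specified the same way, since plm infers the structure from the id/time index:

```r
library(plm)
data("Grunfeld", package = "plm")
fe <- plm(inv ~ value + capital, data = Grunfeld,
          index = c("firm", "year"),  # id and time variables
          model = "within")           # fixed-effects ("within") estimator
summary(fe)
```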
| null | CC BY-SA 3.0 | null | 2011-05-21T04:29:57.587 | 2011-05-21T05:08:42.717 | 2011-05-21T05:08:42.717 | 4699 | 4699 | null |
11074 | 2 | null | 11066 | 3 | null | You may want to use `sink`. This will divert output to a file, which then you can inspect.
```
sink(file="output.txt")
summary(M)
sink(NULL)
```
The error you get in your code is because the second argument of function `summary.glm` is `dispersion`, which according to help page should be either numeric or NULL. You supply the output of `options(max.print=100000)` which is not what R expects, hence the error.
| null | CC BY-SA 3.0 | null | 2011-05-21T06:14:51.543 | 2011-05-21T06:14:51.543 | null | null | 2116 | null |
11076 | 2 | null | 11064 | 3 | null | To illustrate @Nick Sabbe's point with a specific example.
I once saw a researcher present a model of the age of a tree as a function of its width. It can be assumed that when the tree is at age zero, it effectively has a width of zero. Thus, an intercept is not required.
| null | CC BY-SA 3.0 | null | 2011-05-21T06:57:45.463 | 2011-05-21T06:57:45.463 | null | null | 183 | null |
11077 | 2 | null | 10167 | 4 | null | The following are just a few points:
- If you have departure from normality then bootstrapping is often a good idea.
- You mention using "1000" replicates. Increasing the number of replicates increases computational time and accuracy. Thus, sometimes when first setting up your model, you'll set the number of replicates at a level that is relatively quick to run. However, for your final model that you report, you may want to push up the number of replicates to 10,000 or more.
If the departure of your data from normality is mild, then coefficient and model-fit tests that assume normality are often a reasonable approximation. In particular, when you have a big sample, as is often the case with structural equation modelling, assumption checks that perform a significance test with normality as the null hypothesis are often overly sensitive for the purpose of deciding whether to persist with methods that assume normality. I would pay more attention to actual indices of non-normality like skewness and kurtosis values (or, if your intuition is sufficiently trained, check out histograms of the variables).
- If the departure from normality is mild, I would expect that both standard and bootstrapped approaches should yield similar results. Showing that your results are robust to such analytic decisions may provide you with greater confidence in your results.
| null | CC BY-SA 3.0 | null | 2011-05-21T07:31:43.063 | 2011-05-21T07:54:27.303 | 2011-05-21T07:54:27.303 | 183 | 183 | null |
11078 | 2 | null | 11032 | 7 | null | Baum-Welch is an optimization algorithm for computing the maximum-likelihood estimator. For hidden Markov models the likelihood surface may be quite ugly, and it is certainly not concave. With good starting points the algorithm may converge faster and towards the MLE.
To predict hidden states with the Viterbi algorithm, you need the transition probabilities. If you already know them, there is no need to re-estimate them using Baum-Welch; the re-estimation is computationally more expensive than the prediction.
| null | CC BY-SA 3.0 | null | 2011-05-21T08:51:14.170 | 2011-05-21T08:51:14.170 | null | null | 4376 | null |
11079 | 1 | null | null | 4 | 30625 | I need an help because I don´t know if the command for the ANOVA analysis I am
performing in R is correct. Indeed using the function aov I get the following error: `In aov (......) Error() model is singular`
The structure of my table is the following: subject, stimulus, condition, sex, response
Example:
```
subject stimulus condition sex response
subject1 gravel EXP1 M 59.8060
subject2 gravel EXP1 M 49.9880
subject3 gravel EXP1 M 73.7420
subject4 gravel EXP1 M 45.5190
subject5 gravel EXP1 M 51.6770
subject6 gravel EXP1 M 42.1760
subject7 gravel EXP1 M 56.1110
subject8 gravel EXP1 M 54.9500
subject9 gravel EXP1 M 62.6920
subject10 gravel EXP1 M 50.7270
subject1 gravel EXP2 M 70.9270
subject2 gravel EXP2 M 61.3200
subject3 gravel EXP2 M 70.2930
subject4 gravel EXP2 M 49.9880
subject5 gravel EXP2 M 69.1670
subject6 gravel EXP2 M 62.2700
subject7 gravel EXP2 M 70.9270
subject8 gravel EXP2 M 63.6770
subject9 gravel EXP2 M 72.4400
subject10 gravel EXP2 M 58.8560
subject11 gravel EXP1 F 46.5750
subject12 gravel EXP1 F 58.1520
subject13 gravel EXP1 F 57.4490
subject14 gravel EXP1 F 59.8770
subject15 gravel EXP1 F 55.5480
subject16 gravel EXP1 F 46.2230
subject17 gravel EXP1 F 63.3260
subject18 gravel EXP1 F 60.6860
subject19 gravel EXP1 F 59.4900
subject20 gravel EXP1 F 52.6630
subject11 gravel EXP2 F 55.7240
subject12 gravel EXP2 F 66.4220
subject13 gravel EXP2 F 65.9300
subject14 gravel EXP2 F 61.8120
subject15 gravel EXP2 F 62.5160
subject16 gravel EXP2 F 65.5780
subject17 gravel EXP2 F 59.5600
subject18 gravel EXP2 F 63.8180
subject19 gravel EXP2 F 61.4250
.....
.....
.....
.....
```
As you can notice each subject repeated the evaluation in 2 conditions (EXP1 and EXP2).
What I am interested in is to know if there are significant differences between
the evaluations of the males and the females.
This is the command I used to perform the ANOVA with repeated measures:
```
aov1 = aov(response ~ stimulus*sex + Error(subject/(stimulus*sex)), data=scrd)
summary(aov1)
```
I get the following error:
```
> aov1 = aov(response ~ stimulus*sex + Error(subject/(stimulus*sex)), data=scrd)
Warning message:
In aov(response ~ stimulus * sex + Error(subject/(stimulus * sex)), :
Error() model is singular
> summary(aov1)
Error: subject
Df Sum Sq Mean Sq F value Pr(>F)
sex 1 166.71 166.72 1.273 0.274
Residuals 18 2357.29 130.96
Error: subject:stimulus
Df Sum Sq Mean Sq F value Pr(>F)
stimulus 6 7547.9 1257.98 35.9633 <2e-16 ***
stimulus:sex 6 94.2 15.70 0.4487 0.8445
Residuals 108 3777.8 34.98
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Error: Within
Df Sum Sq Mean Sq F value Pr(>F)
Residuals 420 9620.6 22.906
>
```
The thing is that, looking at the data, it is evident to me that there is a difference between males and females, because for each stimulus the mean is always higher for males than for females.
Therefore the ANOVA should indicate significant differences....
Can anyone suggest where I am going wrong?
Finally, I know that R has two packages for linear mixed models, nlme and lme4, but I have never used them and I don't know whether I need them for my case. If so, could you please provide a quick R example of a command that could solve my problem?
Thanks in advance!
Best regards
---
Dear all,
I am stuck now ;-( I understood everything you suggested, but I still don't get significance in the ANOVA results, and there must be an error somewhere, because the results cannot be non-significant. Indeed, looking at the means for each stimulus, it is possible to notice that males always gave higher evaluations than females.
To check this, I set aside for a moment the repeated-measures structure and performed a separate ANOVA on each of the two conditions (EXP1 and EXP2) during which the evaluations were given.
What I get is significant differences between males and females, in both EXP1 and EXP2.
Now, why don't I get the same behavior when I perform the ANOVA with repeated measures?
My design is the following:
-sex is a between-subjects factor (with two levels)
-stimulus is a within-subjects factor (with 3 assumed levels)
-condition is a within-subjects factor (with 2 levels)
-all factors are fully crossed
I tried both of the suggested approaches, but without achieving significance:
```
mDf <- aggregate(response ~ subject + sex, data=scrd, FUN=mean)
summary(aov(response ~ sex, data=mDf)) # ANOVA with just the between-effect
```
and
```
aov1 = aov(response ~ sex*stimulus*condition + Error(subject/(stimulus*condition)), data=scrd)
summary(aov1)
```
If instead I perform the ANOVA on the two subtables for EXP1 and EXP2, I get significant differences.
```
table_EXP1 <- subset(scrd, condition == "EXP1")
table_EXP2 <- subset(scrd, condition == "EXP2")
fit_table_EXP1 <- lm(response ~ stimulus*sex, data=table_EXP1)
summary(fit_table_EXP1 )
anova(fit_table_EXP1 )
fit_table_EXP2 <- lm(response ~ stimulus*sex, data=table_EXP2)
summary(fit_table_EXP2)
anova(fit_table_EXP2)
```
....how can this be possible?... It is a contradiction....
HELP!
Please enlighten me!
Thanks in advance
Cheers
| Problem with ANOVA repeated measures: "Error() model is singular" | CC BY-SA 3.0 | null | 2011-05-21T12:24:06.390 | 2017-02-28T01:46:26.293 | 2011-05-28T20:12:23.450 | 919 | 4701 | [
"r",
"anova",
"mixed-model"
] |
11080 | 2 | null | 11009 | 68 | null | In my experience, not only is it necessary to have all lower order effects in the model when they are connected to higher order effects, but it is also important to properly model (e.g., allowing to be nonlinear) main effects that are seemingly unrelated to the factors in the interactions of interest. That's because interactions between $x_1$ and $x_2$ can be stand-ins for main effects of $x_3$ and $x_4$. Interactions sometimes seem to be needed because they are collinear with omitted variables or omitted nonlinear (e.g., spline) terms.
| null | CC BY-SA 3.0 | null | 2011-05-21T12:31:20.447 | 2017-01-01T23:12:33.313 | 2017-01-01T23:12:33.313 | 11887 | 4253 | null |
11081 | 2 | null | 11009 | 7 | null | I would suggest it is simply a special case of model uncertainty. From a Bayesian perspective, you simply treat this in exactly the same way you would treat any other kind of uncertainty, by either:
- Calculating its probability, if it is the object of interest
- Integrating or averaging it out, if it is not of interest, but may still affect your conclusions
This is exactly what people do when testing for "significant effects" by using t-quantiles instead of normal quantiles. Because you have uncertainty about the "true noise level" you take this into account by using a more spread out distribution in testing. So from your perspective the "main effect" is actually a "nuisance parameter" in relation to the question that you are asking. So you simply average out the two cases (or more generally, over the models you are considering). So I would have the (vague) hypothesis:
$$\newcommand{\int}{\mathrm{int}}H_{\int}:\text{The interaction between A and B is significant}$$
I would say that although not precisely defined, this is the question you want to answer here. And note that it is not the verbal statements such as above which "define" the hypothesis, but the mathematical equations as well. We have some data $D$, and prior information $I$, then we simply calculate:
$$P(H_{\int}|DI)=P(H_{\int}|I)\frac{P(D|H_{\int}I)}{P(D|I)}$$
(small note: no matter how many times I write out this equation, it always helps me understand the problem better. weird). The main quantity to calculate is the likelihood $P(D|H_{\int}I)$; this makes no reference to the model, so the model must have been removed using the law of total probability:
$$P(D|H_{\int}I)=\sum_{m=1}^{N_{M}}P(DM_{m}|H_{\int}I)=\sum_{m=1}^{N_{M}}P(M_{m}|H_{\int}I)P(D|M_{m}H_{\int}I)$$
Where $M_{m}$ indexes the mth model, and $N_{M}$ is the number of models being considered. The first term is the "model weight" which says how much the data and prior information support the mth model. The second term indicates how much the mth model supports the hypothesis. Plugging this equation back into the original Bayes theorem gives:
$$P(H_{\int}|DI)=\frac{P(H_{\int}|I)}{P(D|I)}\sum_{m=1}^{N_{M}}P(M_{m}|H_{\int}I)P(D|M_{m}H_{\int}I)$$
$$=\frac{1}{P(D|I)}\sum_{m=1}^{N_{M}}P(DM_{m}|I)\frac{P(M_{m}H_{\int}D|I)}{P(DM_{m}|I)}=\sum_{m=1}^{N_{M}}P(M_{m}|DI)P(H_{\int}|DM_{m}I)$$
And you can see from this that $P(H_{\int}|DM_{m}I)$ is the "conditional conclusion" of the hypothesis under the mth model (this is usually all that is considered, for a chosen "best" model). Note that this standard analysis is justified whenever $P(M_{m}|DI)\approx 1$ - an "obviously best" model - or whenever $P(H_{\int}|DM_{j}I)\approx P(H_{\int}|DM_{k}I)$ - all models give the same/similar conclusions. However if neither are met, then Bayes' Theorem says the best procedure is to average out the results, placing higher weights on the models which are most supported by the data and prior information.
| null | CC BY-SA 3.0 | null | 2011-05-21T14:49:49.257 | 2013-01-09T21:03:55.810 | 2013-01-09T21:03:55.810 | 17230 | 2392 | null |
11083 | 2 | null | 3814 | 16 | null | Using causal language to describe associations in observational data when omitted variables are almost certainly a serious concern.
| null | CC BY-SA 3.0 | null | 2011-05-21T16:04:11.060 | 2011-05-21T16:04:11.060 | null | null | 3748 | null |
11084 | 1 | 11086 | null | 16 | 3947 | I've received a results from a Mann-Whitney rank test that I don't understand.
The median of the 2 populations is identical (6.9). The uppper and lower quantiles of each population are:
- 6.64 & 7.2
- 6.60 & 7.1
The p-value resulting from the test comparing these populations is 0.007. How can these populations be significantly different? Is it due to the spread about the median? A boxplot comparing the 2 shows that the second one has far more outliers than the first.
Thanks for any suggestions.
| Why is the Mann–Whitney U test significant when the medians are equal? | CC BY-SA 3.0 | null | 2011-05-21T16:36:36.803 | 2020-01-20T20:30:16.293 | 2011-05-21T18:43:02.763 | 307 | 4238 | [
"nonparametric",
"median",
"ranks",
"wilcoxon-mann-whitney-test"
] |
11085 | 1 | null | null | 4 | 6449 | I have conducted a search for genetic interactions using a simple dosage model:
Y ~ A + B + AB
where Y is the phenotype (in this case, gene expression values) and A and B are vectors of genotype information for ~500 samples. I wish to determine a significance threshold using permutation testing in order to correct for multiple testing.
To date, I have recalculated the p-values for the interaction term (AB) for 100 permutations (I permuted the phenotype values) and am unsure how to proceed in order to derive a false discovery rate (FDR).
Any suggestions?
Thanks, D.
| False discovery rate from permutation testing? | CC BY-SA 3.0 | null | 2011-05-21T16:39:00.437 | 2016-03-01T20:53:25.667 | 2011-05-21T17:01:38.293 | 930 | 2842 | [
"genetics",
"permutation-test",
"multiple-comparisons"
] |
11086 | 2 | null | 11084 | 11 | null | [FAQ: Why is the Mann-Whitney significant when the medians are equal?](https://stats.idre.ucla.edu/other/mult-pkg/faq/general/faq-why-is-the-mann-whitney-significant-when-the-medians-are-equal/)
| null | CC BY-SA 4.0 | null | 2011-05-21T16:50:26.763 | 2020-01-20T20:30:16.293 | 2020-01-20T20:30:16.293 | 25 | 307 | null |
11087 | 1 | 11089 | null | 56 | 20474 | I was just wondering why regression problems are called "regression" problems. What is the story behind the name?
> One definition for regression: "Relapse to a less perfect or developed state."
| Why are regression problems called "regression" problems? | CC BY-SA 3.0 | null | 2011-05-21T18:25:00.283 | 2021-02-03T23:07:19.587 | 2016-01-29T11:23:55.950 | 28666 | 3541 | [
"regression",
"terminology",
"history",
"etymology"
] |
11088 | 1 | 11092 | null | 6 | 9068 | I am a newbie in stat. I am completing my thesis in [Evolutionary algorithm](http://en.wikipedia.org/wiki/Evolutionary_algorithm). I have to generate some random numbers from [T-distribution](http://en.wikipedia.org/wiki/Student%27s_t-distribution) or [Laplace distribution](http://en.wikipedia.org/wiki/Laplace_distribution). How can I do this?
An easy and simple explanation would be appreciated.
| Random number generation using t-distribution or laplace distribution | CC BY-SA 4.0 | 0 | 2011-05-21T18:53:13.727 | 2018-10-19T21:40:48.917 | 2018-10-19T21:40:48.917 | 11887 | 4319 | [
"distributions",
"matlab",
"random-generation",
"t-distribution",
"laplace-distribution"
] |
11089 | 2 | null | 11087 | 42 | null | The term "regression" was used by Francis Galton in his 1886 paper "Regression towards mediocrity in hereditary stature". To my knowledge he only used the term in the context of [regression toward the mean](http://en.wikipedia.org/wiki/Regression_toward_the_mean). The term was then adopted by others to get more or less the meaning it has today as a general statistical method.
| null | CC BY-SA 3.0 | null | 2011-05-21T18:54:18.390 | 2011-05-21T18:54:18.390 | null | null | 4376 | null |
11090 | 2 | null | 11088 | 6 | null | Easy answer: Use R and get `n` variables for a $t$-distribution with `df` degrees of freedom by `rt(n, df)`. If you don't use R, maybe you can write what language you use, and others may be able to tell precisely what to do.
If you don't use R or another language with a built in random number generator for the $t$-distribution, but you have access to the quantile function, $Q$, for the $t$-distribution and you can generate a uniform random variable $U$ on $[0,1]$ then $Q(U)$ follows a $t$-distribution.
Else take a look at [this brief section](http://en.wikipedia.org/wiki/Student%27s_t-distribution#Monte_Carlo_sampling) in the Wikipedia page.
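A minimal R sketch of the quantile-transform idea, including a Laplace example since base R has no Laplace generator (its inverse CDF is written out directly):

```r
set.seed(1)
u <- runif(10000)                # U ~ Uniform(0, 1)
t_draws <- qt(u, df = 4)         # Q(U) for the t-distribution with 4 df
## Laplace(mu, b) via its inverse CDF:
mu <- 0; b <- 1
lap_draws <- mu - b * sign(u - 0.5) * log(1 - 2 * abs(u - 0.5))
```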
| null | CC BY-SA 3.0 | null | 2011-05-21T19:05:29.677 | 2011-05-21T19:05:29.677 | null | null | 4376 | null |
11091 | 1 | 11094 | null | 8 | 327 | I believe that independent variables $X_1,X_2$ affect the dependent variable $Y$ through a latent variable $Z$ such that
$$
\begin{align}
Y &= \beta_0 + \beta_1Z \\
Z &= \operatorname{Logit}^{-1}(\beta_2X_1 + \beta_3X_2) \\
\\
Y &= \beta_0 + \beta_1\operatorname{Logit}^{-1}(\beta_2X_1 + \beta_3X_2)
\end{align}
$$
Is it possible to estimate $\beta_2$ and $\beta_3$, given $Y$?
| Estimating effect of latent variable in regression | CC BY-SA 3.0 | null | 2011-05-21T19:13:02.953 | 2017-08-16T15:34:29.873 | 2017-08-16T15:34:29.873 | 28666 | 82 | [
"nonlinear-regression"
] |
11092 | 2 | null | 11088 | 9 | null | Here's how to do this in Matlab using [TINV](http://www.mathworks.com/help/toolbox/stats/tinv.html) from that statistics toolbox:
```
%# choose the degrees of freedom
df = 4; %# note you can also choose an array of df's if necessary
%# create a vector of 100,000 uniformly distributed random variables
uni = rand(100000,1);
%# look up the corresponding t-values
out = tinv(uni,df);
```
With a more recent version of Matlab, you can also simply use [TRND](http://www.mathworks.com/help/toolbox/stats/trnd.html) to create the random numbers directly.
```
out = trnd(df, 100000, 1);
```
Here's the histogram of `out` 
EDIT Re:merged question
Matlab has no built-in function for drawing numbers from a Laplace distribution. However, there is the function [LAPRND](http://www.mathworks.com/matlabcentral/fileexchange/13705-laplacian-random-number-generator) from the Matlab File Exchange that provides a well-written implementation.
| null | CC BY-SA 3.0 | null | 2011-05-21T20:47:23.670 | 2011-05-22T17:35:16.307 | 2011-05-22T17:35:16.307 | 198 | 198 | null |
11093 | 1 | null | null | 5 | 8305 | Apologies for what is probably a very basic question. I have looked around both here and in the usual places and haven't had any luck.
I have read that there are at least two methods for linearly transforming data so that you can give your distribution a certain desired standard deviation. What are they and are there cases where you'd want to use one method rather than another?
Just for concreteness's sake, let's say you have test scores from 0-50, a mean of 35 and sd of 10, and you wanted to rescale so the sd is 15.
| Rescaling for desired standard deviation | CC BY-SA 3.0 | null | 2011-05-21T21:07:11.437 | 2011-05-21T22:38:27.183 | null | null | 52 | [
"data-transformation",
"standard-deviation"
] |
11094 | 2 | null | 11091 | 8 | null | One answer is "no." Another is, "of course."
### No
To simplify notation, let $\lambda(x) = 1/(1 + \exp(-x))$, the inverse logit. Because $\lambda(x) = 1 - \lambda(-x)$,
$$\beta_0 + \beta_1 \lambda(x) = (\beta_0 + \beta_1) - \beta_1 \lambda(-x).$$
Therefore it is impossible to distinguish the parameters $(\beta_0, \beta_1, \beta_2, \beta_3)$ from $(\beta_0+\beta_1, -\beta_1, -\beta_2, -\beta_3)$.
### Of course
Let us stipulate that the first nonzero element of $(\beta_1, \beta_2, \beta_3)$ must be positive. That resolves the indeterminacy. We still need a model for the errors. If we suppose, for instance, that $Y - \left(\beta_0 + \beta_1 \lambda(\beta_2 X_1 + \beta_3 X_2)\right)$ has a Normal distribution and the various $Y$'s are independent, then we can use least squares to estimate the parameters. There is no exact solution to this nonlinear optimization problem, but it is straightforward to do numerically.

This graphic shows 50 points generated with standard Normal values for $X_1$ and $X_2$, parameter $\beta = (1,2,1/2,-1)$, with iid Normal errors of standard deviation 1/2. The surface shows the fit, $\hat{\beta} = (2.68, -1.23, -0.89, 1.75) \sim (1.45, 1.23, 0.89, -1.75)$.
Least squares is the maximum likelihood with iid Normal errors. With another error distribution, use MLE directly. You can obtain asymptotic confidence intervals for the parameters in the standard ways.
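A hedged sketch of the least-squares fit in R with `nls` (`plogis` is the inverse logit; the data are simulated, and the starting values are chosen to fix the sign convention):

```r
set.seed(17)
x1 <- rnorm(100); x2 <- rnorm(100)
y  <- 1 + 2 * plogis(0.5 * x1 - x2) + rnorm(100, sd = 0.5)
fit <- nls(y ~ b0 + b1 * plogis(b2 * x1 + b3 * x2),
           start = list(b0 = 0, b1 = 1, b2 = 1, b3 = -1))
coef(fit)   # estimates of (beta0, beta1, beta2, beta3)
```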
| null | CC BY-SA 3.0 | null | 2011-05-21T21:19:28.657 | 2011-05-21T21:19:28.657 | 2020-06-11T14:32:37.003 | -1 | 919 | null |
11095 | 2 | null | 11093 | 5 | null | The SD is directly proportional to the data. Therefore, to change it from 10 to 15 = 1.5 * 10, multiply all scores by 1.5. The other way is to multiply all scores by -1.5, because negating all values does not change the SD. Of course you can also add an arbitrary constant to all the scores, too, without changing the SD. That is an exhaustive description of the linear transformations of the data that change the SD to the desired value.
You would use the negative multiple when you want to reverse the order of the data.
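For the concrete case in the question (sd 10 rescaled to sd 15), a sketch with hypothetical scores:

```r
x  <- c(20, 30, 35, 40, 50)          # hypothetical test scores
y1 <- 1.5 * x                        # sd becomes 1.5 times the original
y2 <- (x - mean(x)) * 1.5 + mean(x)  # same sd change, mean preserved
sd(y1) / sd(x); sd(y2) / sd(x)       # both equal 1.5
```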
| null | CC BY-SA 3.0 | null | 2011-05-21T21:30:00.263 | 2011-05-21T21:30:00.263 | null | null | 919 | null |
11096 | 1 | null | null | 82 | 100634 | How can I interpret the main effects (coefficients for dummy-coded factor) in a Poisson regression?
Assume the following example:
```
treatment <- factor(rep(c(1, 2), c(43, 41)),
levels = c(1, 2),
labels = c("placebo", "treated"))
improved <- factor(rep(c(1, 2, 3, 1, 2, 3), c(29, 7, 7, 13, 7, 21)),
levels = c(1, 2, 3),
labels = c("none", "some", "marked"))
numberofdrugs <- rpois(84, 10) + 1
healthvalue <- rpois(84, 5)
y <- data.frame(healthvalue, numberofdrugs, treatment, improved)
test <- glm(healthvalue~numberofdrugs+treatment+improved, y, family=poisson)
summary(test)
```
The output is:
```
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 1.88955 0.19243 9.819 <2e-16 ***
numberofdrugs -0.02303 0.01624 -1.418 0.156
treatmenttreated -0.01271 0.10861 -0.117 0.907 MAIN EFFECT
improvedsome -0.13541 0.14674 -0.923 0.356 MAIN EFFECT
improvedmarked -0.10839 0.12212 -0.888 0.375 MAIN EFFECT
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
I know that the incident rate for `numberofdrugs` is `exp(-0.023)=0.977`. But how do I interpret the main effects for the dummy variables?
| How to interpret coefficients in a Poisson regression? | CC BY-SA 3.0 | null | 2011-05-21T15:10:15.500 | 2023-05-30T17:18:54.797 | 2017-05-11T19:59:05.170 | 7290 | null | [
"r",
"generalized-linear-model",
"interpretation",
"poisson-distribution",
"regression-coefficients"
] |
11097 | 2 | null | 11096 | 71 | null | The exponentiated `numberofdrugs` coefficient is the multiplicative term to use for the goal of calculating the estimated `healthvalue` when `numberofdrugs` increases by 1 unit. In the case of categorical (factor) variables, the exponentiated coefficient is the multiplicative term relative to the base (first factor) level for that variable (since R uses treatment contrasts by default). The `exp(Intercept)` is the baseline rate, and all other estimates would be relative to it.
In your example the estimated `healthvalue` for someone with `2` drugs, `"placebo"` and `improvement=="none"` would be (using addition inside exp as the equivalent of multiplication):
```
exp( 1.88955 + # thats the baseline contribution
2*-0.02303 + 0 + 0 ) # and estimated value will be somewhat lower
[1] 6.318552
```
While someone on `4` drugs, `"treated"`, and `"some"` improvement would have an estimated `healthvalue` of
```
exp( 1.88955 + 4*-0.02303 + -0.01271 + -0.13541)
[1] 5.203388
```
ADDENDUM: This is what it means to be "additive on the log scale". "Additive on the log-odds scale" was the phrase that my teacher, Barbara McKnight, used when emphasizing the need to use all applicable term values times their estimated coefficients when doing any kind of prediction. That was in the context of interpreting logistic regression coefficients, but Poisson regression is similar if you use an offset of time at risk to get rates. You first multiply each covariate value by its estimated coefficient (including the intercept term), sum them, and then exponentiate the resulting sum. The way to return coefficients from regression objects in R is generally to use the `coef()` extractor function (done with a different random realization below):
```
coef(test)
# (Intercept) numberofdrugs treatmenttreated improvedsome improvedmarked
# 1.18561313 0.03272109 0.05544510 -0.09295549 0.06248684
```
So the calculation of the estimate for a subject with `4` drugs, `"treated"`, with `"some"` improvement would be:
```
exp( sum( coef(test)[ c(1,2,3,4) ]* c(1,4,1,1) ) )
[1] 3.592999
```
And the linear predictor for that case should be the sum of:
```
coef(test)[c(1,2,3,4)]*c(1,4,1,1)
# (Intercept) numberofdrugs treatmenttreated improvedsome
# 1.18561313 0.13088438 0.05544510 -0.09295549
```
These principles should apply to any stats package that returns a table of coefficients to the user. The method and principles are more general than might appear from my use of R.
---
I'm copying selected clarifying comments since they 'disappear' in the default display:
>
Q: So you interpret the coefficients as ratios! Thank you! – MarkDollar
A: The coefficients are the natural_logarithms of the ratios. – DWin
>
Q2: In that case, in a poisson regression, are the exponentiated coefficients also referred to as "odds ratios"? – oort
A2: No. If it were logistic regression they would be but in Poisson regression, where the LHS is number of events and the implicit denominator is the number at risk, then the exponentiated coefficients are "rate ratios" or "relative risks".
| null | CC BY-SA 4.0 | null | 2011-05-21T15:28:42.660 | 2023-05-30T17:18:54.797 | 2023-05-30T17:18:54.797 | 2129 | 2129 | null |
11098 | 2 | null | 11079 | 2 | null | Clearly sex is a between-subjects factor. You've stated below in the comments that stimulus and condition are both within-subjects. Only within-subjects factors belong in the error term.
So, ...
```
aov(response ~ stimulus * sex * condition + Error(subject/(stimulus * condition)))
```
Or, if, as in your example, you don't actually want `condition` analyzed, it would be...
```
a <- aggregate(response ~ stimulus + sex + subject, myData, mean)
aov(response ~ stimulus * sex + Error(subject/stimulus), a)
```
| null | CC BY-SA 3.0 | null | 2011-05-21T21:47:32.110 | 2011-05-22T21:37:31.003 | 2011-05-22T21:37:31.003 | 601 | 601 | null |
11099 | 2 | null | 11093 | 5 | null | If you have a random variable (or observed data) $X$ with mean $\mu_x$ and standard deviation $\sigma_x$, and then apply any linear transformation $$Y=a+bX$$ then you will find the mean of $Y$ is $$\mu_y = a + b \mu_x$$ and the standard deviation of $Y$ is $$\sigma_y = |b|\; \sigma_x.$$
So for example, as whuber says, to multiply the standard deviation by 1.5, the two possibilities are $b=1.5$ or $b=-1.5$, while $a$ can have any value.
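A quick numerical check of both identities (a hedged Python sketch; the particular values of $a$ and $b$ are arbitrary):

```python
import random
import statistics

random.seed(0)
x = [random.gauss(10, 2) for _ in range(100_000)]
a, b = 5.0, -1.5                       # any linear transform y = a + b*x
y = [a + b * xi for xi in x]

mu_x, sd_x = statistics.fmean(x), statistics.pstdev(x)
mu_y, sd_y = statistics.fmean(y), statistics.pstdev(y)

# mu_y equals a + b*mu_x, and sd_y equals |b|*sd_x (up to rounding error)
print(round(mu_y - (a + b * mu_x), 6))   # ≈ 0
print(round(sd_y - abs(b) * sd_x, 6))    # ≈ 0
```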
| null | CC BY-SA 3.0 | null | 2011-05-21T22:38:27.183 | 2011-05-21T22:38:27.183 | null | null | 2958 | null |
11100 | 2 | null | 11079 | 4 | null | Assuming your design is the following:
- sex is a between-subjects IV (with two levels)
- stimulus is a within-subjects IV (with 3 assumed levels)
- condition is a within-subjects IV (with 2 levels)
- all IVs are fully crossed
Then this is what you can do to run the full analysis, or to just test for a main effect of `sex` (generating some data first):
```
Nj <- 10 # number of subjects per sex
P <- 2 # number of levels for IV sex
Q <- 3 # number of levels for IV stimulus
R <- 2 # number of levels for IV condition
subject <- factor(rep(1:(P*Nj), times=Q*R)) # subject id
sex <- factor(rep(1:P, times=Q*R*Nj), labels=c("F", "M")) # IV sex
stimulus <- factor(rep(1:Q, each=P*R*Nj)) # IV stimulus
condition <- factor(rep(rep(1:R, each=P*Nj), times=Q), labels=c("EXP1", "EXP2"))
DV_t11 <- round(rnorm(P*Nj, 8, 2), 2) # responses for stimulus=1 and condition=1
DV_t21 <- round(rnorm(P*Nj, 13, 2), 2) # responses for stimulus=2 and condition=1
DV_t31 <- round(rnorm(P*Nj, 13, 2), 2)
DV_t12 <- round(rnorm(P*Nj, 10, 2), 2)
DV_t22 <- round(rnorm(P*Nj, 15, 2), 2)
DV_t32 <- round(rnorm(P*Nj, 15, 2), 2)
response <- c(DV_t11, DV_t12, DV_t21, DV_t22, DV_t31, DV_t32) # all responses
dfL <- data.frame(subject, sex, stimulus, condition, response) # long format
```
Now with the data set up, you can use `aov()`, but you won't get the $\hat{\epsilon}$ corrections for the within-effects.
```
> summary(aov(response ~ sex*stimulus*condition
+ + Error(subject/(stimulus*condition)), data=dfL))
Error: subject
Df Sum Sq Mean Sq F value Pr(>F)
sex 1 2.803 2.8030 0.51 0.4843 # ... snip ...
```
You can also use the `Anova()` function from the `car` package, which gives you the $\hat{\epsilon}$ corrections. However, it requires your data to be in wide format. You have to use multivariate notation for your model formula.
```
> sexW <- factor(rep(1:P, Nj), labels=c("F", "M")) # factor sex for wide format
> dfW <- data.frame(sexW, DV_t11, DV_t21, DV_t31, DV_t12, DV_t22, DV_t32) # wide format
> # between-model in multivariate notation
> fit <- lm(cbind(DV_t11, DV_t21, DV_t31, DV_t12, DV_t22, DV_t32) ~ sexW, data=dfW)
> # dataframe describing the columns of the data matrix
> intra <- expand.grid(stimulus=gl(Q, 1), condition=gl(R, 1))
> library(car) # for Anova()
> summary(Anova(fit, idata=intra, idesign=~stimulus*condition),
+ multivariate=FALSE, univariate=TRUE)
Univariate Type II Repeated-Measures ANOVA Assuming Sphericity
SS num Df Error SS den Df F Pr(>F)
(Intercept) 17934.1 1 98.930 18 3263.0403 < 2.2e-16 ***
sexW 2.8 1 98.930 18 0.5100 0.4843021 # ... snip ...
```
Using the `ez` package and the command suggested by @Mike Lawrence gives the same result:
```
> library(ez) # for ezANOVA()
> ezANOVA(data=dfL, wid=.(subject), dv=.(response),
+ within=.(stimulus, condition), between=.(sex), observed=.(sex))
$ANOVA
Effect DFn DFd F p p<.05 ges
2 sex 1 18 0.5099891 4.843021e-01 0.004660043 # ... snip ...
```
Finally, if the main effect for `sex` is really all you're interested in, it's equivalent to just average for each person across all the conditions created by the combinations of `stimulus` and `condition`, and then run a between-subjects ANOVA for the aggregated data.
```
# average per subject across all repeated measures
> mDf <- aggregate(response ~ subject + sex, data=dfL, FUN=mean)
> summary(aov(response ~ sex, data=mDf)) # ANOVA with just the between-effect
Df Sum Sq Mean Sq F value Pr(>F)
sex 1 0.4672 0.46716 0.51 0.4843
Residuals 18 16.4884 0.91602
```
| null | CC BY-SA 3.0 | null | 2011-05-21T23:33:03.443 | 2011-05-21T23:57:52.530 | 2011-05-21T23:57:52.530 | 1909 | 1909 | null |
11101 | 1 | null | null | 3 | 873 | I was trying to do a test of reliability for my survey items. In addition to Cronbach's alpha I'm looking at communalities. My criteria is that survey items with communality below 0.4 will be dropped. But when I looked at my communality table, I saw that some items had .99 for communality. Is this problematic? What should I do with these?
| Implications of communalities close to 1.00 for reliability analysis and survey design | CC BY-SA 3.0 | null | 2011-05-22T02:36:46.997 | 2011-05-23T15:54:10.807 | 2011-05-23T01:43:04.013 | 183 | 4702 | [
"factor-analysis",
"reliability"
] |
11102 | 1 | 11103 | null | 7 | 5509 | I'm having trouble performing factor analysis on my dataset.
When I perform the factor analysis in SPSS (default settings), it works fine. Problem is, I need to do it programmatically (in Python). When I try using Python (MDP library) to do factor analysis on the same dataset, I get this error:
"The covariance matrix of the data is singular. Redundant dimensions need to be removed"
Upon looking into the MDP documentation, it says "...returns the Maximum A Posteriori estimate of the latent variables." Being a factor analysis newbie, I wasn't too clear on what this meant, but I tried changing the default extraction method in SPSS from "principal components" to "maximum likelihood". Then, in SPSS, I get the error:
"This matrix is not positive definite."
Are these two errors the same thing? Regardless, what can I do to fix my dataset so that the covariance matrix is not singular?
Thanks!
edit: OK, so I was trying to keep things simplified, but perhaps it's better to just explain everything from the start.
I have a series of documents. Yes, I'm only using 9 documents as a simple test case, but my final objective will be to use it on a much larger corpus.
I've built a term-document matrix, performed tf-idf, and did SVD-- mostly with the help of blog.josephwilk.net/.../latent-semantic-analysis-in-python.html
Now I have a reconstructed matrix, and I want to sort the documents into categories. So, I tried using factor analysis. In fact, it seems to work-- when I put it in SPSS, the factor loadings indicate that the documents are grouped the way I thought they should be, and the loadings are higher than if I hadn't performed SVD. (Although I think technically, SPSS is doing PCA even though it's under the 'Factor Analysis' heading).
I tried using MDP's PCANode, but that doesn't seem to give me anything close to what I want. Strangely, if I transpose my matrix, the factor analysis does work (it will group the terms, instead of the documents).
Hopefully this all makes a little more sense now...
| Factor analysis problem -- singular covariance matrix? | CC BY-SA 3.0 | null | 2011-05-22T03:56:56.093 | 2011-05-23T00:13:58.417 | 2011-05-23T00:13:58.417 | 1977 | 1977 | [
"spss",
"factor-analysis",
"python"
] |
11103 | 2 | null | 11102 | 6 | null | Yes, the two errors amount to the same thing. They're telling you (roughly) that two or more of your manifest variables are linearly dependent (like $y_1 = ay_2 + b$ for scalars $a, b$). These two variables (dimensions) would be "redundant", meaning that the sample covariance matrix is not invertible (ie is singular) and therefore not positive definite either.
As for what you ought to do about it, that depends. First I would try to find out which variables are giving you the trouble; a scatterplot matrix might be enough to tell you that. Then you can decide what to do from there - most likely dropping some redundant variables.
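For instance (a Python sketch, since you're working in Python; the toy data are made up): when one column is an exact linear function of another, the sample covariance matrix has determinant zero, i.e. it is singular and therefore not positive definite, which is exactly what both error messages are detecting:

```python
import statistics

# Three manifest variables; x3 is an exact linear function of x1,
# so the covariance matrix has a redundant dimension.
x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x2 = [2.0, 1.0, 4.0, 3.0, 6.0, 5.0]
x3 = [2 * v + 1 for v in x1]          # the redundant column

def cov(a, b):
    """Sample covariance of two equal-length lists."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)

cols = [x1, x2, x3]
S = [[cov(a, b) for b in cols] for a in cols]   # 3x3 covariance matrix

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(det3(S))   # ≈ 0 -> singular, not positive definite
```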
| null | CC BY-SA 3.0 | null | 2011-05-22T04:08:51.723 | 2011-05-22T04:08:51.723 | null | null | 26 | null |
11104 | 2 | null | 1995 | 6 | null | Multi-level modelling is appropriate, as the name suggests, when your data have influences occurring at different levels (individual, over time, over domains, etc). Single level modeling assumes everything is occurring at the lowest level. Another thing that a multi-level model does is to introduce correlations among nested units. So level-1 units within the same level-2 unit will be correlated.
In some sense you can think of multi-level modelling as finding the middle ground between the "individualist fallacy" and the "ecological fallacy". The individualist fallacy is when "community effects" (such as the compatibility of a teacher's style with a student's learning style) are ignored, so the effect is assumed to come from the individual alone and you just do regression at level 1. The "ecological fallacy" is the opposite, and would be like supposing the best teacher had the students with the best grades (so that level 1 is not needed, and you do regression entirely at level 2). In most settings, neither is appropriate (the student-teacher setting is a "classical" example).
Note that in the school example, there was a "natural" clustering or structure in the data. But this is not an essential feature of multi-level/hierachical modeling. However, the natural clustering makes the mathematics and computations easier. The key ingredient is the prior information which says that there are processes happening at different levels. In fact you can devise clustering algorithms by imposing a multi-level structure on your data with uncertainty about which unit is in which higher level. So you have $y_{ij}$ with the subscript $j$ being unknown.
| null | CC BY-SA 3.0 | null | 2011-05-22T04:40:26.447 | 2011-05-22T04:40:26.447 | null | null | 2392 | null |
11105 | 1 | null | null | 0 | 1836 | I am a newbie in stat. I am completing my thesis in [Evolutionary algorithm](http://en.wikipedia.org/wiki/Evolutionary_algorithm). I have to generate some random numbers from [Laplace distribution](http://en.wikipedia.org/wiki/Laplace_distribution). How can I do this using matlab?
An easy and simple explanation would be appreciated. Thanks in advance.
| Random number generation using laplace distribution | CC BY-SA 3.0 | 0 | 2011-05-22T05:11:51.350 | 2011-05-22T06:22:48.647 | null | null | 4319 | [
"distributions",
"matlab",
"random-generation"
] |
11106 | 2 | null | 11088 | 2 | null | You can use the same approach that was described in response to your question about generating random numbers from a t-distribution. First generate uniformly distributed random numbers from (0,1) and then apply the inverse cumulative distribution function of the Laplace distribution, which is given in the Wikipedia article you linked to.
| null | CC BY-SA 3.0 | null | 2011-05-22T06:22:48.647 | 2011-05-22T06:22:48.647 | null | null | 3835 | null |
11107 | 1 | null | null | 2 | 4555 | I need to do a logistic regression using R on my data. My response variable (`y`) is survival at weaning (`surv=0`; did not `surv=1`) and I have several independent variables which are binary and categoricals in nature.
I am following some examples on this website [http://www.ats.ucla.edu/stat/r/dae/logit.htm](http://www.ats.ucla.edu/stat/r/dae/logit.htm) and trying to run some models.
Running the model:
```
> mysurv2 <- glm(surv~as.factor(PTEM) + as.factor(pshiv) + as.factor(presp) +
as.factor(pmtone), family=binomial(link="logit"), data=ap)
> summary(mysurv2)
Call:
glm(formula = surv ~ as.factor(PTEM) + as.factor(pshiv) + as.factor(presp) +
as.factor(pmtone), family = binomial(link = "logit"), data = ap)
Deviance Residuals:
Min 1Q Median 3Q Max
-2.2837 -0.5121 -0.5121 -0.5058 2.0590
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.01135 0.23613 -0.048 0.96166
as.factor(PTEM)2 -0.74642 0.24482 -3.049 0.00230 **
as.factor(PTEM)3 -1.95401 0.23259 -8.401 < 2e-16 ***
as.factor(pshiv)2 -0.02638 0.06784 -0.389 0.69738
as.factor(presp)2 0.74549 0.10532 7.079 1.46e-12 ***
as.factor(presp)3 0.66793 0.66540 1.004 0.31547
as.factor(pmtone)2 0.54699 0.09678 5.652 1.58e-08 ***
as.factor(pmtone)3 1.82337 0.75409 2.418 0.01561 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 7892.6 on 8791 degrees of freedom
Residual deviance: 7252.8 on 8784 degrees of freedom
(341 observations deleted due to missingness)
AIC: 7268.8
Number of Fisher Scoring iterations: 4
```
Adding the `na.action=na.pass` at the end of the model gave me an error message. I thought that this would take care NA's in my independent variables.
```
> mysurv <- glm(surv~as.factor(PTEM) + as.factor(pshiv) + as.factor(presp) +
as.factor(pmtone), family=binomial(link="logit"), data=ap,
na.action=na.pass)
Error: NA/NaN/Inf in foreign function call (arg 1)
```
Since this is my first time to venture into logistic regression, I am wondering whether there is any package in R that would be more suitable?
I am also tryng to understand the regression coefficients. The independent variables used in the model are:
- rectal temperature:
(PTEM)1 = newborns with rectal temp. below 35.4 °C
(PTEM)2 = newborns with rectal temp. between 35.4 and 36.9 °C
(PTEM)3 = newborns with rectal temp. above 37.0 °C
- shivering:
(pshiv)1 = newborns that were not shivering
(pshiv)2 = newborns that were shivering
- respiration:
(presp)1 = newborns with normal respiration
(presp)2 = newborns with slight respiration problem
(presp)3 = newborns with poor respiration
- muscle tone:
(pmtone)1 = newborns with normal muscle tone
(pmtone)2 = newborns with moderate muscle tone
(pmtone)3 = newborns with poor muscle tone
Looking at the coefficients, I got the following:
```
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.01135 0.23613 -0.048 0.96166
as.factor(PTEM)2 -0.74642 0.24482 -3.049 0.00230 **
as.factor(PTEM)3 -1.95401 0.23259 -8.401 < 2e-16 ***
as.factor(pshiv)2 -0.02638 0.06784 -0.389 0.69738
as.factor(presp)2 0.74549 0.10532 7.079 1.46e-12 ***
as.factor(presp)3 0.66793 0.66540 1.004 0.31547
as.factor(pmtone)2 0.54699 0.09678 5.652 1.58e-08 ***
as.factor(pmtone)3 1.82337 0.75409 2.418 0.01561 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
In my other analysis, I found that newborns:
a) with higher rectal temperature
b) do not shiver
c) good respiration and
d) good muscle tone at birth were more likely to survive.
I am a bit confused with the coefficients I am getting above. I am wondering whether whether I am not interpreting the results correctly or is it something else?
| Doing logistic regression using R | CC BY-SA 3.0 | null | 2011-05-22T07:14:51.037 | 2012-12-12T18:28:05.403 | 2012-12-12T18:18:39.480 | 7290 | 4263 | [
"r",
"logistic",
"interpretation"
] |
11108 | 1 | null | null | 6 | 2491 | Take a look at this photo:

It depicts a [box plot](http://en.wikipedia.org/wiki/Box_plot) of a series of identical runs for successive i values. (AFAIK it's the standard min/max and 1st, 2nd, 3rd quartiles.) So the x-axis value of 1 represents 1000 runs where i=1; the second plot shows 1000 runs where i=2; and so on.
It's easy to eye-ball and see that there's a split between the i=1,2 and i=3-19. The values for i=2 are on 'average' larger, by a little bit.
What I aim to do is, given the input that produced this graph, programmatically find that split (between 2 and 3) where there's the sudden consistent change (Step 1). It would also be awesome if there were some sort of confidence score to go along with it, just for user feedback. The change may be up or down, but I know that on both sides of the split the values will be consistent (just as for i > 2 the box plots stay pretty even and don't return to i < 2 values).
Then, after that, I want to take a measurement for an unknown i and decide which 'side' of the split it falls on. Since I could never know that answer conclusively from a single measurement, I plan to take several (5? 50? 100?) measurements for this unknown-but-unchanging i value and use them to decide which side of the split the i falls on (Step 2). Again, it would be awesome if there were a confidence value associated with this decision.
I'm working in Python, so a library would be awesome, but I'm cool with implementing an algorithm/equation myself. What are the techniques/equations/papers I should read up on to learn how to do this?
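To make the two steps concrete, here is a toy sketch of what I mean (made-up numbers and a naive largest-gap heuristic; presumably a proper method would use a change-point test such as CUSUM for Step 1 and a two-sample test for Step 2):

```python
import statistics

# Toy data standing in for the 1000-run batches: indices 1-2 sit near 15,
# indices 3-7 near 10, so the "split" lies after i=2.
runs = {i: [(15.0 if i <= 2 else 10.0) + 0.1 * j for j in range(-5, 6)]
        for i in range(1, 8)}

means = {i: statistics.fmean(v) for i, v in runs.items()}
idx = sorted(means)

# Step 1: place the split at the largest jump between consecutive means.
split_after = max(idx[:-1], key=lambda i: abs(means[i + 1] - means[i]))
print(split_after)  # 2 -> split between i=2 and i=3

# Step 2: classify a new batch by which side's pooled mean it is closer to
# (a crude stand-in for a real two-sample test with a confidence value).
low = statistics.fmean(x for i in idx if i <= split_after for x in runs[i])
high = statistics.fmean(x for i in idx if i > split_after for x in runs[i])
new_batch = [15.2, 14.8, 15.1, 14.9, 15.0]
m = statistics.fmean(new_batch)
side = "before split" if abs(m - low) < abs(m - high) else "after split"
print(side)
```

(Python change-point libraries do exist, e.g. `ruptures`, which implement far better detectors than this gap heuristic.)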
| Automatically detecting sudden change of mean | CC BY-SA 3.0 | null | 2011-05-21T22:15:36.050 | 2013-07-29T14:14:46.560 | 2011-05-22T15:40:30.043 | 919 | 4703 | [
"time-series",
"python",
"change-point"
] |
11109 | 1 | 11110 | null | 205 | 221921 | If you have a variable which perfectly separates zeroes and ones in target variable, R will yield the following "perfect or quasi perfect separation" warning message:
```
Warning message:
glm.fit: fitted probabilities numerically 0 or 1 occurred
```
We still get the model but the coefficient estimates are inflated.
How do you deal with this in practice?
| How to deal with perfect separation in logistic regression? | CC BY-SA 3.0 | null | 2011-05-22T10:37:08.303 | 2022-08-27T17:39:21.800 | 2022-08-27T17:39:21.800 | 11887 | 333 | [
"r",
"regression",
"logistic",
"separation",
"faq"
] |
11110 | 2 | null | 11109 | 130 | null | A solution to this is to utilize a form of penalized regression. In fact, this is the original reason some of the penalized regression forms were developed (although they turned out to have other interesting properties.
Install and load package glmnet in R and you're mostly ready to go. One of the less user-friendly aspects of glmnet is that you can only feed it matrices, not formulas as we're used to. However, you can look at model.matrix and the like to construct this matrix from a data.frame and a formula...
Now, when you expect that this perfect separation is not just a byproduct of your sample, but could be true in the population, you specifically don't want to handle this: use this separating variable simply as the sole predictor for your outcome, not employing a model of any kind.
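To see why a penalty keeps the estimates finite under separation, here is a hedged pure-Python illustration of ridge-penalized logistic regression (toy data, no intercept, crude gradient descent; this is an illustration of the idea, not glmnet's actual algorithm):

```python
import math

# Perfectly separated toy data: x < 0 -> y = 0, x > 0 -> y = 1.
xs = [-3.0, -2.0, -1.0, 1.0, 2.0, 3.0]
ys = [0, 0, 0, 1, 1, 1]

def fit_logistic(lam, steps=20_000, lr=0.1):
    """Gradient descent on the penalized negative log-likelihood
    NLL(b) = -sum[y*log(p) + (1-y)*log(1-p)] + lam * b**2."""
    b = 0.0
    for _ in range(steps):
        grad = 2 * lam * b
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-b * x))
            grad += (p - y) * x
        b -= lr * grad
    return b

b_mle = fit_logistic(lam=0.0)    # keeps drifting upward: the MLE diverges
b_ridge = fit_logistic(lam=1.0)  # the penalty pins the estimate down
print(b_mle, b_ridge)
```

With `lam=0` the coefficient just keeps growing with more iterations (the likelihood has no finite maximum under separation), while any positive penalty yields a finite, stable estimate.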
| null | CC BY-SA 3.0 | null | 2011-05-22T11:14:19.717 | 2013-11-04T15:52:42.270 | 2013-11-04T15:52:42.270 | 17230 | 4257 | null |
11112 | 1 | null | null | 5 | 2409 | I have some data acquired by an acoustic sensor with a 1 Hz sampling rate. Due to some inevitable issues, I have some noise in my signal, say 10% pollution.
I'm looking for a reliable method for replacing the outliers.
In order to find a suitable approach, I manipulated a clean record so that it contains 9% spurious data, then replaced the outliers with several different methods: a Kalman predictor, linear time-series modeling, and the local mean.
Now I want to compare these methods with one another. Can you suggest criteria that show which method best restores the fluctuations of the clean original signal?
The figure below shows the cross-correlation between the original signal and the restored ones. How can I interpret the large negative peak at lag 19?

Furthermore, is it correct to say that, since the ACF of the signal restored by wavelet-LTS tracks that of the original signal, this method mimics the original signal better than the others?

| What is the best way to compare fluctuations of two signals? | CC BY-SA 3.0 | 0 | 2011-05-22T14:03:41.867 | 2011-05-26T02:51:52.030 | 2011-05-26T02:51:52.030 | 4286 | 4286 | [
"correlation",
"autocorrelation",
"signal-processing",
"covariance-matrix"
] |
11113 | 1 | null | null | 4 | 1372 | I am using ANOVA with repeated measures to test significance between males and females results of an experiment during which participants had to evaluate 7 stimuli in 2 conditions (EXP1 and EXP2).
The problem is that even though it is clear from the results that there are differences between males and females, I don't get significance in the ANOVA results.
There must be an error, because the results cannot be non-significant. Indeed, looking at the means for each stimulus, males always gave higher evaluations than females.
To prove this I discarded for a moment the effect of the repeated measures, and I performed an ANOVA separately on both the two conditions (EXP1 and EXP2) during which the evaluations were given.
What I get is significant differences between males and female, in both EXP1 and EXP2.
Now, why don't I get the same behavior when I perform the ANOVA with repeated measures?
The structure of my table is the following: subject, stimulus, condition, sex, response. The design is the following:
- sex is a between-subjects factor (with two levels)
- stimulus is a within-subjects factor (with 3 assumed levels)
- condition is a within-subjects factor (with 2 levels)
- all factors are fully crossed
Example:
```
subject stimulus condition sex response
subject1 gravel EXP1 M 59.8060
subject2 gravel EXP1 M 49.9880
subject3 gravel EXP1 M 73.7420
subject4 gravel EXP1 M 45.5190
subject5 gravel EXP1 M 51.6770
subject6 gravel EXP1 M 42.1760
subject7 gravel EXP1 M 56.1110
subject8 gravel EXP1 M 54.9500
subject9 gravel EXP1 M 62.6920
subject10 gravel EXP1 M 50.7270
subject1 gravel EXP2 M 70.9270
subject2 gravel EXP2 M 61.3200
subject3 gravel EXP2 M 70.2930
subject4 gravel EXP2 M 49.9880
subject5 gravel EXP2 M 69.1670
subject6 gravel EXP2 M 62.2700
subject7 gravel EXP2 M 70.9270
subject8 gravel EXP2 M 63.6770
subject9 gravel EXP2 M 72.4400
subject10 gravel EXP2 M 58.8560
subject11 gravel EXP1 F 46.5750
subject12 gravel EXP1 F 58.1520
subject13 gravel EXP1 F 57.4490
subject14 gravel EXP1 F 59.8770
subject15 gravel EXP1 F 55.5480
subject16 gravel EXP1 F 46.2230
subject17 gravel EXP1 F 63.3260
subject18 gravel EXP1 F 60.6860
subject19 gravel EXP1 F 59.4900
subject20 gravel EXP1 F 52.6630
subject11 gravel EXP2 F 55.7240
subject12 gravel EXP2 F 66.4220
subject13 gravel EXP2 F 65.9300
subject14 gravel EXP2 F 61.8120
subject15 gravel EXP2 F 62.5160
subject16 gravel EXP2 F 65.5780
subject17 gravel EXP2 F 59.5600
subject18 gravel EXP2 F 63.8180
subject19 gravel EXP2 F 61.4250
.....
.....
.....
.....
```
As you can notice each subject repeated the evaluation in 2 conditions (EXP1 and EXP2).
What I am interested in is to know if there are significant differences between
the evaluations of the males and the females (both at global level and for each stimulus).
This is the command I used to perform the ANOVA with repeated measures:
```
aov1 = aov(response ~ sex*stimulus*condition + Error(subject/(stimulus*condition)), data=scrd)
summary(aov1)
```
Doing so, I don't get significance for the differences between males and females.
Instead if I perform the ANOVA on the two subtables of EXP 1 and 2 I get significant differences.
```
table_EXP1 <- subset(scrd, condition == "EXP1")
table_EXP2 <- subset(scrd, condition == "EXP2")
fit_table_EXP1 <- lm(response ~ stimulus*sex, data=table_EXP1)
anova(fit_table_EXP1)
fit_table_EXP2 <- lm(response ~ stimulus*sex, data=table_EXP2)
anova(fit_table_EXP2)
```
How can this be possible? Is it a contradiction?
| Wrong results using ANOVA with repeated measures | CC BY-SA 3.0 | null | 2011-05-22T14:20:49.473 | 2012-02-13T19:14:33.003 | 2011-05-22T14:51:56.993 | 2116 | 4701 | [
"r",
"anova",
"repeated-measures"
] |
11114 | 2 | null | 11009 | 13 | null | This is implicit in many of the answers others have given, but the simple point is that models w/ a product term but w/ & w/o the moderator & predictor are just different models. Figure out what each means given the process you are modeling, and whether a model w/o the moderator & predictor makes more sense given your theory or hypothesis. The observation that the product term is significant, but only when moderator & predictor are not included, doesn't tell you anything (except maybe that you are fishing around for "significance") w/o a cogent explanation of why it makes sense to leave them out.
| null | CC BY-SA 3.0 | null | 2011-05-22T14:26:28.640 | 2011-05-22T14:26:28.640 | null | null | 11954 | null |
11115 | 1 | null | null | 2 | 13711 | Does anybody know how to plot all AIC values for different size models, when using the command `regsubsets` from the package `leaps`?
Assume you have the following variables:
```
treatment <- factor(rep(c(1, 2), c(43, 41)), levels = c(1, 2),labels = c("placebo", "treated"))
improved <- factor(rep(c(1, 2, 3, 1, 2, 3), c(29, 7, 7, 13, 7, 21)),levels = c(1, 2, 3),labels = c("none", "some", "marked"))
numberofdrugs<-rpois(84, 5)+1
healthvalue<-rpois(84,5)
```
And now you want to select variables. Then you can use the following commands
```
require(leaps)
require(faraway)
x <- data.frame(healthvalue, numberofdrugs, treatment, improved) # assemble the variables
d <- regsubsets(healthvalue ~ numberofdrugs*improved*treatment, x, nvmax=10)
rs <- summary(d)
plot(rs$bic, xlab="Parameter", ylab="BIC") # where is AIC (bic works)?
```
It works with `rs$bic`, but why isn't there a way to use `rs$aic`? Looking at the help (`?regsubsets`), it seems not to be available. Am I misreading the help? If not, how can I make the plot above using AIC?
| How to plot AIC values when using the leaps package? | CC BY-SA 3.0 | null | 2011-05-22T14:33:20.660 | 2016-06-15T09:57:40.210 | 2011-05-22T17:14:58.163 | 930 | 4496 | [
"r",
"aic",
"stepwise-regression",
"validation",
"bic"
] |
11116 | 2 | null | 11107 | 5 | null | The lrm function in the R rms package is devoted to binary and ordinal logistic regression and may help once you understand the rms documentation. Detailed case studies using rms may be found in the course notes at [http://biostat.mc.vanderbilt.edu/rms](http://biostat.mc.vanderbilt.edu/rms). However, there are more important issues. Categorizing continuous variables leads to erroneous conclusions, underfitting, and residual confounding (here, temperature and perhaps respiratory rate).
One R coding convention suggestion: Set up your data frame the way you want to treat the variables in later modeling steps. For example if variables are really factors, give them descriptive levels and make them factors once and for all. The rms package almost requires this.
If you do use rms you will find its summary, Predict, plot, contrast, and nomogram functions handy for interpreting the model, once the model properly uses continuous variables.
| null | CC BY-SA 3.0 | null | 2011-05-22T14:44:36.060 | 2011-05-22T14:44:36.060 | null | null | 4253 | null |
11117 | 2 | null | 11115 | 6 | null | The relevant excerpt from the `regsubsets` help pages is the following:
>
Since this function returns separate
best models of all sizes up to nvmax
and since different model selection
criteria such as AIC, BIC, CIC, DIC,
... differ only in how models of
different sizes are compared, the
results do not depend on the choice of
cost-complexity tradeoff.
This means that it does not matter which criterion you use; the end result will be the same. Hence the graph with AIC instead of BIC will be similar, i.e. will convey the same information.
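A quick Python illustration of why (hypothetical numbers): for models of the same size $k$, AIC and BIC are both $n\log(\mathrm{RSS}/n)$ plus a penalty that depends only on $k$ and $n$, so within each size they rank models identically and pick the same winner:

```python
import math

n = 84
# (name, number of parameters k, residual sum of squares) -- all the same size
candidates = [("A", 3, 120.0), ("B", 3, 95.0), ("C", 3, 101.0)]

def aic(k, rss):
    return n * math.log(rss / n) + 2 * k

def bic(k, rss):
    return n * math.log(rss / n) + k * math.log(n)

best_aic = min(candidates, key=lambda m: aic(m[1], m[2]))[0]
best_bic = min(candidates, key=lambda m: bic(m[1], m[2]))[0]
print(best_aic, best_bic)   # B B -- the same model wins under both criteria
```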
| null | CC BY-SA 3.0 | null | 2011-05-22T15:04:26.447 | 2011-05-22T15:04:26.447 | null | null | 2116 | null |
11118 | 1 | 11119 | null | 6 | 381 | Given a data frame in R is there a way to export it in R syntax such that executing this code would re-create the data frame? I would find this useful to store results in R files along with calculations without depending on external files.
| How to export data in R syntax? | CC BY-SA 3.0 | null | 2011-05-22T16:37:29.693 | 2011-06-03T22:30:59.033 | 2011-06-03T22:30:59.033 | 930 | 4195 | [
"r"
] |
11119 | 2 | null | 11118 | 8 | null | You can use `dput()` to get a `structure()` that can be used later.
```
> #Build the original data frame
> x <- seq(1, 10, 1)
> y <- seq(10, 100, 10)
> df <- data.frame(x=x, y=y)
> df
x y
1 1 10
2 2 20
3 3 30
4 4 40
5 5 50
6 6 60
7 7 70
8 8 80
9 9 90
10 10 100
> #Use the dput() statement to print out the structure of df
> dput(df)
structure(list(x = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10), y = c(10,
20, 30, 40, 50, 60, 70, 80, 90, 100)), .Names = c("x", "y"), row.names = c(NA,
-10L), class = "data.frame")
```
The above `structure` statement is the output of `dput(df)`. If you copy/paste that into your R text file, you can use it later. Here's how.
```
> #Build a new dataframe from the structure() statement
> newdf <- structure(list(x = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10), y = c(10,
20, 30, 40, 50, 60, 70, 80, 90, 100)), .Names = c("x", "y"), row.names = c(NA,
-10L), class = "data.frame")
> newdf
x y
1 1 10
2 2 20
3 3 30
4 4 40
5 5 50
6 6 60
7 7 70
8 8 80
9 9 90
10 10 100
```
| null | CC BY-SA 3.0 | null | 2011-05-22T17:06:59.327 | 2011-05-22T17:06:59.327 | null | null | 2775 | null |
11120 | 1 | 11128 | null | 12 | 9955 | I have 5 emerging market foreign exchange total return series, for which I am forecasting single period future returns (1 year). I would like to construct a Markowitz mean variance optimized portfolio of the 5 series, using historical variances and covariances (1) and my own forecast expected returns. Does R have an (easy) way/library to do this? In addition how would I go about calculating (1) is there a built in function?
For interest's sake, my currencies are USDTRY, USDZAR, USDRUB, USDHUF and USDPLN.
| Markowitz portfolio mean variance optimization in R | CC BY-SA 3.0 | null | 2011-05-22T17:21:46.187 | 2012-12-06T16:54:26.557 | null | null | 4705 | [
"r"
] |
11121 | 2 | null | 11108 | 1 | null | If I understand you correctly, you might need to learn about multiple comparisons:
[http://en.wikipedia.org/wiki/Multiple_comparisons](http://en.wikipedia.org/wiki/Multiple_comparisons)
The choice of a particular procedure is a different question, e.g., Scheffe vs. Tukey vs. Bonferroni.
At least in this framework, there is a clear and straightforward way to have hypothesis testing as well as confidence interval estimation.
| null | CC BY-SA 3.0 | null | 2011-05-22T18:00:54.037 | 2013-07-29T08:59:09.597 | 2013-07-29T08:59:09.597 | 22047 | 4617 | null |
11122 | 2 | null | 11107 | 10 | null | I think you're confused because you defined survival at weaning as surv=0 rather than surv=1. In your model, negative coefficients indicate high odds of survival (low odds of surv=1).
| null | CC BY-SA 3.0 | null | 2011-05-22T18:05:50.277 | 2011-05-24T15:46:47.447 | 2011-05-24T15:46:47.447 | 3874 | 3874 | null |
11123 | 1 | null | null | 3 | 590 | I've been using R to run GLMs with the logit link to compare clutch sizes (a binomial data set) and proportional data among several years. I now need to compare 2 groups of years (good years and poor years) between 2 sites for the same data.
I am not interested in any interactions between year-type (good or poor) and site, but really am just interested in comparing good vs. good and poor vs. poor between sites. If my data were continuous, I'd use a 2-way ANOVA followed up with t-tests for each comparison, using the Holm–Bonferroni method for these multiple (2) comparisons. However, I'm not entirely sure how to approach these comparisons with binomial data.
To start, I'm considering clutch sizes (proportional data will come next). I'm thinking that this code (below) is akin to the 2-way ANOVA for binomial data?
```
model <- glm(clutchsize ~ Site + Year_type, binomial)
```
Then, since I got a significant result here, I can go forward and make direct comparisons between the good years and the poor 'independently' of each other (using the Holm–Bonferroni method). But how can I make those comparisons? Can I use the `glm` model for that as well, like this:
```
model <- glm(clsz_gd ~ Site_gd, binomial)
```
where `clsz_gd` is binomial and `Site_gd` is a factor with only 2 levels? I presume I can't use t-tests!
Is this the way to approach these data?
| GLM for comparing 2 populations with binomial data? | CC BY-SA 4.0 | null | 2011-05-22T18:09:43.557 | 2022-12-16T17:45:04.453 | 2022-12-16T17:45:04.453 | 11887 | 4238 | [
"r",
"generalized-linear-model",
"binomial-distribution"
] |
11127 | 1 | 11132 | null | 80 | 77377 | I have 2 dependent variables (DVs) each of whose score may be influenced by the set of 7 independent variables (IVs). DVs are continuous, while the set of IVs consists of a mix of continuous and binary coded variables. (In code below continuous variables are written in upper case letters and binary variables in lower case letters.)
The aim of the study is to uncover how these DVs are influenced by the IVs. I proposed the following multivariate multiple regression (MMR) model:
```
my.model <- lm(cbind(A, B) ~ c + d + e + f + g + H + I)
```
To interpret the results I call two statements:
- summary(manova(my.model))
- Manova(my.model)
Outputs from both calls are pasted below and differ substantially. Can somebody please explain which of the two statements should be picked to properly summarize the results of the MMR, and why? Any suggestion would be greatly appreciated.
Output using `summary(manova(my.model))` statement:
```
> summary(manova(my.model))
Df Pillai approx F num Df den Df Pr(>F)
c 1 0.105295 5.8255 2 99 0.004057 **
d 1 0.085131 4.6061 2 99 0.012225 *
e 1 0.007886 0.3935 2 99 0.675773
f 1 0.036121 1.8550 2 99 0.161854
g 1 0.002103 0.1043 2 99 0.901049
H 1 0.228766 14.6828 2 99 2.605e-06 ***
I 1 0.011752 0.5887 2 99 0.556999
Residuals 100
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
Output using `Manova(my.model)` statement:
```
> library(car)
> Manova(my.model)
Type II MANOVA Tests: Pillai test statistic
Df test stat approx F num Df den Df Pr(>F)
c 1 0.030928 1.5798 2 99 0.21117
d 1 0.079422 4.2706 2 99 0.01663 *
e 1 0.003067 0.1523 2 99 0.85893
f 1 0.029812 1.5210 2 99 0.22355
g 1 0.004331 0.2153 2 99 0.80668
H 1 0.229303 14.7276 2 99 2.516e-06 ***
I 1 0.011752 0.5887 2 99 0.55700
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
| Multivariate multiple regression in R | CC BY-SA 3.0 | null | 2011-05-22T18:33:57.020 | 2020-05-05T11:18:56.240 | null | null | 609 | [
"r",
"multivariate-analysis",
"manova",
"multiple-regression",
"multivariate-regression"
] |
11128 | 2 | null | 11120 | 12 | null | You might look at the following:
[http://cran.r-project.org/web/packages/tawny/index.html](http://cran.r-project.org/web/packages/tawny/index.html)
[http://www.rinfinance.com/RinFinance2009/presentations/yollin_slides.pdf](http://www.rinfinance.com/RinFinance2009/presentations/yollin_slides.pdf)
[http://nurometic.com/quantitative-finance/tawny/portfolio-optimization-with-tawny](http://nurometic.com/quantitative-finance/tawny/portfolio-optimization-with-tawny)
[http://quantivity.wordpress.com/2011/04/17/minimum-variance-portfolios/](http://quantivity.wordpress.com/2011/04/17/minimum-variance-portfolios/)
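To make the links concrete: for part (1), R's built-in `cov()` estimates the covariance matrix, and the global minimum-variance weights have the closed form $w = \Sigma^{-1}\mathbf{1}/(\mathbf{1}'\Sigma^{-1}\mathbf{1})$. A minimal sketch with simulated data (substitute your 5 return series for `R`):
```
# simulated stand-in for the 5 historical return series
set.seed(1)
R <- matrix(rnorm(500 * 5, mean = 0.001, sd = 0.02), ncol = 5,
            dimnames = list(NULL, c("TRY", "ZAR", "RUB", "HUF", "PLN")))
S <- cov(R)                                  # built-in answer to (1)
ones <- rep(1, ncol(R))
w <- solve(S, ones) / sum(solve(S, ones))    # minimum-variance weights
sum(w)                                       # weights sum to 1
```
Adding your own forecast returns turns this into a small quadratic program (minimize $w'\Sigma w$ subject to $w'\mu = \mu^*$ and $\sum w = 1$), which e.g. `quadprog::solve.QP` handles; the packages linked above wrap up that machinery.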
| null | CC BY-SA 3.0 | null | 2011-05-22T18:59:54.430 | 2011-05-22T18:59:54.430 | null | null | 2775 | null |
11129 | 2 | null | 11049 | 1 | null | Dealing with missing observations is a never-ending problem.
There are a few approaches I will mention, but I am sure others will add more.
- Make sure there are no problems in recording the data, such as systematically omitting certain values. If you have no control over this, then other options need to be considered.
- Based on the proportion of missing values, you have different options.
If there are less than 5% missing observations:
2.1 I would not care, and would use either the average or the median, based on the shape of the sample's distribution. Some use this approach even when there are more than 5% missing observations. It is a very quick solution when pressed for time.
If there are more than 5% missing observations, some more advanced techniques can be used:
2.2 Data imputation methods are a wide class of techniques for imputing missing values based on the observed values of the variable and/or other variables.
For example, in time series framework, some values could be forecast based on the observed time series data.
In regression framework, knowledge of other variables as well can be used to impute missing variables, i.e., build a regression model for the variable of interest (the one with missing values) as a function of other variables available (probably excluding the original dependent variable Y) and then predict the missing values using the model.
A more advanced technique is the Expectation-Maximization (EM) method.
Finally, a dummy variable can be used just to control for the missing values.
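As an illustrative sketch of 2.1 and the regression idea in 2.2 (all variable names here are made up for the example):
```
set.seed(42)
n <- 100
z <- rnorm(n, 5, 1)                 # a fully observed covariate
x <- 2 * z + rnorm(n)               # the variable that will have gaps
x[sample(n, 10)] <- NA              # knock out 10% of the values
# 2.1: mean (or median) imputation
x_mean <- ifelse(is.na(x), mean(x, na.rm = TRUE), x)
# 2.2: regression imputation from the observed covariate
fit <- lm(x ~ z)                    # lm() drops the NA rows automatically
x_reg <- x
x_reg[is.na(x)] <- predict(fit, newdata = data.frame(z = z[is.na(x)]))
```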
M
| null | CC BY-SA 3.0 | null | 2011-05-22T19:06:50.440 | 2011-05-22T19:06:50.440 | null | null | 4617 | null |
11130 | 2 | null | 10949 | 1 | null | Sara, to understand the difference between the two models, ask yourself which question you are trying to answer:
- What is the health care expenditure of those who have it?
- What is the impact on having health insurance on health care expenditure?
To answer the first question, you have a subsample (as you refer to it) of those who have health insurance, with observed health care expenditures as well as other explanatory variables (age, occupation, etc.). In this case, you are trying to explain variation in the health care expenditures of people who chose to have health insurance. With this subsample alone, however, there is no way to answer your second question.
The second question is more general. You need two groups of people, with and without insurance. Also, if you just use OLS on the subsample of insured people, you will get biased and inconsistent OLS parameter estimates due to self-selection bias, and you would not be able to correctly estimate the impact of having the insurance on the expenditure. So, Heckman's!
I hope that helps.
M
| null | CC BY-SA 3.0 | null | 2011-05-22T19:19:30.557 | 2011-05-22T19:19:30.557 | null | null | 4617 | null |
11131 | 1 | null | null | 13 | 3890 | I am wondering if there is a sample size formula like Lehr's formula that applies to an F-test? Lehr's formula for t-tests is $n = 16 / \Delta^2$, where $\Delta$ is the effect size (e.g. $\Delta = (\mu_1 - \mu_2) / \sigma$). This can be generalized to $n = c / \Delta^2$ where $c$ is a constant that depends on the type I rate, the desired power, and whether one is performing a one-sided or two sided test.
I am looking for a similar formula for an F-test. My test statistic is distributed, under the alternative, as a non-central F with $k,n$ degrees of freedom and non-centrality parameter $n \lambda$, where $\lambda$ depends only on population parameters, which are unknown but posited to take some value. The parameter $k$ is fixed by the experiment, and $n$ is the sample size. Ideally I am looking for a (preferably well-known) formula of the form
$$n = \frac{c}{g(k,\lambda)}$$
where $c$ depends only on the type I rate and the power.
The sample size should satisfy
$$
F(F^{-1}(1-\alpha;k,n,0);k,n,n\lambda) = \beta,$$
where $F(x;k,n,\delta)$ is the CDF of a non-central F with $k,n$ d.o.f. and non-centrality parameter $\delta$, and $\alpha, \beta$ are the type I and type II rates. We can assume $k \ll n$, i.e. $n$ need be 'sufficiently large.'
My attempts at fiddling with this in R have not been fruitful. I have seen $g(k,\lambda) = \lambda / \sqrt{k+1}$ suggested but the fits have not looked very good.
edit: originally I had vaguely stated that the non-centrality parameter 'depends' on the sample size. On second thought, I found that too confusing, so made the relationship clear.
Also, I can compute the value of $n$ exactly by solving the implicit equation via a root finder (e.g. Brent's method). I am looking for an equation to guide my intuition and for use as a rule of thumb.
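For reference, the implicit equation above is easy to solve numerically in R with `uniroot()` (which uses Brent's method); the values of $\alpha$, $\beta$, $k$ and $\lambda$ below are just example choices:
```
alpha <- 0.05; beta <- 0.20          # type I and type II rates
k <- 3; lambda <- 0.05               # example values
type2 <- function(n)                 # left-hand side of the implicit equation
  pf(qf(1 - alpha, df1 = k, df2 = n), df1 = k, df2 = n, ncp = n * lambda)
n_exact <- uniroot(function(n) type2(n) - beta, c(k + 1, 1e6))$root
```
Comparing `n_exact` against a candidate $c/g(k,\lambda)$ over a grid of $k$ and $\lambda$ values is then a quick way to judge any proposed rule of thumb.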
| Sample size formula for an F-test? | CC BY-SA 3.0 | null | 2011-05-22T19:34:27.523 | 2018-08-23T16:00:06.400 | 2013-06-20T07:05:30.763 | 805 | 795 | [
"sample-size",
"statistical-power",
"non-central",
"f-test"
] |
11132 | 2 | null | 11127 | 88 | null | Briefly stated, this is because base-R's `manova(lm())` uses sequential model comparisons for so-called Type I sum of squares, whereas `car`'s `Manova()` by default uses model comparisons for Type II sum of squares.
I assume you're familiar with the model-comparison approach to ANOVA or regression analysis. This approach defines these tests by comparing a restricted model (corresponding to a null hypothesis) to an unrestricted model (corresponding to the alternative hypothesis). If you're not familiar with this idea, I recommend Maxwell & Delaney's excellent "Designing experiments and analyzing data" (2004).
For type I SS, the restricted model in a regression analysis for your first predictor `c` is the null-model which only uses the absolute term: `lm(Y ~ 1)`, where `Y` in your case would be the multivariate DV defined by `cbind(A, B)`. The unrestricted model then adds predictor `c`, i.e. `lm(Y ~ c + 1)`.
For type II SS, the unrestricted model in a regression analysis for your first predictor `c` is the full model which includes all predictors except for their interactions, i.e., `lm(Y ~ c + d + e + f + g + H + I)`. The restricted model removes predictor `c` from the unrestricted model, i.e., `lm(Y ~ d + e + f + g + H + I)`.
Since both functions rely on different model comparisons, they lead to different results. The question which one is preferable is hard to answer - it really depends on your hypotheses.
What follows assumes you're familiar with how multivariate test statistics like the Pillai-Bartlett Trace are calculated based on the null-model, the full model, and the pair of restricted-unrestricted models. For brevity, I only consider predictors `c` and `H`, and only test for `c`.
```
N <- 100 # generate some data: number of subjects
c <- rbinom(N, 1, 0.2) # dichotomous predictor c
H <- rnorm(N, -10, 2) # metric predictor H
A <- -1.4*c + 0.6*H + rnorm(N, 0, 3) # DV A
B <- 1.4*c - 0.6*H + rnorm(N, 0, 3) # DV B
Y <- cbind(A, B) # DV matrix
my.model <- lm(Y ~ c + H) # the multivariate model
summary(manova(my.model)) # from base-R: SS type I
# Df Pillai approx F num Df den Df Pr(>F)
# c 1 0.06835 3.5213 2 96 0.03344 *
# H 1 0.32664 23.2842 2 96 5.7e-09 ***
# Residuals 97
```
For comparison, the result from `car`'s `Manova()` function using SS type II.
```
library(car) # for Manova()
Manova(my.model, type="II")
# Type II MANOVA Tests: Pillai test statistic
# Df test stat approx F num Df den Df Pr(>F)
# c 1 0.05904 3.0119 2 96 0.05387 .
# H 1 0.32664 23.2842 2 96 5.7e-09 ***
```
Now manually verify both results. Build the design matrix $X$ first and compare to R's design matrix.
```
X <- cbind(1, c, H)
XR <- model.matrix(~ c + H)
all.equal(X, XR, check.attributes=FALSE)
# [1] TRUE
```
Now define the orthogonal projection for the full model ($P_{f} = X (X'X)^{-1} X'$, using all predictors). This gives us the matrix $W = Y' (I-P_{f}) Y$.
```
Pf <- X %*% solve(t(X) %*% X) %*% t(X)
Id <- diag(N)
WW <- t(Y) %*% (Id - Pf) %*% Y
```
Restricted and unrestricted models for SS type I plus their projections $P_{rI}$ and $P_{uI}$, leading to matrix $B_{I} = Y' (P_{uI} - P_{rI}) Y$.
```
XrI <- X[ , 1]
PrI <- XrI %*% solve(t(XrI) %*% XrI) %*% t(XrI)
XuI <- X[ , c(1, 2)]
PuI <- XuI %*% solve(t(XuI) %*% XuI) %*% t(XuI)
Bi <- t(Y) %*% (PuI - PrI) %*% Y
```
Restricted and unrestricted models for SS type II plus their projections $P_{rII}$ and $P_{uII}$, leading to matrix $B_{II} = Y' (P_{uII} - P_{rII}) Y$.
```
XrII <- X[ , -2]
PrII <- XrII %*% solve(t(XrII) %*% XrII) %*% t(XrII)
PuII <- Pf
Bii <- t(Y) %*% (PuII - PrII) %*% Y
```
Pillai-Bartlett trace for both types of SS: trace of $(B + W)^{-1} B$.
```
(PBTi <- sum(diag(solve(Bi + WW) %*% Bi))) # SS type I
# [1] 0.0683467
(PBTii <- sum(diag(solve(Bii + WW) %*% Bii))) # SS type II
# [1] 0.05904288
```
Note that the calculations for the orthogonal projections mimic the mathematical formula, but are a bad idea numerically. One should really use QR-decompositions or SVD in combination with `crossprod()` instead.
| null | CC BY-SA 3.0 | null | 2011-05-22T19:42:34.220 | 2015-12-13T02:39:13.693 | 2015-12-13T02:39:13.693 | null | 1909 | null |
11133 | 2 | null | 10890 | 13 | null | The terms endogeneity and unobserved heterogeneity often refer to the same thing but usage varies somewhat, even within economics, the discipline I most associate with the terms.
In a regression equation, an explanatory variable is [endogenous](http://en.wikipedia.org/wiki/Endogeneity) if it is correlated with the error term.
Endogeneity is often described as having three sources: omitted variables, measurement error, and simultaneity. Though it is often helpful to mention these "sources" separately, confusion sometimes arises because they are not truly distinct. Imagine a regression predicting the effect of education on wages. Perhaps our measure of education is simply the number of years someone spent in formal education, regardless of the type of education. If I have a clear idea of what type of education affects wages, I might describe this situation as measurement error in the education variable. Alternatively, I could describe the situation as an omitted variables problem (the variables indicating type of education).
Perhaps wages also affect education decisions. If wages and education are measured at the same time this is an example of simultaneity, but it too, might be reframed in terms of omitted variables.
Unobserved heterogeneity is simply variation/differences among cases which are not measured. If you understand endogeneity, I think you understand the implications of unobserved heterogeneity in a regression context.
| null | CC BY-SA 3.0 | null | 2011-05-22T19:44:47.340 | 2011-05-22T19:44:47.340 | null | null | 3748 | null |
11134 | 1 | 11140 | null | 4 | 1755 | I was playing around with writing code for Monte Carlo integration of a function defined in spherical coordinates. As a first quick test, I wrote code to obtain the solid angle under an angle $\theta_m$. For two random numbers $u$ and $v$ in $[0,1)$, I generate a uniform (homogeneous) random sampling of the sphere using
$$\phi=2\pi u$$
$$\theta = \arccos(1-2v)$$
For $N$ generated points, I have $M$ points for which $\theta < \theta_m$. My first idea was that since I have an homogeneous sampling I should have obtained the correct solid angle $\Omega=2 \pi (1-\cos (\theta_m))$ simply as $4\pi\times M/N$.
Actually, it looks like the correct result comes out only if I use:
$$\Omega=\sum_{i=1}^M \frac{4\pi}{N} 2 \cos(\theta_i)$$
I cannot see why this should be correct.
The probability density function in $\theta$ is $\tfrac{1}{2}\sin(\theta)$, so I would rather expect to normalize each point of the sum by this function, but that doesn't work. What am I doing wrong, and how could I justify the cosine? Many thanks!
| Monte carlo integration in spherical coordinates | CC BY-SA 3.0 | null | 2011-05-22T20:31:12.243 | 2011-05-23T06:19:34.313 | 2011-05-23T06:19:34.313 | 2116 | 4706 | [
"sampling",
"monte-carlo",
"integral"
] |
11135 | 1 | null | null | 5 | 2916 | I am doing a meta-analysis for the first time and have a few basic questions regarding the statistical analysis.
Let's say I have one study where the primary outcome (thrombosis) in the 2 treatment groups (intervention v. placebo) was compared using the Mantel-Haenszel chi-square test. It does not report df in the article (is this something I can deduce?). Then it tells me that "Treatment effects were expressed as a weighted average of the strata-specific relative risks." Is this something I need to consider for the meta-analysis statistical calculation? If so, what does it mean?
In the results we learn "Among the 866 participants who had patency assessed, the primary outcome of fistula thrombosis at 6 weeks occurred in 53 participants (12.2%) in the clopidogrel group compared with 84 participants (19.5%) in the placebo group (relative risk, 0.63; 95% CI, 0.46-0.97; P=.018)."
What do I do with all this information? I tried using a computerized program, but I didn't know what to fill all the blanks in with...
In the second study, we have the following:
"The hazard ratio was 0.81 (95% CI, 0.47 to 1.40) in favor of aspirin and clopidogrel therapy, but the reduction was not significant (P = 0.45). Although the event rates in the two treatment arms converged, there was no evidence that the proportional-hazards assumption was violated (P = 0.53). The annual hazard rate for thrombosis among participants receiving placebos was 0.59 (95% CI, 0.39 to 0.87), compared with 0.47 (95% CI, 0.31 to 0.71) among participants receiving aspirin and clopidogrel. The absolute risk reduction for the group receiving aspirin and clopidogrel was 0.03 (95% CI, -0.19 to 0.07), and the number needed to treat was 33.3 (95% CI, 14.9 to >=1000)."
Regarding analysis, it states: "The cumulative incidence of the first episode of thrombosis was estimated with the Kaplan-Meier method, and differences in rates between treatment groups were tested with the log rank test. The cumulative incidence of the first bleeding event was analyzed in a similar manner."
What do I do with all this information? Thank you for any help you can provide.
| How to extract data from published articles (RCTs) to do a meta-analysis? | CC BY-SA 3.0 | null | 2011-05-22T21:16:30.673 | 2012-03-22T14:57:44.067 | 2011-05-23T01:31:35.270 | 183 | 4707 | [
"meta-analysis",
"clinical-trials"
] |
11136 | 1 | null | null | 5 | 979 |
### Context
I have a multivariate dataset with a test group and three control groups.
I was thinking that the best way to determine if and how the test group differed from all of the control groups would be to perform a MANOVA, then perform contrast analysis between the test group and all of the control groups.
### Main Question
I cannot seem to find a good description of how contrast analysis for MANOVA is performed.
Every tutorial/online class I can find, just talks about throwing the contrast into SPSS, or the univariate case.
- Can anyone point me to a good tutorial on contrasts for MANOVA?
### Questions
- Is MANOVA with contrasts the appropriate way of answering my research question?
- Would it be better to perform repeated Hotelling's tests on the test group vs each control group, and just use a Bonferroni correction?
| Tutorial for performing Contrasts for MANOVA | CC BY-SA 3.0 | null | 2011-05-22T21:59:09.607 | 2017-04-28T07:45:25.343 | 2017-04-28T07:45:25.343 | 28666 | 3629 | [
"manova",
"contrasts",
"hotelling-t2"
] |
11137 | 2 | null | 11109 | 3 | null | Be careful with this warning message from R. Take a look at this [blog post](http://www.stat.columbia.edu/~cook/movabletype/archives/2011/05/whassup_with_gl.html) by Andrew Gelman, and you will see that it is not always a problem of perfect separation, but sometimes a bug with `glm`. It seems that if the starting values are too far from the maximum-likelihood estimate, it blows up. So, check first with other software, like Stata.
If you really have this problem, you may try to use Bayesian modeling, with informative priors.
But in practice I just get rid of the predictors causing the trouble, because I don't know how to pick an informative prior. But I guess there is a paper by Gelman about using an informative prior when you have this problem of perfect separation. Just google it. Maybe you should give it a try.
| null | CC BY-SA 3.0 | null | 2011-05-23T00:00:20.187 | 2013-09-01T17:26:14.537 | 2013-09-01T17:26:14.537 | 17230 | 3058 | null |
11138 | 1 | null | null | 3 | 3630 | Given an hierarchical clustering of data points, some of which are labeled, are there good ways to use the tree/dendrogram to make predictions for the unlabeled points?
One approach might be to find the "best" place to cut the tree so that clusters match labels.
I'd be especially interested in efficient ways to cut each cluster at a different height. But I'd also be interested in approaches that don't use "hard" cuts to make the predictions.
| Using hierarchical clustering to classify? | CC BY-SA 3.0 | null | 2011-05-23T00:40:54.910 | 2011-06-22T19:02:45.040 | 2011-05-23T01:28:43.287 | 183 | 4711 | [
"clustering",
"classification"
] |
11139 | 2 | null | 10003 | 8 | null | As long as the sponsors of the site are committed to keeping the site running, it would be premature to declare it 'dead.' It is not out of the question that StatProb.com may experience a revival in the future. In judging the longevity of a resource like StatProb.com, the short-term trends are irrelevant. Instead, the right questions to ask are:
- Is the principle behind a site like StatProb.com a sound one? Is the idea of a free access peer-reviewed encyclopedia an idea that will grow in relevance over time, or diminish?
- If the answer to the first question is "Yes", then is it likely that an alternative to the site will arise?
I think the answer to the first question is Yes. The field of statistics is rapidly growing and the demand for online statistical answers is growing, as evidenced by this site (stats.SE). The value of online encyclopedias has been proven by the success of Wikipedia. Yet because Wikipedia is open to everyone, peer-reviewed alternatives to Wikipedia will eventually be needed.
As a site like StatProb.com gains more articles, it will gain more users, and as it gains more users, it will increase its public profile. As it increases its public profile, more researchers will be interested in contributing to the site. That StatProb.com is off to a slow start gives no indication of where it may one day end up.
I think the answer to the second question is No, because Springer.com has taken the lead in the online academic publishing world and it seems unlikely that it will give up that lead. Any prospective competitor to StatProb.com will need a strong advantage to compensate for the brand-name recognition that Springer possesses.
I checked the site, and recently (5/11) a new article has appeared on 'Strong Mixing Conditions.' As long as the site has the name of Springer attached to it, it will have some credibility in the academic world (whether it deserves it or not!) and a smart researcher can take advantage of this credibility. I imagine it would be a useful place to write background information for you or a colleague to cite in your own papers. I will keep the site StatProb in mind as a potential resource to that end, and I upvoted this question for making me aware of the site as a potential resource for my own academic career.
| null | CC BY-SA 3.0 | null | 2011-05-23T01:11:57.523 | 2011-05-23T02:51:24.220 | 2011-05-23T02:51:24.220 | 3567 | 3567 | null |
11140 | 2 | null | 11134 | 5 | null | Your calculations are correct and you should get the right Monte Carlo estimate from your formula $4\pi M/N$ with the mapping $(u,v)\to(\phi,\theta)$ you presented. So I would guess it's an error in your code and not in your reasoning. You shouldn't need the cosine weighting factor.
Perhaps if you show your code we can spot the error.
Also, on a related note, there's a really nice paper by James Arvo called [Stratified Sampling of 2-Manifolds](http://www.ics.uci.edu/~arvo/papers/notes2001.pdf) which explains how to construct an area-preserving mapping from the unit-square to a 2-manifold in general, but also specifically for a (hemi-)sphere.
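As a quick numerical check of the estimator in the question (a sketch with $\theta_m = \pi/3$, so the exact solid angle is $2\pi(1-\cos\theta_m) = \pi$):
```
set.seed(1)
N <- 1e6
u <- runif(N); v <- runif(N)
theta   <- acos(1 - 2 * v)            # phi = 2*pi*u plays no role in this test
theta_m <- pi / 3
M <- sum(theta < theta_m)
est   <- 4 * pi * M / N               # the simple M/N estimator
exact <- 2 * pi * (1 - cos(theta_m))  # = pi
c(estimate = est, exact = exact)      # agree to Monte Carlo error, no cosine weight
```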
| null | CC BY-SA 3.0 | null | 2011-05-23T01:15:42.030 | 2011-05-23T01:15:42.030 | null | null | 4360 | null |
11141 | 1 | 11144 | null | 7 | 3877 | I know I'm asking a lot of questions these days! Sorry about that, but I'm trying to work through my grad thesis data collected over the last 4 years, and am repeatedly tripping on my beginner's grasp on stats.
The background:
My basic question is the same as described in [another question on this site](https://stats.stackexchange.com/questions/11123/glm-for-comparing-2-populations-with-binomial-data). In brief, I'm trying to sort out a reasonable way to compare good years vs. poor between 2 sites for a bunch of variables (breeding metrics of birds). I'm only interested in comparing good years between sites and poor years between sites. I'm not interested (much) in whether good years at one site are better than poor years at the other site (so I guess that means I'm not interested in interactions? Although what I want to get at is whether Site within Year-type is sig. different, so perhaps interactions are important?). Though the metrics are related in that they're all from the same 2 populations, the variables I'm looking at are independent otherwise.
This question is about comparing nest initiation date (converted to ordinal day, and therefore ordinal data) between these 2 sites and 2 year-types. The data are skewed and the variances heterogeneous from a glance at a histogram and boxplots. My sample size ranges (for good within-site and poor within-site) from n = 69 to 864. I thought I should use a non-parametric 2-way ANOVA and did some searching on those. I've seen Friedman's test offered up as a non-parametric alternative to 2-way ANOVAs, but from what I gather, it's only for repeated-measures data. I found some who recommended that you can rank-transform data and run the 2-way ANOVA on that.
To do this, I just converted the days within each site and within each year-type to ranks by making the first day (say, 154) = 1, day 155 = 2 etc. , but this really doesn't change anything in my data since I have ordinal dates anyway.
So, what can I do? I've run the 2-way ANOVA on ranks and on the original centred data. While the numbers aren't the same from each test, the ultimate result is: the Holm-Sidak multiple comparisons show that within each year-type of good and poor, the sites are significantly different (p<0.001). Can I accept these results? Or is there a better way of analyzing these data?
I'm currently using Sigmaplot 11 (with built-in Sigmastat functionality) for the basic stuff, and turning to R whenever it can't handle what I need to do.
Thanks for reading, and for any tips you can provide!
Mog
EDIT: I'm also interested in what I can do with continuous data that is non-normal (skewed) and has unequal variances. Is rank-transforming inappropriate for that as well?
| Rank transformed 2-way ANOVA | CC BY-SA 3.0 | null | 2011-05-23T01:59:04.253 | 2011-05-23T16:51:00.203 | 2017-04-13T12:44:45.783 | -1 | 4238 | [
"anova",
"continuous-data",
"ordinal-data",
"ranks"
] |
11142 | 1 | 16676 | null | 25 | 3509 | I participate in predictive modeling competitions on [Kaggle](http://www.kaggle.com/), [TunedIt](http://tunedit.org/), and [CrowdAnalytix](http://www.crowdanalytix.com/). I find that these sites are a good "workout" for statistics/machine learning.
- Are there any other sites I should know about?
- How do you all feel about competitions where the host intends to profit from competitors' submissions?
/edit: Here's a more complete list:
[Kaggle](http://www.kaggle.com/)
[TunedIt](http://tunedit.org/)
[Clopinet](http://clopinet.com/challenges/)
[KDD Cup](http://www.sigkdd.org/kddcup/)
[Innocentive](https://www.innocentive.com/)
[Crowdanalytix](http://www.crowdanalytix.com/welcome)
[Topcoder](http://community.topcoder.com/coeci/nitrd/)
| Sites for predictive modeling competitions | CC BY-SA 3.0 | null | 2011-05-23T02:47:10.030 | 2015-12-11T23:22:27.570 | 2013-01-31T17:20:41.887 | 2817 | 2817 | [
"machine-learning",
"predictive-models"
] |
11144 | 2 | null | 11141 | 11 | null | The proportional odds (PO) ordinal logistic model is a generalization of the Wilcoxon and Kruskal-Wallis tests, allowing for covariates, interactions, and anything else you can do in a regression model for a univariate response. A two-way ANOVA on ranks is not based on strong statistical principles.
One of many computational tools for the PO model is the `lrm` function in the R rms package. The rms package's `contrast`, `anova`, `summary`, `plot`, and `nomogram` functions can help.
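For illustration only (simulated data, made-up variable names), the same proportional-odds model can be fit with `polr()` from the recommended MASS package; with the rms package the call would be `lrm()` with the same formula:
```
library(MASS)
set.seed(7)
n <- 400
site     <- factor(rep(c("A", "B"), each = n / 2))
yeartype <- factor(rep(c("good", "poor"), n / 2))
latent <- 0.8 * (site == "B") + 0.5 * (yeartype == "poor") + rlogis(n)
y <- cut(latent, quantile(latent, 0:5 / 5), include.lowest = TRUE,
         ordered_result = TRUE)       # 5 ordered categories
fit <- polr(y ~ site + yeartype, Hess = TRUE)
summary(fit)                          # PO log-odds coefficients and cutpoints
```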
| null | CC BY-SA 3.0 | null | 2011-05-23T03:42:57.647 | 2011-05-23T06:21:18.493 | 2011-05-23T06:21:18.493 | 2116 | 4253 | null |
11145 | 1 | null | null | 1 | 334 | Suppose I thought that ingesting greater than 100 mg of chemical X annually noticeably decreased one's weight. Also, I had data (from a "natural" experiment) from 100 people (some male and some female) measuring how much of chemical X they had eaten, and their weights at the time of the experiment.
- What is the best way to test whether or not eating said amount of X leads to weight loss?
| Determining causality from a natural experiment | CC BY-SA 3.0 | null | 2011-05-23T03:43:22.740 | 2011-05-25T01:35:52.533 | 2011-05-23T07:21:07.450 | 183 | 4713 | [
"hypothesis-testing"
] |
11146 | 2 | null | 11145 | 8 | null | By a "natural" experiment you mean that you do not control, by randomization, say, the amount of chemical X that each subject takes. This is also often called an observational study. Do you know the difficulties in drawing conclusions about cause and effect from such data?
It's really not a question about statistical tests - they are blind to causality. You can use standard methods to test if intake of chemical X is associated with weight loss correcting for gender if you like, but that does not in itself prove that intake of X causes weight loss. An association might be viewed as evidence in the direction of a causal effect, but how strong the evidence is, and how serious it will be taken, is much more a question of understanding the subject matter than the statistical test.
There is a literature on causal inference, with which I am only superficially familiar, that gives a more nuanced picture of what you can say about causal effects and how you can say it; a basic premise, however, is a set of untestable assumptions.
If you can provide more details about what you know and what you want, I might be able to give you some appropriate references. There are also other, related, questions with answers [here](https://stats.stackexchange.com/questions/534/under-what-conditions-does-correlation-imply-causation) and [here](https://stats.stackexchange.com/questions/3400/from-a-statistical-perspective-can-one-infer-causality-using-propensity-scores-w).
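To illustrate why a test of association cannot, by itself, establish causation, here is a small simulation sketch (plain Python, all numbers and the "health-consciousness" mechanism invented): a confounder drives both X intake and weight, so the two are strongly correlated even though X never enters the weight equation.

```python
import random

random.seed(0)
n = 1000
# Hypothetical confounder, e.g. health-consciousness: it raises intake of
# chemical X and lowers weight, but X itself has no effect on weight here.
health = [random.gauss(0.0, 1.0) for _ in range(n)]
x_intake = [100 + 40 * h + random.gauss(0.0, 10.0) for h in health]
weight = [70 - 5 * h + random.gauss(0.0, 2.0) for h in health]  # no x_intake term

def pearson_r(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(x_intake, weight)  # strongly negative despite no causal effect
```

A regression of weight on intake in this data would "detect" weight loss from X, which is exactly the trap an observational design can set.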
| null | CC BY-SA 3.0 | null | 2011-05-23T04:57:46.807 | 2011-05-23T05:03:41.510 | 2017-04-13T12:44:29.013 | -1 | 4376 | null |
11147 | 2 | null | 11145 | 5 | null | It sounds like everyone in your sample has ingested X. If your hypothesis is that ingesting X causes weight loss, you need a sample of people, some of whom have ingested X and some of whom haven't. If your hypothesis is that the more X you ingest, the more weight you lose, then of course it's fine if your entire sample has ingested X. You could also split your sample into two groups if you thought that only those who ingested > 100 mg would lose weight, but there are a lot of potential problems there and I wouldn't recommend it.
From the sound of it, what you probably want to start with is a simple regression to see whether there is any connection between X and weight. You can simply graph your sample, with weight lost (or gained) on one axis and amount of X ingested on the other. A stat package like SPSS or R will tell you the correlation between X and weight loss (r) and how much of the variance in weight loss is accounted for by X (r^2). It's always a good idea to have a control group, though, to make sure the weight loss can't be attributed to something else.
There are other things you can do depending on what data you have, but this is a good place to start I think.
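As a sketch of the computation the stat package performs (plain Python, with made-up numbers standing in for the 100 subjects; negative weight change means weight lost):

```python
# Hypothetical data: annual intake of chemical X (mg) and weight change (kg).
intake = [10, 50, 80, 120, 150, 200, 250, 300]
weight_change = [1.0, 0.5, 0.2, -1.0, -1.8, -2.5, -3.1, -3.9]

def pearson_r(xs, ys):
    """Pearson correlation coefficient r."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(intake, weight_change)
r_squared = r ** 2  # share of weight-change variance accounted for by intake
```

With data this linear, r comes out strongly negative: more X goes with more weight lost in the (invented) sample.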
| null | CC BY-SA 3.0 | null | 2011-05-23T05:51:29.287 | 2011-05-23T05:51:29.287 | null | null | 1977 | null |
11148 | 2 | null | 11142 | 4 | null | Here are some nice datasets: [http://archive.ics.uci.edu/ml/](http://archive.ics.uci.edu/ml/)
Update: The question has changed since I gave this answer.
| null | CC BY-SA 3.0 | null | 2011-05-23T06:30:18.670 | 2011-10-07T18:00:33.420 | 2011-10-07T18:00:33.420 | 2860 | 2860 | null |