Columns: Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags
11256
1
11260
null
7
4919
I need to do a regression with a non-normal DV for which no proper non-linear transformation (that I know of) exists: ![enter image description here](https://i.stack.imgur.com/7fPjE.png) It is a score ranging from 10 to 50, with a high peak at 10, a drop at 11, and a regular decline from 11 to 50. The distribution of residuals is not normal. ![enter image description here](https://i.stack.imgur.com/pSjZ3.png) There are roughly 500 subjects in the study, and the covariates are all dichotomous. I thought of using n-tiles and performing an ordinal regression on the resulting variable, but then I end up with a high proportion of empty cells -- not because of a low sample size, but rather because of the several covariates that need to be in the model. The best I could do for now is to remove covariates and use tertiles (instead of, say, quintiles) to minimize the number of empty cells. I am not sure, though, to what extent avoiding empty cells is THAT important. Compared to the original analysis on a raw score, regressing on tertiles with fewer covariates seems like a lot of sacrifices. Based on the decent sample size, and given that the distribution of residuals is not THAT far from normal, I am wondering if the results of the regression would be reliable as they are.
Regression on a non-normal dependent variable
CC BY-SA 3.0
null
2011-05-26T04:36:57.507
2011-05-26T12:42:05.990
2011-05-26T06:03:45.123
2116
4754
[ "regression", "ordinal-data" ]
11257
1
null
null
4
5074
### Context: I am analysing some impact assessment data (measuring invertebrate richness in response to pollution), but they are unbalanced - there are no data for every site at every sampling occasion, and there were more datapoints recorded after the impact than before it. I am new to R, and have gathered through reading on this site and others that the standard ANOVA functions `aov()` and `ezANOVA()` can't deal with unbalanced designs. I assume I should instead be using a package like `lme4`. However, I am not sure how to structure my data or program the analysis. One of the problems is that I'm not sure how to incorporate sampling dates as the repeated-measures aspect of my design. My data has 5 columns: Site code, Date, BeforeAfter, ControlImpact, Richness. ### Questions: - How should I set up my data for conducting repeated measures analysis with unbalanced data in R? - Should I use lme4 or some other package?
How do I set up an unbalanced repeated measures analysis in R?
CC BY-SA 3.0
null
2011-05-26T04:38:23.123
2011-10-24T01:00:49.183
2011-07-25T22:05:14.627
930
4758
[ "r", "mixed-model", "repeated-measures" ]
11258
2
null
11253
8
null
Are you familiar with [Simpson's paradox](http://en.wikipedia.org/wiki/Simpson%27s_paradox)? This would seem to be what you're observing here. Edit: I didn't answer your question :) What exactly you should do is to some degree context dependent (Are the groups meaningful? Does this represent a problem in the study design? etc). At the very least you should report both results IMO.
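A tiny numeric illustration of the paradox, using the classic kidney-stone treatment counts (any made-up counts with the same pattern would do): one arm wins within every stratum yet loses once the strata are pooled.

```python
# Classic kidney-stone counts: (successes, trials) per stratum and arm.
strata = {
    "small": {"A": (81, 87),   "B": (234, 270)},
    "large": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, trials):
    return successes / trials

# Treatment A wins within every stratum...
within = all(rate(*arms["A"]) > rate(*arms["B"]) for arms in strata.values())

# ...yet loses once the strata are pooled:
pooled_a = rate(81 + 192, 87 + 263)   # 273/350, about 0.78
pooled_b = rate(234 + 55, 270 + 80)   # 289/350, about 0.83
print(within, pooled_a < pooled_b)    # True True -> Simpson's paradox
```

The reversal happens because the two arms face very different mixes of easy and hard strata, which is exactly why reporting both the stratified and pooled results matters.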
null
CC BY-SA 3.0
null
2011-05-26T05:12:45.137
2011-05-26T05:12:45.137
null
null
26
null
11259
1
null
null
8
237
[Related to my earlier question](https://stats.stackexchange.com/questions/11256/regression-on-a-non-normal-dependent-variable), I need to perform regression on a skewed dependent variable (n = 500). Since the residuals weren't normally distributed, I was able to transform the DV non-linearly in a way that it now approaches normality. Residuals are normal when using this transformed variable as a dependent variable. For the two models, the p-values for the various predictors are very much alike, and the relative sizes of the coefficients are very similar as well. - To what extent are those two facts indicators (or not) of the reliability of the coefficients and p-values obtained in the first model (using raw data)?
Does similarity of coefficients and p-values regardless of whether dependent variable is transformed suggest untransformed model is reliable?
CC BY-SA 3.0
null
2011-05-26T05:17:10.670
2011-09-28T15:44:29.120
2017-04-13T12:44:52.660
-1
4754
[ "regression" ]
11260
2
null
11256
7
null
The normality assumption is a convenient property of a model's residuals: it enables correct inferences about the estimated parameters, and the critical values of many other tests also depend on it (so some corrections should be made, or you may roughly apply stricter rule-of-thumb criteria, widening the acceptable range of your tests). However, its violation doesn't ruin the regression estimators. Thus the model may (you still need to check the other assumptions) produce well-behaved predictions, but data mining and hypothesis testing would be a bit more difficult. At this point I do agree with Huber that you need to clarify the purpose of the model. --- Regarding some tips: at first glance it seems that your distribution, after a $Y-10$ transformation, could be approximated by some truncated version of a continuous distribution: exponential ([Gamma](http://en.wikipedia.org/wiki/Gamma_distribution)), [log-normal](http://en.wikipedia.org/wiki/Log-normal_distribution), [Pareto](http://en.wikipedia.org/wiki/Pareto_distribution), or some other. So in the log-normal case you still may move to something close to normality. Another option could be to try fitting a combination of a [generalized logistic function](http://en.wikipedia.org/wiki/Generalised_logistic_function) and [logistic regression](http://en.wikipedia.org/wiki/Logistic_regression). Since you DO know the upper and lower limits, this seems feasible.
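A small sketch of the log-normal route mentioned above: subtract the floor of 10, take logs, and the skewness collapses. The data here are simulated for illustration (and they ignore the question's point mass at exactly 10, which a shifted log cannot handle):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical DV: floor at 10 plus a right-skewed (lognormal) tail.
y = 10 + rng.lognormal(mean=1.0, sigma=0.6, size=500)

skew_before = stats.skew(y)              # strongly right-skewed
skew_after = stats.skew(np.log(y - 10))  # exactly normal by construction
print(round(skew_before, 2), round(skew_after, 2))
```

In real data the shift and the distributional family would have to be checked rather than assumed, e.g. with a QQ plot of `log(y - 10)`.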
null
CC BY-SA 3.0
null
2011-05-26T05:26:42.400
2011-05-26T05:26:42.400
null
null
2645
null
11262
1
null
null
3
33
This question is motivated by an issue regarding network motifs. To determine if a (connected, induced) subgraph $H$ occurs with significantly high frequency in an input network $G$, we generate an ensemble of comparison networks similar to $G$ and count the number of occurrences of $H$ in them. Thus we obtain the number of copies of $H$ in $G$ (call this frequency $f$), and the number of copies of $H$ in the comparison networks (call these frequencies $x=(x_1,x_2,...,x_n)$). A trade-off arises: we want $n$ to be large in order to give a better comparison, and $n$ to be small to be able to actually perform the computation. I'm concerned, however, that sometimes $n$ is chosen too small. Question: How can we determine if $n$ is too small? I.e., how can we determine if we have sufficiently many comparisons? My feeling is to compare $f$ with the mean of the datasets $x$ and `c(x,ceiling(mean(x)))` [in R notation]. I.e., in the second case, we add an artificial point to the dataset, which is close to the mean of $x$, integral and non-zero. The idea is: if we had instead chosen to use $n+1$ comparison networks, and the new comparison graph turned out to have `ceiling(mean(x))` copies of $H$, would we have come to a different conclusion? If so, this would indicate that $n$ is too small.
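One way to make this heuristic concrete in code, with a simple empirical p-value standing in for whatever decision rule is actually used (the p-value rule and the 0.05 threshold are my stand-ins, not part of the question):

```python
import math

def empirical_p(f, x):
    """Fraction of comparison networks with at least f copies of H."""
    return sum(1 for xi in x if xi >= f) / len(x)

def n_seems_too_small(f, x, alpha=0.05):
    """Would one extra comparison network with a near-average
    count flip the significance verdict?"""
    x_plus = x + [math.ceil(sum(x) / len(x))]
    return (empirical_p(f, x) < alpha) != (empirical_p(f, x_plus) < alpha)

print(n_seems_too_small(100, [1] * 50))          # comfortably significant
print(n_seems_too_small(50, [0] * 20 + [2000]))  # borderline: verdict flips
```

The second call flips because the single huge count drags the mean above $f$, so the artificial point itself exceeds $f$ and nudges the empirical p-value across the threshold.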
Methods for checking if the number of comparisons is sufficient
CC BY-SA 3.0
null
2011-05-26T05:50:38.113
2021-11-12T22:32:35.583
2021-11-12T22:32:35.583
11887
386
[ "statistical-significance", "networks" ]
11263
1
11295
null
1
2345
I'm sure this is a pretty standard statistics question, but I'm no expert... I'm running an A/B test on my website to see if a change results in users adding more content. So there are 2 basic things I'm looking at: the # of users adding at least 1 piece of content, and the total # of pieces of content added by all users. I really care much more about the total # of pieces of content added by all users. I'll make the change permanent if I know it's at least not worse than the existing site. So I need to know how many samples (users logging in) I need to have a 95% confidence level. Normally I can use [one of the many web A/B test calculators](http://www.prconline.com/education/tools/statsignificance/index.asp) that use chi-square and similar tests to figure out if my test is statistically significant or to figure out the sample size I need. In the first case of seeing how many users added content, I can do this. But to see the total pieces of content added among all users, I can't use those tests, as there isn't a "conversion" event. So what's the best way to see what sample size I need to be statistically significant at a 95% confidence level? And how can I see if my test shows whether there was a difference? Again, I just want to make sure the new change isn't worse (or isn't "much" worse; do I need to, and how should I, define "much"?).
Statistical significance when A/B test has multiple values
CC BY-SA 3.0
null
2011-05-26T06:28:43.387
2012-11-11T21:32:33.883
null
null
4759
[ "confidence-interval", "statistical-significance", "sample-size", "chi-squared-test", "ab-test" ]
11264
2
null
11263
-1
null
Applying the binomial distribution, you can calculate the variance from the number of events. The formula for the standard deviation of a binomial distribution is ``` sigma = sqrt( n * p * (1-p) ) ``` where n is the number of all events and p is the probability of the event you are observing. To be 95% sure, you have to be at least 2 sigma away from the previous result.
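A quick sketch of that formula in use; the visitor count and conversion rate below are hypothetical:

```python
import math

def binomial_sigma(n, p):
    """Standard deviation of the number of successes in n Bernoulli(p) trials."""
    return math.sqrt(n * p * (1 - p))

# Hypothetical example: 10,000 visitors with a 5% baseline conversion rate.
n, p = 10_000, 0.05
sigma = binomial_sigma(n, p)     # about 21.8 conversions
expected = n * p                 # 500 conversions
low, high = expected - 2 * sigma, expected + 2 * sigma
print(round(low), round(high))   # roughly 456 to 544
```

A B-variant landing outside that band would then be "at least 2 sigma away" in the sense of the answer; note this only covers the conversion-count metric, not the total-content metric from the question.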
null
CC BY-SA 3.0
null
2011-05-26T07:27:25.430
2011-05-26T07:33:00.320
2011-05-26T07:33:00.320
4738
4738
null
11265
2
null
11246
6
null
Just to add to Frank's points and paint a somewhat finer picture: CART/RPART is indeed highly exploratory, and adding a p-value is difficult. I have seen some rare cases where people tried to use bootstrapping to get such a p-value, but I agree with Frank that it's not worth the effort. As for combining statistical inference with recursive partitioning more generally: the CTree and MOB methods implemented in the "party" package, as well as several other tools outside R (especially the work of Loh & co-authors), try to combine standard statistical tests (nonparametric in the case of CTree, parametric for MOB) for growing the trees. They also control their error level in the sense of a closed testing procedure. However, the inference for the final fitted tree is still not trivial. The situation is similar to a regression model where you have selected interactions between your regressors in a forward search (and we know how Frank feels about that). Hence, the "party" package also has no anova() methods. We do provide AIC() for "mob" objects, though. It is not strictly valid (because we haven't done full optimization of the log-likelihood but only a forward search) but it would be conservative. So the error is in the preferable direction.
null
CC BY-SA 3.0
null
2011-05-26T07:49:23.200
2011-05-26T07:49:23.200
null
null
4760
null
11266
1
11272
null
14
1031
I would like (in the distant future) to teach statistics to kids. For that matter, I'd be happy to know of software (obviously I am tending towards FOSS), or webapps, that are helpful in explaining statistical/probabilistic ideas to kids (or adults for that matter). This can be used either by the instructor, the kids, or both. Suggested format of the answer: Software name, what it helps teach, who should use it, link.
Software (or webapps) for teaching kids statistics or probability?
CC BY-SA 3.0
null
2011-05-26T08:32:34.833
2016-12-21T01:56:36.627
2016-12-21T01:56:36.627
22468
253
[ "probability", "references", "software", "teaching" ]
11267
2
null
11266
5
null
GGobi. Helps teach interactive data visualization, including histograms, scatter plots (2d, 3d, multi-d), with brushing/linking etc. Mostly for the teacher - less for the children (but still possible) [http://www.ggobi.org/](http://www.ggobi.org/)
null
CC BY-SA 3.0
null
2011-05-26T08:34:15.173
2011-05-26T08:34:15.173
null
null
253
null
11268
2
null
11266
7
null
RcmdrPlugin.TeachingDemos: Rcmdr Teaching Demos Plug-In Extends R with Rcmdr and gives demos for probability and statistics ideas. - Interactive: correlation and linear regression. - Static: power of a test, confidence interval, central limit theorem. Mostly for the teacher - less for the children [http://cran.r-project.org/web/packages/RcmdrPlugin.TeachingDemos/index.html](http://cran.r-project.org/web/packages/RcmdrPlugin.TeachingDemos/index.html)
null
CC BY-SA 3.0
null
2011-05-26T08:42:20.983
2011-05-26T09:06:15.867
2011-05-26T09:06:15.867
253
253
null
11269
2
null
11266
4
null
animation: A Gallery of Animations in Statistics and Utilities to Create Animations An R package. Enables the teacher to create many animations that can be made into webapps. Great for a teacher creating a webapp for children. [http://cran.r-project.org/web/packages/animation/index.html](http://cran.r-project.org/web/packages/animation/index.html) Examples: [http://animation.yihui.name/](http://animation.yihui.name/)
null
CC BY-SA 3.0
null
2011-05-26T08:43:59.467
2011-05-26T08:43:59.467
null
null
253
null
11270
1
null
null
2
265
My study is about customers’ perception of the effectiveness of a Malaysian corporate Weblog. In my study, customers’ perception of effectiveness is defined as perceived ease of use, perceived interactivity, and perceived trustworthiness; whereas the corporate Weblog is defined as Weblog publishing software, Weblog comment system, and Weblog blogroll and hyperlink. In my study, I want to determine whether there is a relationship between: a) Customers’ perception of ease of use and Weblog publishing software; b) Customers’ perception of interactivity and Weblog comment system; and c) Customers’ perception of trustworthiness and Weblog blogroll and hyperlink. In the questionnaire, each section (perceived ease of use; perceived interactivity; perceived trustworthiness; Weblog publishing software; Weblog comment system; and Weblog blogroll and hyperlink) consists of 5 questions. As the questionnaire is designed on a 7-point Likert scale (ordinal measurement), I’m using non-parametric correlation (Spearman’s correlation) to analyze the data. Besides that, since chapter 1 of my dissertation states some research hypotheses, I selected one-tailed significance. ### Questions: - Is Spearman correlation the appropriate method to analyse my data? - Because each section has 5 questions (eg: Perceived ease of use and Weblog publishing software), the statistical output has 10 correlation coefficients, but how should I interpret these 10 correlation coefficients into one conclusion as to whether there is a relationship between customers’ perception of ease of use and Weblog publishing software? - If I wanted to do a regression for customers’ perceived ease of use (5 questions) and Weblog publishing software (5 questions), which regression should I use?
How to analyse a study looking at relationship between one set of five items (predictors) and a second set of five items (outcomes)
CC BY-SA 3.0
null
2011-05-26T10:13:55.363
2017-03-06T16:52:53.847
2017-03-06T16:52:53.847
101426
4762
[ "regression", "spearman-rho" ]
11271
2
null
11257
3
null
I believe that your scenario is generally described as one of missing data, not as an unbalanced design, which is usually reserved for cases of unequal numbers of observations between independent groups. `ezANOVA()` from the [ez package](http://cran.r-project.org/web/packages/ez/index.html) can handle unbalanced designs, but cannot handle missing data. `lmer()` from the [lme4 package](http://cran.r-project.org/web/packages/lme4/index.html) can handle missing data. You can either use `lmer()` directly, as in: ``` my_lmer = lmer( formula = richness ~ (1|site) + BeforeAfter*ControlImpact , data = my_data_frame , family = gaussian #or change to whatever model of error is appropriate ) anova(my_lmer) ``` Or you could check out the ezMixed function from the ez package, which wraps lmer and produces likelihood ratios (a superior metric of evidence): ``` my_mix = ezMixed( data = my_data_frame , dv = .(richness) , random = .(site) , fixed = .(BeforeAfter,ControlImpact) , family = gaussian ) print(my_mix$summary) ``` Finally, be careful with Date as a predictor; it sounds like it may be confounded with BeforeAfter.
null
CC BY-SA 3.0
null
2011-05-26T11:05:46.140
2011-05-26T11:05:46.140
null
null
364
null
11272
2
null
11266
3
null
[Videos](http://understandinguncertainty.org/view/videos) and [animations](http://understandinguncertainty.org/view/animations) from Understanding Uncertainty website.
null
CC BY-SA 3.0
null
2011-05-26T12:05:30.677
2011-05-26T12:05:30.677
null
null
22
null
11273
1
null
null
6
9242
In principle a simple question: What is the pull distribution? (All I could find out is that it is the error-weighted distribution of estimators around the true value.) I'd be interested in the precise mathematical definition, how, why, and when to use it, what is it expected to look like, and if both estimator values and "true" value have errors associated to them, do these add up in quadrature. PS: Feel free to add / change tags, I didn't find any good ones.
What is the pull distribution?
CC BY-SA 3.0
null
2011-05-26T12:31:52.523
2012-04-21T19:07:03.437
2011-05-26T12:54:14.570
1512
1512
[ "distributions", "data-mining" ]
11274
2
null
11270
3
null
About your first question (using Spearman rank correlation with ordinal scales), I think you will find useful responses on this site (search for spearman, likert, ordinal or scale). About your second question: As I understand the situation, for each dimension (what you call a "section"), you have a set of five questions scored on a 7-point Likert-type scale. If those five questions all define a single construct, that is, if we can consider that they form a unidimensional scale (such an assumption might be checked, anyway), why don't you use a summated scale score (add up the individual responses to the five questions)? This way, your problem would vanish because you would then have one single estimate of the correlation between, say, Perceived ease of use and Weblog publishing. Another option is to use [Canonical Correlation Analysis](http://en.wikipedia.org/wiki/Canonical_correlation) (CCA), which allows one to build maximally correlated linear combinations of a two-block data structure, as described in [this response](https://stats.stackexchange.com/questions/8300/correlation-analysis-and-correcting-p-values-for-multiple-testing/8323#8323). The pattern of loadings on the two blocks will help you summarize which items contribute the most information in each block and how they relate to each other (under the constraint imposed by CCA). The canonical correlation itself will give you a single number summarizing the association between any two sections (again, when considering a linear combination of the 5 questions that compose a section). For your third question, I would suggest considering [PLS regression](http://en.wikipedia.org/wiki/Partial_least_squares_regression), where you define one block of variables as outcomes and the other one as predictors. 
The idea of PLS regression is to build successive linear combinations of the variables belonging to each block such that their covariance (instead of the correlation, as in CCA) is maximal, within a regression approach (because there's an asymmetric deflation process when constructing the next linear combination). In other words, you build "latent variables" that account for a maximum of the information included in the block of predictors while allowing you to predict the block of outcomes with minimal error. As you are working with ordinal data, you can even preprocess each variable with optimal scaling if you want; see for example the [homals](http://cran.r-project.org/web/packages/homals/index.html) package in R, and the papers referenced in the documentation.
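To make the summated-scale suggestion concrete, here is a sketch with simulated 7-point Likert items (all numbers invented): the 25 item-pair correlations between two 5-item sections collapse into one Spearman coefficient between the two sum scores.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 200

# Two correlated latent traits drive the two sections.
trait_a = rng.normal(size=n)
trait_b = 0.6 * trait_a + 0.8 * rng.normal(size=n)

def section(trait, k=5):
    """Five 1-7 Likert items loading on one latent trait."""
    raw = 4 + 1.5 * trait[:, None] + rng.normal(size=(len(trait), k))
    return np.clip(np.round(raw), 1, 7)

score_a = section(trait_a).sum(axis=1)  # summated scale, section A
score_b = section(trait_b).sum(axis=1)  # summated scale, section B

rho, p = stats.spearmanr(score_a, score_b)
print(round(rho, 2), p < 0.05)
```

One coefficient per pair of sections, instead of a grid of item-level correlations, is exactly the simplification the answer describes; unidimensionality of each section still has to be checked first.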
null
CC BY-SA 3.0
null
2011-05-26T12:38:05.190
2011-05-26T12:38:05.190
2017-04-13T12:44:41.493
-1
930
null
11275
2
null
11256
8
null
Ordinal regression is not affected by empty cells of Y. Quantile grouping is not required unless you just want to reduce computational burden. Proportional odds or continuation ratio ordinal logistic models are likely to be able to handle the distribution of Y you plotted (with no grouping of Y).
null
CC BY-SA 3.0
null
2011-05-26T12:42:05.990
2011-05-26T12:42:05.990
null
null
4253
null
11276
2
null
6723
7
null
Stepwise regression in the absence of penalization is fraught with so many difficulties that I'm surprised people are still using it. The web has long lists of problems, starting with the extremely low probability of finding the "right" model.
null
CC BY-SA 3.0
null
2011-05-26T12:43:37.610
2011-05-26T12:43:37.610
null
null
4253
null
11277
1
11324
null
5
176
In a nutshell, here's what I have: - Annual population estimates for the State - Periodical (5 years) age, population, and basic census data per zone Here's what I want to do: - Create a simplistic model to generate the data for the missing years of the period for each zone, with the totals summing to the yearly state population estimate. All in all, I'm looking for an uncomplicated statistical model that is able to generate values with acceptable (it doesn't have to be super high) precision.
Is there a simplistic model to disaggregate census data based on years and smaller zones?
CC BY-SA 3.0
null
2011-05-26T12:51:55.790
2011-05-27T23:41:14.597
2011-05-26T13:09:38.747
59
59
[ "estimation", "census" ]
11278
2
null
11253
7
null
I agree with JMS's advice that the answer is totally context dependent. But what you are looking at may also be considered a [moderation effect](http://en.wikipedia.org/wiki/Moderation_%28statistics%29). > In statistics, moderation occurs when the relationship between two variables depends on a third variable. (quoted from [wikipedia](http://en.wikipedia.org/wiki/Moderation_%28statistics%29)) A moderation is statistically significant if, in a multiple regression analysis, the interaction of the predictor with the third variable is significant.
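A minimal simulated illustration of that last point: with a dichotomous moderator, the coefficient on the product term recovers how much the slope of the predictor changes between the two groups (all numbers here are invented).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
x = rng.normal(size=n)
m = rng.integers(0, 2, size=n)                   # dichotomous moderator
y = 1.0 * x + 2.0 * x * m + rng.normal(size=n)   # slope of x depends on m

# Regress y on [intercept, x, m, x*m]; the last coefficient is the moderation.
X = np.column_stack([np.ones(n), x, m, x * m])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))  # coefficient on x*m should be near 2
```

Testing that interaction coefficient against zero is the significance check described in the answer.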
null
CC BY-SA 3.0
null
2011-05-26T13:45:16.910
2011-05-26T13:45:16.910
null
null
442
null
11279
2
null
11249
10
null
Although I would always recommend using R, you could nevertheless achieve what you want with Python. There is at least a package for reading [dbf files](http://pypi.python.org/pypi/dbf/). Furthermore, [scipy](http://www.scipy.org/) offers a great range of functions for statistical analysis. For example, the library [ScientificPython](http://dirac.cnrs-orleans.fr/plone/software/scientificpython/) probably contains [the functions](http://dirac.cnrs-orleans.fr/ScientificPython/ScientificPythonManual/toc-Scientific.Statistics-module.html) you need. The best idea is to check [scipy.org](http://www.scipy.org/). There you will find what you want. (But learning R is a great idea!!)
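For a flavor of what scipy covers out of the box, a generic sketch with simulated columns (not tied to the asker's dbf data): descriptive statistics plus a two-sample t-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=200)   # stand-in for one column
b = rng.normal(loc=0.5, scale=1.0, size=200)   # stand-in for another

desc = stats.describe(a)            # n, min/max, mean, variance, skew, kurtosis
t, pval = stats.ttest_ind(a, b)     # two-sample t-test
print(desc.nobs, round(t, 2), pval < 0.05)
```

With a dbf reader supplying the arrays instead of `rng.normal`, the same calls apply unchanged.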
null
CC BY-SA 3.0
null
2011-05-26T13:55:58.253
2011-05-26T13:55:58.253
null
null
442
null
11280
1
null
null
3
1653
I am new to R and some help would be of great use to me. Basically, I need to perform a GLM analysis with negative binomial errors and with fixed factors, no covariates and no random effects. My factors are of type: year (1-4), site (1-3), sex (1-2), age (1-3), with a sample size of around 5000. Currently I am fitting GLMs with negative binomial errors, using the `glm.nb` routine in the MASS library of R. I then need to remove all nonsignificant interactions by hand (using the R console). Can someone point me in the right direction on how to automate this process, so that R will choose the desired factors by trying different combinations and dropping nonsignificant ones? (I am sorry if the question lacks details; I can add more info if you need it.)
Automation of GLM analysis with negative binomial errors
CC BY-SA 3.0
null
2011-05-26T14:51:26.587
2011-05-27T09:30:47.783
2011-05-26T16:02:15.300
930
4766
[ "r", "model-selection", "generalized-linear-model" ]
11281
2
null
4753
2
null
One has to be careful about the meaning of the word sparse. Your matrix contains many zeroes and one may represent such a matrix in a sparse way (to save on storage). But since the figures represent co-occurrences these zeroes are still to be considered informative (they are not missing; they are not structurally zero) and should therefore be taken into account when modeling the content of the matrix. The many zeroes and the skewness (approximately geometric) would suggest to use generalized forms of bilinear models (see de Falguerolles/Gabriel : Generalized Linear-Bilinear Models). The R-package gnm supports this type of models. The sparse variants of PCA/SVD you are referring to rather relate to L1-regularisations of the factorial representation such that estimated loadings come out as sparse (many zeroes).
null
CC BY-SA 3.0
null
2011-05-26T15:05:23.787
2011-05-26T15:05:23.787
null
null
4767
null
11282
2
null
11280
3
null
The `stepAIC` function in MASS can perform the kinds of variable selection you are looking for. In addition, the `leaps` package appears to have similar capabilities. That being said, I have not used it, so I cannot speak directly to its efficacy.
null
CC BY-SA 3.0
null
2011-05-26T15:41:30.293
2011-05-26T15:41:30.293
null
null
656
null
11283
1
11285
null
3
270
I am running a logistic model on insurance data. I have a field agent gender which matters for one channel, A (say), and doesn't matter for B. I want to put null values in the case of B. The only thing I risk is exclusion by SAS (as SAS excludes every missing case by default). I heard that pairwise exclusion can solve my problem. Please tell me the following: - What is the difference between normal exclusion and pairwise exclusion? - How do I do this in SAS? - What is listwise exclusion?
Pairwise exclusions
CC BY-SA 3.0
null
2011-05-26T16:26:07.750
2011-05-27T06:45:52.127
2011-05-27T06:45:52.127
2116
1763
[ "modeling", "dataset", "sas" ]
11284
2
null
11283
4
null
There are two main types of traditional treatments of missing data. These are: 1) listwise 2) pairwise Listwise is (from what you have said) the default in SAS. It means that you exclude any observation that has missing values in any of the terms in your model. The advantage of this is that it ensures that all variables have the same n in the model. The disadvantage is that if you have one variable with a high proportion of missing data, much of the rest of the data can be ignored. Pairwise, by contrast, only excludes cases where the value is missing for that particular variable. This has the advantage of keeping more data in the model than listwise does. However, the disadvantage is that some of your results will be based on different subsets of the data, and this can cause problems with p-values and confidence intervals. A better approach is probably multiple imputation, where you attempt to predict the missing values using all of the data you have. This normally involves simulating a number of datasets with the missing data filled in, then performing all analyses on each of the datasets individually, and averaging the results. A good paper on the treatment of missing data is Graham 2009, [Missing Data: Making it work in the real world](http://www.iapsych.com/articles/graham2009.pdf), which goes into much more detail than I have here. You also need to determine whether the missingness in your data is random or not, because if the probability of missingness depends on variables exogenous to your dataset, then none of these approaches will work correctly. A good resource on multiple imputation is [here](http://www.multiple-imputation.com/) [This](http://www.ats.ucla.edu/stat/sas/modules/missing.htm) article on missing data in SAS may prove helpful, or someone else may answer as to how to do this in SAS.
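A toy illustration of the listwise/pairwise difference (made-up numbers; `NaN` marks missing values):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, np.nan, 6.2, 8.1, np.nan, 12.3])
z = np.array([1.0, 1.9, 3.2, 3.9, 5.1, 6.1])

def pairwise_corr(a, b):
    """Use every row where BOTH of these two variables are observed."""
    keep = ~(np.isnan(a) | np.isnan(b))
    return np.corrcoef(a[keep], b[keep])[0, 1], int(keep.sum())

r_xy, n_xy = pairwise_corr(x, y)   # based on 4 rows
r_xz, n_xz = pairwise_corr(x, z)   # based on all 6 rows

# Listwise deletion: drop any row missing in ANY variable, so every
# correlation is computed on the same 4 rows.
complete = ~np.isnan(np.vstack([x, y, z])).any(axis=0)
r_xz_listwise = np.corrcoef(x[complete], z[complete])[0, 1]
print(n_xy, n_xz, int(complete.sum()))   # 4 6 4
```

The different effective sample sizes under pairwise deletion (4 vs 6 here) are exactly what causes the p-value and confidence-interval problems mentioned above.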
null
CC BY-SA 3.0
null
2011-05-26T16:51:15.040
2011-05-26T16:51:15.040
null
null
656
null
11285
2
null
11283
6
null
This doesn't sound like a missing data problem to me: it sounds like a question of model structure. Distilling it to its essence, it seems you have two independent categorical variables gender ($X$, say) and "channel" ($Y$) and a binary response ($Z$). Conceptually the model is $$logit(\Pr(Z=1)) = \beta_0 + \beta_1 X + \beta_2 Y + \varepsilon$$ when $Y$ = "A" and otherwise $$logit(\Pr(Z=1)) = \beta_0 + \beta_2 Y + \varepsilon$$ when $Y \ne $ "A" (with parameters $\beta_0$, $\beta_1$, and $\beta_2$ and zero-mean random error $\varepsilon$). If this interpretation is correct, just set $X = 0$ in the data whenever $Y \ne $ "A". This will cause such cases to have no effect on the value of $\beta_1$, which will be estimated solely from the other cases where $Y$ = "A".
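A tiny sketch of the data step this implies, with toy rows and numpy in place of SAS:

```python
import numpy as np

gender = np.array([1, 0, 1, 0, 1])             # agent gender dummy
channel = np.array(["A", "A", "B", "B", "A"])  # distribution channel

# Zero out gender wherever channel != "A", so those rows cannot
# influence the gender coefficient:
x = np.where(channel == "A", gender, 0)
design = np.column_stack([np.ones(5), x, (channel == "B").astype(int)])
print(x.tolist())   # [1, 0, 0, 0, 1]
```

Because $x$ is identically zero for channel B, those observations contribute nothing to the estimate of $\beta_1$, which is the point of the answer.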
null
CC BY-SA 3.0
null
2011-05-26T18:11:50.257
2011-05-26T18:11:50.257
null
null
919
null
11286
2
null
10890
15
null
I agree with @Michael's description of endogeneity---this is about a problem with the variables that you include and their relationship to the variables that you do not (i.e., the stuff in the error term). Unobserved heterogeneity is typically about unobservable components of the effects that you are estimating. Continuing with @Michael's education example, unobserved heterogeneity might be that some people have higher returns (e.g., increases in wages) from going to school than others. Let the returns for person $i$ be $\beta + b_i$ with $\mathbb{E}(b_i) = 0$. We have $$\begin{equation*} y_i = x_i (\beta + b_i) + w^\prime_i \gamma + \epsilon_i, \end{equation*}$$ where $y_i$ is (typically, log) income, $x_i$ is years of education, and $w_i$ is a set of other controls. An example of endogeneity is when $x_i$ is correlated with $\epsilon_i$ (e.g., education is correlated with IQ, which is not among our other predictors). If we estimate a single coefficient, we have $$\begin{equation*} y_i = x_i \beta + w^\prime_i \gamma + (\epsilon_i + b_i x_i) = x_i \beta + w^\prime_i \gamma + \tilde{\epsilon}_i \end{equation*}$$ Note that the included variable $x_i$ is correlated with the error term $\tilde{\epsilon}_i$, inducing the same problems as the case of endogeneity.
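A simulation sketch of the bias this induces, using the education/IQ flavor of endogeneity (all numbers arbitrary): when the regressor is correlated with the error term, OLS converges to $\beta + \operatorname{cov}(x,\epsilon)/\operatorname{var}(x)$ rather than $\beta$.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
beta = 2.0

ability = rng.normal(size=n)       # unobserved confounder (e.g. IQ)
x = ability + rng.normal(size=n)   # education depends on ability...
eps = ability + rng.normal(size=n) # ...and so does the error term
y = beta * x + eps

# OLS slope of y on x (all variables are mean zero, so no intercept):
beta_hat = (x @ y) / (x @ x)
print(round(beta_hat, 2))  # roughly 2.5 = beta + cov(x, eps)/var(x) = 2 + 1/2
```

Here cov(x, eps) = 1 and var(x) = 2, so the estimate settles near 2.5 no matter how large n gets; more data does not cure endogeneity.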
null
CC BY-SA 3.0
null
2011-05-26T18:19:51.773
2011-05-26T18:19:51.773
null
null
401
null
11287
2
null
11072
2
null
The $R^2$ in regression is given that name because it is the correlation between $y_i$ and $\hat{y}_i$ squared. You could calculate the correlation between $z_i$ and $\hat{z}_i$ in your case, square it, and use that as a measure of goodness-of-fit. I can't say what the statistical properties of this measure will be for your particular case, however.
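A quick numeric check of that identity for an ordinary least-squares fit with intercept, where the two definitions coincide exactly (data simulated):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=100)
y = 2 * x + rng.normal(size=100)

# OLS fit with intercept, then compare the two R^2 definitions.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
yhat = X @ beta

ss_res = ((y - yhat) ** 2).sum()
ss_tot = ((y - y.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot                          # the usual formula
r2_from_corr = np.corrcoef(y, yhat)[0, 1] ** 2    # squared correlation
print(round(r2, 6), round(r2_from_corr, 6))       # identical
```

For a nonlinear model of $z_i$ the equality is no longer guaranteed, which is why the answer presents the squared correlation only as a candidate goodness-of-fit measure.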
null
CC BY-SA 3.0
null
2011-05-26T18:25:22.063
2011-05-26T18:25:22.063
null
null
401
null
11288
2
null
10613
35
null
Under the null hypothesis, your test statistic $T$ has the distribution $F(t)$ (e.g., standard normal). We show that the p-value $P=F(T)$ has the probability distribution $$\begin{equation*} \Pr(P < p) = \Pr(F^{-1}(P) < F^{-1}(p)) = \Pr(T < F^{-1}(p)) = F(F^{-1}(p)) = p; \end{equation*}$$ in other words, $P$ is distributed uniformly. This holds so long as $F(\cdot)$ is invertible, a necessary condition of which is that $T$ is not a discrete random variable. This result is general: the distribution of an invertible CDF of a random variable is uniform on $[0,1]$.
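A quick simulation check of this fact for the standard-normal case:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# 50,000 test statistics under the null: T ~ N(0, 1), then P = F(T)
# with F the standard normal CDF.
T = rng.normal(size=50_000)
P = stats.norm.cdf(T)

# If the result above is right, P is uniform on [0, 1]; the KS distance
# to the uniform CDF should be tiny.
ks_stat, _ = stats.kstest(P, "uniform")
print(round(P.mean(), 3), ks_stat < 0.02)
```

Repeating the experiment with a discrete statistic (e.g. a binomial count) would show the p-values piling up on a finite set of values, illustrating why invertibility of $F$ is needed.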
null
CC BY-SA 3.0
null
2011-05-26T18:50:27.493
2011-05-27T00:19:15.937
2011-05-27T00:19:15.937
401
401
null
11289
1
null
null
37
2603
I am looking for some probability inequalities for sums of unbounded random variables. I would really appreciate it if anyone can provide me some thoughts. My problem is to find an exponential upper bound over the probability that the sum of unbounded i.i.d. random variables, which are in fact the multiplication of two i.i.d. Gaussians, exceeds some certain value, i.e., $\mathrm{Pr}[ X \geq \epsilon\sigma^2 N] \leq \exp(?)$, where $X = \sum_{i=1}^{N} w_iv_i$, and $w_i$ and $v_i$ are generated i.i.d. from $\mathcal{N}(0, \sigma)$. I tried to use the Chernoff bound with the moment generating function (MGF); the derived bound is given by: $\begin{eqnarray} \mathrm{Pr}[ X \geq \epsilon\sigma^2 N] &\leq& \min\limits_s \exp(-s\epsilon\sigma^2 N)g_X(s) \\ &=& \exp\left(-\frac{N}{2}\left(\sqrt{1+4\epsilon^2} -1 + \log(\sqrt{1+4\epsilon^2}-1) - \log(2\epsilon^2)\right)\right) \end{eqnarray}$ where $g_X(s) = \left(\frac{1}{1-\sigma^4 s^2}\right)^{\frac{N}{2}}$ is the MGF of $X$. But the bound is not so tight. The main issue in my problem is that the random variables are unbounded, and unfortunately I cannot use Hoeffding's inequality. I would be happy if you could help me find a tighter exponential bound.
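A numeric sanity check of the stated bound against simulation (parameters chosen arbitrarily); it confirms the bound holds but is loose by a sizable factor, matching the question's complaint:

```python
import math
import numpy as np

def chernoff_bound(eps, N):
    """The Chernoff bound derived in the question for Pr[X >= eps*sigma^2*N]."""
    r = math.sqrt(1 + 4 * eps ** 2)
    return math.exp(-N / 2 * (r - 1 + math.log(r - 1) - math.log(2 * eps ** 2)))

rng = np.random.default_rng(0)
N, eps, sigma, trials = 50, 0.3, 1.0, 20_000
w = rng.normal(0, sigma, size=(trials, N))
v = rng.normal(0, sigma, size=(trials, N))
X = (w * v).sum(axis=1)                        # sums of Gaussian products

mc = float((X >= eps * sigma ** 2 * N).mean()) # Monte Carlo tail estimate
print(mc <= chernoff_bound(eps, N))            # the bound holds
```

At these parameters the bound evaluates to roughly 0.12 while the simulated tail is a few percent, i.e. the exponential rate is right but the constant is pessimistic, as Chernoff bounds typically are.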
Probability inequalities
CC BY-SA 3.0
null
2011-05-26T19:27:05.283
2019-10-06T16:06:34.740
2018-09-26T08:43:57.307
11887
4770
[ "probability", "mathematical-statistics", "probability-inequalities", "moment-generating-function" ]
11290
1
11314
null
10
6930
I have come across the sampling method called "Propensity Weighting Sampling/RIM", but I do not have a good idea of what these survey methods are all about. What references in the literature cover this topic?
What is a propensity weighting sampling / RIM?
CC BY-SA 3.0
null
2011-05-26T20:06:04.120
2016-07-10T21:14:22.153
2012-12-16T09:36:23.267
3826
4278
[ "sampling", "weighted-sampling" ]
11291
2
null
11236
5
null
From the [documentation for qqmath](http://stat.ethz.ch/R-manual/R-devel/library/lattice/html/qqmath.html) it seems that the default behavior is to compare the empirical quantiles to those of a normal distribution. So what the QQ plot for $\sigma^2$ (which is the error variance) is telling you is that its marginal posterior distribution is not normal, which you would fully expect. So it's not really informative or meaningful. The remaining panels are telling you that the marginal posterior distributions of the regression coefficients are basically normal, perhaps with slightly heavier tails because they are marginalized over $\sigma^2$. The MCMC regression does give you more information in that you have a full posterior distribution over all the parameters, so you can answer questions like "Given the data I've seen and my prior information, what is the probability that the tempwarmer coefficient is positive?". You can also look at joint distributions of two or more parameters, or functions of the coefficients, and so on. I should mention that these QQ plots aren't really giving you an indication of goodness of fit. For that you would want to look at some other measures - posterior predictive distributions, for example - or MCMCregress may be able to do formal Bayesian model comparisons for you too. I'm not familiar with that particular function, but a cursory Googling turned up some potential leads. All that said, if the lm() fit is good I wouldn't lose much sleep over it. But I would most wholeheartedly encourage any forays into Bayesian analysis :)
null
CC BY-SA 3.0
null
2011-05-26T20:15:53.020
2011-05-26T20:15:53.020
null
null
26
null
11292
1
null
null
3
482
I am performing a retrospective study on patients looking at the size of their nostril (continuous variable measured in millimetres) and the need for treatment which is either conservative or surgical (this is a categorical variable). Sample size is only 15. What would be the right test to compare groups to determine if size of nostril was a significant factor in predicting method of treatment?
How to compare two groups of patients with a continuous outcome?
CC BY-SA 3.0
null
2011-05-26T21:13:05.773
2011-05-27T12:59:13.753
2011-05-27T12:59:13.753
183
4772
[ "statistical-significance" ]
11293
2
null
11253
5
null
The previous comments are all good, but with group sample sizes of 5, 7, and 11, I wouldn't trust any of their correlations as far as I could throw them. You'll need to give the overall r a wide confidence interval as well. btw Nice job on the graph.
null
CC BY-SA 3.0
null
2011-05-26T21:28:02.027
2011-05-26T21:28:02.027
null
null
2669
null
11294
2
null
11292
8
null
Note that the sample size is very small, so it is highly likely that you will run into power problems if you get a nonsignificant result. Therefore, definitely report the effect size whatever the result of your test (i.e., the difference in nostril size between the two treatment groups, in relation to the standard deviations = [Cohen's d et al.](http://en.wikipedia.org/wiki/Effect_size)). As a test, the [t-test](http://en.wikipedia.org/wiki/Student%27s_t-test) for independent samples seems to be appropriate.
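A sketch of the recommended effect-size calculation in Python; the nostril widths below are invented, since the post contains no data:

```python
import math
import statistics

# Hypothetical nostril widths in mm (the actual study data are not shown).
conservative = [8.1, 7.9, 8.4, 8.0, 7.7, 8.3, 8.2]
surgical = [9.0, 8.8, 9.3, 8.7, 9.1, 8.9, 9.4, 9.2]

def cohens_d(a, b):
    # Cohen's d with the pooled standard deviation for two independent groups.
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a)
                  + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(b) - statistics.mean(a)) / math.sqrt(pooled_var)

d = cohens_d(conservative, surgical)
print(d)
```

With samples this small, the point estimate of d should be reported together with a wide confidence interval.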
null
CC BY-SA 3.0
null
2011-05-26T21:36:45.680
2011-05-26T21:36:45.680
null
null
442
null
11295
2
null
11263
3
null
The perspective Ralu is using is basically that p is the probability of A; for the binomial he is saying you have the events A and not-A (which for you is B), and that is your event space. Since you don't know your actual value for P(A), and assuming you don't have a good guess for it, you'll want to use a conservative estimate of .5; plugging that into the equation in the other answer implies you need 16 observations in your sample. However, I'm not sure the binomial is the best choice in this case. When determining sample size there are two things you'll want to decide. First is your confidence level (which you have as 95%); the next is what margin of error is acceptable for your analysis. It might be worth considering [example 8.10 in Wackerly et al](http://books.google.com/books?id=ZvPKTemPsY4C&lpg=PP1&dq=wackerly%20mathematical%20statistics&pg=PA423#v=onepage&q=Example%208.10&f=false), since it looks at determining sample size for two sample groups, which is your situation. The explanation in the example seems thorough enough (if you have questions, please ask), but in case you don't click through, the result is n = 1/[$(1/1.96)^{2}$/8] ≈ 31 per group, so 2n = 62 in total. Notice that this is nearly four times the size of what the other method suggested; it is also larger than the 40 samples usually cited for the Central Limit Theorem, which gives it good properties. However, that uses a fairly large margin of error (1, i.e., 100%). Let's say instead you wanted a very small margin of error, such as 4%: n = 1/[$(.04/1.96)^{2}$/8] = 19208 per group, so 2n = 38416 in total. Remember, though, that in this example we assumed the range was 8 and used the rule of thumb that 4*sigma is approximately equal to the range (range = max - min). So if your current data show a very different range, you may want to use that value instead and recalculate accordingly.
As for determining whether there was a difference, you'll want to use a hypothesis test, in particular a [two sample T-Test](http://www.acastat.com/Statbook/ttest2.htm). Your null hypothesis is that the means are equal. In your case you want to know whether the new one is greater than the old one, so you'll want a one-tailed (also called directional) alternative hypothesis. Once you've calculated the T statistic using the formulas in that link, you'll need to find the corresponding critical value [from a table](http://www.statsoft.com/textbook/distribution-tables/#t). For your confidence level, check whether the test statistic is greater than 1.644854: if it is, then the new mean (call it mu1) is greater than the old one (mu2); if it is not greater than that value, you fail to reject, because your evidence isn't strong enough. Hopefully this helps!
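The sample-size arithmetic above can be packaged as a small function; a sketch in Python (the z value and the sigma-from-range rule of thumb follow the answer, everything else is illustrative):

```python
import math

def per_group_sample_size(sigma, margin, z=1.96):
    # n per group so that a z-based confidence interval for the difference
    # of two means has half-width `margin`: n = 2 * (z * sigma / margin)^2.
    return math.ceil(2 * (z * sigma / margin) ** 2)

# Range assumed to be 8, so sigma ~ range / 4 = 2, as in the example above.
n_wide = per_group_sample_size(sigma=2.0, margin=1.0)    # ~31 per group
n_tight = per_group_sample_size(sigma=2.0, margin=0.04)  # ~19208 per group
print(n_wide, 2 * n_wide, n_tight, 2 * n_tight)
```

Halving the margin of error roughly quadruples the required n, which is why the 0.04 margin blows the total up so dramatically.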
null
CC BY-SA 3.0
null
2011-05-26T21:39:40.440
2011-05-26T21:39:40.440
null
null
4325
null
11296
1
null
null
20
64366
I have a table with four groups (4 BMI groups) as the independent variable (factor). I have a dependent variable that is "percent mother smoking in pregnancy". Is it permissible to use ANOVA for this or do I have to use chi-square or some other test?
Using ANOVA on percentages?
CC BY-SA 3.0
null
2011-05-27T00:39:52.903
2021-02-18T14:43:35.973
2021-02-18T14:43:35.973
11887
4774
[ "anova", "percentage" ]
11297
2
null
11296
21
null
It depends on how close the responses within different groups are to 0 or 100%. If there are a lot of extreme values (i.e. many values piled up on 0 or 100%) this will be difficult. (If you don't know the "denominators", i.e. the numbers of subjects from which the percentages are calculated, then you can't use contingency table approaches anyway.) If the values within groups are more reasonable, then you can transform the response variable (e.g. classical arcsine-square-root or perhaps logit transform). There are a variety of graphical (preferred) and null-hypothesis testing (less preferred) approaches for deciding whether your transformed data meet the assumptions of ANOVA adequately (homogeneity of variance and normality, the former more important than the latter). Graphical tests: boxplots (homogeneity of variance) and Q-Q plots (normality) [the latter should be done within groups, or on residuals]. Null-hypothesis tests: e.g. Bartlett or Fligner test (homogeneity of variance), Shapiro-Wilk, Jarque-Bera, etc.
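The two transforms mentioned can be sketched as small functions; the example proportions below are invented:

```python
import math

def arcsine_sqrt(p):
    # Classical arcsine-square-root transform for proportions in [0, 1].
    return math.asin(math.sqrt(p))

def logit(p, eps=1e-6):
    # Logit transform; eps keeps p away from exactly 0 or 1, where the
    # transform is undefined.
    p = min(max(p, eps), 1.0 - eps)
    return math.log(p / (1.0 - p))

# Hypothetical smoking proportions for mothers in one BMI group.
props = [0.12, 0.25, 0.40, 0.08, 0.33]
print([round(arcsine_sqrt(p), 3) for p in props])
print([round(logit(p), 3) for p in props])
```

Either transformed response can then be fed into a standard ANOVA, subject to the diagnostic checks described above.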
null
CC BY-SA 3.0
null
2011-05-27T01:05:52.183
2011-05-27T01:05:52.183
null
null
2126
null
11298
2
null
11296
23
null
There is a difference between having a binary variable as your dependent variable and having a proportion as your dependent variable. - Binary dependent variable: This sounds like what you have. (i.e., each mother either smoked or she did not smoke) In this case I would not use ANOVA. Logistic regression with some form of coding (perhaps dummy coding) for the categorical predictor variable is the obvious choice if you are conceptualising the binary variable as the dependent variable (otherwise you could do chi-square). - Proportion as dependent variable: This does not sound like what you have. (i.e., you don't have data on the proportion of total waking time that a mother was smoking during pregnancy in a sample of smoking pregnant women). In this case, ANOVA and standard linear model approaches in general may or may not be reasonable for your purposes. See @Ben Bolker's answer for a discussion of the issues.
null
CC BY-SA 3.0
null
2011-05-27T02:23:47.557
2011-05-27T02:23:47.557
null
null
183
null
11299
1
null
null
6
170
### Context: I'm investigating behaviour in a clinical study involving children. I had both parents and teachers completing questionnaires to inform an understanding of the same underlying constructs, for example reactive aggression. At the conclusion of data collection I have parent data in all cases, n=55, and teacher data for 41 cases - therefore, I have 14 cases where I only have the parent data. For the purposes of our study it makes sense to aggregate parent and teacher observations for each case. As I have a small sample and I do have parent data for every case, deleting cases in a pairwise manner does not seem a viable option. There are several variables for which I need to do this, all of which seem to correlate well. Before aggregating data, I thought it would make sense to address the missing data issue. With regards to 'substituting' I have done some brief reading and am familiar with the basic 'within variable' substitution options, however, I thought there may be a more powerful method available where I could use: (a) the parent score we have in every case for every variable, n=55, 100% of cases. (b) our understanding of the relationship between parent and teacher scores in cases where we have both, n=41, 74.5% of cases ### Question: - Does the general idea outlined above seem reasonable? - What would be a good algorithm for implementing it in detail? - How could it be implemented in SPSS?
Getting an average measurement based on two raters for cases where data is missing for one rater
CC BY-SA 3.0
null
2011-05-27T04:08:42.267
2011-05-27T10:02:06.747
2011-05-27T06:25:27.220
183
4775
[ "spss", "missing-data", "data-imputation" ]
11300
1
11306
null
3
62
I have a particular semiparametric model which I'm fitting via MCMC. One of the model parameters I have "semiparametric'ed" away (say $\alpha$) is known to lie between two other parameters, $\theta_1$ and $\theta_2$. Since I have a series of samples $(\theta_1^t, \theta_2^t)$ I also have a series of interval estimates for $\alpha$. What is a reasonable way to summarize these? I can think of doing something like putting down a grid and just recording for each gridpoint whether each sample covers it or not. But I worry about efficiency since I don't have a great sense of what a plausible range is, and I actually have hundreds to thousands of $\alpha$'s. (I'm also welcoming retags, since I can't seem to find any I like...)
Summarizing samples of an interval
CC BY-SA 3.0
null
2011-05-27T04:59:18.913
2011-05-27T09:12:29.650
null
null
26
[ "estimation" ]
11302
2
null
8903
2
null
According to [Wikipedia's article of tf-idf](http://en.wikipedia.org/wiki/Tf-idf): > The term count in the given document is simply the number of times a given term appears in that document. This count is usually normalized to prevent a bias towards longer documents (which may have a higher term count regardless of the actual importance of that term in the document) to give a measure of the importance of the term t within the particular document d So, normalize the frequency of a term t by the length of the document d in which it occurs. Then you can compute cosine similarity between your tf-idf vectors.
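A bare-bones sketch of the normalization and cosine step in Python (the toy corpus is invented; real systems would add smoothing, stemming, etc.):

```python
import math
from collections import Counter

# Toy corpus (invented for illustration). tf-idf here is term frequency
# normalized by document length, times log inverse document frequency.
docs = [
    "data mining finds patterns in data".split(),
    "internet usage patterns change over time".split(),
    "mining internet usage data".split(),
]

def tf_idf(doc, corpus):
    vec = {}
    for term, count in Counter(doc).items():
        df = sum(1 for d in corpus if term in d)
        vec[term] = (count / len(doc)) * math.log(len(corpus) / df)
    return vec

def cosine(u, v):
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    if nu == 0.0 or nv == 0.0:
        return 0.0
    return sum(w * v.get(t, 0.0) for t, w in u.items()) / (nu * nv)

vecs = [tf_idf(d, docs) for d in docs]
sim = cosine(vecs[0], vecs[2])
print(sim)
```

Note that terms appearing in every document get idf = 0 and drop out, which is the intended behavior of the inverse-document-frequency weighting.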
null
CC BY-SA 3.0
null
2011-05-27T06:31:45.607
2011-05-27T06:31:45.607
null
null
4777
null
11303
2
null
11231
2
null
The classic problem with PCR is that principal components corresponding to small eigenvalues (and hence discarded) can be significant for explaining the dependent variable. One of the solutions to this problem is to use [PLS regression](http://en.wikipedia.org/wiki/Partial_least_squares_regression). In PLS regression the principal components are picked to have maximal correlation with dependent variable.
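A numerical illustration of this failure mode (all numbers invented): below, the first principal component is dominated by a high-variance but irrelevant predictor, while the first PLS direction, being proportional to $X^T y$, picks up the low-variance predictor that actually drives $y$.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x1 = rng.normal(0.0, 5.0, n)   # high variance, unrelated to y
x2 = rng.normal(0.0, 1.0, n)   # low variance, drives y
y = 3.0 * x2 + rng.normal(0.0, 0.5, n)

Xc = np.column_stack([x1, x2])
Xc = Xc - Xc.mean(axis=0)
yc = y - y.mean()

# First PCA direction: top right-singular vector of the centered X.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pca_dir = Vt[0]

# First PLS direction: proportional to X'y.
pls_dir = Xc.T @ yc
pls_dir /= np.linalg.norm(pls_dir)

print(np.abs(pca_dir), np.abs(pls_dir))
```

The PCA direction is essentially the x1 axis, so a one-component PCR would discard almost all of the signal; the PLS direction is essentially the x2 axis.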
null
CC BY-SA 3.0
null
2011-05-27T06:59:20.680
2011-05-27T06:59:20.680
null
null
2116
null
11304
2
null
11255
13
null
Disclaimer: I consider myself an experimental psychologist with an emphasis on experimental. Hence, I have a natural unease with designs like this. To answer your first and second question: I think for a design like this a SEM or, depending on the number of variables involved, mediation or moderation analyses is the natural way of dealing with the data. I have no good idea what else to recommend. For your third question: I think the main advantage of a design like this is also its main disadvantage. Namely that you (given enough variables) will find significant results. The question is how you interpret these results. That is, you can look at so many hypotheses (some more, some less inspired by the relevant literature) that you will probably find something significant (not in the literal sense of rejecting a SEM) that will be interpretable in a psychological sense. Therefore, my advice to anyone doing this would be twofold: - Stress the problem with causal interpretation of these designs. I am not an expert in this but know that a fully cross-sectional design can hardly be interpreted causally, independent of how intuitively plausible that may sound. More advanced designs like cross-lagged panel designs or similar are needed for causal interpretations. I think the work by Shadish, Cook & Campbell (or at least some of it) is a good resource for further discussion of these topics. - Stress the individual responsibility and scientific ethics. If you see that your initial idea is not supported by the data, it is the natural next step to inspect the data further. However, you should never rely on HARKing (Hypothesizing After the Results are Known; Kerr, 1998, see also Maxwell, 2004). That is, you should stress that there is a thin line between a reasonable adaptation of your hypotheses given the data and cherry-picking of significant results.
null
CC BY-SA 3.0
null
2011-05-27T08:08:38.927
2011-05-27T08:08:38.927
null
null
442
null
11305
2
null
726
32
null
> "The first time I was in a statistics course, I was there to teach it" John Tukey ([link](http://www.stat.berkeley.edu/~brill/Papers/life.pdf))
null
CC BY-SA 3.0
null
2011-05-27T08:11:49.203
2011-05-27T08:11:49.203
null
null
74
null
11306
2
null
11300
1
null
Let's see: $$ P(\alpha) = \int P(\alpha, \theta_1, \theta_2) d\theta_1 d\theta_2 = \int P(\alpha | \theta_1, \theta_2) P(\theta_1, \theta_2) d\theta_1 d\theta_2 $$ With the good help of Monte Carlo, we can approximate this as $$ \frac{1}{n} \sum_t P(\alpha | \theta_1^t, \theta_2^t) $$ With the grid trick, you are doing something like this, but then you are implicitly assuming that $P(\alpha | \theta_1, \theta_2)$ is uniform. Is that an OK assumption for you? If so, it's a good technique if you want to get a view on the distribution of $\alpha$. If you are only after confidence intervals for $\alpha$, you can probably do better by ordering your intervals based on $\theta_1$, and then finding the last value that is only covered by the first $2.5\%$ of your intervals (if you're aiming for $95\%$ confidence), and then doing similarly for the other side. An OK range of values for your grid would be from the minimum $\theta_1$ up to the maximum $\theta_2$, no? If by 'hundreds to thousands of $\alpha$'s' you mean that you have that many parameters that you wish to get information about simultaneously, then yes, you are in trouble: you cannot abstract out those parameters and then hope to automagically and easily get this kind of information back. Univariately, though, you're safe.
null
CC BY-SA 3.0
null
2011-05-27T09:12:29.650
2011-05-27T09:12:29.650
null
null
4257
null
11307
2
null
11280
2
null
See also the [glmulti](http://cran.r-project.org/web/packages/glmulti/index.html) package on CRAN and the accompanying [JSS paper](http://www.jstatsoft.org/v34/i12/paper): `glmulti` provides a wrapper for `glm` and similar functions (`glm.nb`, etc.), automatically generating all possible models (under constraints set by the user) with the specified response and explanatory variables, and finding the best models in terms of some Information Criterion (AIC, AICc or BIC). It can handle very large numbers of candidate models and features a Genetic Algorithm to find the best models when an exhaustive screening of the candidates is not feasible.
null
CC BY-SA 3.0
null
2011-05-27T09:30:47.783
2011-05-27T09:30:47.783
null
null
103
null
11308
2
null
11299
4
null
The idea above sounds rather like single imputation. This is a better idea when faced with missing data than either list-wise or pair-wise deletion. However, it's still not a good approach. A better approach would be multiple imputation. Essentially, you simulate 3-10 datasets conditional on your observed data. You then perform all of your analyses on each of these datasets, and combine the results at the end. The purpose of simulating multiple datasets is to ensure that the uncertainty in the imputation process is accounted for. This can be done using the Multiple Imputation procedure in SPSS (I believe it's on the Analyze menu). However, while multiple imputation has been shown to be valid with large datasets, there is not as much information on its use in small samples. A good introduction (from an educational perspective) can be found [here](http://www.csos.jhu.edu/contact/staff/jwayman_pub/wayman_multimp_aera2003.pdf) A paper on its use in small samples (in a longitudinal context) can be found [here](http://www.ncbi.nlm.nih.gov/pubmed/16220515)
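The OP's idea of borrowing strength from the parent scores is essentially regression imputation; here is a simplified multiple-imputation sketch in Python. All numbers are simulated stand-ins, and a proper implementation (like the SPSS procedure) would also draw the regression parameters themselves to fully propagate uncertainty:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated stand-in data: parent scores for all 55 cases; teacher
# scores missing for the last 14 (mirroring the question's setup).
n, n_obs = 55, 41
parent = rng.normal(50.0, 10.0, n)
teacher = 0.8 * parent + rng.normal(0.0, 5.0, n)
teacher[n_obs:] = np.nan

obs = ~np.isnan(teacher)
# OLS of teacher on parent using the complete cases.
X = np.column_stack([np.ones(obs.sum()), parent[obs]])
beta, rss, *_ = np.linalg.lstsq(X, teacher[obs], rcond=None)
sigma = np.sqrt(rss[0] / (obs.sum() - 2))  # residual SD

m = 5  # number of imputed datasets
estimates = []
for _ in range(m):
    filled = teacher.copy()
    pred = beta[0] + beta[1] * parent[~obs]
    # Add residual noise so the imputations do not understate uncertainty.
    filled[~obs] = pred + rng.normal(0.0, sigma, (~obs).sum())
    estimates.append(np.mean((parent + filled) / 2.0))  # aggregated score

mi_estimate = float(np.mean(estimates))
print(mi_estimate)
```

In real use, each analysis would be run on each imputed dataset and the results pooled with Rubin's rules rather than simply averaged.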
null
CC BY-SA 3.0
null
2011-05-27T10:02:06.747
2011-05-27T10:02:06.747
null
null
656
null
11309
1
null
null
2
235
I am wondering what the proper term is, for when a table like this (where values that did not occur are omitted entirely): ``` ________ _______ | Length | Count | |--------|-------| | 1 | 5 | | 3 | 2 | | 6 | 12 | |________|_______| ``` Is rewritten like this (where values that did not occur are noted with a frequency of zero): ``` ________ _______ | Length | Count | | 1 | 5 | | 2 | 0 | | 3 | 2 | | 4 | 0 | | 5 | 0 | | 6 | 12 | |________|_______| ```
What do you call adding zeros to a table of frequency counts of consecutive integers where the given integer does not occur
CC BY-SA 3.0
null
2011-05-27T12:17:11.043
2016-04-07T11:43:22.660
2016-04-07T11:43:22.660
22228
4781
[ "data-transformation", "tables", "presentation" ]
11310
1
16875
null
6
10629
I've found critical values for the Anderson Darling test for a Normal Distribution at 1%, 2.5%, 5%, 10% and 15% significance levels from various sources, including wikipedia: [http://en.wikipedia.org/wiki/Anderson%E2%80%93Darling_test](http://en.wikipedia.org/wiki/Anderson%E2%80%93Darling_test) I'd really like a critical value for a 0.1% significance level (in "case 4" - neither mean nor variance known). I couldn't find it by searching the web, and I am not sure how I should calculate it. Can anyone help?
Critical values for Anderson-Darling test
CC BY-SA 3.0
null
2011-05-27T13:30:12.590
2017-01-10T06:02:58.947
2011-05-27T14:02:45.477
null
4780
[ "distributions", "hypothesis-testing", "normal-distribution" ]
11311
2
null
11310
5
null
You can use simulation (this is not a new idea; it is how Gosset/Student derived the original t table, though we have faster tools than he did). Generate a pseudo-random sample from a normal distribution (or at least as close as the computer can come) of the sample size of interest and compute the Anderson-Darling statistic for that sample. Now repeat this process a few million times (or maybe more, depending on how precise you want to be). The 0.1% critical value will be the 99.9th percentile of the simulated statistics (for the usual upper-tailed test). However, I have a hard time imagining a useful question that would be answered by an Anderson-Darling test at 0.1% significance. What is the question that you are trying to answer?
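The simulation can be sketched as follows; the statistic here is the plain $A^2$ for "case 4" (mean and variance estimated from the data), with no small-sample correction factor, and the sample size and simulation count are arbitrary choices kept small for speed:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def ad_normal(x):
    # Anderson-Darling A^2 against normality, with mean and variance
    # estimated from the sample ("case 4"); no finite-sample adjustment.
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    z = (x - x.mean()) / x.std(ddof=1)
    cdf = 0.5 * (1.0 + np.array([math.erf(v / math.sqrt(2.0)) for v in z]))
    cdf = np.clip(cdf, 1e-12, 1.0 - 1e-12)
    i = np.arange(1, n + 1)
    s = np.sum((2 * i - 1) * (np.log(cdf) + np.log(1.0 - cdf[::-1])))
    return -n - s / n

n, n_sim = 50, 10_000
sims = np.array([ad_normal(rng.standard_normal(n)) for _ in range(n_sim)])
crit_95 = np.quantile(sims, 0.95)    # 5% significance level
crit_999 = np.quantile(sims, 0.999)  # 0.1% significance level
print(crit_95, crit_999)
```

For a trustworthy 0.1% critical value the simulation count should be pushed up by a couple of orders of magnitude, since only one in a thousand draws falls in that tail.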
null
CC BY-SA 3.0
null
2011-05-27T15:04:37.847
2011-05-27T15:04:37.847
null
null
4505
null
11312
2
null
11309
2
null
Mapping a set of observed values onto the observable values expected for a given variable? That is, a variable is characterized by all hypothetical values that can be observed when using it, but the observed values may not reflect the full range of possible values. For example, when collecting n=100 discrete scores on a 0-20 point scale, you might end up with some scores that were observed more often than others, while some never occurred. And if the number of observable (distinct) values is that small, I would suggest some kind of bar graph (or a dot chart) rather than a histogram, which for a random sample might look like this: ![enter image description here](https://i.stack.imgur.com/wtczl.png)
null
CC BY-SA 3.0
null
2011-05-27T15:27:04.487
2011-05-27T15:27:04.487
null
null
930
null
11313
2
null
726
8
null
> "He who loves practice without theory is like the sailor who boards ship without a rudder and compass and never knows where he may be cast." - Leonardo da Vinci, 1452-1519 Found [here](http://socserv.mcmaster.ca/jfox/).
null
CC BY-SA 3.0
null
2011-05-27T15:30:01.647
2011-05-27T15:30:01.647
null
null
253
null
11314
2
null
11290
9
null
You may know that weighting generally aims at ensuring that a given sample is representative of its target population. If in your sample some attributes (e.g., gender, SES, type of medication) are less well represented than in the population from which the sample comes, then we may adjust the weights of the incriminated statistical units to better reflect the hypothetical target population. RIM weighting (or raking) means that we equate the sample marginal distributions to the theoretical marginal distributions. It is similar in spirit to post-stratification, but allows one to account for many covariates. I found a good overview in this handout about [Weighting Methods](http://sekhon.berkeley.edu/causalinf/sp2010/section/week9.pdf), and here is an example of its use in a real study: [Raking Fire Data](http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=D4BC4EF16C2805DD6BEBB0CCFF9418F5?doi=10.1.1.159.7090&rep=rep1&type=pdf). Propensity weighting is used to compensate for unit non-response in a survey, for example, by increasing the sampling weights of the respondents in the sample using estimates of the probabilities that they responded to the survey. This is in spirit the same idea as the use of propensity scores to adjust for treatment selection bias in observational clinical studies: based on external information, we estimate the probability of patients being included in a given treatment group and compute weights based on factors hypothesized to influence treatment selection. Here are some pointers I found to go further: - The propensity score and estimation in nonrandom surveys - an overview - A Simulation Study to Compare Weighting Methods for Nonresponses in the National Survey of Recent College Graduates - A Comparison of Propensity Score and Linear Regression Analysis of Complex Survey Data. As for a general reference, I would suggest > Kalton G, Flores-Cervantes I. Weighting Methods. J. Off. Stat. (2003) 19: 81-97. Available on http://www.jos.nu/
null
CC BY-SA 3.0
null
2011-05-27T16:14:16.013
2016-07-10T21:14:22.153
2016-07-10T21:14:22.153
43080
930
null
11315
1
11323
null
15
11474
So when I assume that the error terms are normally distributed in a linear regression, what does it mean for the response variable, $y$?
How does the distribution of the error term affect the distribution of the response?
CC BY-SA 3.0
null
2011-05-27T16:14:56.817
2011-05-28T23:34:21.947
2011-05-27T18:37:16.203
930
4496
[ "regression", "distributions" ]
11316
2
null
11315
19
null
The short answer is that you cannot conclude anything about the distribution of $y$, because it depends on the distribution of the $x$'s and the strength and shape of the relationship. More formally, $y$ will have a "mixture of normals" distribution, which in practice can be pretty much anything. Here are two extreme examples to illustrate this: - Suppose there are only two possible $x$ values, 0 and 1, and $y = 10x + N(0,1)$. Then $y$ will have a strongly bimodal distribution with bumps at 0 and 10. - Now assume the same relationship, but let $x$ be uniformly distributed on the 0-1 interval with lots of values. Then $y$ will be almost uniformly distributed over the 0-10 interval (with some half-normal tails at the edges). In fact, since every distribution can be approximated arbitrarily well with a mixture of normals, you can really get any distribution for $y$.
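The two extreme examples are easy to reproduce with a quick simulation sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Example 1: x is 0 or 1, y = 10 x + N(0, 1)  ->  strongly bimodal y.
x1 = rng.integers(0, 2, n)
y1 = 10.0 * x1 + rng.standard_normal(n)

# Example 2: x ~ Uniform(0, 1), same relationship  ->  nearly uniform y.
x2 = rng.uniform(0.0, 1.0, n)
y2 = 10.0 * x2 + rng.standard_normal(n)

# The middle of y1's range is almost empty; the middle of y2's is not.
frac1 = np.mean((y1 > 4.0) & (y1 < 6.0))
frac2 = np.mean((y2 > 4.0) & (y2 < 6.0))
print(frac1, frac2)
```

Plotting histograms of y1 and y2 shows the bimodal versus near-uniform shapes described above.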
null
CC BY-SA 3.0
null
2011-05-27T16:36:35.590
2011-05-28T02:52:27.773
2011-05-28T02:52:27.773
279
279
null
11318
2
null
11315
8
null
We invent the error term by imposing a fictitious model on real data; the distribution of the error term does not affect the distribution of the response. We often assume that the error is distributed normally and thus try to construct the model such that our estimated residuals are normally distributed. This can be difficult for some distributions of $y$. In these cases, I suppose you could say that the distribution of the response affects the error term.
null
CC BY-SA 3.0
null
2011-05-27T16:54:10.607
2011-05-28T19:05:10.253
2011-05-28T19:05:10.253
3874
3874
null
11319
1
11332
null
4
256
I have an experiment where people click on different ads online. My measure is click counts. I end up finding that I should use models for count data such as Poisson, Quasi-Poisson, or Negative Binomial regression. Is there a standard in marketing regarding what model should be used for click counts? Thanks
Is there a standard procedure or regression model in marketing for explaining click rates on ads?
CC BY-SA 3.0
null
2011-05-27T16:57:08.000
2011-05-28T17:17:05.713
2011-05-27T20:18:44.890
930
4679
[ "poisson-distribution", "count-data" ]
11320
2
null
726
17
null
> The Earth is round. p < .05 Jacob Cohen
null
CC BY-SA 3.0
null
2011-05-27T18:19:55.340
2011-05-27T18:19:55.340
null
null
686
null
11321
2
null
726
17
null
> When I see articles with lots of significance tests, I say that the statisticians are p-ing on the research. Herman Friedmann (by recollection, he said this in class)
null
CC BY-SA 3.0
null
2011-05-27T18:21:27.587
2011-05-27T18:21:27.587
null
null
686
null
11322
1
28503
null
8
935
Here is a recent Google correlate query: [http://www.google.com/trends/correlate/search?e=internet+usage&t=weekly#](http://www.google.com/trends/correlate/search?e=internet+usage&t=weekly#) As you can see in the search box at that link, I entered "internet usage" and Google did the rest. It shows a value of 0.9298 as the "correlation" with the query "data mining". However, when I read [page 2 of the Google white paper [PDF]](http://www.google.com/trends/correlate/whitepaper.pdf), it says: > The objective of Google Correlate is to surface the queries in the database whose spatial or temporal pattern is most highly correlated with a target pattern. Google Correlate employs a novel approximate nearest neighbor (ANN) algorithm over millions of candidate queries in an online search tree to produce results similar to the batch-based approach employed by Google Flu Trends but in a fraction of a second. For additional details, please see the Methods section below.... So, my question is: Is Google using a normal Pearson or Spearman correlation to find this stuff or are they using something else? If so, can you explain the general technique? ================== Also, notice in the plot that the search for "internet usage" (and "data mining") drops during the summer months and really dives around Christmas. I would guess that kids and their homework have something to do with this.
What method is used in Google's correlate?
CC BY-SA 3.0
null
2011-05-27T20:07:41.690
2012-05-26T07:08:01.567
2012-05-26T07:08:01.567
5505
2775
[ "time-series", "correlation" ]
11323
2
null
11315
8
null
Maybe I'm off but I think we ought to be wondering about $f(y|\beta, X)$, which is how I read the OP. In the very simplest case of linear regression if your model is $y=X\beta + \epsilon$ then the only stochastic component in your model is the error term. As such it determines the sampling distribution of $y$. If $\epsilon\sim N(0, \sigma^2I)$ then $y|X, \beta\sim N(X\beta, \sigma^2I)$. What @Aniko says is certainly true of $f(y)$ (marginally over $X, \beta$), however. So as it stands the question is slightly vague.
null
CC BY-SA 3.0
null
2011-05-27T23:07:30.837
2011-05-27T23:07:30.837
null
null
26
null
11324
2
null
11277
3
null
About the simplest thing you can do is interpolate normalized counts over time and (almost) the simplest form of interpolation is linear. Specifically, suppose $y_i$ is the state population at time $i$ and $x_i$ is some other count (by age, tract, or whatever). Define $\xi_i = x_i/y_i$. Suppose $i$ is a year for which you do not have the periodic data. Let $i_{-}$ and $i_{+}$ be the years immediately preceding and following $i$, respectively, for which $x_i$ is available. The linearly interpolated estimate of $\xi_i$ is $$\hat{\xi}_i = \frac{\xi_{i_{-}} (i_{+} - i) + \xi_{i_{+}} (i - i_{-})} {i_{+} - i_{-}} \text{.}$$ The estimate of $x_i$ is $$\hat{x}_i = \hat{\xi}_i y_i.$$ The sums will come out correctly because this estimator is linear with weights summing to unity. For example, suppose you are tracking two variables $x$ and $z$ which count complementary parts of the population (such as males and females), so that $x_i+z_i = y_i$ whenever you have all three counts. Defining $\xi_i = x_i/y_i$ as before and, similarly, $\zeta_i = z_i/y_i$, the two fractions sum to unity: $\xi_i + \zeta_i = y_i/y_i = 1$ for all $i$. Therefore the interpolated fractions also sum to unity: $$\hat{\xi}_i + \hat{\zeta}_i = \frac{\xi_{i_{-}} (i_{+} - i) + \xi_{i_{+}} (i - i_{-})} {i_{+} - i_{-}} + \frac{\zeta_{i_{-}} (i_{+} - i) + \zeta_{i_{+}} (i - i_{-})} {i_{+} - i_{-}}$$ $$= \frac{(\xi_{i_{-}} + \zeta_{i_{-}}) (i_{+} - i) + (\xi_{i_{+}} + \zeta_{i_{+}}) (i - i_{-})} {i_{+} - i_{-}}$$ $$= \frac{(i_{+} - i) + (i - i_{-})} {i_{+} - i_{-}}$$ $$= 1.$$ Whence $\hat{x}_i + \hat{z}_i = y_i(\hat{\xi}_i + \hat{\zeta}_i) = y_i$ as desired. This generalizes to population partitions of any size, such as age distributions.
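The interpolation formula can be written as a small function; the year/count numbers below are invented for illustration:

```python
def interpolate_count(i, lo, hi, y_i):
    # lo and hi are (year, count x, total y) triples bracketing year i.
    # Interpolate the fraction x / y linearly in time, then rescale by
    # the known total y_i for year i, as in the formula above.
    (i_lo, x_lo, y_lo), (i_hi, x_hi, y_hi) = lo, hi
    xi_lo, xi_hi = x_lo / y_lo, x_hi / y_hi
    xi = (xi_lo * (i_hi - i) + xi_hi * (i - i_lo)) / (i_hi - i_lo)
    return xi * y_i

# Counts available for 2000 and 2010; only the total is known for 2004.
est = interpolate_count(2004, (2000, 300, 1000), (2010, 520, 1300), 1120)
print(est)
```

Because the interpolation weights sum to one, complementary categories (e.g., males and females) interpolated this way still add up to the known total, as shown in the derivation above.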
null
CC BY-SA 3.0
null
2011-05-27T23:41:14.597
2011-05-27T23:41:14.597
null
null
919
null
11325
2
null
726
6
null
> Statistics' real contribution to society is primarily moral, not technical. Steve Vardeman and Max Morris
null
CC BY-SA 3.0
null
2011-05-28T13:02:46.950
2012-06-20T18:18:51.487
2012-06-20T18:18:51.487
1381
2669
null
11326
2
null
11315
2
null
If you write out the response as $$\bf{y}=m+e$$ where $\bf{m}$ is the "model" (the prediction for $\bf{y}$) and $\bf{e}$ is the "errors", then this can be re-arranged to indicate $\bf{y}-m=e$. So assigning a distribution for the errors is the same thing as indicating the ways your model is incomplete. Put another way, it indicates to what extent you don't know why the observed response was the value that it actually was, and not what the model predicted. If you knew your model was perfect, then you would assign a probability distribution with all of its mass on zero for the errors. Assigning a $N(0,\sigma^{2})$ basically says that the errors are small in units of $\sigma$. The idea is that the model predictions tend to be "wrong" by similar amounts for different observations, and are "about right" on the scale of $\sigma$. By contrast, an alternative assignment is $Cauchy(0,\gamma)$, which says that most of the errors are small, but some errors are quite large - the model has the occasional "blunder" or "shocker" in terms of predicting the response. In a sense the error distribution is more closely linked to the model than to the response. This can be seen from the non-identifiability of the above equation, for if both $\bf{m}$ and $\bf{e}$ are unknown then adding an arbitrary vector to $\bf{m}$ and subtracting it from $\bf{e}$ leads to the same value of $\bf{y}$, $\bf{y}=m+e=(m+b)+(e-b)=m'+e'$. The assignment of an error distribution and a model equation basically says which arbitrary vectors are more plausible than others.
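To see the contrast between the two error assignments concretely, one can simulate errors from each (a small illustrative sketch):

```r
set.seed(1)
n <- 1000
e_norm   <- rnorm(n, 0, 1)      # errors "small in units of sigma"
e_cauchy <- rcauchy(n, 0, 1)    # mostly small, but occasional "shockers"
c(normal = max(abs(e_norm)), cauchy = max(abs(e_cauchy)))
# the Cauchy sample contains a few errors orders of magnitude larger
```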
null
CC BY-SA 3.0
null
2011-05-28T13:14:20.353
2011-05-28T23:34:21.947
2011-05-28T23:34:21.947
2392
2392
null
11327
1
11330
null
5
2337
We've created a survey asking students, among other things, their GPA (=weighted average of grades) and their marks in some specific courses (which count towards GPA). We wanted to see which regressors influence the GPA using a simple OLS model. Is it sensible to use a formula like this? ``` GPA ~ grade_maths + grade_statistics + grade_privatelaw + ... + {other regressors, like study habits or origin} ``` Of course, the grade regressors turn out to be highly significant (some more than others, and not directly related to the weight they have in GPA), while few of the other ones are... Is this a case of endogeneity, i.e. does a regression like this violate strict exogeneity? With this regression we want to get a quick overview of which variables will likely be useful in the regressions that follow, which for example try to find out whether being good in quantitative courses tends to help get good grades in law and other ideas like this...
Dependent variable is a function of independent variables; can I sensibly include them in a regression?
CC BY-SA 3.0
null
2011-05-28T13:37:12.280
2011-05-30T08:16:18.460
2011-05-30T07:44:27.810
2116
4788
[ "regression", "least-squares" ]
11328
1
null
null
0
85
> Possible Duplicate: Wrong results using ANOVA with repeated measures Hello everybody, I did an experiment and I need to understand how to detect, by means of an ANOVA (repeated measures), the differences between males' and females' evaluations at stimulus level. In the experiment, participants had to evaluate 7 stimuli in 2 conditions (EXP1 and EXP2). The structure of the table is the following: subject, stimulus, condition, sex, response. The design is the following: - sex is a between-subjects factor (with two levels) - stimulus is a within-subjects factor (with 3 assumed levels) - condition is a within-subjects factor (with 2 levels) - all factors are fully crossed For example, now I want to detect the difference between males' and females' evaluations for stimulus 1. So far the only way that I found is to use a t-test. I used the following R command for conducting the ANOVA: ``` aov1 = aov(response ~ sex*stimulus*condition + Error(subject/(stimulus*condition)), data=scrd) summary(aov1) ``` but I don't see a way to understand whether evaluations between males and females for stimulus 1 differ significantly. I can only see if there is a difference at the global level between males and females for all the stimuli. But I am interested in discovering it for each stimulus. How can I reach this goal? I don't think that the interactions of the previous ANOVA code can help me. I used the t-test in this way ``` table_gravel_M <- subset(my_table, stimulus == "gravel" & sex == "M") table_gravel_F <- subset(my_table, stimulus == "gravel" & sex == "F") t.test(table_gravel_M$response,table_gravel_F$response) ``` Any suggestion? 
Thanks in advance Here an example of the table I used: ``` subject stimulus condition sex response subject1 gravel EXP1 M 59.8060 subject2 gravel EXP1 M 49.9880 subject3 gravel EXP1 M 73.7420 subject4 gravel EXP1 M 45.5190 subject5 gravel EXP1 M 51.6770 subject6 gravel EXP1 M 42.1760 subject7 gravel EXP1 M 56.1110 subject8 gravel EXP1 M 54.9500 subject9 gravel EXP1 M 62.6920 subject10 gravel EXP1 M 50.7270 subject1 gravel EXP2 M 70.9270 subject2 gravel EXP2 M 61.3200 subject3 gravel EXP2 M 70.2930 subject4 gravel EXP2 M 49.9880 subject5 gravel EXP2 M 69.1670 subject6 gravel EXP2 M 62.2700 subject7 gravel EXP2 M 70.9270 subject8 gravel EXP2 M 63.6770 subject9 gravel EXP2 M 72.4400 subject10 gravel EXP2 M 58.8560 subject11 gravel EXP1 F 46.5750 subject12 gravel EXP1 F 58.1520 subject13 gravel EXP1 F 57.4490 subject14 gravel EXP1 F 59.8770 subject15 gravel EXP1 F 55.5480 subject16 gravel EXP1 F 46.2230 subject17 gravel EXP1 F 63.3260 subject18 gravel EXP1 F 60.6860 subject19 gravel EXP1 F 59.4900 subject20 gravel EXP1 F 52.6630 subject11 gravel EXP2 F 55.7240 subject12 gravel EXP2 F 66.4220 subject13 gravel EXP2 F 65.9300 subject14 gravel EXP2 F 61.8120 subject15 gravel EXP2 F 62.5160 subject16 gravel EXP2 F 65.5780 subject17 gravel EXP2 F 59.5600 subject18 gravel EXP2 F 63.8180 subject19 gravel EXP2 F 61.4250 ..... ..... ..... ..... ```
Detecting significant differences for each stimulus using ANOVA repeated measures
CC BY-SA 3.0
null
2011-05-28T13:41:53.917
2011-05-28T13:41:53.917
2017-04-13T12:44:39.283
-1
4701
[ "anova", "repeated-measures", "t-test" ]
11329
1
14862
null
7
387
Suppose we have a set $S$ consisting of $p$ features, and a subset $S_+$ of the features are positive. If $Q$ is any subset of $S$, define the false positive rate as the proportion of features in $Q$ which are not positive: $$FPR[Q] = 1 - \frac{|Q \cap S_+|}{|Q|}$$ where $|\cdot|$ denotes cardinality. If $Q$ is a function of the data, $d$, then we can define the false discovery rate as the expected false positive rate: $$FDR[Q(d)] = \mathbb{E}_d[FPR[Q(d)]].$$ Now suppose that I have a method for ranking the features in $S$ by likelihood of significance. I will report the top $r$ features most likely to be significant, based on my data, $Q_r(d)$. Formally, I have a family of set-valued functions $$Q_1(d) \subset Q_2(d) \subset Q_3(d) \subset \cdots \subset Q_p(d)$$ where $|Q_r(d)| = r.$ What I want to know is the maximum $r$ such that the set $Q_r$ has a false discovery rate less than a certain critical value, $q_{crit}$. That is, I want to know what is the value $$IFDR_{q_{crit}} = \max \{r \in \{1,\dots,p\}: FDR[Q_r] \leq q_{crit}\}$$ Is there a name for this 'inverse false discovery rate' function? If not, can you suggest a name better than 'inverse false discovery rate'?
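If the ranking comes from p-values and the FDR of each nested set is estimated by the Benjamini-Hochberg procedure (an assumption — the question does not specify how the FDR is estimated), this quantity coincides with the BH step-up rejection count, which a short R sketch makes concrete:

```r
# max{ r : estimated FDR of the top-r set <= q_crit }, using
# BH-adjusted p-values as the FDR estimate for each nested set Q_r
inverse_fdr <- function(pvals, q_crit) {
  sum(p.adjust(pvals, method = "BH") <= q_crit)
}

inverse_fdr(c(0.001, 0.004, 0.02, 0.2, 0.5), q_crit = 0.05)  # 3
```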
Inverse of false discovery rate (FDR)
CC BY-SA 3.0
null
2011-05-28T13:56:54.953
2011-12-06T01:06:12.233
2011-05-29T01:03:15.980
3567
3567
[ "error", "multiple-comparisons" ]
11330
2
null
11327
2
null
I see no problem with fitting the regression. We do regressions because we believe that the predictors may be related to the response, you just have more knowledge to begin with. But what questions are you actually trying to answer? The fact that certain coefficients are significant is not surprising, so those were not really interesting questions to begin with. What may be interesting is if they differ from a value other than 0 (the weight in the GPA). That could tell you if they have an indirect effect in addition to the known effect, e.g. math score could be related to science score which is not in your model, but contributes to the GPA.
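The suggestion to test a grade coefficient against its known GPA weight, rather than against zero, can be sketched as follows (simulated data; the weight 0.5 and the variable names are illustrative):

```r
set.seed(42)
n    <- 100
math <- rnorm(n, 25, 3)
law  <- rnorm(n, 25, 3)
gpa  <- 0.5 * math + 0.5 * law + rnorm(n, 0, 0.5)  # known weights 0.5

fit <- lm(gpa ~ math + law)
s   <- summary(fit)$coefficients
w   <- 0.5                                  # the known weight in the GPA
t_w <- (s["math", "Estimate"] - w) / s["math", "Std. Error"]
p_w <- 2 * pt(-abs(t_w), df = fit$df.residual)
p_w   # p-value for H0: beta_math equals its known weight
```

The default test against zero is trivially significant here; the interesting question is whether `p_w` is small, which would suggest an indirect effect beyond the known weight.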
null
CC BY-SA 3.0
null
2011-05-28T15:38:07.043
2011-05-28T15:38:07.043
null
null
4505
null
11331
1
null
null
3
1837
I asked this on the mathematics site, but now I think this is a better place. Sorry for the cross-post. Given any line graph, is there a reliable way to identify any sort of regular oscillation? Let's assume I'm charting the prevalence of different species of animals in a single location, over the span of several years. What sort of algorithm could I apply that would identify a regular migration instead of a steady increase or decline? I hope this makes sense. Thanks for the help!
Identifying oscillation in a time series
CC BY-SA 3.0
null
2011-05-28T16:04:40.950
2022-08-30T19:38:44.387
2011-05-28T22:28:15.937
4792
4792
[ "time-series", "data-visualization" ]
11332
2
null
11319
4
null
You can use Poisson regression (and, in more general form, a Poisson process) when the data follow a Poisson distribution. In terms of Bayesian inference, you can specify the Poisson likelihood and then, by combining it with a conjugate prior, derive your posterior distribution. Here I borrow an example from the glm function in R: ``` ## Dobson (1990) Page 93: Randomized Controlled Trial : counts <- c(18,17,15,20,10,20,25,13,12) outcome <- gl(3,1,9) treatment <- gl(3,3) print(d.AD <- data.frame(treatment, outcome, counts)) glm.D93 <- glm(counts ~ outcome + treatment, family=poisson()) anova(glm.D93) summary(glm.D93) ``` Also, some slides about Poisson regression with some examples are [here](http://www.stat.wisc.edu/courses/st572-larget/Spring2007/handouts24-2.pdf) and [here](http://www.statpower.net/Content/MLRM/Lecture%20Slides/PoissonRegression.pdf)
null
CC BY-SA 3.0
null
2011-05-28T17:17:05.713
2011-05-28T17:17:05.713
null
null
4581
null
11333
2
null
11248
1
null
I don't know if I am understanding your question correctly. But I guess you may use the posterior density to assess the uncertainty around point estimates like the mean. You may plot a histogram or calculate standard deviations. This is easy to do if you have the MCMC output: just take the sampled values (after a burn-in period), compute the means and standard deviations, and plot histograms or densities. Another advantage of a posterior distribution is that you can assess your uncertainty in the tails of the distribution and also whether the distribution is symmetric around the mode/mean. Confidence intervals in general assume that the distribution is symmetric and that outliers are rare...
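A small sketch of these summaries, using simulated draws to stand in for real post-burn-in sampler output:

```r
set.seed(1)
draws <- rnorm(5000, mean = 1.2, sd = 0.3)  # stand-in for MCMC draws

post_mean <- mean(draws)
post_sd   <- sd(draws)
cred_int  <- quantile(draws, c(0.025, 0.975))  # central 95% credible interval
# hist(draws) or plot(density(draws)) to inspect symmetry and tail behaviour
c(mean = post_mean, sd = post_sd, cred_int)
```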
null
CC BY-SA 3.0
null
2011-05-28T18:12:47.033
2011-05-31T22:49:06.153
2011-05-31T22:49:06.153
3058
3058
null
11334
2
null
11248
3
null
Don't use the mean of the sampled coefficients for making predictions; instead, compute the predictions for logistic regression models with all of the sampled coefficient vectors and take the mean of those predictions (or better still treat the predictions for all sampled coefficient vectors as the posterior distribution of the probability of class membership - the spread of that distribution is a useful indicator of how confident the classifier is about the probabilistic classification). The distribution of the sampled coefficient vector gives an impression of how well the training data constrain the value of that parameter, so if the distribution is broad, we can't be confident of the "true" value of that coefficient (as explained by Manoel Galdino). However, the key advantage of having a distribution of plausible coefficient vectors is that it provides you with a rational way to get a distribution of plausible values for the probability of class membership, which is what we really want. Often using a Bayesian approach, we are not really interested in the coefficients of the model, but in the function implemented by the model.
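A sketch of the difference between averaging the predictions and predicting at the averaged coefficients, using simulated posterior draws for a two-coefficient logistic model (all values illustrative):

```r
set.seed(1)
B     <- 2000
betas <- cbind(rnorm(B, -1, 0.8), rnorm(B, 2, 0.8))  # draws of (b0, b1)
x_new <- c(1, 1.5)                                   # intercept + covariate

p_draws      <- plogis(betas %*% x_new)  # posterior distribution of P(class)
p_hat_avg    <- mean(p_draws)            # recommended: mean of the predictions
p_hat_plugin <- plogis(sum(colMeans(betas) * x_new))  # at the mean coefficients
c(averaged = p_hat_avg, plugin = p_hat_plugin)
# the two differ (Jensen's inequality); the spread of p_draws shows confidence
```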
null
CC BY-SA 3.0
null
2011-05-28T18:57:08.913
2011-05-28T18:57:08.913
null
null
887
null
11335
2
null
11327
4
null
Another point to consider: what enables a student to do well in one course is related to what enables him/her to do well in another. There are overarching factors (cognitive, personality, circumstances) that play some role in determining each of the individual course grades. So to use regression--to see how X1 relates to GPA while controlling for X2, X3, X4, etc.--, will "cannibalize" each X-Y relationship, partialling a portion of the relationship right out of itself. The coefficients you obtain will be, in Tukey and Mosteller's words, "arbitrary nonsense." Here's how Elazar Pedhazur puts it (Multiple Regression in Behavioral Research, 3rd Ed., 170-2): > “Partial correlation is not an all-purpose method of control […] Controlling variables without regard to the theoretical considerations about the pattern of relations among them may yield misleading or meaningless results […] It makes no sense to control for one measure of mental ability, say, while correlating another measure of mental ability with academic achievement when the aim is to study the relation between mental ability and academic achievement. […] This is tantamount to partialling a relation out of itself and may lead to the fallacious conclusion that mental ability and academic achievement are not correlated.” I'd recommend studying bivariate correlations among the variables over a regression predicting GPA. Predicting each individual course grade (@probabilityislogic's idea) also seems very worth doing. @Manoel's factor analysis idea makes me pause because you may not have all the variables necessary to map out the key underlying factors.
null
CC BY-SA 3.0
null
2011-05-28T20:45:28.527
2011-05-29T12:07:32.897
2011-05-29T12:07:32.897
2669
2669
null
11336
1
11345
null
6
2023
I have a multinomial model estimated with the zelig package in R. Whenever I try to use the setx() command, I get an error message saying there is more than one mode. So instead of using Zelig, I thought I would do it the hard way. I used the instructions [here](http://www.ats.ucla.edu/stat/r/dae/mlogit.htm), but I am not sure if I trust these instructions as they give me some weird results. - Could anyone explain how to get predicted probabilities from a multinomial logit in R? --- PS: here are the coefficients from a simple model (there are 6 choice categories): ``` structure(c(3.68021133487111, -0.903496528862169, -1.56339830041814, -1.13238307296064, -1.67706243532044, -0.177585202845615, 0.0611115470557421, -0.0458373863009504, 0.0881133593132653, -0.0686190052488972, 0.0163917121907627, 0.0165232098847022, 0.0373815294869855, -0.0353209839724262, -0.00698911507852077), .Names = c("(Intercept):1", "(Intercept):2", "(Intercept):3", "(Intercept):4", "(Intercept):5", "Deviance:1", "Deviance:2", "Deviance:3", "Deviance:4", "Deviance:5", "Votes:1", "Votes:2", "Votes:3", "Votes:4", "Votes:5")) ``` and here are some values for the independent variables: ``` structure(c(0.71847390030784, 1.01838748408701, 1.01838748408701, 1.20499277373001, 0.71847390030784, 1.20499277373001, 0.56393315893118, 1.20499277373001, 0.71847390030784, 0.56393315893118, 2, 5, 4, 7, 10, 27, 5, 9, 17, 16), .Dim = c(10L, 2L)) ```
Predicted probabilities from a multinomial regression model using zelig and R
CC BY-SA 3.0
null
2011-05-28T23:03:15.690
2011-05-30T02:45:08.173
2011-05-30T02:45:08.173
183
2704
[ "r", "probability", "multinomial-distribution" ]
11337
1
11340
null
7
7941
A six-sided die is rolled 100 times. Using the normal approximation, find the probability that the face showing six turns up between 15 and 20 times. Find the probability that the sum of the face values of the 100 trials is less than 300. For the first part of the question, I did the following: $P(15 \le X \le 20) = \sum_{15 \le i \le 20} C(100,i)(\frac{1}{6})^i(\frac{5}{6})^{100-i}$ Where X is the number of sixes rolled. My answer was about `0.56`. I have no idea how to do the second part. I know I have to do something like $P(Y<300|N=100)$ Where Y is the sum and N is the number of times rolled. But I don't know the probability of the sum so I'm stuck.
Probability of a certain sum of values from a set of dice rolls
CC BY-SA 3.0
null
2011-05-28T23:35:40.667
2011-05-31T05:07:31.587
2011-05-31T05:07:31.587
183
4401
[ "self-study", "binomial-distribution", "dice" ]
11338
2
null
11337
4
null
Due to the CLT, a sum of $n$ i.i.d. random variables is approximately distributed: $$ \sum_{i=1}^nX_i \sim N\left(\mu =n\cdot\mu_{X_i},\sigma^2 = n\cdot\sigma^2_{X_i}\right) $$ The mean of a single die roll ($X_i$) is 3.5 and the variance is 35/12. That should help you find the answer.
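In R, plugging in those values gives the normal approximation directly (with or without a continuity correction):

```r
# Sum of 100 fair-die rolls: mean 100 * 3.5 = 350, variance 100 * 35/12
mu    <- 100 * 3.5
sigma <- sqrt(100 * 35 / 12)

p_approx <- pnorm(300,   mu, sigma)  # plain approximation, about 0.00171
p_cc     <- pnorm(299.5, mu, sigma)  # continuity-corrected, about 0.00155
c(p_approx, p_cc)
```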
null
CC BY-SA 3.0
null
2011-05-29T00:02:10.430
2011-05-30T07:10:59.470
2011-05-30T07:10:59.470
2116
2310
null
11339
2
null
11331
6
null
You may want to look at spectral analysis techniques. Look at, [Shumway & Stoffer](http://rads.stackoverflow.com/amzn/click/144197864X) (among many other books which treat the subject) or look "Spectral analysis" in the Wikipedia for some pointers.
null
CC BY-SA 3.0
null
2011-05-29T07:17:39.377
2011-05-29T07:17:39.377
null
null
892
null
11340
2
null
11337
4
null
In the comments to Glen's answer you seem to have used a normal approximation `pnorm(300, 350, sqrt(3500/12))` to get 0.001707396. This is not a bad answer, though you can do better. If you used the continuity correction `pnorm(299.5, 350, sqrt(3500/12))` you would get `0.001553355`. I suspect this is what was being asked for. It is in fact possible to calculate this more precisely. The following R code does so (yes, I know it has `for` loops). ``` sides <- 6 throws <- 100 ## p[j,i] is probability of exactly (j+sides) after (i+1) throws p <- matrix(rep(0, sides*(throws+1)^2 ), ncol=throws+1 ) p[sides,1] <- 1 # probability 1 of score of 0 after 0 throws for (i in 2:(throws+1) ){ for (j in (sides+1):(sides*(throws+1)) ){ p[j,i] <- sum(p[(j-sides):(j-1), i-1]) / sides } } sum( p[0:(299+sides), throws+1] ) ``` This gives the result `0.001505810`. The normal approximation with continuity correction is within 0.00005, which looks good, though the relative error is about 3%, which looks slightly less impressive; this often happens using the normal approximation in the tail of the distribution.
null
CC BY-SA 3.0
null
2011-05-29T10:52:08.007
2011-05-29T10:52:08.007
null
null
2958
null
11341
1
null
null
4
1881
Is there a way to give (in R or Minitab or Statgraphics) a fractional factorial design like that and inspect the generators and the complete defining relation ($2^4 - 1$ relations)? ``` A B C D E F G H -1 1 1 1 -1 -1 1 -1 -1 -1 -1 1 1 1 1 -1 -1 1 1 -1 1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 1 -1 1 1 -1 -1 1 -1 -1 1 1 1 -1 1 -1 1 -1 1 -1 1 -1 -1 1 1 1 -1 -1 1 1 1 -1 -1 -1 1 1 -1 1 -1 1 -1 1 -1 1 -1 1 1 -1 1 1 -1 -1 -1 1 -1 1 1 -1 1 -1 -1 1 1 1 1 1 1 1 1 1 -1 -1 1 -1 -1 1 1 1 1 1 -1 -1 -1 -1 1 1 -1 -1 -1 1 1 -1 1 ``` EDIT (This is not an answer. It's just a workaround) ``` df<- read.table(textConnection(" A B C D E F G H -1 1 1 1 -1 -1 1 -1 -1 -1 -1 1 1 1 1 -1 -1 1 1 -1 1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 1 -1 1 1 -1 -1 1 -1 -1 1 1 1 -1 1 -1 1 -1 1 -1 1 -1 -1 1 1 1 -1 -1 1 1 1 -1 -1 -1 1 1 -1 1 -1 1 -1 1 -1 1 -1 1 1 -1 1 1 -1 -1 -1 1 -1 1 1 -1 1 -1 -1 1 1 1 1 1 1 1 1 1 -1 -1 1 -1 -1 1 1 1 1 1 -1 -1 -1 -1 1 1 -1 -1 -1 1 1 -1 1 ")->con,header=T) close(con) ``` ABCDEFGH gives a column of ones, so I=ABCDEFGH. We need the 14 remaining defining relations. ``` # start with four way interactions four.way <- combn(c("A","B","C","D","E","F","G","H"),4) res <- apply(four.way,2,function(x) { apply(df[,x],1,prod)}) colnames(res) <- apply(four.way,2,paste,collapse="") ``` The res matrix has 70 columns of the products of the 4-combinations. Get the column names with a colSum of either 16 or -16 (they are defining relations) ``` def <- colnames(res[,colSums(res) == 16 | colSums(res) == -16]) [1] "ABCH" "ABDE" "ABFG" "ACDF" "ACEG" "ADGH" "AEFH" "BCDG" "BCEF" "BDFH" "BEGH" "CDEH" "CFGH" "DEFG" ``` So I got another 14 defining relations (including the generalized interactions). No need to look for other "words" since I already got 15. The resolution is IV (min word length). It's easy to observe that the BGHA columns form the typical $2^4$ design (with opposite signs for B and G). Using the above defining relations you can get the generators: C=ABH, D=AGH, E=BGH and F=ABG. 
To get the alias structure I did the following (I applied it only for the main effects and 2-way interactions, but you can get whatever you ask for) ``` # two way interactions two.way <- apply(combn(c("A","B","C","D","E","F","G","H"),2),2,paste,collapse="") test <- c("A","B","C","D","E","F","G","H",two.way) gen <- c(def,"ABCDEFGH") mat <- character(length(test)*(length(gen)+1)) dim(mat) <- c(length(test),length(gen)+1) colnames(mat) <- c("Effect",paste("I=",gen,sep="")) for (j in 1:length(test)){ mat[j,1] <- test[j] for (i in 1:length(gen)) { res <- paste(sort(c(unlist(strsplit(gen[i],"")),unlist(strsplit(test[j],"")))),collapse="") le <- rle(unlist(strsplit(res,"")))$lengths va <- rle(unlist(strsplit(res,"")))$values mat[j,i+1] <- paste(sort(va[le %% 2 == 1]),collapse="") }} noquote(mat) ``` And here is the result ``` Effect I=ABCH I=ABDE I=ABFG I=ACDF I=ACEG I=ADGH I=AEFH I=BCDG I=BCEF I=BDFH I=BEGH I=CDEH I=CFGH I=DEFG I=ABCDEFGH [1,] A BCH BDE BFG CDF CEG DGH EFH ABCDG ABCEF ABDFH ABEGH ACDEH ACFGH ADEFG BCDEFGH [2,] B ACH ADE AFG ABCDF ABCEG ABDGH ABEFH CDG CEF DFH EGH BCDEH BCFGH BDEFG ACDEFGH [3,] C ABH ABCDE ABCFG ADF AEG ACDGH ACEFH BDG BEF BCDFH BCEGH DEH FGH CDEFG ABDEFGH [4,] D ABCDH ABE ABDFG ACF ACDEG AGH ADEFH BCG BCDEF BFH BDEGH CEH CDFGH EFG ABCEFGH [5,] E ABCEH ABD ABEFG ACDEF ACG ADEGH AFH BCDEG BCF BDEFH BGH CDH CEFGH DFG ABCDFGH [6,] F ABCFH ABDEF ABG ACD ACEFG ADFGH AEH BCDFG BCE BDH BEFGH CDEFH CGH DEG ABCDEGH [7,] G ABCGH ABDEG ABF ACDFG ACE ADH AEFGH BCD BCEFG BDFGH BEH CDEGH CFH DEF ABCDEFH [8,] H ABC ABDEH ABFGH ACDFH ACEGH ADG AEF BCDGH BCEFH BDF BEG CDE CFG DEFGH ABCDEFG [9,] AB CH DE FG BCDF BCEG BDGH BEFH ACDG ACEF ADFH AEGH ABCDEH ABCFGH ABDEFG CDEFGH [10,] AC BH BCDE BCFG DF EG CDGH CEFH ABDG ABEF ABCDFH ABCEGH ADEH AFGH ACDEFG BDEFGH [11,] AD BCDH BE BDFG CF CDEG GH DEFH ABCG ABCDEF ABFH ABDEGH ACEH ACDFGH AEFG BCEFGH [12,] AE BCEH BD BEFG CDEF CG DEGH FH ABCDEG ABCF ABDEFH ABGH ACDH ACEFGH ADFG BCDFGH [13,] AF BCFH BDEF BG CD CEFG 
DFGH EH ABCDFG ABCE ABDH ABEFGH ACDEFH ACGH ADEG BCDEGH [14,] AG BCGH BDEG BF CDFG CE DH EFGH ABCD ABCEFG ABDFGH ABEH ACDEGH ACFH ADEF BCDEFH [15,] AH BC BDEH BFGH CDFH CEGH DG EF ABCDGH ABCEFH ABDF ABEG ACDE ACFG ADEFGH BCDEFG [16,] BC AH ACDE ACFG ABDF ABEG ABCDGH ABCEFH DG EF CDFH CEGH BDEH BFGH BCDEFG ADEFGH [17,] BD ACDH AE ADFG ABCF ABCDEG ABGH ABDEFH CG CDEF FH DEGH BCEH BCDFGH BEFG ACEFGH [18,] BE ACEH AD AEFG ABCDEF ABCG ABDEGH ABFH CDEG CF DEFH GH BCDH BCEFGH BDFG ACDFGH [19,] BF ACFH ADEF AG ABCD ABCEFG ABDFGH ABEH CDFG CE DH EFGH BCDEFH BCGH BDEG ACDEGH [20,] BG ACGH ADEG AF ABCDFG ABCE ABDH ABEFGH CD CEFG DFGH EH BCDEGH BCFH BDEF ACDEFH [21,] BH AC ADEH AFGH ABCDFH ABCEGH ABDG ABEF CDGH CEFH DF EG BCDE BCFG BDEFGH ACDEFG [22,] CD ABDH ABCE ABCDFG AF ADEG ACGH ACDEFH BG BDEF BCFH BCDEGH EH DFGH CEFG ABEFGH [23,] CE ABEH ABCD ABCEFG ADEF AG ACDEGH ACFH BDEG BF BCDEFH BCGH DH EFGH CDFG ABDFGH [24,] CF ABFH ABCDEF ABCG AD AEFG ACDFGH ACEH BDFG BE BCDH BCEFGH DEFH GH CDEG ABDEGH [25,] CG ABGH ABCDEG ABCF ADFG AE ACDH ACEFGH BD BEFG BCDFGH BCEH DEGH FH CDEF ABDEFH [26,] CH AB ABCDEH ABCFGH ADFH AEGH ACDG ACEF BDGH BEFH BCDF BCEG DE FG CDEFGH ABDEFG [27,] DE ABCDEH AB ABDEFG ACEF ACDG AEGH ADFH BCEG BCDF BEFH BDGH CH CDEFGH FG ABCFGH [28,] DF ABCDFH ABEF ABDG AC ACDEFG AFGH ADEH BCFG BCDE BH BDEFGH CEFH CDGH EG ABCEGH [29,] DG ABCDGH ABEG ABDF ACFG ACDE AH ADEFGH BC BCDEFG BFGH BDEH CEGH CDFH EF ABCEFH [30,] DH ABCD ABEH ABDFGH ACFH ACDEGH AG ADEF BCGH BCDEFH BF BDEG CE CDFG EFGH ABCEFG [31,] EF ABCEFH ABDF ABEG ACDE ACFG ADEFGH AH BCDEFG BC BDEH BFGH CDFH CEGH DG ABCDGH [32,] EG ABCEGH ABDG ABEF ACDEFG AC ADEH AFGH BCDE BCFG BDEFGH BH CDGH CEFH DF ABCDFH [33,] EH ABCE ABDH ABEFGH ACDEFH ACGH ADEG AF BCDEGH BCFH BDEF BG CD CEFG DFGH ABCDFG [34,] FG ABCFGH ABDEFG AB ACDG ACEF ADFH AEGH BCDF BCEG BDGH BEFH CDEFGH CH DE ABCDEH [35,] FH ABCF ABDEFH ABGH ACDH ACEFGH ADFG AE BCDFGH BCEH BD BEFG CDEF CG DEGH ABCDEG [36,] GH ABCG ABDEGH ABFH ACDFGH ACEH AD AEFG BCDH 
BCEFGH BDFG BE CDEG CF DEFH ABCDEF ```
Inspect generators and defining relations of a fractional factorial design
CC BY-SA 3.0
null
2011-05-29T12:00:36.773
2017-07-31T14:53:42.900
2017-07-31T14:53:42.900
11887
339
[ "r", "experiment-design" ]
11342
2
null
11341
3
null
Disclaimer: Not really a positive answer... Take a look at the [FrF2](http://cran.r-project.org/web/packages/FrF2/index.html) package, for example: ``` des.24 <- FrF2(16,8) design.info(des.24)$aliased # look at the alias structure ``` creates a randomized fractional design with 8 factors, 16 runs. To print all designs, ``` print(catlg, nfactor=8, nruns=16) ``` For example, we have design 8-4.1 for a $2_{IV}^{8-4}$ design, whose generators are $E=ABC$, $F=ABD$, $G=ACD$, and $H=BCD$ (with defining relations in e.g., Montgomery 5ed Appendix X p. 629): ``` summary(FrF2(design="8-4.1")) FrF2:::generators.from.design(FrF2(design="8-4.1")) ``` Yet I found no way to update the design matrix. It seems we can update a response vector (there's an example of use with `design()`/`undesign()`), but AFAIK there's no function that would import a matrix of contrasts and allow one to match it in the catalog or find the generators. P.S. Apart from Minitab and StatGraphics, [DOE++](http://www.reliasoft.com/doe/) seems to offer many facilities to work with two-level fractional designs, but I cannot test it unfortunately.
null
CC BY-SA 3.0
null
2011-05-29T12:25:29.513
2011-05-30T08:39:55.627
2011-05-30T08:39:55.627
930
930
null
11343
2
null
11248
3
null
I wouldn't use the means at all for the classifier. You don't need to apply "corrections" or to "smooth out" a Bayesian solution; it is the optimal one for the prior information and data that you have actually used. But the means can be useful for giving you a feel for which combinations of regressor variables are likely to lead to classifying towards a particular category. However, this can be a horrendously complicated beast for multinomial regression, as you have a matrix of betas to interpret (one column for each category, except for the reference, which can be thought of as having all betas "estimated" as zero with zero standard error). Given that this seems to be an attempt at an intuitive way to understand what your classifier is doing, let me propose another. I will delete this section if this is not what you were intending. You have your MCMC samples of the beta matrix: call this $\beta_{ij}^{(b)}$ where $i=1,\dots,R$ denotes the multinomial category, $j=1,\dots,p$ denotes the regressor variable (the $X$), and $b=1,\dots,B$ denotes the $b$th MCMC sampled value. If the categories have different $X$ variables, then simply set those excluded variables' betas to zero in the matrix: $\beta_{ik}^{(b)}=0$ for all $b$ if variable $k$ was not part of the model fit to the $i$th category, and $\beta_{Rj}^{(b)}=0$ for all $j$ and $b$. The first thing you need is a set of covariates to use $X_{mj}\;\;\;\;m=1,\dots,M$, where $m$ is the "observation number" and $M$ is the number of predictions you are going to make. The data used to fit your model should do for this purpose, so $M=\text{sample size}$. You now calculate the linear predictor for each category for each prediction for each MCMC sample: $$y_{im}^{(b)}=\sum_{j=1}^{p}X_{mj}\beta_{ij}^{(b)}$$ (this may be quicker to code up as a matrix/array operation). Note that $y_{Rm}^{(b)}=0$ for all $b$ and $m$. 
Then convert this into a probability for the $m$th observation belonging to the $i$th category/class, call this quantity $Classify(m,i)$. $$Classify(m,i)=\frac{1}{B}\sum_{b=1}^{B}\frac{\exp\left(y_{im}^{(b)}\right)}{\sum_{l=1}^{R}\exp\left(y_{lm}^{(b)}\right)}$$ Now you plot the value of $Classify(m,i)$ against $X_{mj}$, so you will have a total of $R\times p$ plots. Looking at these should give you a feel for what the classifier is doing in relation to the regressor variables. Note that when it comes to actually classifying a new observation, you only need $Classify(m,i)$ in order to do this - all other quantities from the MCMC are irrelevant for the purpose of classification. What you do need though is a loss matrix which describes the loss incurred from classifying into category $i_{est}$ when the true category is actually $i_{true}$; this will be an $R\times R$ matrix, usually zero on the diagonal and positive everywhere else. This can be very important if correctly identifying "rare" classes is crucial compared to correctly identifying "common" classes.
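A sketch of the $Classify(m,i)$ computation with simulated arrays (the dimensions and draws are illustrative, not from a real sampler):

```r
set.seed(1)
M <- 10; p <- 3; R <- 4; B <- 500
X    <- cbind(1, matrix(rnorm(M * (p - 1)), nrow = M))   # M x p covariates
beta <- array(rnorm(p * R * B, sd = 0.5), dim = c(p, R, B))
beta[, R, ] <- 0                       # reference category betas fixed at 0

Classify <- matrix(0, M, R)
for (b in 1:B) {
  eta <- X %*% beta[, , b]             # M x R linear predictors y_im^(b)
  Classify <- Classify + exp(eta) / rowSums(exp(eta)) / B  # average softmax
}
rowSums(Classify)                      # each row sums to 1, as required
```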
null
CC BY-SA 3.0
null
2011-05-29T12:42:17.257
2011-05-29T12:42:17.257
null
null
2392
null
11344
2
null
11296
11
null
You need to have the raw data, so that the response variable is 0/1 (not smoke, smoke). Then you can use binary logistic regression. It is not correct to group BMI into intervals. The cutpoints are not correct, probably don't exist, and you are not officially testing whether BMI is associated with smoking. You are currently testing whether BMI with much of its information discarded is associated with smoking. You'll find that especially the outer BMI intervals are quite heterogeneous.
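A minimal sketch of the suggested model on simulated 0/1 data (the coefficients and sample size are illustrative):

```r
set.seed(1)
n     <- 500
bmi   <- rnorm(n, 26, 4)
smoke <- rbinom(n, 1, plogis(-4 + 0.12 * bmi))  # 0 = non-smoker, 1 = smoker

# BMI enters as a continuous predictor -- no interval grouping needed
fit <- glm(smoke ~ bmi, family = binomial)
coef(summary(fit))["bmi", ]   # slope, SE, z value, p-value
```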
null
CC BY-SA 3.0
null
2011-05-29T13:18:28.267
2011-05-29T13:18:28.267
null
null
4253
null
11345
2
null
11336
3
null
The first thing to do is to construct the "linear predictors" or "logits" for each category for each prediction. So you have your model equation: $$\eta_{ir}=\sum_{j=1}^{p}X_{ij}\hat{\beta}_{jr}\;\; (i=1,\dots,m\;\; r=1,\dots,R)$$ where, for notational convenience, the above is to be understood to have $\hat{\beta}_{jR}=\eta_{iR}=0$ (as this is the reference category), and $\hat{\beta}_{jr}=0$ if variable $j$ was not included in the linear predictor for class $r$. So you will have an $m\times R$ matrix of logits. You then exponentiate to form predicted odds ratios and renormalise to form predicted probabilities. Note that the predicted odds ratios can be calculated by a simple matrix operation if your data is sufficiently organised: $$\bf{O}=\exp(\boldsymbol{\eta})=\exp(\bf{X}\boldsymbol{\beta})$$ of the $m\times p$ prediction matrix $\bf{X}\equiv\it{\{X_{ij}\}}$ with the $p\times R$ estimated coefficient matrix $\boldsymbol{\beta}\equiv\it{\{\hat{\beta}_{jr}\}}$, and $\exp(.)$ is defined component-wise (i.e. not the matrix exponential). The matrix $\bf{O}$ is an "odds ratio" matrix, whose last column should be all ones. If we take the $m\times 1$ vector $\bf{T}=\bf{O}1_{R}$, this gives the normalisation constant for each prediction "row" of odds ratios. 
Now create the $(m\times m)$ diagonal matrix defined by $W_{kk}\equiv T_{k}^{-1}$, and the predicted probability matrix is given by: $$\bf{P}=\bf{W}\bf{O}$$ So in the example you give the matrix $\boldsymbol{\beta}$ would look like this: $$\begin{array}{c|c} Int:1 & Int:2 & Int:3 & Int:4 & Int:5 & 0 \\ \hline Dev:1 & Dev:2 & Dev:3 & Dev:4 & Dev:5 & 0 \\ \hline Vote:1 & Vote:2 & Vote:3 & Vote:4 & Vote:5 & 0 \\ \hline \end{array}$$ With the (roughly rounded) values plugged in we get: $$\boldsymbol{\beta}=\begin{array}{c|c} 3.68 & -0.90 & -1.56 & -1.13 & -1.68 & 0 \\ \hline -0.18 & 0.06 & -0.04 & 0.08 & -0.07 & 0 \\ \hline 0.02 & 0.02 & 0.04 & -0.04 & -0.01 & 0 \\ \hline \hline \end{array}$$ And the $\bf{X}$ matrix would look like this: $$\bf{X}=\begin{array}{c|c} 1 & 0.72 & 2\\ \hline 1 & 1.02 & 5\\ \hline 1 & 1.02 & 4\\ \hline 1 & 1.20 & 7\\ \hline 1 & 0.72 & 10\\ \hline 1 & 1.20 & 27\\ \hline 1 & 0.56 & 5\\ \hline 1 & 1.20 & 9\\ \hline 1 & 0.72 & 17\\ \hline 1 & 0.56 & 16\\ \hline \end{array}$$ So some R-code to do this would simply be (with the matrices $\bf{X}$ and $\boldsymbol{\beta}$ defined as above). 
The main parts are reading in the data, and padding it with 1s and 0s for $\bf{X}$ and $\boldsymbol{\beta}$:

```
beta <- cbind(as.matrix(
  structure(c(3.68021133487111, -0.903496528862169, -1.56339830041814,
              -1.13238307296064, -1.67706243532044, -0.177585202845615,
              0.0611115470557421, -0.0458373863009504, 0.0881133593132653,
              -0.0686190052488972, 0.0163917121907627, 0.0165232098847022,
              0.0373815294869855, -0.0353209839724262, -0.00698911507852077),
            .Names = c("(Intercept):1", "(Intercept):2", "(Intercept):3",
                       "(Intercept):4", "(Intercept):5", "Deviance:1",
                       "Deviance:2", "Deviance:3", "Deviance:4", "Deviance:5",
                       "Votes:1", "Votes:2", "Votes:3", "Votes:4", "Votes:5"),
            .Dim = c(3L, 5L))
), 0)

X <- cbind(1, as.matrix(
  structure(c(0.71847390030784, 1.01838748408701, 1.01838748408701,
              1.20499277373001, 0.71847390030784, 1.20499277373001,
              0.56393315893118, 1.20499277373001, 0.71847390030784,
              0.56393315893118, 2, 5, 4, 7, 10, 27, 5, 9, 17, 16),
            .Dim = c(10L, 2L))
))

P <- diag(as.vector(exp(X %*% beta) %*% as.matrix(rep(1, ncol(beta))))^-1) %*%
  exp(X %*% beta)
```
null
CC BY-SA 3.0
null
2011-05-29T15:32:53.610
2011-05-29T15:32:53.610
null
null
2392
null
11346
1
11350
null
4
1064
This is probably a pretty simple question, but I have been having some trouble interpreting the documentation for the `predict` function. I am generating a simple linear model from a data frame containing (X, Y) pairs, which I would then like to use to predict Y given new X. My code looks something like this: ``` my_lm = lm(Y ~ X, data=my_data) new_Y = predict(my_lm, new_data, interval="confidence", type="response") print(new_Y[,'fit']) ``` This does seem to give me values already, but I am a bit confused about the `type` parameter. The documentation only mentions that `type` specifies "response or model term", but it doesn't say what that actually means. Searching the web for examples I have seen both methods mentioned on different sites, with no clear explanation why they were used. Can someone tell me what the right way to solve my problem is and where I can read up on the actual meaning of the type?
Obtaining predictions from linear model
CC BY-SA 3.0
null
2011-05-29T17:20:28.353
2011-05-29T18:13:04.837
null
null
3031
[ "r", "linear-model" ]
11347
1
11349
null
0
1471
I want to jointly estimate a very simple MV-Normal two-dimensional AR[1] process, $[x_t,y_t]=[x_{t-1},y_{t-1}]+\text{[Bivariate Gaussian error]}$, in BUGS. But the syntax has been impossible to figure out. Here's the problem part of the code: ``` ## transition model (aka random walk prior) for(i in 2:NPERIODS1){ mu.vector[i,1:2]<-vector[i-1,1:2] vector[i,1:2]~dmnorm(mu.vector[i,1:2], omega[1:2,1:2]) } ``` The compiler throws up a "Expected a multivariate node" error. Looking through some examples, there doesn't seem to be any easy way to introduce a structured mean or covariance variables for the multivariate normal function. How should I proceed? Edit: Changed omega[,] to omega[1:2,1:2] for clarity.
Multivariate random walks in BUGS
CC BY-SA 3.0
null
2011-05-29T17:35:12.357
2012-07-19T06:52:44.510
2011-05-29T18:57:29.510
919
996
[ "time-series", "multivariate-analysis", "markov-chain-montecarlo", "bugs" ]
11348
2
null
11242
1
null
First: Why can't you get the raw data from the GSS? It's easily available. Failing that, you can work with the ANES or with the US sample of the World Values Survey. Or raw exit poll data. If you need academic access to get the files, contact me. Second: The poli-sci way to do this is to run Ideal or OC to construct a d-dimensional "issue space", figure out where the candidates are in issue space (pretty easy, either by interpreting item parameters of the "Supports Candidate X" question or just by looking at the coordinates of candidate supporters), and then find which 4 questions are maximally informative with regard to issue space. I actually just finished working through a similar problem.
null
CC BY-SA 3.0
null
2011-05-29T18:01:20.593
2011-05-29T18:01:20.593
null
null
996
null
11349
2
null
11347
1
null
Have you tried replacing omega[,] with omega[1:2,1:2]? I haven't got BUGS here but IIRC that's what it expects inside dmnorm.
null
CC BY-SA 3.0
null
2011-05-29T18:11:00.007
2011-05-29T18:11:00.007
null
null
26
null
11350
2
null
11346
5
null
The `type` argument specifies if you want predictions of the response (the $Y$ variable) or if you want predictions for the individual terms in the model. In combination with the `terms` argument you can get predictions for some or all (default) of the terms. In your example, there is just one term in addition to the intercept. Your model is $$Y_i = \alpha + \beta X_i + \epsilon_i$$ hence, predictions of the terms would give you $\hat{\beta}X^{\text{new}}$ while predictions of the response would give you $\hat{\alpha} + \hat{\beta}X^{\text{new}}$. Note that `type = "response"` is the default for `predict.lm`, so you get predictions of responses by default, which I guess is what you would expect from a predict function. If you look at `predict.glm` instead the default is `"link"`, which gives the linear predictors by default, so for generalized linear models you need to be explicit if you want predictions of the response.
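If it helps, here is a minimal sketch (simulated data and made-up variable names, not from your model) showing how the two prediction types relate for `predict.lm`:

```r
# Sketch: predict(type = "response") vs predict(type = "terms") for lm.
set.seed(1)
x <- rnorm(20)
y <- 2 + 3 * x + rnorm(20)
fit <- lm(y ~ x)
new_data <- data.frame(x = c(0, 1))

resp <- predict(fit, newdata = new_data, type = "response")  # alpha_hat + beta_hat * x_new
trm  <- predict(fit, newdata = new_data, type = "terms")     # per-term contributions, centred

# "terms" predictions are centred; adding back the "constant" attribute
# recovers the response-scale predictions.
all.equal(unname(resp), unname(trm[, "x"]) + attr(trm, "constant"))
```

Note that `type = "terms"` centres each term, which is why the `"constant"` attribute is needed to get back to the response scale.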
null
CC BY-SA 3.0
null
2011-05-29T18:13:04.837
2011-05-29T18:13:04.837
null
null
4376
null
11351
1
11352
null
12
6653
This is pretty hard for me to describe, but I'll try to make my problem understandable. So first you have to know that I've done a very simple linear regression so far. Before I estimated the coefficients, I looked at the distribution of my $y$. It is heavily left-skewed. After I estimated the model, I was quite sure I would observe left-skewed residuals in a QQ-plot as well, but I absolutely did not. What might be the reason for this? Where is the mistake? Or does the distribution of $y$ have nothing to do with the distribution of the error term?
Left skewed vs. symmetric distribution observed
CC BY-SA 3.0
null
2011-05-29T20:14:17.173
2014-03-09T16:22:20.847
2014-03-09T16:22:20.847
36515
4496
[ "regression", "residuals", "skewness" ]
11352
2
null
11351
25
null
To answer your question, let's take a very simple example. The simple regression model is given by $y_i = \beta_0 + \beta_1 x_i + \epsilon_i$, where $\epsilon_i \sim N(0,\sigma^2)$. Now suppose that $x_i$ is dichotomous. If $\beta_1$ is not equal to zero, then the distribution of $y_i$ will not be normal, but actually a mixture of two normal distributions, one with mean $\beta_0$ and one with mean $\beta_0 + \beta_1$. If $\beta_1$ is large enough and $\sigma^2$ is small enough, then a histogram of $y_i$ will look bimodal. However, one can also get a histogram of $y_i$ that looks like a "single" skewed distribution. Here is one example (using R): ``` xi <- rbinom(10000, 1, .2) yi <- 0 + 3 * xi + rnorm(10000, .7) hist(yi, breaks=20) qqnorm(yi); qqline(yi) ``` It's not the distribution of $y_i$ that matters -- but the distribution of the error terms. ``` res <- lm(yi ~ xi) hist(resid(res), breaks=20) qqnorm(resid(res)); qqline(resid(res)) ``` And that looks perfectly normal -- not only figuratively speaking =)
null
CC BY-SA 3.0
null
2011-05-29T21:06:10.660
2011-05-29T21:06:10.660
null
null
1934
null
11353
1
null
null
3
520
I have hundreds of explanatory variables and under 100 observations (saturated data set). I'd like to create a linear model in which I have two or so composite variables made up of a dozen of the explanatory variables each. How do I find the best variables to use for the composites without going through every combination? Currently my $R^2 = .64$, I'd like to improve that. The model looks something like this: $$Y = B_1(v_1 + v_2 + \dots + v_{12}) + B_2(v_{13} + v_{14} + \dots +v_{24})$$ where $B_1$ and $B_2$ are the coefficients and the entirety of ($v_1 + v_2 + \dots + v_{12}$) is acting as one variable.
How do I find the best model with a saturated dataset?
CC BY-SA 3.0
null
2011-05-29T23:16:54.367
2011-05-31T13:35:40.367
2011-05-30T06:07:01.963
2116
4798
[ "regression", "modeling" ]
11355
2
null
11353
2
null
How about factor analysis over your variables? That will give you groups of variables which behave similarly, each group corresponding to a latent variable. Apart from this, you should also run multicollinearity diagnostics to detect collinearity in your present list of variables. Chances are high that many of your variables are correlated.
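For the multicollinearity check, a base-R sketch (simulated data; the cutoff of 10 is just the common rule of thumb) could look like this:

```r
# Sketch: spotting collinearity via a condition number and a hand-rolled VIF.
set.seed(7)
x1 <- rnorm(100)
x2 <- x1 + rnorm(100, sd = 0.1)  # nearly a copy of x1
x3 <- rnorm(100)

# A large condition number of the scaled predictor matrix signals collinearity.
kappa(scale(cbind(x1, x2, x3)))

# VIF by hand: 1 / (1 - R^2) from regressing one predictor on the others.
vif_x1 <- 1 / (1 - summary(lm(x1 ~ x2 + x3))$r.squared)
vif_x1  # well above the usual rule-of-thumb cutoff of 10 here
```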
null
CC BY-SA 3.0
null
2011-05-30T07:03:04.490
2011-05-30T07:03:04.490
null
null
1763
null
11356
2
null
11353
2
null
The standard answer when determining the "best" linear combinations of variables is [principal component analysis](http://en.wikipedia.org/wiki/Principal_component_analysis). Its regression counterparts are [principal components regression](http://en.wikipedia.org/wiki/Principal_component_regression) and [partial least squares regression](http://en.wikipedia.org/wiki/Partial_least_squares_regression). Both methods will select several linear combinations of all the variables which should be the "best" predictors of the dependent variable. I suggest starting with PCR, since it looks more suited to your analysis as it is stated now. It would help if you elaborated more on what you are trying to model. The goal of raising $R^2$ is a bit suspicious, as it should never be the sole goal in any kind of modeling.
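To make the PCR suggestion concrete, a hand-rolled sketch in base R (simulated data with more predictors than observations, mirroring your setup; the number of components kept is arbitrary here) might be:

```r
# Sketch of principal components regression: compress the predictors with PCA,
# then regress the response on a few leading components.
set.seed(42)
n <- 80; p <- 200
X <- matrix(rnorm(n * p), n, p)
y <- X[, 1] - X[, 2] + rnorm(n)

pc  <- prcomp(X, scale. = TRUE)   # principal components of the predictors
k   <- 5                          # components to keep (tune by cross-validation in practice)
fit <- lm(y ~ pc$x[, 1:k])        # ordinary regression on the component scores
summary(fit)$r.squared
```

One caveat worth keeping in mind: PCA picks components by the variance of the predictors, not by their relevance to the response, which is exactly the gap partial least squares tries to close.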
null
CC BY-SA 3.0
null
2011-05-30T07:08:04.773
2011-05-30T07:08:04.773
null
null
2116
null
11358
2
null
11327
3
null
Strong exogeneity is a term related to dynamic models, i.e. when there is time-series data involved. Since you are doing a one-time survey, this term does not apply. What might be a problem with the regression, though, is [omitted variable bias](http://en.wikipedia.org/wiki/Omitted-variable_bias). Since GPA is a weighted average, a purely arithmetical formula applies: $$GPA = w_1 G_1+...+ w_n G_n$$ where $w_i$ are the weights and $G_i$ are grades. This equation is not stochastic. However, we can say that each grade is determined by the student's ability plus a stochastic term: $$G_i=f_i(\mathbf{A})+\varepsilon_i$$ where $\mathbf{A}$ is the vector of variables which determine the student's ability, and $f_i$ is the functional form of the relationship. When viewed in this light it makes no sense to include grades in the regression. What might be of interest is how the different functional relationships $f_i$ aggregate into the functional relationship $f$: $$GPA=f(\mathbf{A})+\varepsilon$$ But this does not answer the questions in your given example. So, as others suggested, it is better to use [factor analysis](http://en.wikipedia.org/wiki/Factor_analysis).
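A toy simulation (purely illustrative numbers, not your data) shows why regressing GPA on one of its own component grades is close to tautological:

```r
# Sketch: GPA is an arithmetic average of grades; each grade = ability + noise.
set.seed(3)
ability <- rnorm(500)
g1  <- ability + rnorm(500)
g2  <- 0.5 * ability + rnorm(500)
gpa <- (g1 + g2) / 2          # deterministic given the grades

# The grade "explains" GPA mechanically, not substantively.
summary(lm(gpa ~ g1))$r.squared
```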
null
CC BY-SA 3.0
null
2011-05-30T08:16:18.460
2011-05-30T08:16:18.460
null
null
2116
null
11359
1
11373
null
24
57959
What is the primary reason that someone would apply the square root transformation to their data? I always observe that doing this increases the $R^2$. However, this is probably just due to centering the data. Any thoughts are appreciated!
What could be the reason for using square root transformation on data?
CC BY-SA 4.0
null
2011-05-30T08:47:46.340
2020-04-25T13:16:17.543
2020-04-25T13:16:17.543
273266
4496
[ "regression", "data-transformation", "variance-stabilizing" ]
11361
2
null
7249
7
null
Try [Orange Canvas](http://orange.biolab.si/), it will give you option to build interactive decision tree.
null
CC BY-SA 3.0
null
2011-05-30T09:13:13.920
2011-05-31T06:39:19.290
2011-05-31T06:39:19.290
2116
4802
null