Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
11149 | 1 | null | null | 8 | 18390 | Is the method of mean substitution for replacing missing data out of date? Are there more sophisticated models that should be used? If so, what are they?
| Is the method of mean substitution for replacing missing data out of date? | CC BY-SA 3.0 | null | 2011-05-23T11:33:59.683 | 2011-05-24T10:59:39.857 | 2011-05-23T11:54:25.253 | 183 | 4716 | [
"missing-data"
] |
11150 | 2 | null | 11149 | 14 | null | Setting aside cases where it's not necessary to shoot mosquitoes with a cannon (i.e. if you have one missing value in a million data points, just drop it), using the mean can be suboptimal, to say the least: the result can be biased, and you should at least correct the result for the uncertainty.
There are some other options, but the easiest one to explain is multiple imputation. The concept is simple: based upon a model for your data (e.g. obtained from the complete cases, though other options, like MICE, are available), draw values from the associated distribution to 'complete' your dataset. This completed dataset then has no missing data, and you can run your analysis of interest on it.
If you did this only once (in fact, replacing the missing values with the mean is a very contorted form of this), it would be called single imputation, and there is no reason why it would perform better than mean replacement.
However: the trick is to do this repeatedly (hence Multiple Imputation), and each time do your analysis on each completed (=imputed) dataset. The result is typically a set of parameter estimates or similar for each completed dataset. Under relatively loose conditions, it is OK to average your parameter estimates over all these imputed datasets.
The advantage is that there also exists a simple formula to adjust the standard error for the uncertainty caused by the missing data.
If you want to know more, you probably want to read Little and Rubin's 'Statistical Analysis with Missing Data'. It also covers other methods (EM, ...) and explains how/why/when they work.
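As an illustration (not part of the original answer), here is a minimal sketch of this workflow in R, assuming the `mice` package is installed and using its built-in `nhanes` example data:

```r
library(mice)

# impute m = 5 completed versions of the data with missing values
imp <- mice(nhanes, m = 5, seed = 1, printFlag = FALSE)

# run the analysis of interest on each completed dataset
fits <- with(imp, lm(bmi ~ age))

# pool the estimates and widen the standard errors for the
# between-imputation uncertainty (Rubin's rules)
summary(pool(fits))
```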
| null | CC BY-SA 3.0 | null | 2011-05-23T11:53:59.070 | 2011-05-23T12:21:03.900 | 2011-05-23T12:21:03.900 | 2116 | 4257 | null |
11151 | 2 | null | 11149 | 11 | null | You did not tell us very much about the nature of your missing data. Did you check for MCAR ([Missing Completely at Random](http://en.wikipedia.org/wiki/Missing_completely_at_random))? Given that you cannot assume MCAR, mean substitution can lead to biased estimators.
As a non-mathematical starting point, I can recommend the following two references:
- Graham, John W. (2009): Missing Data Analysis: Making It Work in the Real World.
- Allison, Paul (2002): Missing data. (see section "Imputation", p. 11)
| null | CC BY-SA 3.0 | null | 2011-05-23T11:54:42.270 | 2011-05-23T11:54:42.270 | null | null | 307 | null |
11152 | 1 | 11154 | null | 3 | 123 | I am going to be hosting a number (~10) of [potluck meals](http://en.wikipedia.org/wiki/Potluck) over the course of the summer; my pool of people to invite is about 40 people, with about 10-15 coming to each meal. So I figure this would be a good opportunity to record data over time about the meals/people. The issue I am having is that I am not sure what information to keep track of or what format to record it in.
Here are some examples of trends I think would be interesting:
- How many meals I have invited people to
- On average, which round of invites people got invited in (some people RSVP "no" in the beginning, and so there is another 'round' of invites)
- How many meals people have attended
- What items people have brought
I have started a spreadsheet where each page is a meal. The first few columns of each page represent different rounds of invites; I input a person's name in the column that corresponds to the round of their invite. The last two columns are the ultimate RSVP from any round of invitation and the item brought, if applicable.
To summarize, I am looking for an efficient and concise way of recording the data associated with these meals for the trends mentioned. Additionally, I am looking for other trends I can keep track of; I am doing a lot of this communication via email, so timestamps would potentially be available for other interesting trends.
Help with good tags for this question would be appreciated.
| Statistics of events and invitations | CC BY-SA 3.0 | null | 2011-05-23T13:12:51.430 | 2011-05-23T22:49:21.180 | 2011-05-23T22:49:21.180 | 307 | 4717 | [
"dataset",
"multilevel-analysis",
"trend"
] |
11153 | 2 | null | 11149 | 2 | null | If your missing values are randomly distributed, or your sample size is small, you might be better off just using the mean. I would first split the data into two parts, one with the missing values and the other without, and then test for the difference in means of some key variables between the two samples. If there is no difference, you have some support for substituting the mean, or for just deleting the observations entirely.
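A hedged sketch of this check in R, on made-up data (all variable names are illustrative):

```r
set.seed(1)
# y is a key variable observed for everyone; x has missing values
y <- rnorm(100)
x <- y + rnorm(100)
x[sample(100, 20)] <- NA

# compare y between cases with and without missing x
t.test(y[is.na(x)], y[!is.na(x)])
```

A non-significant difference here is consistent with (though it does not prove) the missingness being unrelated to `y`.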
-Ralph Winters
| null | CC BY-SA 3.0 | null | 2011-05-23T15:03:30.203 | 2011-05-23T15:03:30.203 | null | null | 3489 | null |
11154 | 2 | null | 11152 | 3 | null | Yoel, great question! I will address your question of what can be an "efficient and concise way of recording the data". Given your small data set, the following thoughts are more of a theoretical nature than of practical use.
You have (what social scientists call) a multilevel data set, e.g. students (level 1) are nested in classes (level 2) which are nested in schools (level 3).
Unfortunately, your case is more complicated because each meal can be attended by more than one person and each person can attend more than one meal. So, there is no easy way to handle a $1:n$ relationship between meals and persons. Furthermore, it is unrealistic to assume that each person brings unique items to each meal, i.e. many will bring salad, bread, cheese, etc. (that's at least my experience ;-)). Again, there is no easy $1:n$ relationship between person and item(s). The following picture may be helpful to illustrate what I mean:

If you had more data, I would recommend using a relational database management system (MS Access, MySQL, SQLite, etc.), in which you would need the following 5 tables (or relations). Each table needs at a minimum the following identifier variables:
Meal:
- meal.id
Person:
- person.id
Item:
- item.id
Since you do not have $1:n$ relations, you also need auxiliary tables which help you to establish $n:m$ relations:
meal-person:
- meal.id
- person.id
person-item:
- person.id
- item.id
By the way, if you intend to do more complicated (regression) analyses than those mentioned in your question, then you will need to run "Multilevel Models With Crossed Random Effects".
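In R, such a model could be sketched with `lme4` (mentioned in the resources below) roughly as follows; the data and the names `meal.id`, `person.id`, and `outcome` are purely illustrative:

```r
library(lme4)
set.seed(1)

# one row per (meal, person) attendance; the two factors are
# crossed, not nested: each person can attend many meals
long_dat <- expand.grid(meal.id = factor(1:10), person.id = factor(1:40))
long_dat$outcome <- rnorm(nrow(long_dat))

# crossed random effects: a random intercept for meals and one for persons
fit <- lmer(outcome ~ 1 + (1 | meal.id) + (1 | person.id), data = long_dat)
```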
Update
Yoel asked for some resources to get started with multilevel analysis:
- My all-time favourite intro text is "Using SAS Proc Mixed to Fit Multilevel Models, Hierarchical Models, and Individual Growth Models" by Judith Singer. It is a basic introduction to multilevel models. The text, however, currently seems to be unavailable.
- The University of Bristol Centre for Multilevel Modelling offers "Training materials and online information about multilevel modelling and MLwiN". Lean back and enjoy the videos!
- Joop Hox has written one of the best textbooks and a lot of articles. Some of them can be found on his homepage.
- The UCLA Academic Technology Service offers textbook examples, see the section about Multilevel Modeling.
- Douglas Bates is currently working on a book called "lme4: Mixed-effects Modeling with R". Chapter two is about "Models With Multiple Random-effects Terms".
However, I was not successful in finding a good intro text which covers multilevel models with crossed random effects. Maybe, this [blog post](http://www.analysisfactor.com/statchat/multilevel-models-with-crossed-random-effects/) can get you started.
| null | CC BY-SA 3.0 | null | 2011-05-23T15:06:28.850 | 2011-05-23T22:44:00.780 | 2011-05-23T22:44:00.780 | 307 | 307 | null |
11155 | 2 | null | 11101 | 1 | null | With a sample size of 104, any factor analysis is going to be shaky at best. The best approach is probably to collect more data (not really that useful an answer, but it's true). [This page](http://www.technion.ac.il/docs/sas/stat/chap26/sect21.htm) gives some useful advice.
[Fabrigar et al (1999)](http://www.statpower.net/Content/312/Handout/Fabrigar1999.pdf) indicate that large communalities can often lower the sample size required, but almost 1 is probably way too high. I would drop the offending items and re-run the analysis (using ML, principal axis, and another method of your choice) and see what the results are. If they still produce Heywood cases, then FA is probably not the right approach.
| null | CC BY-SA 3.0 | null | 2011-05-23T15:54:10.807 | 2011-05-23T15:54:10.807 | null | null | 656 | null |
11156 | 2 | null | 11088 | 4 | null | The best (fastest to run, not fastest to code ;) free solution I have found in Matlab was to wrap R's MATHLIB_STANDALONE C library with a MEX function. This gives you access to R's t-distribution PRNG. One advantage of this approach is that you can also use the same trick to get variates from a non-central t distribution.
The second-best free solution was to use Octave's implementation of [trnd](http://www.gnu.org/software/octave/doc/interpreter/Random-Number-Generation.html). Porting from Octave turned out to be more work for me than wrapping C code.
For my tastes, using uniform generation via `rand` and inverting via `tinv` was much too slow. YMMV.
| null | CC BY-SA 3.0 | null | 2011-05-23T17:29:06.840 | 2011-06-15T04:05:39.333 | 2011-06-15T04:05:39.333 | 795 | 795 | null |
11157 | 1 | 11162 | null | 4 | 1045 | I have time series data that represent dates/times of trades taken in a financial market.
I would like to assign a score to this data that represents whether the trades are `mostly clustered` around particular time values or if they are `mostly spread out` evenly. I am going to have about 1000+ results per dataset.
Example situation one (High degree of "clustering" ):
```
1. 01/01/01 : 13:00
2. 01/01/01 : 13:10
3. 01/01/01 : 13:15
4. 01/01/01 : 13:25
5. 03/05/01 : 17:20
6. 03/05/01 : 17:35
7. 03/05/01 : 17:40
8. 03/05/01 : 17:45
```
Example situation two (Low degree of "clustering"):
```
1. 01/01/01 : 13:00
2. 01/05/01 : 02:30
4. 02/12/01 : 06:40
5. 02/25/01 : 02:30
6. 03/30/01 : 21:10
7. 04/12/01 : 02:20
8. 05/02/01 : 03:25
```
I can of course convert all the timestamps to POSIX time or whatnot, so doing calculations with the time values won't be a problem.
I was thinking possibly standard error?
(For those who want more background info: I am using backtest results to modulate the size of my entry position in a complex manner. If the results contain trades that are clustered together, then they don't really count as 1 trade each (more like one big trade). This means that such results are untrustworthy and I should not act on them.)
Thanks!
| What statistical test can I use to detect clumping? | CC BY-SA 3.0 | null | 2011-05-23T18:01:24.313 | 2011-05-24T21:34:01.523 | null | null | 4544 | [
"time-series",
"statistical-significance",
"standard-error"
] |
11158 | 2 | null | 11021 | 1 | null | I contacted Sean at RezScore and he clarified some things for me. In a nutshell, inserting buzzwords into a hidden text box seems to be a good idea if you don't want to put them in your actual resume. However, you should be selective about which words you include because many of the algorithms penalize verbosity.
Maybe RezScore will include a feature to do just this for specific industries, I'd bet it would be cheaper than a resume rewrite.
| null | CC BY-SA 3.0 | null | 2011-05-23T18:12:47.270 | 2011-05-23T18:12:47.270 | null | null | 4685 | null |
11159 | 2 | null | 11138 | 1 | null | I'm answering with another approach, one that doesn't use hard cuts on the dendrogram. I would suggest using something like linear discriminant analysis (LDA) or any other technique that allows you to predict the class of the unlabeled points. (There are many techniques that can do the job, but I find LDA the easiest.)
LDA is used when you have a set of multivariate observations, and those observations belong to a particular class. This dataset is composed of the labeled points. On the other hand, you may have a set of points on the same variables whose class is unknown. These are the unlabeled points.
The goal of LDA is to classify the unknown points in the given classes. It is important to notice that in your case, the classes are defined by the hierarchical clustering you've already performed.
Discriminant analysis tries to define linear boundaries between the classes, creating some sort of "territories" (or regions) for each class. For any unlabeled point, you must check to which region it belongs.
You can check this [lecture on LDA](http://research.cs.tamu.edu/prism/lectures/pr/pr_l10.pdf). It is simple and sufficiently explained.
If you need software to accomplish this task, you might check [R documentation](http://stat.ethz.ch/R-manual/R-patched/library/MASS/html/lda.html) on the functions `lda()` and `predict.lda()` in `MASS` package. Check [Quick-R](http://www.statmethods.net/advstats/discriminant.html) for additional help. SPSS has a very easy implementation of LDA.
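For instance, here is a minimal R sketch using the `iris` data to stand in for the labeled points (in your case, the class labels would come from the hierarchical clustering):

```r
library(MASS)

# fit LDA on the labeled points
fit <- lda(Species ~ ., data = iris)

# classify three "unlabeled" points by the region they fall into
newpts <- iris[c(1, 51, 101), 1:4]
predict(fit, newpts)$class
```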
In fact, I think it is very odd to "force" the dendrogram to include the unlabeled points. Normally, the dendrogram is built using all of the unlabeled points in a dataset.
Hope this is useful :)
| null | CC BY-SA 3.0 | null | 2011-05-23T18:21:27.423 | 2011-05-23T18:21:27.423 | null | null | 2902 | null |
11160 | 2 | null | 11138 | 0 | null | Just my two cents, but I would look at decision trees, or use your initial cluster analysis to determine a suitable number of clusters and then use k-means to refine. From there, you can get the cluster centers and reclassify new cases based on those centers.
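As a rough sketch of that last step in R (simulated data; 3 clusters is an arbitrary choice here):

```r
set.seed(1)
dat <- matrix(rnorm(200), ncol = 2)

# refine with k-means, say 3 clusters suggested by the initial analysis
km <- kmeans(dat, centers = 3, nstart = 10)

# reclassify a new case by its nearest cluster center
newcase <- c(0.5, -0.2)
d2 <- colSums((t(km$centers) - newcase)^2)  # squared distances to centers
which.min(d2)                               # assigned cluster
```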
HTH.
| null | CC BY-SA 3.0 | null | 2011-05-23T18:45:34.503 | 2011-05-23T18:45:34.503 | null | null | 569 | null |
11161 | 1 | null | null | 1 | 137 | I have a sequence of integers that represent total sales of my product for each day. From time to time, we have large press or marketing events that increase sales on the day of the event and for a few days after that but then eventually taper down to the long-run average. Here are some made-up numbers showing what I mean:
```
34, 40, 35, 36, 150, 110, 140, 107, 80, 68, 75, 50, 45, 35, 38, 41, 42,...
^^^ ^^^
event occurs here reversion to mean
```
I have limited experience with statistics, so I'm looking for some guidance on statistical methods that I could use to determine:
- how much of the sales on the event day and the following days should be attributed to the event
- whether the press or marketing event contributes to a permanently higher sales average, even after its initial effect has worn off.
| How can I isolate the effect of an event on a sequence of sales numbers? | CC BY-SA 3.0 | null | 2011-05-23T19:24:32.360 | 2011-05-24T01:52:15.880 | null | null | 4718 | [
"mean"
] |
11162 | 2 | null | 11157 | 1 | null | I would simply calculate a rolling window of the number of trades (or dollar volume) per hour, day, week, or whatever time frame makes sense. For example, you might use 1 day as the rolling window. If 1 trade per day is a low degree of clustering, then 10 trades per day might be a high degree of clustering. If so, then a linear "y" scale for a "clustering plot" is probably reasonable.
Here's an example:
Edit 2 ===========================================
Below is the updated version of the plot. Just like the previous plots, the gray line is from a "window" of 1 day where the "cluster number" is the number of trades for the previous day. The new blue line is from a "window" of 5 days where the "cluster number" is the sum of the trades for the previous 5 days divided by 5 (the divide by 5 is to scale the result so it can be directly compared to a 1 day "window"). The new purple line is from a "window" that sums the trades for 10 days and then divides by 10, and the new green line is from a "window" for 20 days, divided by 20.
The last day in the plot (far right hand side) is for the day 2010-07-02 where the values are:
- 1 day window = 0
- 5 day window = 2
- 10 day window = 1.5
- 20 day window = 1.25
If you had chosen a "window" of 5 days, then before you trade on 2010-07-03 (assuming that's the next trading day), your "cluster number" would be 2 (averaging 2 trades per day for the previous 5 days).
Just like any moving average, the longer the "window", the smoother the plot. However, this smoothing delays the peaks and valleys. Compare the gray peak in early April with the blue peak, then the purple peak, and then the green peak. This may not be a big issue for the current use, but I thought it was a good idea to point it out.
The bottom line is, you'll have to play around with different "windows" to zero in on your desired smoothness and timeliness.
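A minimal R sketch of computing such rolling "cluster numbers" (the trade dates are simulated; window lengths match the ones above):

```r
set.seed(1)
# simulated trade dates over about six months
trades <- as.Date("2010-01-01") + sample(0:180, 300, replace = TRUE)

# trades per calendar day, including zero-trade days
days <- seq(min(trades), max(trades), by = "day")
daily <- as.numeric(table(factor(trades, levels = as.character(days))))

# trailing moving averages: average trades per day over 5- and 20-day windows
roll5  <- stats::filter(daily, rep(1/5, 5),  sides = 1)
roll20 <- stats::filter(daily, rep(1/20, 20), sides = 1)
```

Plotting `daily`, `roll5`, and `roll20` against `days` reproduces the kind of picture described above.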

| null | CC BY-SA 3.0 | null | 2011-05-23T19:35:59.183 | 2011-05-24T02:24:52.420 | 2011-05-24T02:24:52.420 | 2775 | 2775 | null |
11163 | 1 | null | null | 7 | 549 | I would like to test that two difference/distance/dissimilarity matrices are not the same. i.e. the rows and columns between the two matrices represent the same features, but the distances are obtained from 2 populations and I'm interested in whether the difference matrices "look different" between the populations.
I think I'm looking for something similar to a Mantel test, but with the null hypothesis flipped. Whereas (as I understand it) the Mantel test tests for a linear correlation between two dissimilarity matrices against a null hypothesis of no linear correlation, the null in my case is that the two dissimilarity matrices are the same, and I'm interested in rejecting that null when the two dissimilarity matrices differ significantly.
As a follow-up to this question: once I have some sort of omnibus test for difference, what would be the best way to decompose the differences into contributions from individual cells?
| How to test whether two distance/difference matrices are different? | CC BY-SA 3.0 | null | 2011-05-23T19:43:20.070 | 2011-06-23T01:02:47.060 | 2011-05-23T23:19:53.857 | null | 4720 | [
"clustering"
] |
11164 | 1 | 11966 | null | 5 | 118 | Suppose we have 500 students nested in 20 classes (different classrooms), 25 students per class
```
student<-factor(1:500)
class<-rep(LETTERS[1:20],each=25)
```
They all take a test.
```
score<-rnorm(500,mean=80,sd=5)
```
The model below would tell you about the average scores and variability among students and classes
```
library(lme4)
lmer(score~(1|class))
```
Suppose further that a random 10 of the classrooms were painted red and the other 10 were painted blue.
```
redvblue<-rep(0:1,each=250)
```
Is this a reasonable way to test for the effect of redvblue on test scores? And if it is reasonable, is there a better way?
```
lmer(score~redvblue+(1|class))
```
I'm wondering whether I should be concerned that redvblue was applied at the classroom level rather than the student level.
| Testing for the effect of an intervention when it is applied on a group of which each individual is measured | CC BY-SA 3.0 | null | 2011-05-23T19:50:26.923 | 2012-08-30T23:40:21.070 | 2012-08-30T23:40:21.070 | 5739 | 3874 | [
"r",
"mixed-model",
"multilevel-analysis",
"blocking"
] |
11165 | 1 | 11173 | null | 7 | 1888 | Take the task of fitting an a priori distribution like the ex-Gaussian
to a collection of observed human response times (RT). One method is to compute the sum log likelihood of each observed RT given a set of candidate ex-Gaussian parameters, then try to find the set of parameters that maximizes this sum log likelihood. I wonder if this alternative approach might also be reasonable:
- Select a set of equidistant quantile probabilities, e.g.:
qps = seq( .1 , .9 , .1 )
- For a given set of candidate ex-Gaussian parameters, estimate the
quantile RT values corresponding to qps, e.g.:
sim_dat = rnorm( 1e5 , mu , sigma ) + rexp( 1e5 , 1/tau )
qrt = quantile( sim_dat , prob = qps )
- For each sequential interval between the thus-generated quantile RT
values, count the number of observations falling into that interval,
e.g.:
obs_counts = rep( NA , length(qrt)-1 )
for( i in 1:(length(qrt)-1) ){
obs_counts[i] = length( obs_rt[ (obs_rt>qrt[i]) & (obs_rt<=qrt[i+1]) ] )
}
- Compare these observed counts to the expected counts:
exp_counts = diff(qps)[1] * length(obs_rt) # each inter-quantile interval holds this fraction of the distribution
chi_sq = sum( (( obs_counts - exp_counts )^2 )/exp_counts )
- Repeat steps 2-4, searching for candidate parameter values that
minimize chi_sq.
Is this approach a reasonable alternative to the more standard maximum likelihood estimation procedure? Does this approach already have a name?
Note that I use the example of an ex-Gaussian purely for illustrative purposes; in practice I'm playing with using the above approach in a rather more complicated context (e.g. fitting data to a stochastic model that yields multiple distributions, each with a different expected observation count). The purpose of this question is to ascertain whether I've re-invented the wheel, as well as whether anyone can pick out any problematic features of the method.
| Is this a reasonable approach to fitting distributions? | CC BY-SA 3.0 | null | 2011-05-23T20:08:50.817 | 2013-07-04T10:37:48.477 | 2013-07-04T10:37:48.477 | 17230 | 364 | [
"distributions",
"fitting"
] |
11166 | 2 | null | 11165 | 0 | null | Take a look at the QQ-Plot (under my answer) in the following link:
[Need help identifying a distribution by its histogram](https://stats.stackexchange.com/questions/8662/need-help-identifying-a-distribution-by-its-histogram/8674#8674)
| null | CC BY-SA 3.0 | null | 2011-05-23T20:19:23.160 | 2011-05-23T20:19:23.160 | 2017-04-13T12:44:39.283 | -1 | 2775 | null |
11167 | 2 | null | 11165 | 6 | null | What you are proposing is called quantile matching, though the way you propose to do it will be exhausting. The ex-Gaussian distribution can be found in the package `gamlss.dist` with quantiles as `qexGAUS` etc.; it uses `nu` where you use `tau`.
A similar quantile matching method can be used in the function `fitdist` in the package `fitdistrplus` using `method="qme"`. The package is mentioned in the answer linked by bill_080. One difference is that it only matches as many quantiles as there are parameters (three in this case).
The following seems to work more or less: it simulates some data points from a particular ex-Gaussian distribution and then tries to estimate the parameters using quantile matching, and then draws some graphs. It needs a rough estimate of the parameters to work.
```
library(fitdistrplus)
library(gamlss.dist)
set.seed(1)
sim_size <- 1000
Gm <- 10 # mean of Gaussian
Gs <- 2 # sd of Gaussian
Em <- 5 # mean of exponential
sim_dat <- rnorm( sim_size , Gm , Gs ) + rexp( sim_size , 1/Em )
fit_qme <- fitdist(sim_dat, "exGAUS", method="qme",
start=c(mu=15, sigma=1, nu=3),
probs=c(0.2,0.5,0.8) )
fit_qme
plot(fit_qme)
```
In this example and with this seed, the estimates are
```
> fit_qme
Fitting of the distribution ' exGAUS ' by matching quantiles
Parameters:
estimate
mu 9.859207
sigma 1.753703
nu 5.049785
```
By comparison a maximum likelihood estimate method using the same function might look something like
```
fit_mle <- fitdist(sim_dat, "exGAUS", method="mle",
start=c(mu=15, sigma=1, nu=3) )
```
and produce something like
```
> fit_mle
Fitting of the distribution ' exGAUS ' by maximum likelihood
Parameters:
estimate Std. Error
mu 9.938870 0.1656315
sigma 2.034017 0.1253632
nu 5.007996 0.2199171
```
| null | CC BY-SA 3.0 | null | 2011-05-23T22:21:40.967 | 2011-05-23T22:21:40.967 | null | null | 2958 | null |
11168 | 1 | null | null | 10 | 48626 | Should the n for sample size be capitalized? Is there a difference between n and N?
| Capitalization of n for sample size | CC BY-SA 3.0 | null | 2011-05-23T23:59:06.663 | 2021-06-09T20:54:57.477 | 2011-05-24T01:13:30.937 | 2902 | 4722 | [
"notation"
] |
11169 | 2 | null | 11168 | 11 | null | There is actually a difference in some textbooks: $N$ generally means population size and $n$ sample size.
However, this is not always the case. You should check in your textbook.
:)
| null | CC BY-SA 3.0 | null | 2011-05-24T00:03:19.040 | 2011-05-24T00:03:19.040 | null | null | 2902 | null |
11170 | 2 | null | 11163 | 2 | null | I am not sure I understand what you mean by a difference/distance/dissimilarity matrix. Assume that $D_{i,j}^2 = (v_i - v_j)^{\top}(v_i - v_j)$ for some vectors $v_i, v_j$, and that you can accept a transformation to the cross-product matrix $G_{i,j} = -2 v_i^{\top}v_j$ (say, for example, the vectors are normalized so $v_i^{\top}v_i = 1 = v_j^{\top}v_j$, and so you can subtract out the 2 from $D_{i,j}^2$). Then you can compare the two cross-product matrices, call them $G$ and $H$, by a Wilks' lambda test, I think. I'm not sure, but I think they would both have to be the same rotation away from Wishart matrices. The Wilks' lambda distribution would then describe the ratio of the products of the eigenvalues of the two matrices under the null.
This may not be applicable to your problem, though...
| null | CC BY-SA 3.0 | null | 2011-05-24T00:53:14.377 | 2011-05-24T00:53:14.377 | null | null | 795 | null |
11171 | 2 | null | 11161 | 1 | null | To answer your question, one would be advised to build a single-equation model that captures day-of-the-week effects (6 dummy indicators) and an indicator for the "event". Software exists to capture any lead, contemporaneous, and/or lag effects around a known event. In the absence of such software you might try to "roll your own" in order to identify the "window of response" around your event variable. In addition, you might want to include week-of-the-year and/or month-of-the-year variables to handle annual seasonality. Furthermore, since this is daily data, you might want to include other events such as holidays, again incorporating any required "window of response" around them. You should also try to identify particular days of the month, irrespective of what day of the week they fall on, as important contributors to explaining daily demand, such as the end of the month or the day seniors get their social security checks. Other factors that might be important in the "discovery model phase" are the empirical identification of mean shifts and/or local time trends.
Before congratulating yourself on writing such a "tour de force", you might (should!) validate that neither the model parameters nor the error variance have changed over time. By sharing your data and inviting the list to provide quality analysis of it, we could all learn in a precise manner what others are doing. Upon completion of the modelling phase, one could then remove the event variable's effect, essentially scrubbing the data by obtaining a realization of the daily sales series without the event coefficients/effects being incorporated.
The difference between the originally observed series and this scrubbed series provides an estimate of the event's effects. This recommended approach uses all of the data, as compared to the incorrect approach of trying to forecast data values at a point prior to the event's effects, which of course is the unknown (but to be found) point in time where the effect "starts".
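As a toy illustration of the idea (not the specialized software the answer refers to), one could regress daily sales on day-of-week dummies plus the event indicator and a few of its lags, using the made-up numbers from the question; the 5-day response window is an arbitrary choice:

```r
sales <- c(34, 40, 35, 36, 150, 110, 140, 107, 80, 68,
           75, 50, 45, 35, 38, 41, 42)
n <- length(sales)

day   <- factor((seq_len(n) - 1) %% 7)  # day-of-week dummies
event <- as.numeric(seq_len(n) == 5)    # event occurs on day 5

# contemporaneous effect plus lags 1..4 form the "window of response"
lags <- sapply(0:4, function(k) c(rep(0, k), event)[seq_len(n)])

fit <- lm(sales ~ day + lags)
```

The coefficients on the lag columns estimate how much of each day's sales to attribute to the event.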
| null | CC BY-SA 3.0 | null | 2011-05-24T01:19:16.530 | 2011-05-24T01:52:15.880 | 2011-05-24T01:52:15.880 | 3382 | 3382 | null |
11172 | 1 | null | null | 4 | 2057 | I'm trying to figure out how to calculate the standard error of a mean correlation coefficient.
I have 6 bilateral correlation coefficients for 4 countries. I have transformed them using the Fisher z transformation in order to calculate their mean correlation coefficient. I'm trying to figure out what the standard error of this statistic is in order to create a confidence interval to test for significance against other mean correlation coefficients.
I think I'll need to use the Fisher standard error formula $1/\sqrt{N-3}$, but I'm not sure what the $N$ here should be: the sample size, or the number of bilateral pairs used in the mean correlation coefficient statistic?
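For concreteness, the transform and per-coefficient interval being referred to look like this in R (`n = 50` is a hypothetical sample size; which N is appropriate is exactly what is being asked):

```r
r <- 0.7040
n <- 50                               # hypothetical sample size
z  <- atanh(r)                        # Fisher z transform
se <- 1 / sqrt(n - 3)                 # standard error on the z scale
ci <- tanh(z + c(-1, 1) * 1.96 * se)  # back-transformed 95% CI for r
```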
When I use N = the number of bilateral pairs, so that the standard error is the cross-sectional dispersion, my 95% confidence interval doesn't make sense, e.g. r = 0.7040 (-0.2510, 0.9645). Using N = the number of observations would mean that the standard error is constant across all mean correlation coefficients. Would it make sense to bootstrap the errors instead?
It would be great if I could create a confidence interval for each of these mean statistics and report it in one table so that the reader is able to compare across these regional correlations as they like.
| How to calculate standard error of the mean of a set of correlation coefficients | CC BY-SA 3.0 | null | 2011-05-24T01:49:43.733 | 2011-05-24T10:38:39.853 | 2011-05-24T04:41:24.997 | 183 | 4724 | [
"correlation",
"confidence-interval"
] |
11173 | 2 | null | 11165 | 6 | null | One problematic feature is that there may be a continuum of optimal solutions. In most settings the quantiles are continuous functions of the parameters. When the distributions are continuous, almost surely there will be positive intervals between the data values. Suppose your objective function is optimized by a particular parameter value whose quantiles do not coincide exactly with any of the data: that is, they lie in the interiors of the intervals determined by the nearby data values. (This is an extremely likely event.) Then small changes in the parameter value will move the quantiles slightly, to remain within the same intervals, thereby leaving the chi-squared value unchanged because none of the counts changes. Thus the procedure doesn't even pick out a definite set of parameter values!
Another problematic feature is that this procedure apparently provides no way to obtain estimation errors for the parameters.
Another problem is that you do not know even the most basic properties of this estimator, such as its amount of bias.
| null | CC BY-SA 3.0 | null | 2011-05-24T02:13:03.453 | 2011-05-24T02:13:03.453 | null | null | 919 | null |
11174 | 2 | null | 11168 | 6 | null | In terms of ANOVA small n (usually subscripted) could mean the sample size of a particular group while capital N might mean the total sample size. It depends on context.
| null | CC BY-SA 3.0 | null | 2011-05-24T03:03:16.627 | 2011-05-24T03:03:16.627 | null | null | 2310 | null |
11175 | 1 | null | null | 11 | 16951 | It is mentioned [here](http://en.wikipedia.org/wiki/Determining_the_number_of_clusters_in_a_data_set#The_Elbow_Method) that one of the methods to determine the optimal number of clusters in a data-set is the "elbow method". Here the percentage of variance is calculated as the ratio of the between-group variance to the total variance.
I found this calculation difficult to understand. Can anyone explain how to calculate the percentage of variance for a data-set represented as a feature matrix $F \in \mathbf{R}^{m \times n}$, where $m$ is the feature dimension and $n$ is the number of data points? I use the k-means algorithm for clustering.
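In R, the quantity can be computed directly from `kmeans` output; note that `kmeans` expects observations in rows, so a features-by-points matrix $F$ would be transposed first (the data below are simulated):

```r
set.seed(1)
X <- matrix(rnorm(200), ncol = 2)  # plays the role of t(F): points in rows

# percentage of variance explained for k = 1..8 clusters
pct_var <- sapply(1:8, function(k) {
  km <- kmeans(X, centers = k, nstart = 10)
  km$betweenss / km$totss  # between-group variance / total variance
})

# plot pct_var against k and look for the "elbow"
plot(1:8, pct_var, type = "b", xlab = "k", ylab = "% variance explained")
```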
| Elbow criteria to determine number of cluster | CC BY-SA 4.0 | null | 2011-05-24T04:43:59.053 | 2018-05-07T11:01:06.643 | 2018-05-07T11:01:06.643 | 207324 | 4290 | [
"clustering",
"k-means"
] |
11176 | 1 | 442966 | null | 6 | 2407 | [Heathcote, Brown & Mewhort](https://doi.org/10.3758%2FBF03196299?from=SL) (2002, [PDF](https://doi.org/10.3758%2FBF03196299?from=SL)) present Quantile Maximum Probability Estimation (originally termed Quantile Maximum Likelihood Estimation but later corrected) as a method of fitting distributional data, and find that it outperforms the more traditional Continuous Maximum Likelihood Estimation approach, at least in the case of fitting data to the ex-Gaussian (though see also [this paper](https://doi.org/10.3758%2FBF03195574?from=SL), which shows that this benefit generalizes to other distributions as well).
I'm trying to understand the actual steps to achieving QMPE. I understand that one first specifies increasing and equidistant quantile probabilities, then uses these to obtain the quantile values (`q`) in the observed data corresponding to these probabilities. I also understand that these observed quantile values are then used in counting the number of observations between each quantile (`N`). But this is where I'm stuck. Presumably one searches through the parameter space of whatever a priori model one assumes generated the data, searching for a parameter set that maximizes the joint probability of `q` and `N`. However, I don't know how, given a set of candidate parameters, this joint probability is computed.
Not having a strong math background, I think much better in code, so if someone could help me figure out what comes next, I'd be greatly appreciative. Here's the beginning of an attempt to fit some data to an ex-Gaussian:
```
#generate some data to fit
true_mu = 300
true_sigma = 50
true_tau = 100
my_data = rnorm(100, true_mu,
true_sigma) + rexp(100, 1/true_tau)
#select some quantile probabilities;
#estimate quantiles and inter-quantile
#counts
#from the observed data
quantile_probs = seq(.1, .9, .1)
#or does it have to be seq(0,1,.1) ?
q = quantile( my_data, probs =
quantile_probs, type = 5 ) #Heathcote et al
#apparently use type=5 given their example
N = rep( NA , length(q)-1 )
for( i in 1:( length(q)-1 ) ){
  N[i] = length( my_data[ (my_data>q[i])
    & (my_data<=q[i+1]) ] )
}
#specify some candidate parameter values
#to assess (normally done as part of an
#iterative search using an optimizer like
#optim)
candidate_mu = 350
candidate_sigma = 25
candidate_tau = 30
#given these candidates, what next?
```
| Can anyone explain quantile maximum probability estimation (QMPE)? | CC BY-SA 4.0 | null | 2011-05-24T05:22:53.910 | 2022-03-26T13:11:49.697 | 2022-03-26T13:11:49.697 | 11887 | 364 | [
"distributions",
"quantiles",
"fitting"
] |
11177 | 2 | null | 11176 | 1 | null | Just a small suggestion:
Have you checked out the [Newcastle Cognition Lab's page on QMPE](http://www.newcl.org/?q=node/10)?
It has source code, a getting started guide, and a few other resources.
| null | CC BY-SA 3.0 | null | 2011-05-24T06:22:09.193 | 2011-05-24T06:22:09.193 | null | null | 183 | null |
11178 | 1 | null | null | 0 | 12437 | >
Possible Duplicate:
Logistic Regression in R (Odds Ratio)
I need to do a logistic regression in R. My response variable is binary (`surv=0` / `surv=1`) and I have about 18 predictor variables.
After fitting my model, I got the table of coefficients below, and I need to go through some steps, which I am not familiar with, to get to the odds ratios.
This is my first time doing a logistic regression in R, and your help would be appreciated.
```
Call:
glm(formula = surv ~ as.factor(tdate) + as.factor(line) + as.factor(wt) +
as.factor(crump) + as.factor(pind) + as.factor(pcscore) +
as.factor(ptem) + as.factor(pshiv) + as.factor(pincis) +
as.factor(presp) + as.factor(pmtone) + as.factor(pscolor) +
as.factor(ppscore) + as.factor(pmstain) + as.factor(pbse) +
as.factor(psex) + as.factor(pgf), family = binomial(link = "logit"),
data = ap)
Deviance Residuals:
Min 1Q Median 3Q Max
-1.9772 -0.5896 -0.4419 -0.3154 2.8264
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.59796 0.27024 -2.213 0.026918 *
as.factor(tdate)2009-09-08 0.43918 0.19876 2.210 0.027130 *
as.factor(tdate)2009-09-11 0.27613 0.20289 1.361 0.173514
as.factor(tdate)2009-09-15 0.58733 0.19232 3.054 0.002259 **
as.factor(tdate)2009-09-18 0.52823 0.20605 2.564 0.010360 *
as.factor(tdate)2009-09-22 0.45661 0.19929 2.291 0.021954 *
as.factor(tdate)2009-09-25 -0.09189 0.21740 -0.423 0.672526
as.factor(tdate)2009-09-29 -0.15696 0.28369 -0.553 0.580076
as.factor(tdate)2010-01-26 1.39260 0.21049 6.616 3.69e-11 ***
as.factor(tdate)2010-01-29 1.67827 0.21099 7.954 1.80e-15 ***
as.factor(tdate)2010-02-02 1.35442 0.21292 6.361 2.00e-10 ***
as.factor(tdate)2010-02-05 1.36856 0.21439 6.383 1.73e-10 ***
as.factor(tdate)2010-02-09 1.18159 0.21951 5.383 7.33e-08 ***
as.factor(tdate)2010-02-12 1.40457 0.22001 6.384 1.73e-10 ***
as.factor(tdate)2010-02-16 1.01063 0.21783 4.639 3.49e-06 ***
as.factor(tdate)2010-02-19 1.54992 0.21535 7.197 6.14e-13 ***
as.factor(tdate)2010-02-23 0.85695 0.33968 2.523 0.011641 *
as.factor(line)2 -0.26311 0.07257 -3.625 0.000288 ***
as.factor(line)5 0.06766 0.11162 0.606 0.544387
as.factor(line)6 -0.30409 0.12130 -2.507 0.012176 *
as.factor(wt)2 -0.33904 0.10708 -3.166 0.001544 **
as.factor(wt)3 -0.28976 0.13217 -2.192 0.028359 *
as.factor(wt)4 -0.50470 0.16264 -3.103 0.001915 **
as.factor(wt)5 -0.74870 0.20067 -3.731 0.000191 ***
as.factor(crump)2 0.07537 0.10751 0.701 0.483280
as.factor(crump)3 -0.14050 0.13217 -1.063 0.287768
as.factor(crump)4 -0.20131 0.16689 -1.206 0.227724
as.factor(crump)5 -0.23963 0.20778 -1.153 0.248803
as.factor(pind)2 -0.29893 0.10752 -2.780 0.005434 **
as.factor(pind)3 -0.40828 0.12436 -3.283 0.001027 **
as.factor(pind)4 -0.73021 0.14947 -4.885 1.03e-06 ***
as.factor(pind)5 -0.68878 0.17650 -3.902 9.52e-05 ***
as.factor(pcscore)2 -0.52667 0.13606 -3.871 0.000108 ***
as.factor(ptem)2 -0.72600 0.08964 -8.099 5.52e-16 ***
as.factor(ptem)3 -0.79145 0.10503 -7.536 4.86e-14 ***
as.factor(ptem)4 -0.89956 0.10331 -8.707 < 2e-16 ***
as.factor(ptem)5 -0.90181 0.10721 -8.412 < 2e-16 ***
as.factor(pshiv)2 0.25236 0.07713 3.272 0.001068 **
as.factor(pincis)2 0.02327 0.07216 0.323 0.747041
as.factor(presp)2 0.43746 0.11598 3.772 0.000162 ***
as.factor(pmtone)2 0.34515 0.11178 3.088 0.002016 **
as.factor(pscolor)2 0.53469 0.26851 1.991 0.046443 *
as.factor(ppscore)2 0.25664 0.08751 2.933 0.003361 **
as.factor(pmstain)2 -0.48619 0.84408 -0.576 0.564611
as.factor(pbse)2 -0.28248 0.07335 -3.851 0.000117 ***
as.factor(psex)2 -0.18240 0.06385 -2.857 0.004280 **
as.factor(pgf)12 0.10329 0.14314 0.722 0.470554
as.factor(pgf)21 -0.06481 0.10772 -0.602 0.547388
as.factor(pgf)22 0.39584 0.12740 3.107 0.001890 **
as.factor(pgf)31 0.18820 0.10082 1.867 0.061936 .
as.factor(pgf)32 0.39662 0.13963 2.841 0.004504 **
as.factor(pgf)41 0.09178 0.10413 0.881 0.378106
as.factor(pgf)42 0.21056 0.14906 1.413 0.157787
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 7812.9 on 8714 degrees of freedom
Residual deviance: 6797.4 on 8662 degrees of freedom
(418 observations deleted due to missingness)
AIC: 6903.4
Number of Fisher Scoring iterations: 5
```
| How to interpret table of logistic regression coefficients using glm function in R | CC BY-SA 3.0 | null | 2011-05-24T07:17:41.610 | 2011-05-24T07:58:08.240 | 2017-04-13T12:44:52.660 | -1 | 4263 | [
"r",
"logistic"
] |
11179 | 2 | null | 11175 | 13 | null | The idea underlying the k-means algorithm is to try to find clusters that minimize the within-cluster variance (or, up to a constant, the corresponding sum of squares or SS), which amounts to maximizing the between-cluster SS because the total variance is fixed. As mentioned on the wiki, you can directly use the within SS and look at its variation when increasing the number of clusters (like we would do in Factor Analysis with a screeplot): an abrupt change in how the SS evolves is suggestive of an optimal solution, although this rests merely on visual appreciation. As the total variance is fixed, it is equivalent to study how the ratio of the between and total SS, also called the percentage of variance explained, evolves, because in this case it will present a large gap from one k to the next k+1. (Note that the between/within ratio is not distributed as an F-distribution because k is not fixed; so such tests are meaningless.)
In sum, you just have to compute the squared distance between each data point and its respective center (or centroid), for each cluster--this gives you the within SS, and the total within SS is just the sum of the cluster-specific WSS (transforming them to variances is just a matter of dividing by the corresponding degrees of freedom); the between SS is obtained by subtracting the total WSS from the total SS, the latter being obtained by considering k=1, for example.
By the way, with k=1, WSS=TSS and BSS=0.
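As an illustration of the elbow criterion, here is a small R sketch (simulated two-cluster data) that plots the percentage of variance explained, BSS/TSS, against k; `kmeans()` returns both sums of squares directly:

```r
set.seed(42)
# two well-separated 2-D clusters
X <- rbind(matrix(rnorm(100, mean = 0), ncol = 2),
           matrix(rnorm(100, mean = 4), ncol = 2))
pct <- sapply(1:10, function(k) {
  km <- kmeans(X, centers = k, nstart = 20)
  km$betweenss / km$totss   # BSS / TSS = % variance explained
})
plot(1:10, pct, type = "b", xlab = "number of clusters k",
     ylab = "BSS / TSS")
```

With this data, the curve should jump sharply from k=1 to k=2 and then flatten out, which is the "elbow".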
If you're after determining the number of clusters or where to stop with the k-means, you might consider the Gap statistic as an alternative to the elbow criteria:
>
Tibshirani, R., Walther, G., and
Hastie, T. (2001). Estimating the
numbers of clusters in a data set via
the gap statistic. J. R. Statist.
Soc. B, 63(2): 411-423.
| null | CC BY-SA 3.0 | null | 2011-05-24T07:53:03.603 | 2013-10-18T12:54:47.860 | 2013-10-18T12:54:47.860 | 264 | 930 | null |
11180 | 2 | null | 11178 | 4 | null | Call
```
exp(your.model$coefficients)
```
where `your.model` is your R object of class `glm`. A similar question was asked previously; a detailed answer is [here](https://stats.stackexchange.com/questions/8661/logistic-regression-in-r-odds-ratio).
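As a self-contained sketch (with simulated data standing in for your own fit), you can also attach Wald 95% confidence intervals to the odds ratios:

```r
set.seed(1)
d <- data.frame(surv = rbinom(200, 1, 0.4), x = rnorm(200))
m <- glm(surv ~ x, data = d, family = binomial(link = "logit"))
# odds ratios with Wald 95% CIs (exponentiated coefficients and bounds)
exp(cbind(OR = coef(m), confint.default(m)))
```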
| null | CC BY-SA 3.0 | null | 2011-05-24T07:58:08.240 | 2011-05-24T07:58:08.240 | 2017-04-13T12:44:20.840 | -1 | 609 | null |
11182 | 1 | 11183 | null | 95 | 98946 | Does anybody know why offset in a Poisson regression is used? What do you achieve by this?
| When to use an offset in a Poisson regression? | CC BY-SA 3.0 | null | 2011-05-24T08:12:01.783 | 2020-03-10T07:48:42.057 | 2013-10-04T02:25:03.373 | 7290 | 4496 | [
"poisson-regression",
"offset"
] |
11183 | 2 | null | 11182 | 135 | null | Here is an example of an application.
Poisson regression is typically used to model count data. But, sometimes, it is more relevant to model rates instead of counts. This is relevant when, e.g., individuals are not followed the same amount of time. For example, six cases over 1 year should not amount to the same as six cases over 10 years. So, instead of having
$\log \mu_x = \beta_0 + \beta_1 x$
(where $\mu_x$ is the expected count for those with covariate $x$), you have
$\log \tfrac{\mu_x}{t_x} = \beta'_0 + \beta'_1 x$
(where $t_x$ is the exposure time for those with covariate $x$). Now, the last equation could be rewritten
$\log \mu_x = \log t_x + \beta'_0 + \beta'_1 x$
and $\log t_x$ plays the role of an offset.
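To make this concrete, here is a minimal R sketch (simulated data) showing how the exposure time enters the model through the `offset` argument of `glm`:

```r
set.seed(1)
t_x <- runif(50, 1, 10)             # exposure times
x   <- rnorm(50)                    # covariate
mu  <- exp(0.5 + 0.3 * x) * t_x     # expected counts proportional to exposure
y   <- rpois(50, mu)                # observed counts
fit <- glm(y ~ x, family = poisson, offset = log(t_x))
coef(fit)  # estimates of beta'_0 and beta'_1; log(t_x) enters with coefficient fixed at 1
```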
| null | CC BY-SA 3.0 | null | 2011-05-24T09:03:34.040 | 2011-05-24T09:03:34.040 | null | null | 3019 | null |
11184 | 2 | null | 11172 | 4 | null | Just a few thoughts:
- n is the sample size for each bivariate correlation, i.e. $n \neq 6$.
- I am not sure if this makes sense but you could run a small meta-analysis (based on the Fisher's transformed correlations). This would give you a pooled standard error (see page 4).
- Whatever you do, your effect sizes (correlations) are not independent because each country is part of three bilateral correlations. Using dependent effect sizes will probably lead to a biased pooled standard error (which means that running a simple meta-analysis isn't such a good idea but you could do a robust variance estimation).
- I do not understand the last paragraph of your post/question: "It would be great if I could create a confidence interval for each of these mean statistics and report it in one table so that the reader is able to compare across these regional correlations as they like." Can you give us an example?
| null | CC BY-SA 3.0 | null | 2011-05-24T10:38:39.853 | 2011-05-24T10:38:39.853 | null | null | 307 | null |
11185 | 2 | null | 11149 | 0 | null | Missing data is a big issue everywhere. I would answer the following question first: 1) what percentage of the data is missing? If it is more than 10% of the data, I would not risk imputing it with the mean, because imputing those missing values with the mean is equivalent to telling the regression that this variable takes the mean value almost everywhere (so draw some conclusion from that), and you don't want the model to draw conclusions from your suggestions, do you? At a minimum, you can try to relate this variable's available values to the values of other predictors, or use business sense wherever possible. For example, if I have a missing value for marriage_ind, one approach could be looking at the median age of married people (let's say it comes out to be 29); I can assume that people (in India) generally get married by 30, and an age of 29 suggests so. PROC MI also does this internally for you, but in a far more sophisticated way. So my 2 cents: look at at least 4-5 variables that are linked to your missing values and try to form a correlation. This can be better than the mean.
| null | CC BY-SA 3.0 | null | 2011-05-24T10:59:39.857 | 2011-05-24T10:59:39.857 | null | null | 1763 | null |
11186 | 1 | null | null | 3 | 573 | Suppose we have time-series $ X_t $ and it has the following decomposition
$$X_t=\mu + \varepsilon_t,$$
where $\mu$ is the mean and $\varepsilon_t$ is the error term.
The model complexity will increase if we divide this time-series into some segments, say $k$, and repeat the above process. As the model complexity increases, the approximation accuracy also increases. So I want to introduce a regularisation term here which will help in deciding the number of segments $k$ into which we need to divide the time-series. The error in approximation can be defined as
$$ \epsilon= \frac{1}{k} \sum_{i=1}^k (\mu_{1}-X_{i})^2+\frac{1}{n-k} \sum_{i=k+1}^n (\mu_{2}-X_{i})^2, $$
here I have divided the time-series into 2 segments and $\mu_{1}, \mu_{2}$ are their respective means. Now I want to find out the optimal number of segments in general. Please note that here I want to introduce a "regularisation" term which will help in deciding the optimal number of segments.
| Number of segments to divide a time-series | CC BY-SA 3.0 | null | 2011-05-24T11:08:06.330 | 2011-05-24T13:36:02.940 | 2011-05-24T13:36:02.940 | 3722 | 3722 | [
"time-series",
"regularization",
"change-point"
] |
11187 | 2 | null | 11186 | 2 | null | Seems that you have a [change point problem](http://en.wikipedia.org/wiki/Structural_break). Also look at [change-point tag](https://stats.stackexchange.com/questions/tagged/change-point) for related questions in this site. For fitting these type of models R for example has the packages segmented and strucchange. The relevant function to find the optimal number of segments in package strucchange is `breakpoints`. Here is the simple example:
```
e<-rnorm(100) ## error term
##the mean: value 10 for first 20 observations,
##then 5 for next 30 and 3 for last 50.
mu<-c(rep(10,20),rep(5,30),rep(3,50))
##generate time series X
x<-mu+e
> breakpoints(x~1,data=data.frame(x=x))
Optimal 3-segment partition:
Call:
breakpoints.formula(formula = x ~ 1, data = data.frame(x = x))
Breakpoints at observation number:
20 50
Corresponding to breakdates:
0.2 0.5
```
| null | CC BY-SA 3.0 | null | 2011-05-24T11:45:53.530 | 2011-05-24T11:45:53.530 | 2017-04-13T12:44:52.660 | -1 | 2116 | null |
11189 | 1 | null | null | 4 | 5154 | I received the following question by email:
>
I was wondering should I use tick the
option for pairwise exclusion of
missing data when I carry out
regression analyses (or any analyses
for that matter) rather than using [some other missing values replacement strategy].
Julie Pallant recommends pairwise
exclusion of missing data in her SPSS
textbook.
I have a few thoughts, but I was interested in first hearing your thoughts.
| When, if ever, to use pairwise deletion in multiple regression? | CC BY-SA 3.0 | null | 2011-05-24T14:02:48.443 | 2011-05-24T20:41:41.547 | null | null | 183 | [
"regression",
"missing-data"
] |
11190 | 1 | null | null | 2 | 113 | I am working on 4 different species of tomatoes. From the data I have, I looked at the occurrence of a particular "event" in certain intervals of their genome (these intervals are identical in all 4 plants), and I have a file for each of the species with the probability of occurrence. The file looks something like this:
>
ch01:a1-b1
p = 0.45
ch01:a2-b2
p = 0.005
...
...
I have 4 such files (one for each species) where the chromosomal locations/intervals are identical and the probability value differs. Now I would like to find out those intervals where the occurrence of this event stands out.
What do I mean by that? If I find, for example, that over a particular interval the event is more significant in plants/files 1 and 3, but not in the others, then this interval is of interest to me. I would like to find all such intervals where "the occurrence of the event is not all `equally` significant/insignificant".
My question is: how should I go about this? Which parametric/non-parametric test can I use? Can I use the test on each interval separately? What are the usual issues to look for and overcome?
| R: statistical test | CC BY-SA 3.0 | null | 2011-05-24T14:08:15.747 | 2018-10-01T22:52:30.817 | 2018-10-01T22:52:30.817 | 11887 | 4731 | [
"r",
"hypothesis-testing",
"genetics"
] |
11191 | 1 | 11197 | null | 7 | 3662 | This question came up in a consulting context, and I was interested in your thoughts.
### Context
One strategy for dealing with occasional missing data when calculating scale means looks like this in the language of SPSS:
```
COMPUTE depmean = mean.4(dep1, dep2, dep3, dep4, dep5, dep6).
EXECUTE.
```
I.e., calculate the mean of a psychological scale such as depression by taking the mean of six items.
If a participant has four or more non-missing items, return the mean of the non-missing items.
If the participant has three or fewer non-missing items, return missing.
Of course the number of items in the scale and the threshold number items for calculating the mean can vary.
### Question
- In general, under what conditions, would you see this method of dealing with missing data to be appropriate?
- If you perceive it to be inappropriate, what alternative procedure would you recommend?
| Appropriateness of calculating scale means based on available non-missing responses (i.e., person-mean imputation) | CC BY-SA 3.0 | 0 | 2011-05-24T14:12:45.140 | 2011-05-25T13:31:15.813 | 2011-05-25T06:43:02.630 | 183 | 183 | [
"scales",
"missing-data"
] |
11193 | 1 | 11224 | null | 18 | 54704 | I have a data frame that contains some duplicate ids. I want to remove records with duplicate ids, keeping only the row with the maximum value.
So for structured like this (other variables not shown):
```
id var_1
1 2
1 4
2 1
2 3
3 5
4 2
```
I want to generate this:
```
id var_1
1 4
2 3
3 5
4 2
```
I know about unique() and duplicated(), but I can't figure out how to incorporate the maximization rule...
| How do I remove all but one specific duplicate record in an R data frame? | CC BY-SA 3.0 | null | 2011-05-24T14:23:45.017 | 2015-06-21T16:16:11.437 | null | null | 4110 | [
"r"
] |
11194 | 2 | null | 11193 | 7 | null | You actually want to select the maximum element from the elements with the same id. For that you can use `ddply` from the plyr package:
```
> dt<-data.frame(id=c(1,1,2,2,3,4),var=c(2,4,1,3,4,2))
> ddply(dt,.(id),summarise,var_1=max(var))
id var_1
1 1 4
2 2 3
3 3 4
4 4 2
```
`unique` and `duplicated` is for removing duplicate records, in your case you only have duplicate ids, not records.
Update: Here is the code when there are additional variables:
```
> dt<-data.frame(id=c(1,1,2,2,3,4),var=c(2,4,1,3,4,2),bu=rnorm(6))
> ddply(dt,~id,function(d)d[which.max(d$var),])
```
| null | CC BY-SA 3.0 | null | 2011-05-24T14:33:45.407 | 2011-05-24T19:43:38.453 | 2011-05-24T19:43:38.453 | 2116 | 2116 | null |
11195 | 2 | null | 10111 | 2 | null | The eta-square ($\eta^2$) value you are describing is intended to be used as a measure of effect size in the observed data (i.e., your sample), as it amounts to quantifying how much of the total variance can be explained by the factor considered in the analysis (that is what you wrote, in fact: BSS/TSS). With more than one factor, you can also compute partial $\eta^2$ values that reflect the percentage of variance explained by one factor when holding the remaining ones constant.
The F-ratio (BSS/WSS) is the right test statistic to use if you want to test the null hypothesis ($H_0$) that there is no effect of your factor (all group means are equal), that is, that your factor of interest doesn't account for a sufficient amount of variance compared to the residual (unexplained) variance. In other words, we test whether the added explained variance (BSS=TSS-RSS) is large enough to be considered a "significant quantity". The distribution of the ratio of these two sums of squares (scaled by their corresponding degrees of freedom--this answers one of your questions, about why we don't use the SSs directly), which individually follow a $\chi^2$ distribution, is known as the [Fisher-Snedecor](http://en.wikipedia.org/wiki/F-distribution) distribution.
I don't know which software you are using, but
- If you have R, everything you need for basic modeling is given in the aov() base function ($\eta^2$ might be computed with etasq from the heplots package; and there's a lot more to see for diagnostics and plotting in other packages).
- If you're more versed into C programming, you may have a look at the apophenia library which features a nice set of statistical functions with bindings for MySQL and Python.
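For instance, a quick sketch in R with simulated data (three groups of 20), showing both the F-test and $\eta^2$ computed by hand from the sums of squares:

```r
set.seed(7)
d <- data.frame(g = gl(3, 20),
                y = rnorm(60) + rep(c(0, 0.5, 1), each = 20))
fit <- aov(y ~ g, data = d)
summary(fit)                       # F = (BSS/df_between) / (WSS/df_within)
ss <- summary(fit)[[1]][["Sum Sq"]]
ss[1] / sum(ss)                    # eta-squared = BSS / TSS
```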
| null | CC BY-SA 3.0 | null | 2011-05-24T14:34:40.337 | 2011-05-24T14:34:40.337 | null | null | 930 | null |
11196 | 2 | null | 9671 | 5 | null | One generally considers that a "good partitioning" must satisfy one or more of the following criteria: (a) compactness (small within-cluster variation), (b) connectedness (neighbouring data belong to the same cluster), and (c) spatial separation (which must be combined with other criteria like compactness or balance of cluster sizes). As part of a large battery of internal measures of cluster validity (where we do not use additional knowledge about the data, like some a priori class labeling), they can be complemented with so-called combination measures (for example, assessing intra-cluster homogeneity and inter-cluster separation), like the Dunn or Davies–Bouldin index, silhouette width, SD-validity index, etc., but also estimates of predictive power (self-consistency and stability of a partitioning) and of how well distance information is reproduced in the resulting partitions (e.g., cophenetic correlation and Hubert's Gamma statistic).
A more complete review, and simulation results, are available in
>
Handl, J., Knowles, J., and Kell, D.B.
(2005). Computational cluster
validation in post-genomic data
analysis. Bioinformatics,
21(15): 3201-3212.
I guess you could rely on some of them for comparing your different cluster solutions and choose the features set that yields the better indices. You can even use bootstrap to get an estimate of the variability of those indices (e.g., cophenetic correlation, Dunn's index, silhouette width), as was done by Tom Nichols and coll. in a neuroimaging study, [Finding Distinct Genetic Factors that Influence Cortical Thickness](http://www2.warwick.ac.uk/fac/sci/statistics/staff/academic-research/nichols/presentations/ohbm2010/nichols-ohbm2010-CoheritabilityClustering.pdf).
If you are using R, I warmly recommend taking a look at the [fpc](http://cran.r-project.org/web/packages/fpc/index.html) package, by [Christian Hennig](http://www.homepages.ucl.ac.uk/~ucakche/), which provides almost all statistical indices described above (`cluster.stats()`) as well as a bootstrap procedure (`clusterboot()`).
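As a quick sketch of the kind of output `cluster.stats()` gives (assuming the fpc package is installed, with simulated data):

```r
library(fpc)
set.seed(1)
X  <- rbind(matrix(rnorm(60, mean = 0), ncol = 2),
            matrix(rnorm(60, mean = 3), ncol = 2))
km <- kmeans(X, centers = 2, nstart = 10)
st <- cluster.stats(dist(X), km$cluster)
st$dunn            # Dunn index
st$avg.silwidth    # average silhouette width
```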
About the use of mutual information in clustering, I have no experience with it but here is a paper that discusses its use in a genomic context (with comparison to k-means):
>
Priness, I., Maimon, O., and Ben-Gal,
I. (2007). Evaluation of
gene-expression clustering via mutual
information distance measure. BMC
Bioinformatics, 8: 111.
| null | CC BY-SA 3.0 | null | 2011-05-24T15:05:40.330 | 2011-05-24T15:05:40.330 | null | null | 930 | null |
11197 | 2 | null | 11191 | 6 | null | Some years ago, I thought it might be a good idea to apply person-mean imputation (person-mean substitution or case-mean imputation) in case of item non-response. Nowadays, however, it seems obvious to me that this approach assumes that all scale items share similar characteristics (similar variance, standard deviation, item difficulty, etc.). In other words, I would be concerned if some respondents do not answer difficult/sensitive/... items.
[Bono et al (2007: 7)](http://www.sciencedirect.com/science/article/pii/S155174110600043X) are less concerned about this approach:
>
"Person-mean imputation requires
substitution of the mean of all of an
individual’s completed items for those
items that were not completed on a
given scale. This differs from
item-mean where the mean response of
the whole sample that responded to the
item is substituted. Person-mean
imputation could result in different
substitutions for each person with
missing items. On the plus side,
because it does not substitute a
constant value, it does not
artificially reduce the measure’s
variability and is less likely to
attenuate the correlation. A
disadvantage is that it tends to
inflate the reliability estimates as
the number of missing items increases.
However, when the numbers of either
respondents with missing items or
items missing within scales are 20% or
less, both item-mean imputation and
person-mean imputation provide good
estimates of the reliability of
measures."
You also might want to check
- Craig K. Enders (2010): Applied Missing Data Analysis. (Google books link)
- Downey RG, King C. (1998): Missing data in Likert ratings: A comparison of replacement methods.
| null | CC BY-SA 3.0 | null | 2011-05-24T15:05:55.980 | 2011-05-24T15:20:22.497 | 2011-05-24T15:20:22.497 | 307 | 307 | null |
11198 | 2 | null | 11189 | 1 | null | I think it depends on the situation at hand. If you're missing a couple values out of several hundred or thousand observations, sure, delete them.
If one of your important variables is 10% missing, you may need to think up a strategy for dealing with this.
| null | CC BY-SA 3.0 | null | 2011-05-24T15:25:33.190 | 2011-05-24T15:25:33.190 | null | null | 2817 | null |
11199 | 2 | null | 11193 | 6 | null | The base-R solution would involve `split`, like this:
```
z<-data.frame(id=c(1,1,2,2,3,4),var=c(2,4,1,3,4,2))
do.call(rbind,lapply(split(z,z$id),function(chunk) chunk[which.max(chunk$var),]))
```
`split` splits the data frame into a list of chunks, on which we perform cutting to the single row with max value and then `do.call(rbind,...)` reduces the list of single rows into a data frame again.
| null | CC BY-SA 3.0 | null | 2011-05-24T15:35:27.920 | 2011-05-24T15:35:27.920 | null | null | null | null |
11200 | 1 | 11201 | null | 6 | 1778 | Today I opened two STATA windows and ran the following command in both:
```
set obs 100
gen x = rnormal()
sort x
```
(the difference is that in the second window I generated a variable called y). Summing up: I asked Stata to give me 100 pseudo-random numbers taken from a standard normal distribution, then I sorted them. To my surprise, the numbers in the x and y vectors are the same! I did this at home, and then at work, and my impression is that all of these vectors are the same. Is there an explanation for this (to me) strange behavior?
If this is a problem in Stata, does R have a better pseudo-random number generator procedure?
A side question: I came across this "problem" because I was trying to generate two pseudo-random columns in Stata (x and y, say), and then sort them separately. But the two commands I know for sorting (sort and gsort) sort the whole database, not separate columns. Would you know of a Stata command that allows me to sort one column while keeping the other columns fixed?
| Generating sorted pseudo-random numbers in Stata | CC BY-SA 3.0 | null | 2011-05-24T15:53:05.780 | 2011-05-24T16:07:17.587 | 2011-05-24T16:07:17.587 | 919 | 2929 | [
"stata",
"random-generation"
] |
11201 | 2 | null | 11200 | 8 | null | The help for `set seed` states
>
The sequences these functions produce are determined by the seed, which is just a number and which is set to 123456789 every time Stata is launched.
Stata's philosophy emphasizes reproducibility, so this consistency is not surprising. Of course you can set the seed yourself. See the help page for more information.
One way to sort a column separately from all others is to preserve your data, keep only the column to sort, sort it, save the results in a temporary file, restore your data, and merge the temporary file:
```
gen y = rnormal()
preserve
keep y
sort y
tempfile out
save `out'
restore
merge 1:1 _n using `out', nogen
```
| null | CC BY-SA 3.0 | null | 2011-05-24T16:05:08.697 | 2011-05-24T16:05:08.697 | null | null | 919 | null |
11202 | 1 | 11205 | null | 1 | 6167 | Hello
I am trying to forecast using different exponential smoothing methods (linear and Winters'). With the optimal parameters, I am getting negative forecast values.
I am assuming this means the values should be treated as zero, since it is a sales forecast.
I wanted to know if negative values indicate something wrong with the model, or if it is possible to have negative values in a forecast. Since it is just a model, I think we can get negative values for certain types of time series.
| Can the forecasts using exponential smoothing be negative in value? | CC BY-SA 3.0 | null | 2011-05-24T16:39:54.080 | 2011-05-24T17:15:30.827 | null | null | 4445 | [
"forecasting"
] |
11203 | 1 | 11228 | null | 3 | 549 | Hi
I am using linear and exponential forecasting models to do sales forecasting. In the model itself, we use the forecast for period t to get the next forecast, and so on.
While analyzing the accuracy of the forecast using the Mean Absolute Percentage Error, I get good results. But when I compare the intermediate forecast values of the model against the actual time series data, I see some big percentage errors: the MAPE might be 15%, but some of the intermediate percentage errors can be as high as 80% while others are quite low, so the MAPE averages out to a low value.
I wanted to know
1) Is it wise to compare these intermediate forecasts with actual values?
2) Can we use these high intermediate percentage errors to say that the forecasts for a given month might be unreliable?
| Should we compare the individual monthly forecasts with actual values? | CC BY-SA 3.0 | null | 2011-05-24T16:46:48.120 | 2016-04-17T12:00:00.623 | 2016-04-17T12:00:00.623 | 1352 | 4445 | [
"time-series",
"forecasting",
"mape"
] |
11205 | 2 | null | 11202 | 7 | null | Holt's or Holt-Winters' exponential smoothing methods can give negative values for purely non-negative input values because of the trend factor, which acts as a kind of inertia that can drive the series below zero. Simple exponential smoothing doesn't have this problem: it is always smoothing inwards and never overshoots.
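A small R sketch of this behaviour, using a made-up, steadily declining series (Holt's method is obtained from `HoltWinters()` by switching off the seasonal component):

```r
y   <- ts(c(100, 85, 70, 52, 40, 26, 15, 8))
fit <- HoltWinters(y, gamma = FALSE)   # Holt's linear trend, no seasonality
predict(fit, n.ahead = 4)              # the fitted trend can carry forecasts below zero
```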
| null | CC BY-SA 3.0 | null | 2011-05-24T17:14:00.257 | 2011-05-24T17:14:00.257 | null | null | 4360 | null |
11206 | 2 | null | 11202 | 0 | null | You can get negative values for certain kinds of models. You might want to explore more complicated models than simple exponential smoothing.
| null | CC BY-SA 3.0 | null | 2011-05-24T17:15:30.827 | 2011-05-24T17:15:30.827 | null | null | 2817 | null |
11207 | 2 | null | 11203 | 6 | null | Yes, you should absolutely compare your predicted values with actual values. This is good practice with any kind of statistical modeling, not just time series analysis.
If certain months are consistently off, you should use a seasonal model.
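For instance, a minimal R sketch (made-up numbers) contrasting the overall MAPE with the individual percentage errors:

```r
actual   <- c(120, 95, 110, 40, 130, 105)
forecast <- c(110, 100, 118, 72, 125, 98)
ape <- abs(actual - forecast) / actual * 100  # per-period absolute % errors
round(ape, 1)        # one period stands out with an 80% error
round(mean(ape), 1)  # yet the MAPE averages out to a modest value
```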
| null | CC BY-SA 3.0 | null | 2011-05-24T17:20:16.913 | 2011-05-24T17:20:16.913 | null | null | 2817 | null |
11208 | 1 | null | null | 8 | 661 | The problem is that the government wants to shut down an electronic roulette machine, claiming that the roulette failed a statistical test.
Sorry for my language, but this is translated from Slovenian law as well as possible.
Official (by law) requirements are:
- frequency of each event should not differ from expected frequency by more than 3 sigma
- chi square test of normal distribution has to be within risk level of 0.025
- test of consecutive correlation has to pass 3 sigma test and chi squared test
I have tested the first 2 requirements and they pass, but I have problems with understanding the 3rd requirement. (Keep in mind that this is translated, and "consecutive correlation" could mean something else.)
How should I test the 3rd requirement?
Data if somebody is interested:
[http://pastebin.com/ffbSKpr1](http://pastebin.com/ffbSKpr1)
EDIT:
The chi-squared test fails 2% of the time (which I take to be expected, given that alpha is 0.025), and the 3-sigma test fails 5% of the time where I expect a 9% failure rate (it looks like the frequencies are not distributed according to the normal distribution, even for random numbers).
I might not understand this law correctly, but there is almost a 0% probability of passing the 3-sigma test for all autocorrelation vectors, since there is a 9% probability of failing in a single run (and 2.5% for the chi-squared test).
Python code:
```
from math import sqrt
import random

# uncomment for Python 2.x
# from itertools import izip as zip
# range = xrange

# with open("rng.txt", "r") as wr:
#     n = [int(i) for i in wr]
n = [random.randint(0, 36) for i in range(44000)]

def get_freq(n):
    """Count how often each of the 37 numbers (0-36) occurs."""
    r = [0] * 37
    for i in n:
        r[i] += 1
    return r

def trisigmatest(freq):
    """Check that every frequency lies within 3 sigma of the expected frequency."""
    Ef = 1.0 * sum(freq) / 37
    sigma = sqrt(1.0 * sum(i ** 2 for i in freq) / 37 - Ef ** 2)
    return all(abs(i - Ef) < 3 * sigma for i in freq)

def chisquaretest(freq):
    """Two-sided chi-squared goodness-of-fit test at alpha = 0.025 (36 df)."""
    Ef = 1.0 * sum(freq) / 37
    chi2 = sum((i - Ef) ** 2 / Ef for i in freq)
    # Critical values recalculated from the inverse chi-squared CDF for the
    # interval (0.025/2, 1 - 0.025/2), i.e. alpha = 0.025; cf.
    # http://itl.nist.gov/div898/handbook/eda/section3/eda3674.htm
    return 20.4441 < chi2 < 58.8954

# without autocorrelation
gf = get_freq(n)
if not trisigmatest(gf):
    raise SystemExit("3-sigma test failed")
if not chisquaretest(gf):
    raise SystemExit("chi-squared test failed")

actests = 1000
trifailed = 0
chifailed = 0
for i in range(1, actests + 1):
    # lag-i "consecutive" differences modulo 37
    f = ((b - a + 37) % 37 for (a, b) in zip(n, n[i:]))
    gf = get_freq(f)
    if not trisigmatest(gf):
        trifailed += 1
    if not chisquaretest(gf):
        chifailed += 1

print("trisigmatest failed ", 1.0 * trifailed / actests)
print("chisquaretest failed ", 1.0 * chifailed / actests)
```
| Statistics for gambling machine validation | CC BY-SA 3.0 | null | 2011-05-24T18:19:51.373 | 2012-05-04T02:20:36.217 | 2011-05-25T06:18:33.933 | 4738 | 4738 | [
"correlation",
"statistical-significance",
"chi-squared-test"
] |
11209 | 1 | 11230 | null | 8 | 2935 | I have written a 3-way ANOVA in C++. I have 3 factors, let's say A, B and C, and my aim is to check the strength of all possible interactions and main effects. The result of my code is the same as in MATLAB when I use type-I sums of squares.
But when I change the data so that the number of replicates/samples is high in some cells and low in others (unbalanced design), I don't get the same results as in MATLAB. (To be precise, only SSt, SSe and SSa are the same as in MATLAB).
My question is, is it possible that since I have a large difference in the number of replicates, I should use type-III sum of squares? Or is there a special way that Matlab treats the data in such cases so its results differ from mine?
| The effect of the number of samples in different cells on the results of ANOVA | CC BY-SA 3.0 | null | 2011-05-24T19:12:20.207 | 2016-04-29T23:42:42.427 | 2016-04-29T23:42:42.427 | 28666 | 2885 | [
"anova",
"matlab",
"sums-of-squares"
] |
11210 | 1 | 11217 | null | 72 | 25352 | I appreciate the usefulness of the bootstrap in obtaining uncertainty estimates, but one thing that's always bothered me about it is that the distribution corresponding to those estimates is the distribution defined by the sample. In general, it seems like a bad idea to believe that our sample frequencies look exactly like the underlying distribution, so why is it sound/acceptable to derive uncertainty estimates based on a distribution where the sample frequencies define the underlying distribution?
On the other hand, this may be no worse (possibly better) than other distributional assumptions we typically make, but I'd still like to understand the justification a bit better.
| Assumptions regarding bootstrap estimates of uncertainty | CC BY-SA 3.0 | null | 2011-05-24T19:53:26.753 | 2011-05-24T23:06:16.357 | 2011-05-24T22:34:07.543 | null | 4733 | [
"bootstrap",
"uncertainty"
] |
11211 | 2 | null | 9867 | 2 | null | I found this paper with Google, but I cannot access it, so I don't really know what it is about:
>
Berry KJ, Johnston JE, Mielke PW Jr.
An alternative measure of effect size
for Cochran's Q test for related
proportions. Percept Mot Skills.
2007 Jun;104(3 Pt 2):1236-42.
I initially thought that using pairwise multiple comparisons with the Cochran or McNemar test* (if the overall test is significant) would give you further indication of where the differences lie, while reporting the simple difference for your binary outcome would help in asserting the magnitude of the observed difference.
* I found an [online tutorial with R](http://yatani.jp/HCIstats/Cochran).
| null | CC BY-SA 3.0 | null | 2011-05-24T20:07:50.003 | 2011-05-24T20:07:50.003 | null | null | 930 | null |
11212 | 2 | null | 11189 | 3 | null | Pairwise is a dangerous method in this case, IMO. If you delete pairwise then you'll end up with different numbers of observations contributing to different parts of your model, which can make interpretation difficult.
That being said, casewise deletion tends to discard lots and lots of information, so I suppose it depends on both the proportion of missing responses, and your sample size.
Personally, I would probably use the multiple imputation procedure in SPSS and run the analyses for each dataset, then combine if nothing looks odd.
This would be my strategy of choice with a high proportion of missing values, whereas if the number is small, case-wise would probably be my first choice.
| null | CC BY-SA 3.0 | null | 2011-05-24T20:41:41.547 | 2011-05-24T20:41:41.547 | null | null | 656 | null |
11213 | 2 | null | 11210 | 10 | null | The main trick (and sting) of bootstrapping is that it is an asymptotic theory: if you have an infinite sample to start with, the empirical distribution is going to be so close to the actual distribution that the difference is negligible.
Unfortunately, bootstrapping is often applied with small sample sizes. The common feeling is that bootstrapping has shown itself to work in some very non-asymptotic situations, but be careful nonetheless. If your sample size is too small, you are in fact working conditionally on your sample being a 'good representation' of the true distribution, which leads very easily to reasoning in circles :-)
| null | CC BY-SA 3.0 | null | 2011-05-24T21:01:51.687 | 2011-05-24T21:01:51.687 | null | null | 4257 | null |
11214 | 2 | null | 11157 | 0 | null | Maybe use an adaptation of [J-Charts](http://www.investopedia.com/articles/technical/04/060204.asp) and/or [Market Profile charts](http://daytrading.about.com/od/indicators/a/MarketProfile.htm), but instead of plotting price (y-axis) vs volume (x-axis) you could plot time of trade (y-axis) vs no. of trades (x-axis) and use colours to delineate different trading days or averages of no. of trades at these times for different look back periods.
| null | CC BY-SA 3.0 | null | 2011-05-24T21:34:01.523 | 2011-05-24T21:34:01.523 | null | null | 226 | null |
11215 | 2 | null | 11210 | 5 | null | I would argue not from the perspective of "asymptotically, the empirical distribution will be close to the actual distribution" (which, of course, is very true), but from a "long run perspective". In other words, in any particular case, the empirical distribution derived by bootstrapping will be off (sometimes shifted too far this way, sometimes shifted too far that way, sometimes too skewed this way, sometimes too skewed that way), but on average it will be a good approximation to the actual distribution. Similarly, your uncertainty estimates derived from the bootstrap distribution will be off in any particular case, but again, on average, they will be (approximately) right.
| null | CC BY-SA 3.0 | null | 2011-05-24T21:55:49.077 | 2011-05-24T22:30:27.953 | 2011-05-24T22:30:27.953 | 1934 | 1934 | null |
11216 | 2 | null | 11193 | 5 | null | I prefer using `ave`
```
dt<-data.frame(id=c(1,1,2,2,3,4),var=c(2,4,3,3,4,2))
## use unique if you want to exclude duplicate maxima
unique(subset(dt, var==ave(var, id, FUN=max)))
```
| null | CC BY-SA 3.0 | null | 2011-05-24T22:39:14.203 | 2011-05-24T22:39:14.203 | null | null | 375 | null |
11217 | 2 | null | 11210 | 64 | null | There are several ways that one can conceivably apply the bootstrap. The two most basic approaches are what are deemed the "nonparametric" and "parametric" bootstrap. The second one assumes that the model you're using is (essentially) correct.
Let's focus on the first one. We'll assume that you have a random sample $X_1, X_2, \ldots, X_n$ distributed according to the distribution function $F$. (Assuming otherwise requires modified approaches.) Let $\hat{F}_n(x) = n^{-1} \sum_{i=1}^n \mathbf{1}(X_i \leq x)$ be the empirical cumulative distribution function. Much of the motivation for the bootstrap comes from a couple of facts.
Dvoretzky–Kiefer–Wolfowitz inequality
$$
\renewcommand{\Pr}{\mathbb{P}}
\Pr\big( \textstyle\sup_{x \in \mathbb{R}} \,|\hat{F}_n(x) - F(x)| > \varepsilon \big) \leq 2 e^{-2n \varepsilon^2} \> .
$$
What this shows is that the empirical distribution function converges uniformly to the true distribution function exponentially fast in probability. Indeed, this inequality coupled with the Borel–Cantelli lemma shows immediately that $\sup_{x \in \mathbb{R}} \,|\hat{F}_n(x) - F(x)| \to 0$ almost surely.
There are no additional conditions on the form of $F$ in order to guarantee this convergence.
Heuristically, then, if we are interested in some functional $T(F)$ of the distribution function that is smooth, then we expect $T(\hat{F}_n)$ to be close to $T(F)$.
(Pointwise) Unbiasedness of $\hat{F}_n(x)$
By simple linearity of expectation and the definition of $\hat{F}_n(x)$, for each $x \in \mathbb{R}$,
$$
\newcommand{\e}{\mathbb{E}}
\e_F \hat{F}_n(x) = F(x) \>.
$$
Suppose we are interested in the mean $\mu = T(F)$. Then the unbiasedness of the empirical measure extends to the unbiasedness of linear functionals of the empirical measure. So,
$$
\e_F T(\hat{F}_n) = \e_F \bar{X}_n = \mu = T(F) \> .
$$
So $T(\hat{F}_n)$ is correct on average and since $\hat{F}_n$ is rapidly approaching $F$, then (heuristically), $T(\hat{F}_n)$ rapidly approaches $T(F)$.
To construct a confidence interval (which is, essentially, what the bootstrap is all about), we can use the central limit theorem, the consistency of empirical quantiles and the delta method as tools to move from simple linear functionals to more complicated statistics of interest.
Good references are
- B. Efron, Bootstrap methods: Another look at the jackknife, Ann. Stat., vol. 7, no. 1, 1–26.
- B. Efron and R. Tibshirani, An Introduction to the Bootstrap, Chapman–Hall, 1994.
- G. A. Young and R. L. Smith, Essentials of Statistical Inference, Cambridge University Press, 2005, Chapter 11.
- A. W. van der Vaart, Asymptotic Statistics, Cambridge University Press, 1998, Chapter 23.
- P. Bickel and D. Freedman, Some asymptotic theory for the bootstrap. Ann. Stat., vol. 9, no. 6 (1981), 1196–1217.
| null | CC BY-SA 3.0 | null | 2011-05-24T22:48:41.360 | 2011-05-24T23:06:16.357 | 2011-05-24T23:06:16.357 | 2970 | 2970 | null |
11218 | 2 | null | 11210 | 12 | null | Here is a different approach to thinking about it:
Start with the theory where we know the true distribution, we can discover properties of sample statistics by simulating from the true distribution. This is how Gosset developed the t-distribution and t-test, by sampling from known normals and computing the statistic. This is actually a form of the parametric bootstrap. Note that we are simulating to discover the behavior of the statistics (sometimes relative to the parameters).
Now, what if we do not know the population distribution, we have an estimate of the distribution in the empirical distribution and we can sample from that. By sampling from the empirical distribution (which is known) we can see the relationship between the bootstrap samples and the empirical distribution (the population for the bootstrap sample). Now we infer that the relationship from bootstrap samples to empirical distribution is the same as from the sample to the unknown population. Of course how well this relationship translates will depend on how representative the sample is of the population.
Remember that we are not using the means of the bootstrap samples to estimate the population mean; we use the sample mean for that (or whatever the statistic of interest is). But we are using the bootstrap samples to estimate properties (spread, bias) of the sampling process. And using sampling from a known population (that we hope is representative of the population of interest) to learn the effects of sampling makes sense and is much less circular.
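As a tiny numerical illustration of this idea (a sketch with simulated data; in a real application the observed sample would be used instead): we resample with replacement from the empirical distribution and use the spread of the resampled means to estimate the standard error of the sampling process.

```python
import random
import statistics

random.seed(42)
sample = [random.gauss(10, 2) for _ in range(50)]  # the one sample we observed

# Resample from the empirical distribution to study the sampling process itself
boot_means = [statistics.mean(random.choices(sample, k=len(sample)))
              for _ in range(2000)]

se_boot = statistics.stdev(boot_means)                    # bootstrap SE of the mean
se_plugin = statistics.stdev(sample) / len(sample) ** 0.5 # analytic plug-in SE
print(se_boot, se_plugin)  # the two estimates agree closely
```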
| null | CC BY-SA 3.0 | null | 2011-05-24T23:00:19.693 | 2011-05-24T23:00:19.693 | null | null | 4505 | null |
11219 | 1 | null | null | 8 | 39091 | I have been reading about appropriate measures of central tendency for ordinal level data.
So far I have learned that the median and mode can be used, but that the latter can only be used in some cases. Some sources state that the median can only be used with Likert questions when there is an odd number of scores. It is not clear to me what this means, nor in which cases the median cannot be used.
## Example:
An example may illustrate.
- If there was a question: "Climate change is England’s most serious environmental problem" on a response scale: 1=strongly agree 2=agree 3=unsure 4=disagree 5=strongly disagree. Would the median be 3=unsure?
- What if no respondents stated disagree or strongly disagree and all 100 respondents stated either 1, 2, or 3, is the median then 2?
- what if respondents only stated 2 or 3. In this case is it not possible to identify the median?
| Median value on ordinal scales | CC BY-SA 3.0 | null | 2011-05-25T00:29:48.363 | 2012-09-04T18:44:50.493 | 2011-05-25T03:22:21.527 | 4498 | 4498 | [
"median"
] |
11220 | 1 | null | null | 8 | 2031 | I have data from a load test of a web site with several thousand data points spread out over roughly 30 minutes (the values are the response times of the site in milliseconds). The values are spread out across the 30-minute range, but not at a constant rate (i.e. there may be a few milliseconds between some points, other points may share the same timestamp, etc.).
I'd like to present this data visually and chart it, but I'm not sure of the best method for doing so - there is a good amount of variance around any sort of concept of average values or a trend line.
Are there any generally accepted best-practice methods for graphing data of this type? I'm concerned about choosing a poor method for averaging or smoothing the data and misrepresenting it, such as by underweighting some outlier values.
I've played around with a line chart with the timestamps on the x-axis and the average of samples in the same minute on the y-axis. I'd also like to consider graphing a moving average of the data, but I'm unsure if I should be averaging datapoints in the same N minutes or a window of the last N points.
I'd like to make sure that whatever choice I make would appear to be a rigorous representation of the data and not too amateur-ish.
Update: below is a sample of what I have produced so far. Each point on the chart is the mean/median of all of the samples within the same minute (i.e. within 11:12:00.000 and 11:12:59.999). I included the number of samples per minute as a bar chart in the second half of the image, to show whether any single point in the line chart looks like an outlier due to a small number of samples, although aesthetically speaking I think the bar chart takes up way too much real estate for the amount of information it gives.

| Preferred methods for graphing time-series data to present "averages"? | CC BY-SA 3.0 | null | 2011-05-25T00:56:40.383 | 2011-05-25T16:02:58.067 | 2011-05-25T13:24:19.233 | 4739 | 4739 | [
"time-series",
"data-visualization"
] |
11221 | 2 | null | 11145 | 2 | null | As far as your statistical test, it might be a choice between 1) ancova with pretest weight as the covariate and 2) anova with change scores as the outcome. You'd use ancova if you believed posttest weight would naturally be different from pretest weight even without the treatment, and that posttest weight would be a linear function of pretest weight with a correlation of at least .3. You'd use anova on change scores if you believed that absent the treatment there would be no expected change in mean weight. Cook and Campbell's Quasi-Experimentation has a particularly thought-provoking section on these issues.
| null | CC BY-SA 3.0 | null | 2011-05-25T01:35:52.533 | 2011-05-25T01:35:52.533 | null | null | 2669 | null |
11222 | 2 | null | 11191 | 4 | null | Person-mean imputation with a minimum-item threshold is a simple strategy for retaining scale scores where participants miss the occasional response.
### Some general principles
- If missing data is minimal (e.g., less than 5% of participants are missing 1 item on a 10 item scale), the method of dealing with missing data is unlikely to make a difference to substantive conclusions.
- From a first principle perspective, imputation methods should provide more robust estimates of missing item responses as they incorporate both item and person characteristics into estimating the missing response.
- Design studies to avoid sporadic item-missing data.
### Conditions where person-mean imputation is more reasonable:
- Item means are all about the same
- The threshold number of missing items is low relative to the total number of items in the scale (e.g., a requirement for 19 out of 20 items is more appropriate than 10 out of 20 items)
- There is generally very little missing data; at the extreme level, there is no missing data, and person-mean imputation does not change the data at all
- the cause of missing data is due to random processes such as accidentally skipping items, not clearly indicating the response, and so on.
- A simple and standardised rule for calculating scale means is desired (e.g., a rule might be required for a test manual that can be applied in a standardised way across studies and samples)
### Avoiding sporadic missing data for items in scales
At a broader level, person-mean imputation of item responses is a response to a problem that can often be avoided using various study design strategies:
- Computerised administration of questionnaires can prevent (where this is ethically permitted) participants skipping or missing items.
- If questionnaires are administered on paper and in person, the experimenter can review the questionnaire booklet to check for missing data before the participant leaves the room.
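A minimal sketch of person-mean imputation with a minimum-item threshold (the function name and threshold rule are illustrative, not a standardised convention): missing items are coded as `None`, and the scale score reduces to the mean of the answered items, which is exactly equivalent to replacing each missing item with that person's mean.

```python
def person_mean_score(responses, min_items):
    """Scale mean with person-mean imputation: a missing item (None) is in
    effect replaced by the mean of the person's answered items, provided at
    least `min_items` items were answered; otherwise the scale is missing."""
    answered = [r for r in responses if r is not None]
    if len(answered) < min_items:
        return None  # too much missing data: don't impute
    return sum(answered) / len(answered)

print(person_mean_score([4, 5, None, 4, 5], min_items=4))       # 4.5
print(person_mean_score([4, None, None, None, 5], min_items=4)) # None
```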
| null | CC BY-SA 3.0 | null | 2011-05-25T01:49:14.433 | 2011-05-25T02:07:08.603 | 2020-06-11T14:32:37.003 | -1 | 183 | null |
11223 | 2 | null | 11219 | 3 | null | No, the median is the value where half the data is less than or equal to that value and half the data is greater than or equal to that value.
So if your ordinal scale had 100 respondents then find the value that has at least 50 less or equal and 50 greater than or equal. It would only be 3 if half the responses were to either side. If 1 person said 1, 2 people said 2, 3 said 3, 4 said 4, and the remaining 90 said 5, then 5 would be the median.
The median works when the data is ordered, but would not make sense for nominal/unordered data, like what is your favorite color?
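The 100-respondent example above can be checked quickly in Python:

```python
from statistics import median

# 1 person said 1, two said 2, three said 3, four said 4,
# and the remaining 90 said 5.
responses = [1] * 1 + [2] * 2 + [3] * 3 + [4] * 4 + [5] * 90
print(median(responses))  # 5 -- the middle of the data, not of the scale
```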
| null | CC BY-SA 3.0 | null | 2011-05-25T01:57:05.730 | 2011-05-25T01:57:05.730 | null | null | 4505 | null |
11224 | 2 | null | 11193 | 25 | null | One way is to reverse-sort the data and use `duplicated` to drop all the duplicates.
For me, this method is conceptually simpler than those that use apply. I think it should be very fast as well.
```
# Some data to start with:
z <- data.frame(id=c(1,1,2,2,3,4),var=c(2,4,1,3,5,2))
# id var
# 1 2
# 1 4
# 2 1
# 2 3
# 3 5
# 4 2
# Reverse sort
z <- z[order(z$id, z$var, decreasing=TRUE),]
# id var
# 4 2
# 3 5
# 2 3
# 2 1
# 1 4
# 1 2
# Keep only the first row for each duplicate of z$id; this row will have the
# largest value for z$var
z <- z[!duplicated(z$id),]
# Sort so it looks nice
z <- z[order(z$id, z$var),]
# id var
# 1 4
# 2 3
# 3 5
# 4 2
```
Edit: I just realized that the reverse sort above doesn't even need to sort on `id` at all. You could just use `z[order(z$var, decreasing=TRUE),]` instead and it will work just as well.
One more thought... If the `var` column is numeric, then there's a simple way to sort so that `id` is ascending, but `var` is descending. This eliminates the need for the sort at the end (assuming you even wanted it to be sorted).
```
z <- data.frame(id=c(1,1,2,2,3,4),var=c(2,4,1,3,5,2))
# Sort: id ascending, var descending
z <- z[order(z$id, -z$var),]
# Remove duplicates
z <- z[!duplicated(z$id),]
# id var
# 1 4
# 2 3
# 3 5
# 4 2
```
| null | CC BY-SA 3.0 | null | 2011-05-25T02:59:08.417 | 2011-05-25T19:40:10.713 | 2011-05-25T19:40:10.713 | 4740 | 4740 | null |
11225 | 1 | null | null | 3 | 495 | I need to calculate an exponential moving average for a series of data. The intended sampling interval is fixed (say 1s) but the data stream has varying intervals (data intervals vary from 0.01s to 10s or so). The data is somewhat noisy (a random data sample would virtually never be on the average).
My impression is that I cannot thus just take the most recent data sample at each interval, as that could easily lead to a misleading stat. I reckon I can somehow just calculate the average for each period and take that as the sample, but I'm not positive.
Is there a standard algorithm that manages an exponential moving average on a time-variable data stream?
---
I need to program this for a real-time system where it won't be possible to store a sample history. Nonetheless, I'm sure I can adapt any non-streaming algorithm to the streaming form.
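One possible approach (a sketch, assuming a smoothing weight that decays exponentially with the elapsed time, alpha = 1 - exp(-dt/tau)) keeps only the current value and the last timestamp, so it fits a streaming, no-history setting:

```python
import math

class IrregularEMA:
    """Streaming EMA for unevenly spaced samples: the smoothing weight is
    derived from the elapsed time as alpha = 1 - exp(-dt / tau)."""
    def __init__(self, tau):
        self.tau = tau       # time constant, in the same units as timestamps
        self.value = None
        self.last_t = None

    def update(self, t, x):
        if self.value is None:
            self.value = x   # first sample initialises the average
        else:
            alpha = 1.0 - math.exp(-(t - self.last_t) / self.tau)
            self.value += alpha * (x - self.value)
        self.last_t = t
        return self.value

ema = IrregularEMA(tau=1.0)
for t, x in [(0.0, 10.0), (0.01, 12.0), (5.0, 12.0), (5.5, 11.0)]:
    print(t, ema.update(t, x))
```

Closely spaced samples then get correspondingly small weights, while a long gap lets a new sample dominate, which handles the varying intervals without resampling to a fixed grid.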
| Exponential moving average with sub-interval relevance / varying timeframe | CC BY-SA 3.0 | null | 2011-05-25T04:43:24.693 | 2011-05-25T06:55:33.350 | 2011-05-25T06:55:33.350 | 2116 | 4741 | [
"time-series",
"sampling",
"exponential-smoothing"
] |
11226 | 2 | null | 11219 | 7 | null |
### Definitional issues:
- The median is the middle value of the data; it is not by definition the middle value of the scale.
- When the sample size is even, then the median is the mean of the values either side of middle most point after rank ordering all values (see wikipedia description).
### When to use median on ordinal data
- In theory the median can be used on data from any variable where the values can be ordered.
- In practice, the median is often not the most useful summary of central tendency with ordinal variables.
This partially depends on what you want to get out of your measure of central tendency.
When you are describing the central tendency of data on an ordinal variable with only a small number of response options (i.e., perhaps less than 20 or 50 or 100), the median can be quite gross (e.g., 1,1,3,3,3 and 1,3,3,5,5 both have a median of 3, but the second example would have a higher mean).
When it comes to summarising the central tendency of Likert items, I find the mean to be much more useful and sensitive to meaningful differences.
Ordinal variables that are ranks do not suffer from this problem of "grossness".
- Interpolated medians are another way of overcoming the gross nature of the median on ordinal data with few values.
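For instance, an interpolated median (a sketch assuming unit-width categories and an odd number of observations) does distinguish the two data sets from the example above, even though their plain medians are both 3:

```python
def interpolated_median(data):
    """Interpolated median for ordinal data, treating each category value m
    as a unit-width interval [m - 0.5, m + 0.5]; sketch for odd sample sizes."""
    data = sorted(data)
    n = len(data)
    m = data[n // 2]                       # the category holding the middle value
    below = sum(1 for x in data if x < m)
    equal = sum(1 for x in data if x == m)
    return (m - 0.5) + (n / 2 - below) / equal

print(interpolated_median([1, 1, 3, 3, 3]))  # about 2.667
print(interpolated_median([1, 3, 3, 5, 5]))  # 3.25
```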
| null | CC BY-SA 3.0 | null | 2011-05-25T05:41:03.343 | 2011-05-25T05:41:03.343 | null | null | 183 | null |
11227 | 1 | null | null | 4 | 206 | Say I have 2 sets, $A$ and $B$, with $n_{A}$ and $n_{B}$ elements respectively, which I assume are known. I would like to estimate $| A \cup B |$ using samples $\tilde{A} \subset A$ and $ \tilde{B} \subset B$.
That is, if $\tilde{A}$'s elements are uniformly sampled from $A$, and likewise for $\tilde{B}$, will $ \frac{| \tilde{A} \cup \tilde{B} |}{| \tilde{A} | + | \tilde{B} |}$ be an unbiased estimate of $\frac{| A \cup B |}{| A |+|B |}$? If not, is there some other estimator that will allow me to estimate $| A \cup B |$ without bias?
| Bias in sampling for set intersections | CC BY-SA 3.0 | null | 2011-05-25T05:52:01.007 | 2011-10-24T13:13:28.450 | 2011-05-25T08:40:23.070 | null | 4742 | [
"sampling",
"unbiased-estimator",
"bias"
] |
11228 | 2 | null | 11203 | 5 | null | MAPE is known to have problems, when the time series have values close to zero. Check whether this is the case, since high MAPEs may be the problem of time series values close to zero, not of model accuracy. For a discussion on accuracy measures I recommend [this article](http://www.buseco.monash.edu.au/ebs/pubs/wpapers/2005/wp13-05.pdf) by Rob Hyndman and Anne Koehler.
If it is not the problem with MAPE, @Zach advice is spot on, you should always compare the forecasts with actual values, that is how you know how good your forecasts are.
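A tiny illustration of how a single actual value close to zero can dominate the MAPE even when every forecast is numerically close:

```python
def mape(actual, forecast):
    """Mean absolute percentage error: each term divides by the actual value,
    so near-zero actuals blow up their contribution."""
    n = len(actual)
    return 100.0 * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / n

actual   = [100, 100, 0.5, 100]  # one observation very close to zero
forecast = [ 95, 105, 1.0, 102]  # all forecasts within a few units of actuals
print(mape(actual, forecast))    # the 0.5 observation alone adds 25 points
```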
| null | CC BY-SA 3.0 | null | 2011-05-25T06:52:17.017 | 2011-05-25T07:54:30.247 | 2011-05-25T07:54:30.247 | 2116 | 2116 | null |
11229 | 2 | null | 11220 | 6 | null | I suggest adding an example or two of what you are presently doing so we can better see what you are dealing with.
What you are concerned with is an important issue: how do you convey the "overall" pattern in the time series data while also not misleading viewers by showing just average values? One way I have dealt with this situation is plotting an average or median line along with surrounding quantile bands. For example,

Here, the time series data are from a bootstrap-based simulation so there are hundreds of values associated with each time point. The actual data are plotted in the black line with colored bands showing the variability of values from the simulation. This particular plot is maybe not the best example to show, but you can see that some points have much more variability than others, and you can also assess how the variability is skewed above/below the actual values depending on the position in the series.
UPDATE:
Given your update here are some additional questions and thoughts... What decisions, if any, are made from this visualization? For example, are you looking for specific points in time where there is very slow response time, perhaps above a specific threshold? If so, it may be better to simply plot all of the points as a scatter plot, and then also plot a time series line showing the average value, as well as some lines delineating the bounds you are concerned about. This recommendation is not appropriate if you have numerous observations at some time points (too much clutter), or if your time measurement is not sufficiently coarse (in which case you can bin response data into minute-wide time of day intervals). But the visualization recommendation will certainly be affected by what decision(s) will be supported with it. In my example, I was looking at such plots side by side, one from one simulation and the other from another simulation (each simulation using different parameters) so I could assess the variability of the underlying model due to sampling error.
| null | CC BY-SA 3.0 | null | 2011-05-25T07:22:42.653 | 2011-05-25T15:50:44.233 | 2011-05-25T15:50:44.233 | 1080 | 1080 | null |
11230 | 2 | null | 11209 | 12 | null | I don't have Matlab but from what I've read in the on-line help for [N-way analysis of variance](http://www.mathworks.com/help/toolbox/stats/anovan.html) it's not clear to me whether Matlab would automatically adapt the `type` (1--3) depending on your design. My best guess is that yes you got different results because the tests were not designed in the same way.
Generally, with an imbalanced design it is recommended to use Type III sum of squares (SS), where each term is tested after all other (the difference with Type II sum of squares is only apparent when an interaction term is present), while with an incomplete design it might be interesting to compare Type III and Type IV SS. Note that the use of type III vs. Type II in the case of unbalanced data is subject to discussion in the literature.
(The following is based on a French tutorial that I can no longer find on the original website. Here is a [personal copy](http://www.aliquote.org/cours/2006_cogmaster_A4/ressources/ab-deseq.pdf), and here is another paper that discusses the different ways to compute SS in factorial ANOVAs: [Which Sums of Squares Are Best In Unbalanced Analysis of Variance?](http://www.matstat.com/ss/easleaao.pdf))
The difference between Type I/II and Type III (also called Yates's weighted squares of means) lies in the model that serves as a reference model when computing SS, and whether factors are treated in the order they enter the model or not. Let's say we have two factors, A and B, and their interaction A*B, and a model like y ~ A + B + A:B (Wilkinson's notation).
With Type I SS, we first compute SS associated to A, then B, and finally A*B. Those SS are computed as the difference in residual SS (RSS) between the largest model omitting the term of interest and the smallest one including it.
For Type II and III, SS are computed in a sequential manner, starting with those associated with A*B, then B, and finally A. For A*B, it is simply the difference between the RSS in the full model and the RSS in the model without the interaction. The SS associated with B is computed as the difference between the RSS for a model where B is omitted and a model where B is included (the reference model); with Type III SS, the reference model is the full model (A+B+A*B), whereas for Type I and II SS, it is the additive model (A+B). This explains why Type II and III will be identical when no interaction is present in the full model. However, to obtain the first SS, we need to use dummy variables to code the levels of the factor, or more precisely differences between those dummy-coded levels (which also means that the reference level considered for a given factor matters; e.g., SAS considers the last level, whereas R considers the first one, in lexicographic order). To compute the SS for the A term, we follow the same idea: we consider the difference between the RSS for the model A+B+A*B and that for the reduced model B+A*B (A omitted), in the case of Type III SS; with Type II SS, we consider A+B vs. B.
Note that in a complete balanced design, all SS will be equal. Moreover, with Type I SS, the sum of all SS will equal that of the full model, whatever the order of the terms in the model is. (This is not true for Type II and Type III SS.)
A detailed and concrete overview of the different methods is available in one of Howell's handouts: [Computing Type I, Type II, and Type III Sums of Squares directly using the general linear model](http://www.uvm.edu/~dhowell/StatPages/More_Stuff/Type1-3.pdf). That might help you check your code. You can also use R with the [car](http://cran.r-project.org/web/packages/car/index.html) package, by John Fox, who discussed the use of incremental sums of squares in his textbook, Applied Regression Analysis, Linear Models, and Related Methods (Sage Publications, 1997, § 8.2.4--8.2.6). An example of use can be found on [Daniel Wollschläger](http://www.uni-kiel.de/psychologie/dwoll/r/ssTypes.php)'s website.
Finally, the following paper offers a good discussion on the use of Type III SS (§ 5.1):
>
Venables, W.N. (2000). Exegeses on
Linear Models. Paper presented to
the S-PLUS User’s Conference
Washington, DC, 8-9th October, 1998.
(See also this [R-help thread](http://r.789695.n4.nabble.com/Type-I-v-s-Type-III-Sum-Of-Squares-in-ANOVA-td1573657.html), references therein, and the following post [Anova – Type I/II/III SS explained](http://mcfromnz.wordpress.com/2011/03/02/anova-type-iiiiii-ss-explained/).)
| null | CC BY-SA 3.0 | null | 2011-05-25T07:53:11.060 | 2011-05-25T08:56:55.523 | 2011-05-25T08:56:55.523 | 930 | 930 | null |
11231 | 1 | null | null | 4 | 1887 | I am investigating many different versions of PCA, and I am trying to find out whether PCR will apply to my analysis; hence this question on the use of PCR.
| Applications of principal component analysis versus principal component regression? | CC BY-SA 3.0 | null | 2011-05-25T09:46:14.150 | 2019-03-28T11:34:44.353 | 2019-03-28T11:34:44.353 | 128677 | 4747 | [
"regression",
"pca",
"dimensionality-reduction"
] |
11232 | 1 | null | null | 4 | 1164 | I am trying to compare the difference between two means with two pairwise samples. Unfortunately, my data are very far from being normal. What test would you recommend in this situation? Should I revert to a nonparametric test?
| Testing difference between two means with pairwise data and absence of normality | CC BY-SA 3.0 | null | 2011-05-25T11:30:44.473 | 2011-05-25T13:44:31.233 | 2011-05-25T11:40:42.647 | 2116 | 6245 | [
"hypothesis-testing"
] |
11233 | 1 | 28627 | null | 6 | 10138 | Assume I have a data set which is similar to
```
require(nlme)
?Orthodont
```
and my model is
```
fm2 <- lme(distance ~ age + Sex, data = Orthodont, random = ~ 1)
```
How can I use the model fit object `fm2` to generate several datasets, which have sample sizes 300, 400, 500, ... ?
I read this [great answer on r-sig-mixed-models help](https://stat.ethz.ch/pipermail/r-sig-mixed-models/2007q3/000293.html) but it seems incomplete.
| How to simulate data based on a linear mixed model fit object in R? | CC BY-SA 3.0 | null | 2011-05-25T11:51:00.517 | 2013-12-20T21:23:36.717 | 2012-06-19T12:11:28.537 | 183 | 4559 | [
"r",
"mixed-model",
"simulation"
] |
11234 | 2 | null | 11232 | 3 | null | Sounds like a job for the [paired Wilcoxon test](http://en.wikipedia.org/wiki/Wilcoxon_signed-rank_test).
Note that this method compares the medians of the two samples, not their means. In any case, the mean is often not a good estimator when the distributions are not normally distributed, as it is easily biased by extremely low or high values.
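A minimal sketch in R (made-up paired data):

```r
# Measurements on the same 8 subjects before and after a treatment
before <- c(5.1, 4.8, 6.0, 5.5, 4.9, 6.2, 5.7, 5.0)
after  <- c(5.6, 5.0, 6.41, 5.4, 5.23, 6.8, 6.15, 5.15)

# Paired Wilcoxon signed-rank test on the within-pair differences
wilcox.test(before, after, paired = TRUE)
```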
| null | CC BY-SA 3.0 | null | 2011-05-25T12:15:55.230 | 2011-05-25T12:15:55.230 | null | null | 656 | null |
11235 | 2 | null | 11232 | 5 | null | A paired t-test assumes that the differences are normal: the original values could have any distribution. More precisely, just like with a t-test, the differences don't even have to be normal, just the sampling distribution of the mean. This usually means that with a large enough sample you can use a t-test even without normality because the central limit theorem will kick in.
On the other hand, one can always use a non-parametric test with not too much loss in efficiency.
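The point that a paired t-test only looks at the differences can be seen directly in R (made-up data):

```r
x <- c(12.1, 14.3, 11.8, 13.5, 12.9, 15.0)
y <- c(11.4, 13.9, 11.9, 12.8, 12.1, 14.2)

# These two calls are equivalent: the paired t-test is just a
# one-sample t-test on the within-pair differences
t1 <- t.test(x, y, paired = TRUE)
t2 <- t.test(x - y)
all.equal(t1$p.value, t2$p.value)  # TRUE
```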
| null | CC BY-SA 3.0 | null | 2011-05-25T13:16:56.460 | 2011-05-25T13:16:56.460 | null | null | 279 | null |
11236 | 1 | 11291 | null | 5 | 764 | I've been using R's `lm` to do some linear regression, but decided to give `MCMCregress` a try to get a feel for how it works. As expected, I got basically the same coefficients, but the extra `sigma2` value puzzles me.
When I do a `qqmath` plot of the coefficients, I get the following graph, and I'm puzzled by the sigma2 plot. It's obviously not linear, but I'm not sure if that's meaningful in this context. I assume it's sigma squared, and when I took the square root and plotted it, the line was straighter, but still curved.
I guess my question boils down to: what is sigma2 telling me about the MCMC regression fit, and is a graph of it useful or should I ignore the graph and focus on something else? (All of the diagnostics and graphs I've done on my original `lm` fit seem to indicate that the fit is good, so I'm also wondering if the MCMC regression gives me more information or not.)

(If I need to provide the actual data, I can. I'm hoping that an answer depends more on what sigma2 is rather than on specific values.)
| QQ plot of sigma2 from an MCMC regression? | CC BY-SA 3.0 | null | 2011-05-25T13:29:57.467 | 2014-11-20T09:49:04.697 | 2020-06-11T14:32:37.003 | -1 | 1764 | [
"r",
"regression",
"markov-chain-montecarlo",
"qq-plot"
] |
11237 | 2 | null | 11191 | 1 | null | One more piece of advice: make sure the full 6-item composite scale is reliable & that none of the included items reduces scale reliability. If those conditions aren't satisfied, you shouldn't be averaging them even in cases where data are complete. If these conditions are satisfied, then using a subset of items for cases w/ missing data isn't going to bias your result (assuming you are averaging or adding z-score transformations of items, as you always should when forming an aggregate Likert scale); it is just going to make it noisier than it should be (b/c you are relying on fewer items & thus cancelling out less of the random measurement error associated with each individual item).
(Best solution, though, is multiple imputation, again assuming composite scale is reliable.)
| null | CC BY-SA 3.0 | null | 2011-05-25T13:31:15.813 | 2011-05-25T13:31:15.813 | null | null | 11954 | null |
11238 | 2 | null | 11232 | 4 | null | Your description of your design is not too precise as it allows two interpretations.
First, it is possible that you have a 2 (between) x 2 (within) design (i.e., two groups with two pairwise samples).
Second, it is possible that you have a simple design with one group which was measured two times.
Only in the second case, the answers here apply.
Furthermore, the question of whether it is really inappropriate to use a t-test for your data is the crucial part of your question. Sometimes this question is difficult in the sense that one may confuse normality of the data with normality of the residuals (crucial in the first interpretation of your design, see [here](https://stats.stackexchange.com/questions/6350/anova-assumption-normality-normal-distribution-of-residuals) and [here](https://stats.stackexchange.com/questions/1637/if-the-t-test-and-the-anova-for-two-groups-are-equivalent-why-arent-their-assum)) and/or normality of the differences (crucial for the second interpretation, see Aniko's answer).
If the deviations from normality are so serious that you do not want to use a t-test, you should think about using a permutation test instead of the Wilcoxon test. See the answers to the following two questions for how to run permutation tests with dependent samples using the `coin` package for R:
[Which permutation test implementation in R to use instead of t-tests (paired and non-paired)?](https://stats.stackexchange.com/questions/6127/which-permutation-test-implementation-in-r-to-use-instead-of-t-tests-paired-and)
[Paired permutation test for repeated measures](https://stats.stackexchange.com/questions/10953/paired-permutation-test-for-repeated-measures)
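For example, with `coin` a paired permutation test can be run roughly like this (a sketch with made-up data; assumes `coin` is installed, and argument names such as `nresample` may differ across package versions):

```r
library(coin)
before <- c(10.2, 11.5, 9.8, 12.0, 10.7, 11.1)
after  <- c(10.9, 11.4, 10.6, 12.8, 11.5, 11.3)

# Permutation analogue of the paired Wilcoxon test; the null
# distribution is approximated by resampling the signs of the differences
pt <- wilcoxsign_test(before ~ after,
                      distribution = approximate(nresample = 9999))
pt
```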
| null | CC BY-SA 3.0 | null | 2011-05-25T13:44:31.233 | 2011-05-25T13:44:31.233 | 2017-04-13T12:44:39.283 | -1 | 442 | null |
11239 | 2 | null | 11193 | 1 | null | Yet another way to do this with base:
```
dt<-data.frame(id=c(1,1,2,2,3,4),var=c(2,4,1,3,4,2))
data.frame(id=sort(unique(dt$id)), max=tapply(dt$var, dt$id, max))
id max
1 1 4
2 2 3
3 3 4
4 4 2
```
I prefer mpiktas' plyr solution though.
| null | CC BY-SA 3.0 | null | 2011-05-25T14:34:17.263 | 2011-05-25T14:34:17.263 | null | null | 3094 | null |
11240 | 2 | null | 11220 | 3 | null | Have you considered a scatterplot of the data themselves? [That's an approach I really like](https://stats.stackexchange.com/questions/173/time-series-for-count-data-with-counts-20). It lets the viewer make their own conclusions about the presence and significance of trends, and it doesn't conceal variability or outliers. Alpha-blending the points will help if you have serious overplotting (which it sounds like you might). You can also overlay whatever trend you like and take comfort in knowing that the data are still present to speak for themselves.
| null | CC BY-SA 3.0 | null | 2011-05-25T16:02:58.067 | 2011-05-25T16:02:58.067 | 2017-04-13T12:44:45.640 | -1 | 71 | null |
11242 | 1 | null | null | 1 | 199 | This is a question strongly related to Cauchy "characters".
I'm constructing a 4 question canvassing questionnaire that will tell the likely voter being contacted which of the presidential candidates most closely matches them. The advantage of this approach for a dark horse presidential candidate is obvious, presuming, of course, that most of the likely voters match him. I do have the verbal interest in this from the state-level executive director for such a presidential candidate.
One might be entitled to think that this work has been done umpteen times by the thousands of political science PhDs and/or major polling organizations -- at least using the General Social Survey data if nothing else -- and one would be wrong. Moreover, we don't have much time to deploy.
Ideally the questionnaire construction would result in a kind of decision tree where the door-to-door canvassing volunteer could have a mobile device app providing the next question to ask based on the answers to prior questions.
Also ideally, the construction process, itself, would minimize the likely-voter contact as this drives the expense. Using GSS data zeros out that cost and would be optimal if we could get access to the raw GSS data, but we can't. We have to do a survey to gather the data for construction of the 4-deep tree of questions.
On the result side, as a practical compromise, I've proposed falling back to finding just 4 questions rather than finding a 4-deep tree of questions.
On the construction side, as a practical compromise, I've proposed a prize-fund backing a tournament where:
- The contestants each submit 4 questions.
- The submitted questionnaires are paired up for the contests.
- Campaign volunteers each get a pair of contested questionnaires, and get ten likely voters to completely answer all 8 questions.
- The winning questionnaire of a contest is the one whose author can, from the answers to his own questionnaire, best-guess the answers to the opponent's questionnaire.
- Award prizes after the log2(N) contests have selected a winner of that tournament.
- Publish the rankings of the questionnaires and, by permission, their authors.
As resources permit, this tournament is iterated for multiple rounds.
We really have to place weight on the value of up-front volunteer time, so minimizing the construction labor is crucial.
I know this is fairly far from a pure mathematics question but I've brought it as close as I can to some kind of weighted figure of merit involving high-value labor in the construction phase and the expected accuracy of the resulting 4 questions answered by likely voters contacted during blind poll canvassing by lower value labor.
The question: About how far from optimal would be the proposed practical compromise of the 4-question questionnaire (constructed as described) from the ideal 4-deep decision tree of questions constructed from an infinite number of samples during the construction phase?
A secondary question: Is there a better way to make use of the same up-front volunteer time?
| Optimal blind poll construction | CC BY-SA 3.0 | null | 2011-05-25T14:45:39.393 | 2011-05-29T18:01:20.593 | 2011-05-25T19:38:19.693 | null | 4753 | [
"survey",
"experiment-design"
] |
11243 | 1 | null | null | 2 | 161 | Context
I have a regression framework and two sets of data. Using leave-one-out cross-validation, the first set gives very good performance and the second set gives rather poor performance. I need to explain the reason for this difference in performance.
Having looked at the data, it is clear that the first set is a much easier test case. The data in the first set does not vary as much as in the second set. In addition, the test and training data of the first set are strikingly similar, which is not the case in the second dataset.
To explain the reason for the performance difference, I can show these differences in data qualitatively using pictures of the physical phenomena being modeled. However, I would also like to quantify these dataset differences and relate it to the performance discrepancy.
Questions
What metrics can I use to show that the first dataset is a trivial test case and that the second set is more challenging?
As a statistician, what else would you want to know in order to be confident that the performance discrepancy was caused by these differences in data and not something else?
| Explaining regression performance differences | CC BY-SA 3.0 | null | 2011-05-25T17:11:02.417 | 2011-05-25T17:11:02.417 | null | null | 3052 | [
"regression",
"cross-validation",
"ridge-regression"
] |
11246 | 1 | 11265 | null | 4 | 301 | The last line is an example of what I'm looking for:
```
data(airquality)
attach(airquality)
lm1 <- lm(Ozone ~ Solar.R+Wind)
lm2 <- lm(Ozone ~ Solar.R+Wind+Temp)
anova(lm1 , lm2)
require(rpart)
rp1 <- rpart(Ozone ~ Solar.R+Wind)
rp2 <- rpart(Ozone ~ Solar.R+Wind+Temp)
anova(rp1 , rp2) # this doesn't exist - is there something like it? some sort of anova.rpart function?
```
Thanks!
| Is there an ANOVA table generalization for two nested CART models? | CC BY-SA 3.0 | null | 2011-05-25T18:44:42.107 | 2011-05-26T07:49:23.200 | 2011-05-25T19:42:03.047 | null | 253 | [
"anova",
"cart"
] |
11247 | 2 | null | 11242 | 1 | null | EDIT in response to last comments.
Here is my suggestion for how to run the contest.
- The contest holder should decide on a list of "test questions". The 4-item questionnaires will be scored on how well they allow the guesser to guess the voter's responses to these "test questions". These test questions will be made public, and there will be a call for submissions of 4-item questionnaires. There will also be a call for participants to compete in the "guessing" contest. No participant is allowed to compete in both contests.
- The contest holder decides on a list of the (e.g.) 10 most promising questionnaires.
- The questionnaires are randomly assigned to volunteers. The volunteers interview potential voters.
- The survey that a voter completes consists of (i.) the complete list of "test questions" (ii.) plus one of the competing 4-item questionnaires.
- The completed surveys are assembled. A test is created for the guessers to complete. Each test question corresponds to a survey that a voter completed. The test question gives the voter's responses to the 4-item questionnaire. The guessers attempt to guess the voter's responses to the "test questions" based on that information.
- Compute a "guesser score" based on how well each guesser did overall, and compute a "questionnaire score" by taking a weighted average for each questionnaire weighted by the guesser score.
| null | CC BY-SA 3.0 | null | 2011-05-25T18:46:50.477 | 2011-05-25T23:37:05.250 | 2011-05-25T23:37:05.250 | 3567 | 3567 | null |
11248 | 1 | 11343 | null | 6 | 940 | I'm using Gibbs sampling to learn the distributions of the coefficients for a multinomial logistic regression model. In the end, I use the mean values of the coefficient distributions, and the resulting logistic regression is used as a classifier.
I'm trying to find out the advantages of having probability distributions for the coefficients and the response variable, but I can't really see how to leverage credibility intervals. What can I do with these distributions, other than using their mean values?
OK, I think I have failed to express my question clearly. Here is the update:
Let's assume that I have y = b1*x1 + b2*x2 + b3*x3 in my hands. For all of b1, b2, b3 I have normal distributions, with means and quantiles. These distributions come from the MCMC results. I can use the means of b1, b2, b3, and that would give me a classifier. I can also use the 2.5% quantile values and the 97.5% quantile values for b1, b2, b3, and so on.
What would be the probabilistic interpretation of these other equations? Can I produce a smoother classifier this way, rather than using only means?
The use of credibility intervals is quite clear to me when they are used for a single variable, but in this case, I'm talking about N variables (5 actually) each with their own credibility intervals. I'm having trouble getting the semantics of this setup. I have not seen any papers etc that discusses this, and any pointers would be appreciated.
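To make this concrete, here is a rough R sketch (fake posterior draws standing in for the sampler output) of the two options I can see: plugging in the posterior means, or averaging the predicted probability over all draws:

```r
set.seed(42)
# Pretend these are 1000 posterior draws of (b1, b2, b3) from the sampler
draws <- cbind(b1 = rnorm(1000, 0.8, 0.10),
               b2 = rnorm(1000, -0.3, 0.20),
               b3 = rnorm(1000, 0.1, 0.05))
xnew <- c(1.2, 0.5, -0.7)  # a new observation (x1, x2, x3)

# Option 1: plug the posterior means into the logistic function
p_mean <- plogis(sum(colMeans(draws) * xnew))

# Option 2: average the predicted probability over all draws
# (a posterior-predictive-style estimate)
p_avg <- mean(plogis(draws %*% xnew))

c(p_mean, p_avg)
```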
| How can I use credibility intervals in Bayesian logistic regression? | CC BY-SA 3.0 | null | 2011-05-25T21:05:29.583 | 2011-06-29T12:32:46.913 | 2011-05-29T11:33:36.090 | 3280 | 3280 | [
"logistic",
"bayesian",
"credible-interval"
] |
11249 | 1 | null | null | 5 | 6600 | I have a large data set which is in .dbf format right now and what I would like to do is be able to manipulate it easily in Excel and do something like subtotal and calculate stdev and ratios.
Details of the data set;
This data set contains shopper information. It has 1.2 million rows and 20 columns where the rows are each a unique shopper and the columns hold their shopping data (what they bought).
I am using Office 2007 programs. I know Excel best, but I was wondering what alternatives I could use to accomplish my goals (subtotals, standard deviations, and ratios).
| What would be a good way to work with a large data set in Excel? | CC BY-SA 3.0 | null | 2011-05-25T21:32:13.980 | 2014-09-16T15:17:59.700 | 2011-05-26T06:08:23.393 | 2116 | 4755 | [
"excel",
"large-data"
] |
11250 | 2 | null | 11231 | 4 | null | When doing a PCA, you are effectively choosing a new set of 'variables' that you know for all your observations. Their main property is that they maximize the variance content in one dimension (the first PC has the most, ...), while being linear combinations of the original covariates. This is how it works as a dimension reduction: if 3 PCs contain 99% of the variance delivered by 100 covariates, there seems to be little reason to keep the 100 covariates.
PCR essentially does regression on a set of principal components. Initially it makes sense, and in quite a few cases it does work.
However, in this regard, it is useful to look at Fisher's interpretation of discriminant analysis: he poses the problem as finding the direction(s) where the between-classes variance is maximal wrt the within-class variance.
This is where PCA fails somewhat (or could fail): it finds the direction where the 'overall' variance in the covariates is maximal (a much simpler problem), and then hopes this discriminates well. So, there is some criticism on the method, but that must not stop it from working :-)
In general, doing a clustering style algorithm on your covariates first, and then using the results for classification is not a practice I'd recommend: perhaps the strongest structure in the covariates alone is not the most efficient one for prediction of another variable.
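For illustration, a minimal PCR sketch in R (simulated data; whether this actually helps prediction depends entirely on the data, per the caveat above):

```r
set.seed(1)
n <- 100
X <- matrix(rnorm(n * 10), n, 10)   # 10 covariates
y <- X[, 1] - 0.5 * X[, 2] + rnorm(n)

pc <- prcomp(X, scale. = TRUE)      # PCA on the covariates
k <- 3                              # keep the first 3 components
scores <- pc$x[, 1:k]
fit <- lm(y ~ scores)               # principal component regression
summary(fit)$r.squared
```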
| null | CC BY-SA 3.0 | null | 2011-05-25T21:45:02.643 | 2011-05-25T21:45:02.643 | null | null | 4257 | null |
11251 | 2 | null | 11249 | 15 | null | If you feel you may start more such very large Excel-type projects in the future, then you should consider installing and spending 10 hours learning the basics of R (free), which will let you do what you mention in your question in a much more efficient manner than Excel.
[R for Beginners PDF](http://cran.r-project.org/doc/contrib/Paradis-rdebuts_en.pdf)
You can ask questions about R on [StackOverflow](http://stackexchange.com/) and here.
| null | CC BY-SA 3.0 | null | 2011-05-25T22:39:07.793 | 2011-05-26T17:33:25.187 | 2011-05-26T17:33:25.187 | 4329 | 4329 | null |
11252 | 1 | 11503 | null | 4 | 4025 | I received a question today that I wasn't exactly sure how to answer.
I have built a predictive model using a fairly basic logistic regression that works pretty well and fits our business needs. Recently, we purchased a CRM tool that allows us to build "probability" scores, but only allows the end users to give integer weights to various factors. Said differently, one can arbitrarily assign a weight of 10 points to one factor and -5 points to another with the sum of all weights representing the "probability" for a given entity in our database.
What I am looking to do is translate my model into this new format such that the resulting score equals the calculated probability from my logistic model. This is driven not by desire but by business needs.
Admittedly I am not sure how to use the calculated coefficients and "adjust" them to these requirements. What is the best approach, if any? General thoughts on how to assign statistically valid integer weights to business criteria given these constraints?
Any thoughts or insight will be very much appreciated.
| Weight variables for predictive model | CC BY-SA 3.0 | null | 2011-05-25T23:57:37.597 | 2011-08-02T00:36:09.090 | 2011-05-26T09:21:28.103 | null | 569 | [
"logistic",
"predictive-models",
"validation"
] |
11253 | 1 | 11278 | null | 6 | 170 | If I take a set of measurements and test the correlation of variable $A$ vs. variable $B$ and get a significant correlation, that makes sense to me. But what if further analysis reveals that there is a significant positive correlation only within one group, and that group is over-represented? Is the global correlation still valid, or is it, upon more detailed inspection, a sample-bias effect?
Here are some graphs to explain:
The global correlation

The group separated correlations

| Factor dependent correlation | CC BY-SA 3.0 | null | 2011-05-26T02:40:02.887 | 2011-05-27T06:51:02.347 | 2011-05-27T06:51:02.347 | 2116 | 1327 | [
"correlation"
] |
11254 | 2 | null | 11246 | 4 | null | Recursive partitioning does not provide such inferential statistics. It is a highly exploratory method that would require an enormous multiplicity adjustment should you compute regression and error sum of squares from the result. Better would be to do formal but flexible modeling of the two predictors, e.g., using regression splines, and if allowing for interaction, tensor splines (by adding products of spline terms). There is a good reason there is no anova.rpart function in the R rpart package.
| null | CC BY-SA 3.0 | null | 2011-05-26T03:12:00.933 | 2011-05-26T03:12:00.933 | null | null | 4253 | null |
11255 | 1 | null | null | 23 | 1602 | I've noticed this issue coming up a lot in statistical consulting settings, and I was keen to get your thoughts.
### Context
I often speak to research students that have conducted a study approximately as follows:
- Observational study
- Sample size might be 100, 200, 300, etc.
- Multiple psychological scales have been measured (e.g., perhaps anxiety, depression, personality, attitudes, other clinical scales, perhaps intelligence, etc.)
The researchers have read the relevant literature and have some thoughts about possible causal processes.
Often there will be some general conceptualisation of variables into antecedents, process variables, and outcome variables.
They have also often heard that structural equation modelling is more appropriate for testing overall models of the relationships between the set of variables that they are studying.
### Question
- Under what conditions do you think structural equation modelling is an appropriate technique for analysing such studies?
- If you would not recommend structural equation modelling, what alternative techniques would you recommend?
- What advice would you give to researchers considering using structural equation modelling in such cases?
| Whether to use structural equation modelling to analyse observational studies in psychology | CC BY-SA 3.0 | null | 2011-05-26T03:20:04.987 | 2015-12-18T14:02:37.680 | 2011-05-26T08:12:19.967 | 183 | 183 | [
"scales",
"causality",
"structural-equation-modeling",
"observational-study"
] |