| Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
13378 | 2 | null | 13353 | 4 | null | The simplest thing to do is to use the `predict` function on the `lm` object; it takes care of many of the details, like converting a factor to the right values to add together. If you are trying to understand the pieces that go into the prediction, set `type='terms'` and it will show the individual pieces that add together to make your prediction.
Note also that how a factor is converted to variables depends on some options. The default chooses a baseline group to compare the other groups to, but you can also set it to an average and differences from that average (or other comparisons of interest).
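As a sketch of both points (the data frame `d` and factor `g` below are made up for illustration):

```r
# Hypothetical data: a numeric predictor and a factor
d <- data.frame(y = rnorm(30), x = rnorm(30),
                g = factor(rep(c("a", "b", "c"), 10)))
fit <- lm(y ~ x + g, data = d)

predict(fit)                  # full fitted values
predict(fit, type = "terms")  # per-term contributions that sum to the prediction

# Changing how the factor is coded: sum-to-zero contrasts instead of the
# default treatment (baseline) contrasts
fit2 <- lm(y ~ x + g, data = d, contrasts = list(g = contr.sum))
```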
| null | CC BY-SA 3.0 | null | 2011-07-22T15:30:25.863 | 2011-07-22T15:30:25.863 | null | null | 4505 | null |
13379 | 2 | null | 13362 | 1 | null | You could look at Box-Cox transformations. The boxcox function in the MASS package for R will compute a confidence interval on the value of lambda that gives the best transformation. Combine that with what you know of the science to choose an appropriate transform.
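A rough sketch (the response `y` here is a placeholder for your own data):

```r
library(MASS)
y <- rlnorm(100)                                  # hypothetical positive response
b <- boxcox(lm(y ~ 1))                            # plots profile log-likelihood over lambda
b$x[which.max(b$y)]                               # lambda maximizing the likelihood
range(b$x[b$y > max(b$y) - qchisq(0.95, 1) / 2])  # approximate 95% CI for lambda
```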
| null | CC BY-SA 3.0 | null | 2011-07-22T15:34:13.693 | 2011-07-22T15:34:13.693 | null | null | 4505 | null |
13380 | 2 | null | 13376 | 6 | null | ```
X = matrix(runif(20000), 1000, 20)
S1 = apply(X, 1, sum)
S2 = rowSums(X)
hist(S1) ## same as hist(S2)
```
These are two ways to do this without a `for` loop. `X` contains 1000 rows, each containing 20 uniform(0,1) variables. `S1` and `S2` will both contain the same data: 1000 independent realizations of the sum of 20 independent uniform(0,1) random variables.
| null | CC BY-SA 3.0 | null | 2011-07-22T15:34:36.500 | 2011-07-22T15:34:36.500 | null | null | 4856 | null |
13382 | 1 | 13384 | null | 14 | 5081 | How do I define the distribution of a random variable $Y$ such that a draw from $Y$ has correlation $\rho$ with $x_1$, where $x_1$ is a single draw from a distribution with cumulative distribution function $F_{X}(x)$?
| How to define a distribution such that draws from it correlate with a draw from another pre-specified distribution? | CC BY-SA 3.0 | null | 2011-07-22T16:11:51.297 | 2012-07-02T21:58:20.257 | 2012-07-02T21:58:20.257 | 4856 | 5471 | [
"distributions",
"probability",
"correlation",
"random-variable",
"conditional-probability"
] |
13383 | 1 | 13388 | null | 3 | 1146 | We teach supplementary lessons in nearly two dozen local schools, and have two data sets of approximately four hundred records each from pre-post tests given at these schools. Each record contains pre and post values (correct, incorrect) for questions on 12 topics as well as whether or not there was an intervention (lesson taught) relating to that topic.
Our goal is of course to assess the impact of the lessons taught.
The pre and post test responses are pair-matched by student, and clustered by school since
the lessons taught vary by school, in addition to the other factors generally accepted as valid for clustering at this level, i.e., similar socioeconomic background, school culture, common instructors, etc.
Without clustering, McNemar is the accepted test for analysis of this data, and
several authors have explored various modifications to McNemar to allow for clustering including:
- Methods for the Analysis of Pair-Matched Binary Data from School-Based
Intervention Studies. Vaughan & Begg. 1999.
doi: 10.3102/10769986024004367
- Analysis of clustered matched-pair data. Durkalski et al. 2003.
doi: 10.1002/sim.1438
- Methods for the Statistical Analysis of Binary Data in Split-Cluster Designs.
Donner, Klar, Zou. 2004.
doi: 10.1111/j.0006-341X.2004.00247.x
- Adjustment to the McNemar’s Test for the Analysis of Clustered
Matched-Pair Data.
McCarthy. 2007. http://biostats.bepress.com/cobra/ps/art29/ (Free to download)
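For reference, the unclustered baseline (McNemar's test on the 2x2 table of pre/post pairs) is a one-liner in R; the counts here are made up:

```r
# rows = pre (correct/incorrect), cols = post (correct/incorrect)
tab <- matrix(c(50, 30, 12, 8), nrow = 2)
mcnemar.test(tab)
```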
I have subsequently experimented with the Durkalski method as documented in McCarthy,
since it seems to be deemed rather robust, as well as being the simplest for me to
understand and code. However, none of the documented methods fit our case
exactly as they use the matched pairs for pre-post or control-treatment only, and treat
the clusters as a single class. We actually have matched pairs of pre-post in multiple
control & treatment clusters, but this latter level of information is not used and
discarding the ~50% of our data points from the control groups seems sub-optimal. Is
anyone aware of a technique designed to analyze this data configuration, or of someone who
might be interested in exploring this area?
Thanks in advance!
| Analyzing (hierarchical?) clustered pair-matched binary data | CC BY-SA 3.0 | null | 2011-07-22T16:14:01.627 | 2011-07-22T18:01:59.267 | null | null | 5510 | [
"binary-data"
] |
13384 | 2 | null | 13382 | 22 | null | You can define it in terms of a data generating mechanism. For example, if $X \sim F_{X}$ and
$$ Y = \rho X + \sqrt{1 - \rho^{2}} Z $$
where $Z \sim F_{X}$ and is independent of $X$, then,
$$ {\rm cov}(X,Y) = {\rm cov}(X, \rho X) = \rho \cdot {\rm var}(X)$$
Also note that ${\rm var}(Y) = {\rm var}(X)$ since $Z$ has the same distribution as $X$. Therefore,
$$ {\rm cor}(X,Y) = \frac{ {\rm cov}(X,Y) }{ \sqrt{ {\rm var}(X) \, {\rm var}(Y) } } = \frac{ \rho \, {\rm var}(X) }{ {\rm var}(X) } = \rho $$
So if you can generate data from $F_{X}$, you can generate a variate, $Y$, that has a specified correlation $(\rho)$ with $X$. Note, however, that the marginal distribution of $Y$ will only be $F_{X}$ in the special case where $F_{X}$ is the normal distribution (or some other additive distribution). This is due to the fact that sums of normally distributed variables are normal; that is not a general property of distributions. In the general case, you will have to calculate the distribution of $Y$ by calculating the (appropriately scaled) convolution of the density corresponding to $F_{X}$ with itself.
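A quick numerical check of this recipe in R, using an exponential $F_{X}$ to show that it also works for non-normal distributions (keeping in mind that $Y$'s marginal is then no longer exponential):

```r
set.seed(1)
rho <- 0.6
x <- rexp(1e5)                      # draws from F_X
z <- rexp(1e5)                      # independent draws from the same F_X
y <- rho * x + sqrt(1 - rho^2) * z
cor(x, y)                           # close to 0.6
```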
| null | CC BY-SA 3.0 | null | 2011-07-22T16:27:33.663 | 2011-07-22T16:54:31.297 | 2011-07-22T16:54:31.297 | 4856 | 4856 | null |
13386 | 1 | 13390 | null | 1 | 106 | Starting from a given sample, is it possible to estimate (roughly) automatically what kind of law the variable inducing this sample seems to follow?
| Is there a R function to estimate the law of a sample? | CC BY-SA 3.0 | null | 2011-07-22T17:04:30.683 | 2011-07-22T18:29:26.787 | null | null | 5511 | [
"r",
"sample"
] |
13387 | 1 | 13777 | null | 3 | 1936 | I'm building a series of hierarchical models using R and JAGS, linked using the R2jags library. The runs are fairly long -- from several hours to several days. I've had the sad experience of running some chains that did not converge. In that case, is there a way to extend the chain, rather than starting over?
| Is there a way to continue a R/JAGS MCMC chain that did not converge? | CC BY-SA 3.0 | null | 2011-07-22T17:50:17.740 | 2015-01-27T10:22:51.503 | null | null | 4110 | [
"r",
"markov-chain-montecarlo",
"jags"
] |
13388 | 2 | null | 13383 | 1 | null | I would use conditional logistic regression, which incorporates the matched pair design in a regression model with covariates you want to test.
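As a sketch with the `survival` package (the data frame and variable names are hypothetical):

```r
library(survival)
# correct: 0/1 response; lesson: intervention indicator;
# student: identifies the matched pre/post pair
m <- clogit(correct ~ lesson + strata(student), data = tests)
summary(m)
```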
| null | CC BY-SA 3.0 | null | 2011-07-22T18:01:59.267 | 2011-07-22T18:01:59.267 | null | null | 4797 | null |
13389 | 1 | 13396 | null | 39 | 17843 | [Andrew More](http://www.cs.cmu.edu/~awm/) [defines](http://www.autonlab.org/tutorials/infogain11.pdf) information gain as:
$IG(Y|X) = H(Y) - H(Y|X)$
where $H(Y|X)$ is the [conditional entropy](http://en.wikipedia.org/wiki/Conditional_entropy). However, Wikipedia calls the above quantity [mutual information](http://en.wikipedia.org/wiki/Mutual_information).
Wikipedia on the other hand defines [information gain](http://en.wikipedia.org/wiki/Information_gain) as the Kullback–Leibler divergence (aka information divergence or relative entropy) between two random variables:
$D_{KL}(P||Q) = H(P,Q) - H(P)$
where $H(P,Q)$ is defined as the [cross-entropy](http://en.wikipedia.org/wiki/Cross_entropy).
These two definitions seem to be inconsistent with each other.
I have also seen other authors talking about two additional related concepts, namely differential entropy and relative information gain.
What is the precise definition or relationship between these quantities? Is there a good text book that covers them all?
- Information gain
- Mutual information
- Cross entropy
- Conditional entropy
- Differential entropy
- Relative information gain
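For concreteness, the first definition $IG(Y|X) = H(Y) - H(Y|X)$ can be checked numerically against the KL divergence between the joint distribution and the product of the marginals (a made-up discrete joint distribution, natural logs):

```r
H <- function(p) -sum(p[p > 0] * log(p[p > 0]))
pxy <- matrix(c(0.3, 0.1, 0.2, 0.4), nrow = 2)  # hypothetical joint p(x, y)
px <- rowSums(pxy); py <- colSums(pxy)

H(py) - (H(pxy) - H(px))             # H(Y) - H(Y|X), using H(Y|X) = H(X,Y) - H(X)
sum(pxy * log(pxy / outer(px, py)))  # KL(joint || product of marginals); same value
```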
| Information gain, mutual information and related measures | CC BY-SA 3.0 | null | 2011-07-22T18:27:22.373 | 2019-03-20T12:29:13.570 | 2011-07-22T20:57:37.860 | 2798 | 2798 | [
"information-theory"
] |
13390 | 2 | null | 13386 | 1 | null | An initial approach is to use likelihood-based estimation and compare the fit among various families of parametric distributions.
You can use function $\verb+MASS::fitdistr()+$ to fit one of:
"beta","cauchy","chi-squared","exponential","f","gamma","geometric","log-normal","logistic","negative binomial","normal","Poisson","t" and "weibull".
For more options (censored distribution and MDE based estimation) you can turn to
the [$\verb+fitdistrplus+$](http://cran.r-project.org/web/packages/fitdistrplus/) package
Both approaches can easily be automated, though MDE seems more suited to this than a likelihood criterion.
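A sketch of the likelihood-based route (the sample `x` is made up):

```r
library(MASS)
set.seed(1)
x <- rgamma(500, shape = 2, rate = 1)   # hypothetical sample
cands <- c("gamma", "log-normal", "weibull")
fits <- lapply(cands, function(d) fitdistr(x, d))
setNames(sapply(fits, logLik), cands)   # higher is better (all 2-parameter families)
```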
| null | CC BY-SA 3.0 | null | 2011-07-22T18:29:26.787 | 2011-07-22T18:29:26.787 | null | null | 603 | null |
13391 | 2 | null | 13387 | 0 | null | If a chain has not converged, you don't want to use it for inference anyway. If you use rjags, you can just continue sampling and keep only the new samples for inference; the old iterations would then be treated as burn-in. I assume you could do the same with R2jags.
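With rjags this looks roughly like the following (`m` stands for your existing `jags.model` object; its internal state is wherever the previous run stopped, so further calls simply extend the chain):

```r
library(rjags)
update(m, n.iter = 50000)   # extend the chain; treat as additional burn-in
post <- coda.samples(m, variable.names = c("beta"), n.iter = 50000)
```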
| null | CC BY-SA 3.0 | null | 2011-07-22T18:49:10.123 | 2011-07-22T18:49:10.123 | null | null | 4618 | null |
13392 | 2 | null | 13387 | 2 | null | If there is an option to choose a starting point--yes. You can "continue" an MCMC chain simply by using the last point of the chain as a new starting point.
| null | CC BY-SA 3.0 | null | 2011-07-22T19:06:28.280 | 2011-07-22T19:06:28.280 | null | null | 3567 | null |
13393 | 2 | null | 13361 | 2 | null | Apologies for only posting code with no comments. I wrote this but don't have time now to comment and decided it was best to save this here and comment later.
```
# packages needed
library(lattice)
library(latticeExtra)
library(lme4)
# make a picture of the data
mat <- mat[order(mat$id, mat$x),]
mat$id <- factor(mat$id)
plot1 <- xyplot(y~x, group=id, data=mat, type="b")
plot1
# function to set up knots
knot <- function(x, knot) {(x-knot)*(x>knot)}
knots <- function(x, knots) {
out <- sapply(knots, function(k) knot(x, k))
colnames(out) <- knots
out
}
# add knots to data frame
mat$knot <- knots(mat$x, 0:5)
# piecewise curves with no random effects
m1 <- lm(y~ knot, data=mat)
summary(m1)
# get predicted values
matX <- data.frame(x=0:6)
matX$knot <- knots(matX$x, 0:5)
matX$predict <- predict(m1, newdata=matX)
# plot of data and predicted values
plot1 <- xyplot(y~x, group=id, data=mat, type="b")
plot2 <- xyplot(predict~x, data=matX, type="b", col="black", lwd=3)
plot1+plot2
# piecewise curves with random effects
m2 <- lmer(y~ knot + (knot|id), data=mat)
summary(m2)
ranef(m2)
# get predicted values
ids <- unique(mat$id)
matXid <- expand.grid(id=unique(mat$id), x=0:6)
matXid$knot <- knots(matXid$x, 0:5)
matXid$predict <- rowSums(as.matrix(coef(m2)$id[matXid$id,]) * cbind(1,matXid$knot))
# plot of data and predicted values
plot1 <- xyplot(y~x|id, data=mat, type="b", as.table=TRUE)
plot2 <- xyplot(predict~x|id, data=matXid, type="b", col="black", lwd=1)
plot1+plot2
```
| null | CC BY-SA 3.0 | null | 2011-07-22T20:00:26.247 | 2011-07-22T20:00:26.247 | null | null | 3601 | null |
13394 | 1 | 13443 | null | 5 | 1150 | Suppose task performance $y$ increases with trial number $x$ ($x = 1, 2, …, 10$), so that there is a practice effect. Let’s suppose the practice effect is linear. Subjects have a long break, and then provide another observation of $y$ (for $x = 11$).
I have reason to believe that subjects get better at the task during this break, beyond what would be expected from the linear practice effect. Therefore, I want to test whether average performance on the 11th trial differs from what a linear model fit to the first 10 trials predicts. What would be the correct procedure to test this hypothesis? I imagine it would be something like
- Fit a regression model of y on x for x = 1,2,…,10.
- Compute the conditional mean given $x = 11$ based on the model and call this $Y_m$.
- Compute the sample mean of $y$ for $x = 11$ and call this $Y_s$.
- Test for a difference in the above two quantities.
But how do I compute the standard error of the difference? Or should I just compute confidence intervals for both $Y_m$ and $Y_s$ and see if they overlap?
--
Update: Can I fit a linear regression to all trials $x = 1,2,...,11$, include a dummy variable for the 11th trial, and test the significance of the dummy variable term?
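A sketch of that dummy-variable idea (a data frame `dat` with columns `y` and `x` is hypothetical):

```r
dat$after_break <- as.numeric(dat$x == 11)
fit <- lm(y ~ x + after_break, data = dat)
summary(fit)  # the after_break coefficient estimates the extra post-break gain
```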
| How do I test whether an extrapolated mean for a regression model differs from an observed mean? | CC BY-SA 3.0 | null | 2011-07-22T20:22:24.037 | 2011-07-25T02:46:50.187 | 2011-07-25T00:00:32.223 | 3432 | 3432 | [
"regression",
"confidence-interval"
] |
13395 | 1 | null | null | 3 | 208 | I created a software platform on which about 110 participants could contribute at any time of their choosing over the course of 12 weeks (think something like a wiki). These participants were then randomly split to use one of two otherwise identical platforms. One key difference was implemented between the two platforms, which I hypothesized to cause a different participation rate.
That means I have participation data over the course of 12 weeks (down to the second) for 2 groups of ~50 people. The participation rate is highly variable - some folks don't participate for 4 weeks at a time and then participate heavily at the end. Others participate in the beginning and then never again.
Independent-samples t-tests show differences between the conditions in terms of the total number of times participating, but I don't think that's the most informative. I want to be able to run a statistical test to show that the difference I implemented led to an increased participation rate over time.
Some tricky parts of this:
- Because it's a collaborative software platform, the cases within each condition are technically non-independent. Users within each condition could participate with each other, although they generally did not (<10 of the ~6000 participation events involved working with someone else). Because of this, I'm thinking they are quasi-independent, but I think this is a theoretical argument and not a statistical one (although I'm not sure).
- Users were also assigned to work on replicated projects across the conditions, and doubled within-condition. For example, there was a Project A, Project B, Project C on both platforms, and 2 users on each platform were assigned to each Project (a total of 4 people working on Project A, 2 in the experimental condition and 2 in the control). The two folks within each platform condition still worked independently, except in cases as described immediately above this (tricky part 1).
- Participation within-subject is definitely not independent, and this is definitely a statistical problem. Each time a participation event occurs, the user is working on a single project (again, think of it like a wiki). So when a user participates early, that means their later edits are on that same product.
Things I have considered:
- A chi-square test of independence examining number of participations by week (12 conditions) crossed with condition (2 conditions). But because the weeks are not independent, I'm pretty sure this is an inappropriate analytic strategy.
- A repeated-measures ANOVA examining a within-subject "week" variable by between-subject "condition." But the data is highly non-normal (the count data is heavily positively skewed).
- A hierarchical linear model examining within-subject weekly participation rates, nested within persons, with a person-level condition variable. But the same non-normality problem occurs here (it is still count data).
Is there another approach I should take here? Am I missing something that will handle this better?
Thanks all.
Edit: I have log-ins, but didn't think of that as meaningful - these are more meaningful participation events. Here is a graph of the cumulative weekly participation events by group, which might help illuminate what I'm talking about. This is the most meaningful metric I can come up with. Note that in this graph, individuals might be represented several times (e.g., if three users participated 30 times, twice, and once during Week 1, that increases Week 1 by 33).

| Best way to compare software usage data over time across independent conditions? | CC BY-SA 3.0 | null | 2011-07-22T20:23:35.707 | 2011-07-26T23:58:43.880 | 2011-07-23T03:14:10.707 | 3961 | 3961 | [
"repeated-measures",
"normality-assumption",
"multilevel-analysis",
"count-data",
"non-independent"
] |
13396 | 2 | null | 13389 | 25 | null | I think that calling the Kullback-Leibler divergence "information gain" is non-standard.
The first definition is standard.
EDIT: However, $H(Y)−H(Y|X)$ can also be called mutual information.
Note that I don't think you will find any scientific discipline that really has a standardized, precise, and consistent naming scheme. So you will always have to look at the formulae, because they will generally give you a better idea.
Textbooks:
see ["Good introduction into different kinds of entropy"](https://stats.stackexchange.com/questions/322/good-introduction-into-different-kinds-of-entropy).
Also:
Cosma Shalizi: Methods and Techniques of Complex Systems Science: An Overview, chapter 1 (pp. 33--114) in Thomas S. Deisboeck and J. Yasha Kresh (eds.), Complex Systems Science in Biomedicine
[http://arxiv.org/abs/nlin.AO/0307015](http://arxiv.org/abs/nlin.AO/0307015)
Robert M. Gray: Entropy and Information Theory
[http://ee.stanford.edu/~gray/it.html](http://ee.stanford.edu/~gray/it.html)
David MacKay: Information Theory, Inference, and Learning Algorithms
[http://www.inference.phy.cam.ac.uk/mackay/itila/book.html](http://www.inference.phy.cam.ac.uk/mackay/itila/book.html)
also, ["What is “entropy and information gain”?"](https://stackoverflow.com/questions/1859554/what-is-entropy-and-information-gain)
| null | CC BY-SA 3.0 | null | 2011-07-22T21:11:00.590 | 2011-07-25T04:08:05.823 | 2017-05-23T12:39:26.203 | -1 | 5020 | null |
13397 | 2 | null | 13394 | 1 | null | Think of the distribution of values predicted by your regression as your theoretical distribution. Your null hypothesis is that the trial-11 values match this theoretical distribution--more specifically, that the trial-11 mean is consistent with the theoretical mean. Since the theoretical distribution has a known mean and standard deviation, you can use a Z-test to assess whether the trial-11 mean significantly differs.
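One way to standardize that comparison (assuming `fit` is the regression on trials 1-10 and `y11` holds the trial-11 observations):

```r
pred <- predict(fit, newdata = data.frame(x = 11), se.fit = TRUE)
z <- (mean(y11) - pred$fit) /
  sqrt(pred$se.fit^2 + var(y11) / length(y11))  # combine both sources of error
2 * pnorm(-abs(z))                              # two-sided p-value
```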
| null | CC BY-SA 3.0 | null | 2011-07-22T22:57:09.277 | 2011-07-22T22:57:09.277 | null | null | 2669 | null |
13398 | 2 | null | 13184 | 6 | null | There are a lot of different methods and a plethora of literature on this topic from a wide variety of perspectives. Here are a few highlights that might be good starting points for your search.
If your background is more musical than mathematical or computational you might be interested in the works of [David Cope](http://artsites.ucsc.edu/faculty/cope/bibliography.htm) most of his published works focus on the analysis of classical music pieces, but he has a private venture called [recombinant](http://www.recombinantinc.com/aboutus.html) that seems more general. A lot of his work used music as a language type models, but I believe at least some of his most recent work has shifted more toward the whole [musical genome](http://en.wikipedia.org/wiki/Music_Genome_Project) like approach. He has a lot of [software available online](http://artsites.ucsc.edu/faculty/cope/software.htm), but it is generally written in [Lisp](http://en.wikipedia.org/wiki/Lisp_%28programming_language%29) and some can only run in various versions of Apple's OS though some should work in Linux or anywhere you can get [common lisp](http://www.clisp.org/) to run.
Analysis of signals and music in general has been a very popular problem in machine learning. There is good starting coverage in the Christopher Bishop texts [Neural Networks for Pattern Recognition](http://rads.stackoverflow.com/amzn/click/0198538642) and [Pattern Recognition and Machine Learning](http://rads.stackoverflow.com/amzn/click/0387310738). [Here](http://sethoscope.net/aimsc/) is an example of an MSc paper that includes the music classification part and has good coverage of feature extraction; that author cites at least one of the Bishop texts and several other sources. He also recommends [several sources](http://sethoscope.net/geek/music/) for more current papers on the topics.
Books that are more mathematical or statistical (at least by their authorship if not by their content):
Since I mentioned Bishop and the computational perspective of machine learning I'd only be telling half the story if I didn't also suggest you take a glance at the more recent [Elements of Statistical Learning](http://www-stat.stanford.edu/~tibs/ElemStatLearn/) (which is available for free legal download) by Hastie, Tibshirani, and Friedman. I don't remember there specifically being an audio processing example in this text, but a number of the methods covered could be adapted to this problem.
One more text worth considering is Jan Beran's [Statistics in Musicology](http://rads.stackoverflow.com/amzn/click/1584882190). This provides a number of statistical tools specifically for the analysis of musical works and also has numerous references.
Again there are many many other sources out there. A lot of this depends on what your background is and which approach to the problem you're most comfortable with. Hopefully at least some of this guides you a bit in your search for an answer. If you tell us more about your background, additional details about the problem, or ask a question in response to this post I'm sure I or many of the others here would be happy to direct you to more specific information. Best of luck!
| null | CC BY-SA 3.0 | null | 2011-07-22T23:35:54.677 | 2011-07-22T23:35:54.677 | null | null | 4325 | null |
13399 | 1 | 13400 | null | 18 | 110611 | I was trying to compute the 95th percentile on the following dataset. I came across a few online references of doing it.
### Approach 1: Based on sample data
The [first one](http://answerpot.com/showthread.php?2694010) tells me to obtain the `TOP 95 Percent` of the dataset and then choose the `MIN` or `AVG` of the resultant set. Doing so for the following dataset gives me:
```
AVG: 29162
MIN: 0
```
### Approach 2: Assume Normal Distribution
The [second one](https://www.sqlteam.com/forums/topic.asp?TOPIC_ID=118781) says that the 95th percentile is approximately two standard deviations above the mean (which I understand) and I performed:
```
AVG(Column) + STDEV(Column)*1.65: 67128.542697973
```
### Approach 3: R Quantile
I used `R` to obtain the 95th percentile:
```
> quantile(data$V1, 0.95)
79515.2
```
### Approach 4: Excel's Approach
Finally, I came across [this](http://bradsruminations.blogspot.com/2009/09/fun-with-percentiles-part-1.html) one, that explains how Excel does it. The summary of the method is as follows:
Given a set of `N` ordered values `{v[1], v[2], ...}` and a requirement to calculate the `pth` percentile, do the following:
- Calculate l = p(N-1) + 1
- Split l into integer and decimal components i.e. l = k + d
- Compute the required value as V = v[k] + d(v[k+1] - v[k])
This method gives me `79515.2`
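The three steps translate directly into a small R function (this interpolation matches what R's default `quantile()` type 7 computes as well):

```r
excel_percentile <- function(v, p) {
  v <- sort(v)
  l <- p * (length(v) - 1) + 1   # step 1
  k <- floor(l); d <- l - k      # step 2: integer and decimal parts
  if (k == length(v)) return(v[k])
  v[k] + d * (v[k + 1] - v[k])   # step 3: linear interpolation
}
excel_percentile(c(10, 20, 30, 40), 0.5)  # 25, matching quantile(c(10, 20, 30, 40), 0.5)
```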
None of the values match. Though I trust R's value is the correct one (I observed it from the ecdf plot as well), my goal is to compute the 95th percentile manually (using only `AVG` and `STDEV` functions) from a given dataset, and I am not really sure what is going on here. Can someone please tell me where I am going wrong?
```
93150
93116
93096
93085
92923
92823
92745
92150
91785
91775
91775
91735
91727
91633
91616
91604
91587
91579
91488
91427
91398
91339
91338
91290
91268
91084
91072
90909
86164
85372
83835
83428
81372
81281
81238
81195
81131
81030
81011
80730
80721
80682
80666
80585
80565
80534
80497
80464
80374
80226
80223
80178
80178
80147
80137
80111
80048
80027
79948
79902
79818
79785
79752
79675
79651
79620
79586
79535
79491
79388
79277
79269
79254
79194
79191
79180
79170
79162
79154
79142
79129
79090
79062
79039
79011
78981
78979
78936
78923
78913
78829
78809
78742
78735
78725
78618
78606
78577
78527
78509
78491
78448
78289
78284
78277
78238
78171
78156
77998
77998
77978
77956
77925
77848
77846
77759
77729
77695
77677
77382
70473
70449
69886
69767
69704
69573
69479
69398
69328
69311
69265
69178
69162
69104
69100
69072
69062
68971
68944
68929
68924
68904
68879
68877
68799
68755
68726
68666
68623
68588
68547
68458
68457
68453
68438
68438
68429
68426
68394
68374
68363
68357
68337
68300
68256
68250
68228
68216
68180
68149
68124
68114
68060
68029
68029
68025
68004
67996
67981
67964
67938
67925
67914
67901
67853
67819
67818
67788
67770
67767
67688
67670
67669
67629
67618
67609
67602
67583
67540
67479
67475
67470
67433
67420
67387
67343
67339
67337
67315
67273
67224
67208
67160
67137
67102
67045
66449
66408
66338
66211
63784
63557
63091
63021
62895
62663
62182
62079
62044
61907
61888
61856
61847
61792
61764
61683
61641
61612
61514
61511
61503
61411
61263
61248
60965
60941
60907
60876
60773
60669
60537
60525
60387
60194
59673
59576
59561
59556
57652
57458
57308
57264
57158
57106
56288
56245
56054
56031
55930
55841
55533
55532
55316
55281
55230
55196
55111
55101
50957
50870
49580
48353
21349
21319
21288
21274
21270
21255
21232
21208
21196
21184
21164
21150
21149
21143
21129
21108
21100
21072
21043
20934
20912
20908
20882
20871
20858
20843
20839
20834
20800
20790
20788
20757
20752
20748
20744
20739
20721
20712
20710
20671
20620
20575
20572
20567
20551
20536
20522
20510
20484
20430
20415
20398
20368
20362
20357
20349
20347
20341
20338
20335
20335
20334
20332
20332
20332
20330
20326
20324
20323
20307
20304
20299
20297
20292
20282
20280
20275
20270
20270
20258
20257
20257
20256
20254
20252
20251
20247
20243
20231
20229
20223
20223
20221
20219
20217
20215
20212
20211
20210
20208
20202
20202
20202
20197
20192
20190
20190
20187
20186
20184
20179
20175
20175
20170
20170
20170
20166
20162
20158
20157
20157
20156
20153
20152
20151
20151
20148
20146
20141
20141
20139
20137
20133
20132
20130
20129
20124
20124
20123
20114
20109
20104
20104
20094
20092
20091
20088
20086
20085
20084
20083
20078
20076
20076
20070
20068
20065
20060
20052
20049
20045
20041
20040
20039
20037
20036
20036
20032
20032
20021
20020
20017
20009
20007
20007
20004
20004
20002
19989
19985
19974
19973
19973
19967
19961
19960
19959
19957
19953
19952
19950
19943
19942
19940
19940
19939
19937
19936
19935
19935
19925
19921
19920
19914
19908
19907
19900
19900
19900
19899
19899
19898
19898
19894
19893
19891
19891
19888
19888
19888
19883
19883
19882
19882
19880
19878
19875
19875
19874
19873
19871
19867
19864
19862
19861
19860
19857
19856
19854
19854
19848
19848
19844
19842
19840
19840
19835
19833
19831
19830
19828
19826
19820
19817
19812
19812
19811
19809
19805
19799
19792
19789
19788
19785
19780
19770
19765
19763
19762
19754
19743
19742
19738
19737
19735
19731
19724
19722
19721
19711
19710
19699
19698
19697
19695
19692
19687
19683
19672
19670
19665
19664
19660
19654
19651
19644
19643
19643
19641
19640
19620
19619
19618
19617
19614
19613
19608
19607
19607
19605
19579
19575
19568
19556
19553
19553
19551
19550
19548
19536
19535
19500
19500
19473
19462
19461
19455
19451
19391
19388
19386
19384
19375
19371
19353
19338
19318
19273
19271
19269
19265
19258
19230
19228
19222
19221
19221
19215
19196
19180
19177
19166
19161
19154
19148
19138
19134
19129
19116
19113
19107
19105
19102
19096
19092
19088
19085
19085
19083
19072
19067
19066
19061
19058
19050
19049
19045
19044
19043
19043
19032
19005
18996
18968
18957
18948
18938
18936
18920
18920
18913
18897
18897
18892
18884
18878
18878
18878
18871
18870
18869
18866
18864
18864
18864
18862
18862
18862
18860
18859
18858
18858
18853
18852
18852
18851
18851
18848
18847
18846
18846
18846
18845
18845
18844
18842
18841
18841
18840
18840
18837
18837
18836
18836
18835
18834
18833
18831
18830
18830
18829
18829
18829
18828
18826
18825
18823
18822
18822
18822
18821
18821
18821
18819
18819
18818
18816
18813
18812
18812
18812
18812
18810
18809
18809
18809
18809
18808
18808
18806
18805
18805
18804
18803
18802
18802
18801
18801
18801
18801
18800
18799
18799
18798
18797
18796
18796
18796
18795
18795
18793
18792
18792
18792
18791
18791
18791
18789
18789
18789
18788
18787
18783
18782
18782
18782
18781
18781
18780
18780
18779
18779
18779
18779
18778
18777
18777
18776
18775
18773
18773
18772
18772
18771
18770
18770
18770
18769
18769
18767
18767
18766
18762
18762
18761
18761
18761
18758
18757
18757
18756
18756
18755
18751
18750
18749
18749
18749
18746
18746
18746
18746
18746
18745
18745
18744
18744
18743
18742
18739
18739
18738
18737
18736
18734
18729
18729
18727
18727
18723
18723
18723
18723
18721
18721
18721
18719
18719
18719
18719
18718
18717
18716
18714
18710
18710
18710
18708
18707
18704
18702
18701
18701
18699
18695
18694
18692
18691
18690
18689
18689
18686
18684
18683
18681
18679
18675
18675
18672
18665
18665
18665
18658
18656
18655
18654
18654
18654
18652
18650
18649
18646
18645
18642
18640
18638
18638
18636
18633
18633
18631
18630
18629
18625
18625
18623
18622
18619
18617
18617
18616
18616
18614
18614
18614
18614
18611
18611
18609
18609
18600
18597
18596
18594
18593
18591
18589
18585
18580
18578
18578
18578
18572
18569
18567
18566
18565
18563
18559
18559
18557
18557
18554
18551
18548
18547
18545
18544
18544
18541
18539
18539
18536
18535
18531
18529
18526
18524
18524
18522
18517
18515
18503
18502
18497
18496
18496
18496
18495
18493
18492
18487
18487
18486
18486
18485
18482
18479
18473
18471
18470
18464
18463
18460
18459
18454
18454
18452
18450
18447
18446
18442
18442
18442
18440
18439
18434
18432
18427
18426
18425
18421
18416
18414
18408
18407
18407
18407
18403
18402
18398
18397
18396
18394
18393
18392
18391
18390
18383
18378
18357
18356
18354
18349
18342
18341
18338
18337
18336
18333
18328
18319
18314
18313
18302
18295
18295
18291
18291
18288
18284
18281
18278
18276
18272
18269
18268
18263
18262
18261
18259
18257
18251
18247
18240
18240
18238
18235
18235
18234
18232
18225
18222
18221
18214
18214
18213
18213
18210
18210
18206
18205
18204
18203
18194
18192
18191
18190
18187
18184
18179
18179
18179
18175
18171
18170
18156
18152
18151
18151
18149
18149
18148
18148
18147
18146
18140
18139
18137
18137
18136
18135
18135
18134
18133
18133
18128
18128
18127
18127
18125
18122
18121
18120
18120
18119
18117
18110
18108
18108
18099
18097
18096
18096
18095
18087
18085
18084
18083
18067
18060
18056
18056
18054
18053
18050
18049
18048
18038
18036
18033
18033
18028
18027
18025
18023
18022
18010
18010
18010
18000
17995
17983
17980
17978
17975
17974
17974
17968
17968
17967
17965
17964
17962
17961
17956
17955
17943
17938
17935
17934
17933
17932
17930
17925
17923
17919
17912
17912
17904
17897
17896
17894
17884
17880
17874
17872
17870
17865
17857
17856
17854
17854
17845
17843
17841
17836
17834
17831
17831
17828
17822
17821
17821
17816
17804
17803
17799
17798
17794
17794
17793
17790
17787
17786
17783
17782
17781
17777
17777
17777
17772
17772
17771
17766
17766
17758
17750
17747
17743
17715
17699
17694
17683
17682
17681
17668
17668
17630
17619
17617
17610
17609
17609
17607
17607
17599
17587
17565
17551
17542
17532
17531
17514
17514
17512
17509
17503
17483
17481
17475
17465
17463
17449
17433
17404
17397
17356
17356
17214
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
```
| Calculating the 95th percentile: Comparing normal distribution, R Quantile, and Excel approaches | CC BY-SA 4.0 | null | 2011-07-23T01:04:56.450 | 2019-05-27T00:57:04.210 | 2019-05-27T00:57:04.210 | -1 | 2164 | [
"r",
"dataset",
"quantiles",
"sql"
] |
13400 | 2 | null | 13399 | 16 | null | The first approach is completely wrong and has nothing to do with the 95th percentile, in my opinion.
The second approach seems to be based on an assumption that the data is normally distributed, but it should be about 1.645 standard deviations above the mean, not 2 standard deviations, and it looks like you realised this. This is a poor method if the data is not normally distributed.
If you want to work out the 95th percentile yourself, order the numbers from smallest to largest and find a value such that 95% of the data is below that value. R probably uses some sort of interpolation between data points. A simple approximation might be `sort(data$V1)[0.95*length(data$V1)]`.
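For instance, a rough sketch in R (with made-up data, since I don't have your sample here):

```
set.seed(123)
x <- rexp(1000)   # hypothetical skewed sample

# Crude "sort and pick" approximation to the 95th percentile
sort(x)[ceiling(0.95 * length(x))]

# R's quantile() interpolates between order statistics (type 7 by default)
quantile(x, 0.95)
```

The two values will usually be close but not identical, because of the interpolation.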
Edited after comment from @Macro.
| null | CC BY-SA 3.0 | null | 2011-07-23T01:46:46.720 | 2011-07-23T06:29:06.577 | 2011-07-23T06:29:06.577 | 3835 | 3835 | null |
13401 | 2 | null | 13395 | 2 | null | Here are a few thoughts:
### Characterise participation
- I think the first step is to understand participation at the individual-level.
- How is participation measured at a granular level?
a log-in to the site;
duration on site for a given log-in;
amount of interaction on the site for a given log-in;
- Consider how granular measures of participation can be aggregated into an overall level of participation, both for the whole 12 weeks and for smaller temporal periods of aggregation (e.g., a week).
Presumably, there are conceptual reasons to prefer one index of participation over another (e.g., writing up 2,000 words on the site in one log-in over a couple of hours is probably greater participation than 5 log-ins involving only a little bit of tinkering).
As you aggregate, you may find that some of the skew is removed from the data
You could also consider transformations (e.g., log) to reduce the skew
There might also be other indices of participation over and above overall participation (e.g., regularity of participation).
### Explore temporal effects on participation
- I'd examine lots of graphs of time on participation based on different levels of temporal aggregation (e.g., by day, by week, by two weeks, by four weeks).
It would be interesting to examine group and individual-level analyses to see both the general trend and how individuals differ in the effect of time.
- In terms of a statistical model, perhaps some form of GEE (
see these references on GEE ) would be suitable for modelling the data if the data is similar to counts. But I'd be curious to hear what others have to say. I wonder whether there is a literature on modelling individual level website usage that would be relevant.
- Some form of clustering might also be interesting as a way of clustering different usage patterns.
- You could also explore (and possibly model) any other temporal effects (e.g., when you've assigned projects to particular individuals, and so on).
### Assess effect of condition of participation
- Once you have one or more theoretically meaningful measures of overall participation, a simple t-test might be sufficient.
- If you are worried about independence of observations, you could assess this at this point, particularly if you know which individuals might be grouped together (e.g., using something like ICC). A rough approach would simply be to use a more stringent alpha level.
| null | CC BY-SA 3.0 | null | 2011-07-23T01:47:55.097 | 2011-07-23T04:58:45.707 | 2011-07-23T04:58:45.707 | 183 | 183 | null |
13402 | 2 | null | 13399 | 17 | null | Here are a few points to supplement @mark999's answer.
- Wikipedia has an article on percentiles where it is noted that no standard definition of a percentile exists. However, several formulas are discussed.
- Crawford, J.; Garthwaite, P. & Slick, D. On percentile norms in neuropsychology: Proposed reporting standards and methods for quantifying the uncertainty over the percentile ranks of test scores The Clinical Neuropsychologist, Psychology Press, 2009, 23, 1173-1195 (FREE PDF) discusses calculation of percentiles within a psychology norming context.
The following explores a few things in R:
### Get data and examine R quantile function
```
> x <- c(93150, 93116, 93096, etc... [ABBREVIATED INPUT]
> help(quantile) # Note the 9 quantile algorithms
> rquantileest <- sapply(1:9, function(TYPE) quantile(x, .95, type=TYPE))
> rquantileest
95% 95% 95% 95% 95% 95%
79535.00 79535.00 79535.00 79524.00 79547.75 79570.70
95% 95% 95%
79526.20 79555.40 79553.49
> sapply(rquantileest, function(X) mean(x <= X))
95% 95% 95% 95% 95%
0.9501859 0.9501859 0.9501859 0.9494424 0.9501859
95% 95% 95% 95%
0.9501859 0.9494424 0.9501859 0.9501859
```
- help(quantile) shows that R has nine different quantile estimation algorithms.
- The other output shows the estimated value for the 9 algorithms and the proportion of the data that is less than or equal to the estimated value (i.e., all values are close to 95%).
### Compare with assuming normal distribution
```
> # Estimate of the 95th percentile if the data was normally distributed
> qnormest <- qnorm(.95, mean(x), sd(x))
> qnormest
[1] 67076.4
> mean(x <= qnormest)
[1] 0.8401487
```
- A very different value is estimated for the 95th percentile of a normal distribution based on the sample mean and standard deviation.
- The value estimated is around the 84th percentile of the sample data.
- The plot below shows that the data is clearly not normally distributed, and thus estimates based on assuming normality are going to be a long way off.
```
plot(density(x))
```

| null | CC BY-SA 3.0 | null | 2011-07-23T05:52:30.883 | 2011-07-24T02:17:39.563 | 2011-07-24T02:17:39.563 | 183 | 183 | null |
13403 | 1 | null | null | 3 | 1991 | I have data that look like this, with a tonnage measure and density measure for each carriage in a bunch of trains.
```
Train Carriage_No Tonnes Density
A 1 105.5 2.12
A 2 104.9 2.28
A 3 101.2 2.30
A 4 108.7 2.41
B 1 112.3 2.51
B 2 109.7 2.34
```
etc.
How do I determine density's contribution to the variation in tonnes? What is doing this called? Some `R` code would be most helpful!
Our goal is to fit more tonnes into each carriage, but the variation around the mean is such that we're overloading an unacceptable percentage of carriages. Therefore we need to tighten the variation to allow us to increase the mean (i.e. more total tonnes) without overloading.
There are a few things we can do to improve tonnage variability, but controlling density isn't one of them (reacting to changes in density may be possible). So I'd like to understand how much the density accounts for the variation in tonnes.
| Determining a variable's contribution to the variation in another | CC BY-SA 3.0 | null | 2011-07-23T09:28:46.663 | 2012-10-19T08:02:55.210 | 2011-07-26T23:44:55.677 | 827 | 827 | [
"variance"
] |
13404 | 2 | null | 13403 | 1 | null | You should probably start by getting a general idea about the relationship between Tonnes and Density:
```
plot(Tonnes~Density)
lines(ksmooth(Density,Tonnes,bandwidth=0.5))
```
and playing with the bandwidth parameter to figure out the form of the mean function of Tonnes conditional on Density. If it is anything like linear, you should be fine with regression/ANOVA techniques - e.g. the R-squared given by `summary(lm(Tonnes ~ Density))` is precisely the proportion of the variance in Tonnes attributable to the variance in Density. Look up the details here: [http://cran.r-project.org/doc/contrib/Faraway-PRA.pdf](http://cran.r-project.org/doc/contrib/Faraway-PRA.pdf)
If it is nonlinear, some transformation of variables or nonlinear regression modelling might be in order and it is hard to specify a general approach here so maybe you should come back with a plot.
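As a rough illustration with simulated data (the real Tonnes/Density values aren't available to me, so the relationship below is invented):

```
set.seed(1)
Density <- runif(200, 2.0, 2.6)
Tonnes <- 60 + 20 * Density + rnorm(200, sd = 2)  # hypothetical linear relationship

fit <- lm(Tonnes ~ Density)
summary(fit)$r.squared          # proportion of variance in Tonnes accounted for by Density
var(fitted(fit)) / var(Tonnes)  # the same quantity computed directly
```

For simple linear regression with an intercept these two numbers coincide exactly.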
| null | CC BY-SA 3.0 | null | 2011-07-23T09:58:12.937 | 2011-07-23T09:58:12.937 | null | null | 5494 | null |
13405 | 2 | null | 13160 | 4 | null | There is a growing econometric literature on the misclassification of treatment status.
A standard difference-in-difference approach would be a natural starting point here - see e.g. [http://www.nber.org/WNE/lect_10_diffindiffs.pdf](http://www.nber.org/WNE/lect_10_diffindiffs.pdf) p.17 mentioning Poisson case. The problem with misclassification for a general conditional mean function is described here: [https://www2.bc.edu/~lewbel/mistreanote2.pdf](https://www2.bc.edu/~lewbel/mistreanote2.pdf) If it applies to your set up, then you may be confident of a significant effect finding as the bias is said to be towards zero.
| null | CC BY-SA 3.0 | null | 2011-07-23T10:38:50.357 | 2011-07-23T10:38:50.357 | null | null | 5494 | null |
13406 | 2 | null | 13403 | 0 | null | Aside from the ksmooth approach you could do a simple linear regression:
```
fitted_model <- lm(Tonnes ~ Density)
summary(fitted_model)
```
In the summary you look for the "Estimate" that tells you how much increase you expect for each increase in Density.
You should also plot the model by simply:
```
plot(Density, Tonnes)  # predictor on the x-axis, so abline(fitted_model) overlays correctly
abline(fitted_model)
```
The plot tells you visually how well the line fits, and it also hints at whether you need some transformation of the values. log() is a good transformation that is frequently used; our sense of numbers is actually logarithmic, and it's when we're about 3-4 years old that we unlearn that natural instinct of logarithmic counting. This is probably because many things appear logarithmic in nature: when a cell divides, it divides into two, creating an exponential increase in cells. You should suspect a logarithmic transformation if your data is grouped at one end of the plot.
@rolando2: I heard about the logarithmic counting through listening to a lovely RadioLab episode about [Numbers](http://www.radiolab.org/2009/nov/30/). They report on tribes in the Amazon that don't have our number system and that, even as adults, still count in a logarithmic way.
| null | CC BY-SA 3.0 | null | 2011-07-23T10:44:02.617 | 2011-07-23T17:01:54.163 | 2011-07-23T17:01:54.163 | 5429 | 5429 | null |
13407 | 2 | null | 11405 | 7 | null | It is not in the package; just write your own command. If your regression is `reg <- tobit(y ~ x)`, then the vector of marginal effects, evaluated at a covariate row vector `x`, should be
```
c(pnorm(x %*% reg$coef[-1] / reg$scale)) * reg$coef[-1]
```
| null | CC BY-SA 3.0 | null | 2011-07-23T11:07:10.127 | 2011-07-23T11:07:10.127 | null | null | 5494 | null |
13409 | 1 | null | null | 2 | 72 | I need to optimize based on an objective function that is non-standard, as far as I know. If the predictors are $X$, output of the model is $\hat y$, and response is $y$, my objective is essentially:
$$j = \sum( \hat y \cdot y ). $$
Minimized it would just be $-j$. Does anyone know of an existing approach to optimization/regression that uses an objective like this? $\hat y\cdot y$ is equivalent to what is called the margin in machine learning, but I cannot find any work that discusses optimizing this directly.
I could obviously just cram this into a global optimizer but I am interested in any work that is pre-existing.
| Maximizing/minimizing product of output and response | CC BY-SA 3.0 | null | 2011-07-23T13:27:12.600 | 2011-07-23T15:04:13.190 | 2011-07-23T15:04:13.190 | 930 | 5518 | [
"regression",
"optimization"
] |
13410 | 1 | null | null | 10 | 3292 | How do you cluster a feature with an asymmetrical distance measure?
For example, let's say you are clustering a dataset with days of the week as a feature - the distance from Monday to Friday is not the same as the distance from Friday to Monday.
How do you incorporate this into the clustering algorithm's distance measure?
| Clustering with asymmetrical distance measures | CC BY-SA 3.0 | null | 2011-07-23T15:32:59.027 | 2014-01-05T01:37:55.617 | 2011-07-24T08:38:37.577 | 183 | 5464 | [
"clustering",
"distance"
] |
13411 | 1 | 13416 | null | 3 | 104 | Let's assume one has an analysis in which there are multiple correlated DVs (average correlation .46) being examined in separate univariate analyses (e.g. t-tests; insufficient df and frustration tolerance to restructure the data fro a MANOVA) in relation to a single IV. All except one of these DVs is not significantly affected by an IV manipulation.
I was thinking that in general, dependence between DVs would tend to globally trend analyses to be significant or non-significant. However, when a DV with a legitimate effect is correlated with a variable which has no effect, wouldn't that correlation represent an association between them which is a source of error variance in the DV with the legitimate effect? Thus, if anything, the presence of significance in a DV associated with non-significant DVs should be taken as a sign of a true effect on that DV?
In short:
- What, if anything, do the non-significant tests tells us about the one that is significant?
- Is there a relationship between this question and the notion of suppressor variables in regression?
| Do non-significant correlates of a DV, which is significant, suggest anything about the effect of that DV? | CC BY-SA 3.0 | null | 2011-07-23T18:40:52.883 | 2011-07-23T20:49:37.393 | null | null | 196 | [
"regression",
"correlation",
"non-independent"
] |
13412 | 1 | 13424 | null | 7 | 22210 | We are currently converting student test scores in this manner :
```
( ( ScaledScore - ScaledScore Mean ) / StdDeviation ) * 15 + 100
```
I was referring to this calculation as a z-score, but I found some information that convinced me that I should really be referring to it as a t-score rather than a z-score.
My boss wants me to call it a "Standard Score" on our reports. Are z-scores and t-scores both considered standard scores?
Is there a well known abbreviation for "Standard Score"?
Can someone point me to a reference that will definitively settle this issue?
| What are the primary differences between z-scores and t-scores, and are they both considered standard scores? | CC BY-SA 3.0 | null | 2011-07-23T19:14:31.683 | 2015-12-04T11:40:40.367 | 2011-07-24T06:06:10.233 | 183 | 5519 | [
"psychometrics",
"normalization"
] |
13413 | 2 | null | 13411 | 1 | null | It is not completely clear to me what you are trying to achieve here; an example would probably help. Adding one of the "insignificant" dependent variables to a regression with the "significant" one and checking whether the effect is still there should probably answer the question from the second paragraph. More systematically, I would estimate a seemingly unrelated regression (SUR) system of three equations and look at the correlation of their residuals, including testing cross-equation equality of the parameters.
| null | CC BY-SA 3.0 | null | 2011-07-23T19:47:48.780 | 2011-07-23T19:47:48.780 | null | null | 5494 | null |
13414 | 1 | null | null | 5 | 312 | I was wondering if anybody had some good references on dynamic neural networks. The goal is to have a neural network that responds in real-time (and is governed by say a system of DEs) instead of at discrete time-steps. The prototypical application is a robot: we have a series of sensors that are constantly feeding data into the input neurons, and we have output neurons that are constantly telling the motors what to do. We want to optimize this robot to do some tasks.
Is there a nice recent survey on dynamic neural nets? In particular one that discusses recurrent DNN, feedforward DNN, and DNN with and without learning.
Alternatively, if continuous dynamic neural networks are not the right approach for this sort of task, then suggestions of other types of neural nets are also welcome.
## Notes
I also [cross posted](http://metaoptimize.com/qa/questions/6769/getting-started-with-dynamic-neural-networks) this question to MetaOptimize.
| Getting started with dynamic neural networks | CC BY-SA 3.0 | null | 2011-07-23T20:06:00.910 | 2012-08-07T13:11:22.663 | 2020-06-11T14:32:37.003 | -1 | 4872 | [
"neural-networks"
] |
13415 | 2 | null | 13412 | 1 | null | The Student's t-test is used when you have a small sample and have to estimate the standard deviation (SD, $\sigma$) from the data. If you look at [the distribution tables](http://www.statsoft.com/textbook/distribution-tables/) for the z-score and t-score, you can see that they quickly approach similar values and that with more than 50 observations the difference is so small that it really doesn't matter which one you use.
The term standard score indicates how many standard deviations away from the expected mean (under the null hypothesis) your observation is; from the z-score you can then deduce the probability of it happening by chance, the p-value.
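To see how quickly the two converge, compare the two-sided 5% critical values in R:

```
qnorm(0.975)          # 1.959964
qt(0.975, df = 10)    # 2.228139
qt(0.975, df = 50)    # 2.008559
qt(0.975, df = 1000)  # 1.962339
```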
| null | CC BY-SA 3.0 | null | 2011-07-23T20:37:13.827 | 2011-07-23T20:37:13.827 | null | null | 5429 | null |
13416 | 2 | null | 13411 | 2 | null | Not sure I grasped your question. But how can you answer your questions with just univariate analyses at hand? You were thinking of a MANOVA of several DVs (say, Y1, Y2) on a single IV (X). Let's notice that such a MANOVA is equivalent to a multiple regression of X on predictors Y1, Y2: the direction of prediction changes nothing in this case; the quantity that would be called Pillai's trace will now be called R-square. In the milieu of multiple regression it is no wonder to find a picture where IVs Y1 and Y2 are moderately correlated, yet only Y1 predicts X significantly. Can Y1 or Y2 be a suppressor? Yes, and one can check this. If the increment in R-square in response to adding Y2 into the model containing the rest of the predictors (Y1) is greater than the R-square of the simple regression of X on Y2, then Y2 is a suppressor.
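A quick sketch of that suppressor check in R, with simulated data (the actual DVs aren't available, so Y1, Y2 and X below are invented):

```
set.seed(42)
n  <- 200
Y1 <- rnorm(n)
Y2 <- 0.5 * Y1 + rnorm(n)  # Y1 and Y2 moderately correlated
X  <- Y1 + rnorm(n)        # only Y1 truly relates to X

r2_full <- summary(lm(X ~ Y1 + Y2))$r.squared
r2_Y1   <- summary(lm(X ~ Y1))$r.squared
r2_Y2   <- summary(lm(X ~ Y2))$r.squared

(r2_full - r2_Y1) > r2_Y2  # TRUE would flag Y2 as a suppressor
```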
| null | CC BY-SA 3.0 | null | 2011-07-23T20:49:37.393 | 2011-07-23T20:49:37.393 | null | null | 3277 | null |
13417 | 2 | null | 10141 | 1 | null | pglm is now available, and for e.g. the conditional logit there is a closed-form estimator that should be straightforward to implement.
| null | CC BY-SA 3.0 | null | 2011-07-23T23:06:25.587 | 2011-07-23T23:06:25.587 | null | null | 5494 | null |
13418 | 2 | null | 13342 | 9 | null | Hint: quantization might be a better keyword to search information.
Designing an "optimal" quantization requires some criterion. To try to conserve the first moment of the discretized variable ... sounds interesting, but I don't think it's very usual.
More frequently (especially if we assume a probabilistic model, as you do) one tries to minimize some distortion: we want the discrete variable to be close to the real one, in some sense. If we stipulate minimum average squared error (not always the best error measure, but the most tractable), the problem is well known, and we can easily build a non-uniform quantizer with [minimum rate distortion](http://en.wikipedia.org/wiki/Quantization_%28signal_processing%29#Rate.E2.80.93distortion_quantizer_design), if we know the probability distribution of the source; this is almost a synonym of the "Lloyd-Max quantizer".
Because a non-uniform quantizer (in 1D) is equivalent to pre-applying a non-linear transformation to a uniform quantizer, this kind of transformation ("companding"; in probabilistic terms, a function that turns our variable into a quasi-uniform one) is closely related to non-uniform quantization (sometimes the concepts are used interchangeably). A pair of venerable examples are the [u-Law and A-Law](http://en.wikipedia.org/wiki/%CE%9C-law_algorithm) specifications for telephony.
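In 1D and under squared error, Lloyd's algorithm coincides with k-means, so a minimal sketch in R (with a hypothetical exponential source) is:

```
set.seed(1)
x  <- rexp(1e5)                           # hypothetical skewed source
km <- kmeans(x, centers = 8, nstart = 5)  # 8-level Lloyd-Max-style quantizer

sort(km$centers)                          # reproduction levels, non-uniformly spaced
mse_nonuniform <- km$tot.withinss / length(x)

# Compare with a uniform 8-level quantizer over the same range
breaks <- seq(min(x), max(x), length.out = 9)
idx    <- cut(x, breaks, include.lowest = TRUE, labels = FALSE)
mids   <- (breaks[-1] + breaks[-9]) / 2
mse_uniform <- mean((x - mids[idx])^2)

c(nonuniform = mse_nonuniform, uniform = mse_uniform)
```

For a skewed source like this, the non-uniform quantizer should give a noticeably smaller mean squared error.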
| null | CC BY-SA 3.0 | null | 2011-07-24T00:19:56.330 | 2012-03-23T23:04:41.163 | 2012-03-23T23:04:41.163 | 7972 | 2546 | null |
13419 | 1 | null | null | 8 | 2095 | I have some data that I'm playing around with; for simplicity, let's suppose the data contains information on number of posts a blogger has written vs. number of people who have subscribed to that person's blog (this is just a made-up example).
I want to get some rough model of the relationship between # posts vs. # subscribers, and when looking at a log-log plot, I see the following:

This looks like a rough linear relationship (on the log-log scale), and quickly checking the residuals seems to agree (no apparent pattern, no noticeable deviation from a normal distribution):


So my question is: is it okay to use this linear model? I know vaguely that there are problems using linear regressions on log-log plots to estimate power law distributions, but my data isn't a power law probability distribution (it's simply something that seems to roughly follow a $subscribers = A * (postings) ^ k$ model; in particular, nothing needs to sum to 1), so I'm not sure if the same critiques apply. (Perhaps I'm over-correcting at the mention of "log-log" and "linear regression" in the same sentence...) Also, all I'm really trying to do is to:
- See if there are any patterns to the blogs with positive residuals vs. blogs with negative residuals
- Suggest some rough model of how subscribers are related to number of postings.
| Fitting a line to a log-log plot | CC BY-SA 3.0 | null | 2011-07-24T00:52:14.710 | 2011-07-24T08:18:32.620 | null | null | 1106 | [
"regression",
"power-law"
] |
13421 | 1 | 13457 | null | 10 | 1169 | There are generally many joint distributions $P(X_1 = x_1, X_2 = x_2, ..., X_n = x_n)$ consistent with a known set of marginal distributions $f_i(x_i) = P(X_i = x_i)$.
Of these joint distributions, is the product formed by taking the product of the marginals $\prod_i f_i(x_i)$ the one with the highest entropy?
I certainly believe this is true, but would really like to see a proof.
I'm most interested in the case where all variables are discrete, but would also be interested in commentary about entropy relative to product measures in the continuous case.
| Is the maximum entropy distribution consistent with given marginal distributions the product distribution of the marginals? | CC BY-SA 3.0 | null | 2011-07-24T03:03:47.920 | 2013-07-18T20:33:47.917 | 2013-07-18T20:33:47.917 | 22468 | 4925 | [
"distributions",
"joint-distribution",
"marginal-distribution",
"maximum-entropy"
] |
13423 | 2 | null | 13412 | 4 | null | Most basic texts on statistics will define these as $z= \frac{\bar{x}-\mu}{ \sigma/\sqrt{n} }$ and $t=\frac{\bar{x}-\mu}{s/\sqrt{n}}$. The difference is that $z$ uses $\sigma$ which is the known population standard deviation and $t$ uses $s$ which is the sample standard deviation used as an estimate of the population $\sigma$. There are sometimes variations on $z$ for an individual observation. Both are standardized scores, though $t$ is pretty much only used in testing or confidence intervals while $z$ with $n=1$ is used to compare between different populations.
| null | CC BY-SA 3.0 | null | 2011-07-24T03:37:39.110 | 2011-07-24T03:37:39.110 | null | null | 4505 | null |
13424 | 2 | null | 13412 | 14 | null | What you are reporting is a standardized score. It just isn't the standardized score most statisticians are familiar with. Likewise, the t-score you are talking about, isn't what most of the people answering the question think it is.
I only ran into these issues before because I volunteered in a psychometric testing lab while in undergrad. Thanks go to my supervisor at the time for drilling these things into my head. Transformations like this are usually an attempt to solve a "what normal person wants to look at all of those decimal points anyway" sort of problem.
- Z-scores are what most people in statistics call "Standard Scores". When a score is at the mean, they have a value of 0, and for each standard deviation difference from the mean adjusts the score by 1.
- The "standard score" you are using has a mean of 100 and a difference of a standard deviation adjusts the score by 15. This sort of transformation is most familiar for its use on some intelligence tests.
- You probably ran into a t-score in your reading. That is yet another specialized term that has no relation (that I am aware of) to a t-test. t-scores represent the mean as 50 and each standard deviation difference as a 10 point change.
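A small sketch of these three scalings in R (the raw scores below are hypothetical):

```
raw <- c(85, 100, 115, 130)         # hypothetical raw test scores
z   <- (raw - mean(raw)) / sd(raw)  # z-scores: mean 0, sd 1
t   <- 50 + 10 * z                  # "t-scores": mean 50, sd 10
std <- 100 + 15 * z                 # IQ-style standard scores: mean 100, sd 15
round(rbind(z = z, t = t, std = std), 2)
```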
Google found an example conversion sheet here:
- http://faculty.pepperdine.edu/shimels/Courses/Files/ConvTable.pdf
A couple mentions of t-scores here supports my assertion regarding them:
- http://www.healthpsych.com/bhi/doublenorm.html
- http://www.psychometric-success.com/aptitude-tests/percentiles-and-norming.htm
- Chapter 5, pp 89, Murphy, K. R., & Davidshofer, C. O. (2001). Psychological testing: principles and applications. Upper Saddle River, NJ: Prentice Hall.
A mention of standardized scores along my interpretation is here:
- http://www.gifted.uconn.edu/siegle/research/Normal/Interpret%20Raw%20Scores.html
- http://www.nfer.ac.uk/nfer/research/assessment/eleven-plus/standardised-scores.cfm
- This is an intro psych book, so it probably isn't particular official either. Chapter 8, pp 307 in Wade, C., & Tarvis, C. (1996). Psychology. New York: Harper Collins state in regards to IQ testing "the average is set arbitrarily at 100, and tests are constructed so that the standard deviation ... is always 15 or 16, depending on the test".
So, now to directly address your questions:
- Yes, zscores and tscores are both types of "Standard scores". However, please note that your boss is right in calling the transformation you are doing a "standard score".
- I don't know of any standard abbreviation for standardized scores.
- As you can see above, I looked for a canonical source, but I was unable to find one. I think the best place to look for a citation people would believe is in the manual of the standardized test you are using.
Good luck.
| null | CC BY-SA 3.0 | null | 2011-07-24T05:59:47.367 | 2011-07-24T06:46:53.133 | 2011-07-24T06:46:53.133 | 196 | 196 | null |
13425 | 2 | null | 13412 | 11 | null | Your question pertains to terminology used in the reporting of standardised psychometric tests.
- Charles Hale has notes on terminology in standardised testing.
My understanding:
- z-score: mean = 0; sd = 1
- t-score: mean = 50; sd = 10 (example test using t-scores) (interestingly, t-score means something different in the bone density literature)
- Typical IQ style scaling: mean = 100; sd = 15
All of the above are "standardised scores" in a general sense.
I have seen people use the term "standard score" exclusively for z-scores, and also for typical IQ style scaling (e.g., [in this conversion table](http://faculty.pepperdine.edu/shimels/Courses/Files/ConvTable.pdf)).
In terms of definitive sources of information, there might be something in [The Standards for Educational and Psychological Testing](http://www.apa.org/science/programs/testing/standards.aspx) from the American Psychological Association.
| null | CC BY-SA 3.0 | null | 2011-07-24T06:05:57.490 | 2011-07-25T00:21:46.050 | 2011-07-25T00:21:46.050 | 183 | 183 | null |
13426 | 2 | null | 13419 | 2 | null | There is nothing inherently wrong with a log-log regression and economists have used them for ages to estimate elasticity. Yet if you want to allow for the power law effect but do not want to bother too much, you may apply this simple correction: [http://papers.ssrn.com/sol3/papers.cfm?abstract_id=881759](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=881759)
| null | CC BY-SA 3.0 | null | 2011-07-24T08:18:32.620 | 2011-07-24T08:18:32.620 | null | null | 5494 | null |
13427 | 1 | 13480 | null | 7 | 696 | I would like to manipulate the work done by the loess function in R. However, the main workhorse of this function is written in C. Is there some pure R implementation of the code?
Thanks.
| Is there a "pure R" implementation for loess? (with no C code?) | CC BY-SA 3.0 | 0 | 2011-07-24T08:33:35.433 | 2011-07-25T23:01:18.317 | null | null | 253 | [
"r",
"loess"
] |
13428 | 1 | 13437 | null | 5 | 370 | As a non-statistician, I have a real world statistical/probability problem that I'm having trouble framing.
The software I rely on in inventory management interprets the 'movements' (number of times inventory is used) in a strange way. It offers the number of months, out of 24 months, that the item has been used.
For example, a moving code of 20 means that the item was used at least once in 20 of the 24 months.
What I need to be able to do is translate that to find the most probable number of movements over that 24 month period.
If movements randomly fall into 20 out of 24 months with no limitations, obviously the number of movements that really occur is likely to be much greater than 20. How much greater?
Sorry, this question is extra challenging because I have no ideas on how to begin tackling this. Any help is much appreciated.
| Probability of total events occurring given that one or more events occur in specified number of months | CC BY-SA 3.0 | null | 2011-07-24T10:42:19.397 | 2011-07-25T00:48:30.593 | 2011-07-25T00:48:30.593 | 183 | 5206 | [
"probability"
] |
13429 | 2 | null | 13086 | 28 | null | There is a generalization of standard box-plots that I know of in which the lengths of the whiskers are adjusted to account for skewed data. The details are better explained in a very clear & concise white paper (Vandervieren, E., Hubert, M. (2004) "An adjusted boxplot for skewed distributions", [see here](http://www.sciencedirect.com/science/article/pii/S0167947307004434)).
There is an $\verb+R+$ implementation of this ($\verb+robustbase::adjbox()+$) as well as a matlab one (in a library called $\verb+libra+$).
I personally find it a better alternative to data transformation (though it is also based on an ad-hoc rule, see white paper).
Incidentally, I find I have something to add to whuber's example here. To the extend that we're discussing the whiskers' behaviour, we really should also consider what happens when considering contaminated data:
```
library(robustbase)
A0 <- rnorm(100)
A1 <- runif(20, -4.1, -4)
A2 <- runif(20, 4, 4.1)
B1 <- exp(c(A0, A1[1:10], A2[1:10]))
boxplot(sqrt(B1), col="red", main="un-adjusted boxplot of square root of data")
adjbox( B1, col="red", main="adjusted boxplot of data")
```
In this contamination model, B1 has essentially a log-normal distribution save for 20 percent of the data that are half left, half right outliers (the breakdown point of adjbox is the same as that of regular boxplots, i.e. it assumes that at most 25 percent of the data can be bad).
The graphs depict the classical boxplots of the transformed data (using the square root transformation)

and the adjusted boxplot of the non-transformed data.

Compared to adjusted boxplots, the former option masks the real outliers and labels good data as outliers. In general, it will contrive to hide any evidence of asymmetry in the data by classifying offending points as outliers.
In this example, the approach of using the standard boxplot on the square root of the data finds 13 outliers (all on the right), whereas the adjusted boxplot finds 10 right and 14 left outliers.
# EDIT: adjusted box plots in a nutshell.
In 'classical' boxplots the whiskers are placed at:
$Q_1$-1.5*IQR and $Q_3$+1.5*IQR
where IQR is the inter-quartile range, $Q_1$ is the 25th percentile and $Q_3$ is the 75th percentile of the data. The rule of thumb is to regard everything outside the fence as dubious data (the fence is the interval between the two whiskers).
This rule of thumb is ad-hoc: the justification is that if the uncontaminated part of the data is approximately Gaussian, then less than 1% of the good data would be classified as bad using this rule.
A weakness of this fence-rule, as pointed out by the OP, is that the lengths of the two whiskers are identical, meaning the fence-rule only makes sense if the uncontaminated part of the data has a symmetric distribution.
A popular approach is to preserve the fence-rule and to adapt the data. The idea is to transform the data using some skew-correcting monotonous transformation (square root or log or, more generally, Box-Cox transforms). This is a somewhat messy approach: it relies on circular logic (the transformation should be chosen so as to correct the skewness of the uncontaminated part of the data, which is at this stage un-observable) and tends to make the data harder to interpret visually. At any rate, it remains a strange procedure whereby one changes the data to preserve what is, after all, an ad-hoc rule.
An alternative is to leave the data untouched and change the whisker rule. The adjusted boxplot allows the length of each whisker to vary according to an index measuring the skewness of the uncontaminated part of the data:
$Q_1 - \exp(\alpha M)\,1.5\,\mathrm{IQR}$ and $Q_3 + \exp(\beta M)\,1.5\,\mathrm{IQR}$,
where $M$ is an index of skewness of the uncontaminated part of the data (i.e., just as the median is a measure of location for the uncontaminated part of the data, or the MAD a measure of its spread), and $\alpha$ and $\beta$ are numbers chosen such that for uncontaminated skewed distributions the probability of lying outside the fence is relatively small across a large collection of skewed distributions (this is the ad-hoc part of the fence rule).
For cases when the good part of the data is symmetric, $M\approx 0$ and we're back to the classical whiskers.
The authors suggest using the medcouple as an estimator of $M$ (see the reference inside the white paper) because of its high efficiency (though in principle any robust skew index could be used). With this choice of $M$, they then calculated the optimal $\alpha$ and $\beta$ empirically (using a large number of skewed distributions) as:
$Q_1 - \exp(-4M)\,1.5\,\mathrm{IQR}$ and $Q_3 + \exp(3M)\,1.5\,\mathrm{IQR}$, if $M\geq 0$
$Q_1 - \exp(-3M)\,1.5\,\mathrm{IQR}$ and $Q_3 + \exp(4M)\,1.5\,\mathrm{IQR}$, if $M<0$
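For readers who want to experiment, here is a rough sketch of the fence rule above in Python. It uses a naive $O(n^2)$ medcouple and crude quartiles, so it only illustrates the idea; it is not a substitute for robustbase's adjbox (which uses a fast medcouple algorithm and proper quantile interpolation), and it assumes no ties at the median.

```python
import math
import statistics

def medcouple(x):
    # Naive O(n^2) medcouple: median of the kernel h over all pairs straddling
    # the sample median. Assumes no ties at the median (fine for continuous data).
    x = sorted(x)
    m = statistics.median(x)
    lo = [v for v in x if v <= m]
    hi = [v for v in x if v >= m]
    h = [((xj - m) - (m - xi)) / (xj - xi)
         for xi in lo for xj in hi if xj > xi]
    return statistics.median(h)

def adjusted_fences(x):
    # Adjusted-boxplot whiskers: the 1.5*IQR rule with skewness-dependent
    # factors exp(-4M) / exp(3M) (signs swapped when M < 0).
    x = sorted(x)
    n = len(x)
    q1, q3 = x[n // 4], x[(3 * n) // 4]  # crude quartile estimates
    iqr = q3 - q1
    mc = medcouple(x)
    if mc >= 0:
        return (q1 - 1.5 * math.exp(-4 * mc) * iqr,
                q3 + 1.5 * math.exp(3 * mc) * iqr)
    return (q1 - 1.5 * math.exp(-3 * mc) * iqr,
            q3 + 1.5 * math.exp(4 * mc) * iqr)
```

On right-skewed data the medcouple is positive, so the upper whisker is stretched and the lower one shrunk, which is exactly the behaviour described above.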
| null | CC BY-SA 3.0 | null | 2011-07-24T15:10:46.077 | 2017-01-06T22:05:55.997 | 2017-01-06T22:05:55.997 | 919 | 603 | null |
13430 | 1 | 13431 | null | 2 | 3160 | In my research I'm comparing the variance of a method and I would like to describe the overall variance between individuals and the variance of the replicates of these individuals.
Phrases like 'comparing the intra-individual variance and between-individual variance' seem to get people confused. I would like to mention this briefly without having to go into too much detail about the experiment.
What would be a way of describing this setting more clearly but still within if possible one sentence?
To clarify:
I have 10,000 measurements for 60 individuals. For each measurement I could calculate, for example, the standard deviation as a measure of variance. I also have 5 replicate measurements per individual, so I could also calculate the standard deviation of each of the 10,000 measurements across the replicates. So now I have the variance of a measurement when looking across a population AND the variance when looking across replicates. If you had to describe these 2 types of variance in a single sentence, how would you do that without going into too much detail?
| Describing the difference between 2 types of variance | CC BY-SA 3.0 | null | 2011-07-21T10:12:00.637 | 2011-12-10T20:44:26.160 | 2011-12-10T20:44:26.160 | 7409 | 5275 | [
"variance"
] |
13431 | 2 | null | 13430 | 2 | null | As I understand the question, it is a matter of comparing the variance of an individual across multiple repeated instances and the variance of one instance across multiple individuals. If so, then I think the terms group variance and individual variance succinctly express the desired meanings.
| null | CC BY-SA 3.0 | null | 2011-07-21T10:56:58.900 | 2011-07-21T10:56:58.900 | null | null | null | null |
13432 | 2 | null | 13430 | 1 | null | In addition to @rubergly's excellent suggestions, which are commonly seen in research fields, you might consider within-subjects variance and between-subjects variance. This terminology was common when I worked in experimental psychology, but may have fallen out of favor with the push to turn human 'subjects' into 'participants.' Of course, if you are talking about non-human subjects, then it would still be very appropriate.
| null | CC BY-SA 3.0 | null | 2011-07-21T12:08:26.350 | 2011-07-21T12:08:26.350 | null | null | 13166 | null |
13433 | 1 | null | null | 5 | 3565 | If somebody can tell me what R commands I need to use for a repeated measures ANOVA, I'd really appreciate it. I have trouble with the random term. I've seen `random=id`, `random=id/(treatment*group)` and others.
Also can you please indicate to me what the formula for the Bonferroni adjusted intervals is?
| Repeated measures ANOVA in R and Bonferroni adjusted intervals | CC BY-SA 3.0 | null | 2011-07-24T17:46:33.013 | 2016-08-28T20:46:32.410 | 2016-08-28T20:46:32.410 | 28666 | 5522 | [
"r",
"anova",
"mixed-model",
"repeated-measures",
"bonferroni"
] |
13434 | 2 | null | 13433 | 4 | null | You should provide more details about your data. From the limited details you have provided, and assuming you have a data frame `df` which has `response`, `trt`, `time`, and `subject` information, there are many ways to fit an LME model in `R` using the `lme4` package. However, I will illustrate three methods that I think will be useful for you.
```
library(lme4)
# Random intercepts for different subjects but time and trt effects are fixed
mmod1 <- lmer(response ~ time*trt + (1 | subject), df)
# Random intercepts and trt effects for different subjects, but the time effect is still fixed
mmod2 <- lmer(response ~ time*trt + (1 + trt | subject), df)
# Random intercepts, trt, and time effects for different subjects
mmod3 <- lmer(response ~ time*trt + (1 + trt + time | subject), df)
```
Once you have fitted the models above, you can draw MCMC samples from them and use:
```
HPDinterval(mcmcsamp(mmod1, n = 1000), prob = 0.95)
HPDinterval(mcmcsamp(mmod2, n = 1000), prob = 0.95)
HPDinterval(mcmcsamp(mmod3, n = 1000), prob = 0.95)
```
to obtain the 95% CI determined from the MCMC samples. Since this CI is obtained from MCMC sampling, it takes the random errors into account and you won't need to correct for multiple comparisons (I think so; please correct me if I am wrong).
| null | CC BY-SA 3.0 | null | 2011-07-24T18:31:59.870 | 2011-07-25T20:22:03.703 | 2011-07-25T20:22:03.703 | 1307 | 1307 | null |
13435 | 1 | 13452 | null | 2 | 188 | Suppose $X$ and $Y$ are two random samples (not necessarily iid, but one can make this assumption) and that $Z=X+Y$.
If one computes the order statistics of $X$ and $Y$, what can be said about the relative order statistic of $Z$?
To be more clear, let $\tilde{X}_{0.99}$ and $\tilde{Y}_{0.99}$ be the 0.99th quantiles of $X$ and $Y$, respectively, does it exist a relationship $f(\cdot)$ with $\tilde{Z}_{0.99}$ (i.e., the 0.99th quantile of $Z$) such that $\tilde{Z}_{0.99}=f(\tilde{X}_{0.99},\tilde{Y}_{0.99})$?
Sorry for the possibly ill-posed question... I'm not a statistician.
| If $Z=X+Y$ ($X$ and $Y$ being random samples), what is the relationships between the respective order statistics? | CC BY-SA 3.0 | null | 2011-07-24T20:21:27.730 | 2012-12-05T05:25:55.727 | null | null | 5523 | [
"quantiles",
"sample",
"order-statistics"
] |
13436 | 2 | null | 13433 | 6 | null | I agree with suncoolsu that it is difficult to tell exactly what you were looking for. And in general it's not recommended to do standard repeated measures ANOVAs anymore since there are generally better alternatives.
Nevertheless, perhaps you want to generate a simple stratified ANOVA. By stratified I mean that your effects are measured within another grouping variable, in your case the subject, making this a within-subjects design. If your data frame is df and your response variable is y, then you might have a within-subjects predictor x1 crossed with another within-subjects predictor x2, and perhaps a between-subjects predictor z. To get the full model with all interactions you would use:
```
myModel <- aov( y ~ x1 * x2 * z + Error(id/(x1*x2)), data = df)
summary(myModel)
```
You'll note that within the Error term we are grouping x1, x2, and their interaction under id. Note that z is not in there because it is not a within subjects variable.
Keep in mind further that these data are laid out in long format and you probably need to aggregate them first to run this correctly, since a repeated measures design often has more samples per subject than conditions in order to get good estimates of each subject's response value. Therefore, df above might be replaced with the following dfa.
```
dfa <- aggregate ( y ~ x1 + x2 + z + id, data = df, mean)
```
(BTW, suncoolsu gave a much more modern answer based on multi-level modelling. It's suggested you learn about that if you continue to do repeated measures designs because it is much more powerful, flexible, and allows one to ignore certain kinds of within subjects assumptions (notably sphericity). What I've described is how to do repeated measure ANOVA. You also might want to look at the car, or higher level ez packages in order to do it as well.)
As for your Bonferroni query... it should probably have been a separate question. Nevertheless, that's a bit of a hard one to answer with repeated measures. You could try ?pairwise.t.test. If you give the interactions of all your within variables as the group factor and set paired to true and the correction to bonf you're set. However, straight corrections like that probably are far too conservative. You state at the outset you're only going to use it if there is a significant effect, you probably also have a theoretical reason for making some comparisons, therefore it's not strictly the fishing expedition that Bonferroni (over) corrects for. So, something like...
```
with( df, pairwise.t.test(y, x1:x2, paired = TRUE, p.adj = 'bonf') )
```
will do what you want but that's probably not really what you want.
| null | CC BY-SA 3.0 | null | 2011-07-24T21:58:40.180 | 2011-07-25T11:53:45.310 | 2011-07-25T11:53:45.310 | 601 | 601 | null |
13437 | 2 | null | 13428 | 5 | null | As crayola says, the distribution is unlikely to be independent. Even if it is, as assumed below, the analysis is not simple.
It is possible to work out the probability of $N$ items happening in $M$ unique months out of $24$ with the following recursion:
$$\Pr(M=m|N=n) = \tfrac{(24-m+1)\Pr(M=m-1|N=n-1) + m\Pr(M=m|N=n-1)}{24} $$
starting at $\Pr(M=0|N=0)=1$, and $\Pr(M=0|N=n)=0$ for non-zero $n$. So, for example, $\Pr(M=20|N=42) = 0.2676\ldots$, and for $M=20$ this is the value of $N$ which gives the greatest likelihood.
It is also possible to work out the expected number of unique months
$$E[M|N=n] = 24 \left(1-\left(\frac{24-1}{24}\right)^n\right).$$
If we set this equal to 20 and solved for $n$ we would get $$n=\frac{\log(24)-\log(24-20)}{\log(24)-\log(24-1)} \approx 42.09999\ldots$$ though $N$ needs to be an integer. So this too hints at [$42$](http://en.wikipedia.org/wiki/Phrases_from_The_Hitchhiker%27s_Guide_to_the_Galaxy#Answer_to_the_Ultimate_Question_of_Life.2C_the_Universe.2C_and_Everything_.2842.29).
This is all looking hopeful, but hides the true horror. For example if $M=20$ then I think the 95% confidence interval for $N$ is $[28,68]$ which is rather wide. And if $M=24$ for a popular item then a similar confidence interval for $N$ could be $[50,\infty )$ while the maximum likelihood and expectation methods which took us from an observation of $20$ to an estimate of $42$ would take us from an observation of $24$ to an estimate of $\infty$.
Even if you used Bayesian methods to deal with the $M=24$ observation case you would still face very wide credible intervals.
The real answer is to get better inventory management reporting.
| null | CC BY-SA 3.0 | null | 2011-07-24T22:02:28.957 | 2011-07-24T22:02:28.957 | null | null | 2958 | null |
13438 | 1 | null | null | 2 | 1749 | I need help in knowing which test to perform with this analysis.
I have two groups (lets say group A and B). A study was performed with initial rates of heart failures in group A and B (i.e, baseline rates) were measured. After 6 months of follow up the heart failure rates in group A and B (i.e, re measurement rates) were measured again. By simple mathematics, I know that the improvement rate is better in group A as compared to group B.
- How do I test whether the difference in rates between the groups is statistically significant?
I am using t-test for this but not sure how to go about it.
| Significance test for differences in baseline and remeasurement rates between two groups | CC BY-SA 3.0 | null | 2011-07-24T22:05:48.597 | 2011-09-24T05:28:06.580 | 2011-07-25T00:49:51.887 | 183 | 5524 | [
"statistical-significance",
"repeated-measures",
"t-test"
] |
13440 | 2 | null | 13438 | 2 | null | T-tests are for normal or close-to-normal distributions. You need a nonparametric test. Since you have the same two groups being assessed twice, I don't think you can use the McNemar test for dependent proportions. However, maybe you could do a chi-square test on the increases within each group as fractions of the sample sizes. E.g., if group A had 5/100 develop the illness and group B had 12/100, you would run the chi-square test using the numbers 5, 95, 12, and 88.
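As a sketch of the suggested computation on the example counts (5/100 vs 12/100; both the counts and the code are purely illustrative), using the fact that a chi-square with 1 df is the square of a standard normal, so its survival function is $1-\operatorname{erf}(\sqrt{x/2})$:

```python
import math

def chi2_2x2(a, b, c, d):
    # Pearson chi-square statistic (no continuity correction)
    # for the 2x2 table [[a, b], [c, d]].
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # 1-df p-value via the normal relation chi2_1 = Z^2.
    p = 1 - math.erf(math.sqrt(stat / 2))
    return stat, p
```

For the example counts, `chi2_2x2(5, 95, 12, 88)` gives a statistic of about 3.15 and a p-value of about 0.076, suggestive but not significant at the 5% level.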
| null | CC BY-SA 3.0 | null | 2011-07-24T23:23:38.880 | 2011-08-24T22:48:22.420 | 2011-08-24T22:48:22.420 | 2669 | 2669 | null |
13442 | 1 | null | null | 13 | 5881 |
### Context
I want to set the scene before somewhat expanding on the question.
I have longitudinal data, measurements taken on subjects approximately every 3 months; the primary outcome is numeric (continuous, to 1 dp) in the range 5 to 14, with the bulk (of all data points) being between 7 and 10. If I do a spaghetti plot (with age on the x axis and a line for each person) it's a mess, obviously, as I have >1500 subjects, but there is a clear trend towards higher values with increased age (and this is known).
The broader question: What we would like to do is to firstly be able to identify trending groups (those that start high and stay high, those that start low and stay low, those that start low and increase to high etc) and then we can look at individual factors that are associated with 'trend group' membership.
My question here is specifically pertaining to the first portion, the grouping by trend.
### Question
- How can we group individual longitudinal trajectories?
- What software would be suitable for implementing this?
I have looked at Proc Traj in SAS and M-Plus suggested by a colleague, which I'm looking into, but would like to know what others thoughts are on this.
| How to find groupings (trajectories) among longitudinal data? | CC BY-SA 3.0 | null | 2011-07-25T01:27:26.347 | 2019-01-24T02:00:22.750 | 2011-07-26T02:49:42.653 | 4226 | 4226 | [
"clustering",
"panel-data"
] |
13443 | 2 | null | 13394 | 2 | null | Your 'Update' is actually very clever, but I think the estimate of the slope would be somewhat confounded with the estimated effect of your dummy variable. Sometimes such things are hard to avoid, but I suggest a slightly altered approach that doesn't have this problem. You could fit one model:
$$ Y = \beta_{0} + X \beta_{1} + \epsilon $$
to the full data set, including the 11th time point. Now let $X_{k}$ be the $k$'th "trial" value you used for $X$. You could also fit the model
$$ Y = \alpha_{0} + X' \alpha_{1} + X'' \alpha_{2} + \varepsilon $$
where $X'_{k} = X_{k}$ if $k \leq 10$ and $0$ otherwise, and $X''_{k} = X_{k}$ if $k = 11$ and $0$ otherwise. So, you have a general slope governing the first 10 trials, and the 11th trial is allowed to have whatever effect it wants. Then, compare the two using the likelihood ratio test with 1 degree of freedom. Note if $\alpha_{1}=\alpha_{2}$ then you arrive back at the smaller model, so the smaller model is a sub-model, which is why the LRT is appropriate here.
If significant, this would indicate that the second (and larger) model, which allows the 11th time point to have a different effect, fits the data better, indicating that using the same slope to fit the 11th predictor value as you did for the first 10 is not sufficient.
Note: The LRT is only a valid tool here if the errors are normally distributed (assuming this is OLS regression).
| null | CC BY-SA 3.0 | null | 2011-07-25T02:29:04.740 | 2011-07-25T02:46:50.187 | 2011-07-25T02:46:50.187 | 4856 | 4856 | null |
13444 | 2 | null | 13442 | 4 | null | I'd expect there is an MPLUS package to do what you need. There is a paper in Psychometrika about almost exactly this subject
springerlink.com/content/25r110007g417187
except the data is binary and the trajectories are probability trajectories. The authors use latent class analysis (implemented by using a penalized finite mixture model) to group trajectories. I also know the first author wrote some other papers around 10 years ago with Bengt Muthen (creator of MPLUS) about latent class analysis in similar settings (with trajectories). For example,
[http://onlinelibrary.wiley.com/doi/10.1111/j.0006-341X.1999.00463.x/abstract](http://onlinelibrary.wiley.com/doi/10.1111/j.0006-341X.1999.00463.x/abstract)
sounds very similar to what you're talking about, except the outcome is binary. The continuous case is much simpler, so I'd do a backwards literature search (i.e. look at the papers these papers reference) to find something that matches what you've described more precisely.
To find out more, you can ask the proprietors of MPLUS directly what package you need to use to do what you need. They are generally pretty quick to respond and are very helpful:
[http://www.statmodel.com/cgi-bin/discus/discus.cgi](http://www.statmodel.com/cgi-bin/discus/discus.cgi)
| null | CC BY-SA 3.0 | null | 2011-07-25T02:56:30.043 | 2011-07-25T03:05:24.717 | 2011-07-25T03:05:24.717 | 4856 | 4856 | null |
13445 | 1 | null | null | 0 | 278 | I am trying to express a hypothesis to test for the following claim:
>
A biologist is presented with the data that shows an increase in the
average number of bacteria, though he suspects there was no actual
change.
I came up with the following:
- $H_0:\; \mu = 40\times 10^2$ [no change had occurred]
- $H_A:\; \mu > 40\times 10^2$ [there is an increase]
| How to express an hypothesis that something has increased? | CC BY-SA 3.0 | null | 2011-07-25T03:31:47.650 | 2011-11-22T12:33:31.193 | 2011-08-24T10:53:32.853 | 930 | 5527 | [
"hypothesis-testing",
"self-study"
] |
13446 | 1 | 13453 | null | 5 | 8520 | I am new to R and am using `rpart` to build a regression tree for my data. I wanted to use all the input variables to build the tree, but rpart used only a couple of them, as shown below. As we can see, I have provided 10 inputs, but rpart used only two. Please let me know how I can force the rpart method to use all the input variables. Thanks.
```
rm = rpart(uloss ~ tc_b + ublkb + mpa_a + mpa_b +
sys_a + sys_b + usr_a, data = data81, method="anova")
> printcp(rm)
Regression tree:
rpart(formula = uloss ~ tc_b + ublkb + mpa_a + mpa_b + sys_a +
sys_b, data = data81, weights = usr_a, method = "anova")
Variables actually used in tree construction:
[1] mpa_a tc_b
Root node error: 647924/81 = 7999
n= 81
CP nsplit rel error xerror xstd
1 0.403169 0 1.00000 1.04470 0.025262
2 0.092390 1 0.59683 0.66102 0.015238
3 0.081084 2 0.50444 0.70702 0.013123
4 0.045304 3 0.42336 0.58683 0.012129
5 0.010000 4 0.37805 0.51930 0.011942
```
One more question:
I have used rpart.control with minsplit=2 and got the following output for another data set.
To avoid overfitting the data, should I prune at 3 splits or at 7 splits? Shouldn't I use 7 splits? Please let me know.
```
Variables actually used in tree construction:
[1] ct_a ct_b usr_a

Root node error: 23205/60 = 386.75

n= 60
CP nsplit rel error xerror xstd
1 0.615208 0 1.000000 1.05013 0.189409
2 0.181446 1 0.384792 0.54650 0.084423
3 0.044878 2 0.203346 0.31439 0.063681
4 0.027653 3 0.158468 0.27281 0.060605
5 0.025035 4 0.130815 0.30120 0.058992
6 0.022685 5 0.105780 0.29649 0.059138
7 0.013603 6 0.083095 0.21761 0.045295
8 0.010607 7 0.069492 0.21076 0.042196
9 0.010000 8 0.058885 0.21076 0.042196
```
| Recursive partitioning using rpart() method in R | CC BY-SA 3.0 | null | 2011-07-25T04:25:37.593 | 2011-07-25T16:44:28.643 | 2011-07-25T16:44:28.643 | 25133 | 25133 | [
"r",
"rpart"
] |
13447 | 1 | 13451 | null | 6 | 3427 | I understand this to be a binomial distribution: There are 100 balls in a bucket. 10 are red, 90 are blue. I select a ball at random and then replace it in the bucket, and I do this 20 times. I then calculate the probability that none of the selected balls were red.
But what if I don't put the ball back in the bucket? The probability then changes with each trial. Can anyone point me in the right direction of how to calculate the probability in this case?
| Probability of drawing no red balls from 20 draws without replacement given finite sample | CC BY-SA 3.0 | null | 2011-07-25T06:01:54.533 | 2011-07-25T14:21:54.890 | 2011-07-25T14:21:54.890 | 919 | 5529 | [
"distributions",
"probability",
"hypergeometric-distribution"
] |
13448 | 2 | null | 13445 | 1 | null | The null hypothesis you have so far is a good start.
As this question has the homework tag, I'll answer with two questions for you about it, that might nudge you to a better null hypothesis & test of it:
1) What would an observed large decline in the number of bacteria have said about the null hypothesis?
2) Is the null hypothesis you have so far testable or not? Do you know of any statistical tests which could be done with no information at all about the nature of the underlying probability distribution, or the likely statistical pattern of the observation?
| null | CC BY-SA 3.0 | null | 2011-07-25T06:10:59.973 | 2011-07-25T09:10:43.897 | 2011-07-25T09:10:43.897 | 3794 | 3794 | null |
13449 | 2 | null | 13447 | 6 | null | Well, on the first try, you have a $90/100$ probability of not drawing a red ball; if the first was not a red ball, then on the second try there are still 10 red balls left, but only 99 to choose from, so you have an $89/99$ chance of not drawing a red ball. Similarly, on the third draw, if the first two draws yielded no red ball, you have an $88/98$ chance of not drawing a red ball, and so on. In general, if you draw $k$ times without replacement, the probability you seek is
$$ \prod_{i=1}^{k} \frac{ 90-i+1 }{100-i+1} $$
One important thing to note is that this probability actually doesn't arise from a binomial distribution. You are not conducting independent trials with equal probability and counting the number of "successes". The trials are not independent because the success probability of a future trial depends on whether a past trial was a success, making it fundamentally different from the binomial distribution. If there was replacement, then you'd be correct in saying the number of success follows a binomial distribution.
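The product can be checked exactly against the equivalent binomial-coefficient form (the same quantity that appears in the hypergeometric formulation); a quick sketch:

```python
import math
from fractions import Fraction

# P(no red in 20 draws without replacement; 10 red, 90 blue):
# the telescoping product prod_{i=1}^{20} (90 - i + 1) / (100 - i + 1)...
p_product = math.prod(Fraction(90 - i + 1, 100 - i + 1) for i in range(1, 21))

# ...equals C(90, 20) / C(100, 20), the hypergeometric probability.
p_hyper = Fraction(math.comb(90, 20), math.comb(100, 20))
```

Using exact fractions makes the two expressions agree to the last digit rather than only to floating-point precision.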
| null | CC BY-SA 3.0 | null | 2011-07-25T06:50:40.557 | 2011-07-25T06:57:35.153 | 2011-07-25T06:57:35.153 | 4856 | 4856 | null |
13451 | 2 | null | 13447 | 9 | null | Let $B$ denotes blue balls, $R$ denotes red balls, then you may apply the formula for [hypergeometric distribution](http://en.wikipedia.org/wiki/Hypergeometric_distribution#Application_and_example):
$$P(B = 20, R = 0) = \frac{\binom{10}{0}\binom{90}{20}}{\binom{100}{20}} = \frac{\binom{90}{20}}{\binom{100}{20}}$$
The last term exactly matches @Macro's answer, but the hypergeometric formula is more general. The idea behind the formula is simple: take the number of ways to draw $20$ $B$ out of $90$, multiply by the number of ways to draw $0$ $R$ from $10$ (there is only one possibility), and divide the product by the number of ways to draw any $20$ balls from $100$. Hope this was not your homework ;)
| null | CC BY-SA 3.0 | null | 2011-07-25T06:58:37.523 | 2011-07-25T06:58:37.523 | null | null | 2645 | null |
13452 | 2 | null | 13435 | 4 | null | You really cannot say very much about $\tilde{Z}_{0.99}$ compared with $\tilde{X}_{0.99}$ and $\tilde{Y}_{0.99}$ without knowing more about the rest of the distributions, even if $X$ and $Y$ are independent.
For most distributions you will find $\tilde{Z}_{0.99} < \tilde{X}_{0.99} + \tilde{Y}_{0.99}$: as an illustration, if $X$ and $Y$ have standard normal distributions then $\tilde{X}_{0.99} = \tilde{Y}_{0.99} \approx 2.326$ so their sum is about $4.653$ but $\tilde{Z}_{0.99} \approx 3.290$.
However it is easy enough to find a counterexample: for example if $X=0$ with probability $0.992$ and $X=1$ otherwise, and $Y$ similarly, then $\tilde{X}_{0.99} = \tilde{Y}_{0.99} = 0$ so their sum is $0$ but $\tilde{Z}_{0.99} =1$.
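The arithmetic behind this counterexample is easy to verify; a minimal sketch:

```python
# X and Y are each 0 with probability 0.992 and 1 otherwise, independently.
p0 = 0.992

# P(X <= 0) = 0.992 >= 0.99, so the 0.99 quantile of X (and of Y) is 0.
q99_x = 0 if p0 >= 0.99 else 1

# But P(Z <= 0) = P(X = 0) * P(Y = 0) = 0.992^2 < 0.99,
# so the 0.99 quantile of Z = X + Y is 1.
p_z0 = p0 * p0
q99_z = 0 if p_z0 >= 0.99 else 1
```

So the quantiles of the margins sum to $0$ while the quantile of the sum is $1$, exactly as claimed.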
In fact it is possible to find a distribution where $\tilde{X}_{0.99}$ and $\tilde{Y}_{0.99}$ take any given value and $\tilde{Z}_{0.99}$ takes any given value.
| null | CC BY-SA 3.0 | null | 2011-07-25T07:16:05.640 | 2011-07-25T07:16:05.640 | null | null | 2958 | null |
13453 | 2 | null | 13446 | 7 | null | Perhaps you misunderstood the message? It is saying that, having built the tree using the control parameters specified, only the variables `mpa_a` and `tc_b` have been involved in splits. All the variables were considered, but just these two were needed.
That tree seems quite small; do you have only a small sample of observations? If you want to grow a bigger tree for subsequent pruning back, then you need to alter the `minsplit` and `minbucket` control parameters. See `?rpart.control`, e.g.:
```
rm <- rpart(uloss ~ tc_b + ublkb + mpa_a + mpa_b +
sys_a + sys_b + usr_a, data = data81, method = "anova",
control = rpart.control(minsplit = 2, minbucket = 1))
```
would try to fit a full tree --- but it will be hopelessly over-fitted to the data and you must prune it back using `prune()`. However, that might assure you that `rpart()` used all the data.
| null | CC BY-SA 3.0 | null | 2011-07-25T08:00:10.677 | 2011-07-25T08:00:10.677 | null | null | 1390 | null |
13454 | 1 | 16171 | null | 18 | 4891 | In particular, how should the standard errors of the fixed effects in a linear mixed effects model be calculated (in a frequentist sense)?
I have been led to believe that the typical estimates (${\rm Var}(\hat\beta)=(X'VX)^{-1}$), such as those presented in Laird and Ware [1982], will give SEs that are underestimated in size because the estimated variance components are treated as though they are the true values.
I have noticed that the SE's produced by the `lme` and `summary` functions in the `nlme` package for R are not simply equal to the square root of the diagonals of the variance-covariance matrix given above. How are they calculated?
I am also under the impression that Bayesians use inverse gamma priors for the estimation of variance components. Do these give the same results (in the right setting) as `lme`?
| How should standard errors for mixed effects model estimates be calculated? | CC BY-SA 3.0 | null | 2011-07-25T08:43:32.087 | 2014-09-07T02:27:15.740 | 2014-09-07T02:27:15.740 | 7290 | 845 | [
"r",
"mixed-model",
"random-effects-model"
] |
13455 | 2 | null | 13442 | 11 | null | I've used the [Mfuzz](http://itb.biologie.hu-berlin.de/~futschik/software/R/Mfuzz/) package in [R](http://www.r-project.org/) for clustering time-course microarray data sets. Mfuzz uses "soft clustering": basically, an individual can appear in more than one group.
As @Andy points out in the comment, the original paper uses continuous data. However, I suspect that it should work OK for your discrete data, especially since you are just exploring the data set. Here's a quick example in R:
```
##It's a bioconductor package
library(Mfuzz)
library(Biobase)
## Simulate some data
## 6 time points and 90 individuals
tps = 6;cases = 90
d = rpois(tps*cases, 1) ##Poisson distribution with mean 1
m = matrix(d, ncol=tps, nrow=cases)
##First 30 individuals have increasing trends
m[1:30,] = t(apply(m[1:30,], 1, cumsum))
##Next 30 have decreasing trends
##A bit hacky, sorry
m[31:60,] = t(apply(t(apply(m[31:60,], 1, cumsum)), 1, rev))
##Last 30 individuals have random numbers from a Po(1)
##Create an expressionSet object
tmp_expr = new('ExpressionSet', exprs=m)
##Specify c=3 clusters
cl = mfuzz(tmp_expr, c=3, m=1.25)
mfuzz.plot(tmp_expr,cl=cl, mfrow=c(2, 2))
```
Gives the following plot:

| null | CC BY-SA 3.0 | null | 2011-07-25T09:17:33.513 | 2011-07-25T13:04:20.447 | 2011-07-25T13:04:20.447 | 8 | 8 | null |
13456 | 1 | 13464 | null | 4 | 3564 | For one of my academic projects I ran a sample survey. Unfortunately, the respondents were not very proactive and I got just 25 responses for my study. With such a small response I guess I can't do any statistical analyses (e.g. ANOVA, regression, etc.).
So, I was wondering if based on the sample's mean & standard deviation I can simulate a larger dataset & then do my statistical analysis.
I'll also be grateful if somebody can tell me the method to do simulation with given mean & sd in R software.
Regards,
Ari
| Ways to overcome small number of survey responses | CC BY-SA 3.0 | null | 2011-07-25T10:37:52.520 | 2011-07-26T13:29:11.893 | 2011-07-25T11:20:32.517 | null | 4278 | [
"sampling",
"small-sample"
] |
13457 | 2 | null | 13421 | 7 | null | One way is to use the properties of the [Kullback-Leibler divergence](http://en.wikipedia.org/wiki/KL_divergence).
Let $\mathfrak{P}$ be the family of distributions with the given margins, and let $Q$ be the product distribution (and obviously $Q \in \mathfrak{P}$).
Now, for any $P \in \mathfrak{P}$, the cross entropy is:
$H(P,Q) = -E_P [\log q(X)] = -E_P \left[ \log \prod_i q_i(X_i) \right] = -\sum_i E_P [\log q_i(X_i)] = \sum_i H(P_i,Q_i)$
that is, the sum of the cross entropy of the margins. Since the margins are all fixed, this term itself must be fixed.
Now we can write the KL divergence as:
$D_{KL}(P \| Q) = H(P,Q) - H(P)$
and hence:
$\operatorname*{arg\,min}_{P \in \mathfrak{P}} \ D_{KL}(P \| Q) = \operatorname*{arg\,max}_{P \in \mathfrak{P}} \ H(P) $
that is, the distribution $P$ which maximises the entropy is the one which minimises the KL divergence with $Q$, which by [the properties of the KL divergence](http://en.wikipedia.org/wiki/KL_divergence#Properties), we know is $Q$ itself.
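A tiny numerical illustration of the conclusion, for the simplest case of two binary margins (both uniform): every joint with those margins has the form $(t,\, 0.5-t,\, 0.5-t,\, t)$, and scanning $t$ shows the entropy is maximised at $t=0.25$, i.e. at the product distribution.

```python
import math

def entropy(cells):
    # Shannon entropy of a discrete distribution given as cell probabilities.
    return -sum(p * math.log(p) for p in cells if p > 0)

# 2x2 joints with both margins equal to (0.5, 0.5), parameterised by
# t = P(X=0, Y=0); scan a grid of t values and keep the entropy maximiser.
best_t = max((k / 1000 for k in range(0, 501)),
             key=lambda t: entropy((t, 0.5 - t, 0.5 - t, t)))
```

Here `best_t` comes out at 0.25, where the joint entropy equals $\log 4$, the sum of the two marginal entropies.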
| null | CC BY-SA 3.0 | null | 2011-07-25T10:41:16.607 | 2011-07-25T10:41:16.607 | null | null | 495 | null |
13458 | 1 | 13463 | null | 4 | 7285 | I have bird count data and use classical poisson loglinear model, i.e. we have counts `obs(i,j)` - observed count for site `i` and year `j`, and the model is:
```
ln(model(i,j)) = site_effect(i) + year_effect(j)
```
where `model(i,j)` is the expected count for site `i` and year `j`, and it is assumed that counts are Poisson distributed, i.e. `obs(i,j) ~ Poiss(lambda = model(i,j))`. Serial correlation and overdispersion were taken into account.
Now I need to analyse the residuals of this model as a dependent variable in another regression with some explanatory variables. (Please don't tell me I should put these variables into the model above - to be brief, there are some technical issues behind that.)
The problem now is:
1) how to define the residuals
2) what is their distribution
3) which regression to use to explain them.
Possible solutions of residual definition and analysis:
A) obs/model with another poisson loglinear regression
B) log((obs+1)/model), normal distribution -> normal linear regression
C) log((obs+1)/(model+1)), normal distribution -> normal linear regression
D) ... any other??
Ad Solution A: use `obs/model`, i.e. divide the observed value by the expected value from the model (for each i,j). But what is their distribution? I would say Poisson, but the values are not whole numbers! So I multiplied by 20 and rounded to whole numbers to be able to fit a Poisson. Is this a good idea, or is it nonsense? And how should I fit a Poisson to a continuous, real-valued random variable? Here is the distribution:

Note that the Poisson distribution doesn't fit (used goodfit() test in R).
So there is the problem that obs/model are real numbers - not whole natural numbers, as the Poisson distribution requires. Moreover, I'd like to avoid another Poisson loglinear regression if possible, as I'm quite unfamiliar with how to analyze the explanatory variables, the explained/unexplained variability etc., as I would in normal regression.
Ad Solution B/C: use something like `log((obs+1)/model)` or `log((obs+1)/(model+1))`. There is the problem of zero observed counts (there are no zero model counts), so I can't just use `log(obs/model)` as I would like to. How can this problem be solved? And is it better to use solution B or C?
Here are the distributions.


Note that also in this case, the normal distributions don't fit. Nothing really fits, but this is quite normal with real count data. :-)
So, summary:
- which solution do you think fits best?
- how to overcome the problems each solution has?
- is it a big mistake to use B/C and go for normal linear regression, if I'd like to avoid the Poisson? Should I use B or C?
Thanks a lot!
| How to analyze residuals of Poisson log-linear model? | CC BY-SA 3.0 | null | 2011-07-25T11:39:53.250 | 2017-03-23T11:46:42.983 | 2011-07-25T15:38:20.933 | null | 5509 | [
"regression",
"poisson-distribution",
"count-data",
"residuals",
"log-linear"
] |
13459 | 2 | null | 13458 | 2 | null | I'm not quite sure what you're trying to achieve, but another option is to do another Poisson regression, using the linear predictor of the first model (that is, the log of the predicted value) in the offset of the second model.
However, without knowing more about why you want to do all this, it is difficult to recommend a course of action.
| null | CC BY-SA 3.0 | null | 2011-07-25T13:03:30.323 | 2011-07-25T13:03:30.323 | null | null | 495 | null |
13460 | 2 | null | 13456 | 3 | null | You should read about [resampling](http://en.wikipedia.org/wiki/Resampling_%28statistics%29).
Such techniques cannot increase the amount of information you have from your small original sample, so you should just take the mean and standard deviation of your sample, but resampling can give some helpful information about how uncertain the estimates you make from your sample are.
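For example, a minimal percentile-bootstrap sketch for the mean of a small sample (the sample here is simulated as a stand-in for your 25 observations):

```
# Percentile bootstrap for the mean of a small sample
set.seed(1)
x <- rnorm(25, mean = 10, sd = 2)   # stand-in for the real 25 observations

B <- 2000
boot_means <- replicate(B, mean(sample(x, replace = TRUE)))

mean(x)                                # point estimate
sd(boot_means)                         # bootstrap standard error of the mean
quantile(boot_means, c(0.025, 0.975))  # percentile 95% interval
```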
| null | CC BY-SA 3.0 | null | 2011-07-25T14:14:14.990 | 2011-07-25T14:14:14.990 | null | null | 2958 | null |
13461 | 2 | null | 13446 | 5 | null | If the number of observations is less than around 20,000 the trees built by rpart do not have a reliable structure. That is, if you were to use the bootstrap to repeat the process, you will see many different trees that are called 'optimal'.
| null | CC BY-SA 3.0 | null | 2011-07-25T14:34:10.480 | 2011-07-25T14:34:10.480 | null | null | 4253 | null |
13462 | 1 | null | null | 2 | 2214 | For my research I am currently working in R and I have created a function which gives me the log-likelihood. Now, I would like to optimize it for multiple parameters, so I decided to use the optim() function in R.
I have one vector parameter and another parameter which are constrained to have values between 0 and 1, since they are proportions, so I used the option method="L-BFGS-B".
Applying this I get results, but one of my restricted parameters is estimated as exactly 1.0000. If I do not constrain it, it gives a sensible answer, but the standard error is large (still smaller than under the constrained fit). Is there a way to keep the restriction while avoiding the estimate ending up at exactly 1.000?
Here is a piece of my code.
```
Ps<-1-c(0.02083333, 0.03067485, 0.11290323, 0.13684211, 0.23629490)
loglik.negbin.perageclass<-function(param.qphi){
#q.param<-rep(param.qphi[1],5)
#phi.param<-rep(param.qphi[2],5)
#size.param<-param.qphi[3]
q.param<-param.qphi[1:5]
phi.param<-rep(param.qphi[6],5)
size.param<-param.qphi[7]
shape.param<-param.qphi[8]
scale.param<-param.qphi[9]
#I_days[1,]<-param.qphi[8:12]
delta.touse<-dgamma(0:4,shape.param,scale.param)/sum(dgamma(0:4,shape.param,scale.param))
delta<-function(t_current){
diff<-rep(0,t_current)
res<-matrix(0,nrow=t_current,ncol=5)
for(k in 1:t_current){
diff[k]=t_current-k
if(diff[k]==0){
res[k,]<-delta.touse[1]
}
else if(diff[k]==1){
res[k,]<-delta.touse[2]
}
else if(diff[k]==2){
res[k,]<-delta.touse[3]
}
else if(diff[k]==3){
res[k,]<-delta.touse[4]
}
else if(diff[k]==4){
res[k,]<-delta.touse[5]
}
}
return(res)
}
week<-1
for(k in 2:35){
I_a_days[k,]<-phi.param*(q.param*C_a[,])%*%(apply(delta(k-1)*Ps*N*I_days[1:k-1,],2,sum))
I_s_days[k,]<-(1-phi.param)*(q.param*C_s[,])%*%(apply(delta(k-1)*Ps*N*I_days[1:k-1,],2,sum))
I_days[k,]<-I_a_days[k,]+I_s_days[k,]
if(k%in%newweek){
beginningweek<-k-6
for(l in beginningweek:k){
I_week[week,]<-I_week[week,]+I_days[l,]
I_s_week[week,]<-I_s_week[week,]+I_s_days[l,]
I_a_week[week,]<-I_a_week[week,]+I_a_days[l,]
}
week<-week+1
}
}
print(I_week)
# Plotting
plot(GP.data$Weekno[24:29],log(GP.data$X0.4[24:29]),pch=19,lwd=2,main="GP data",ylim=range(0,log(700)),col=1,xlab="week",ylab="number of cases")
points(GP.data$Weekno[24:29],log(GP.data$X5.14[24:29]),pch=19,col=2)
points(GP.data$Weekno[24:29],log(GP.data$X15.44[24:29]),pch=19,col=3)
points(GP.data$Weekno[24:29],log(GP.data$X45.64[24:29]),pch=19,col=4)
points(GP.data$Weekno[24:29],log(GP.data$X65.[24:29]),pch=19,col=5)
for (i in 1:5){lines(c(200925:200929),log(I_week[,i]),col=i,lwd=2)}
for (i in 1:5){lines(c(200925:200929),log(I_week[,i]),col=i,lwd=2)}
# End plotting
we<-5/t(t(GP.matr[25:29,13:17])/N)/apply(1/t(t(GP.matr[25:29,13:17])/N),1,sum)
ll<-rep(0,5)
mu.param<-rep(0,5)
for(a in 1:5){
gp.a<-a+4
mu.param<-I_week[,a]
#print(c("debug",N,N[a]*GP.matr[25:29,gp.a]))
ll[a]<-sum(we[,a],log(1E-7+dnbinom(GP.matr[25:29,gp.a],mu=mu.param,size=size.param)))
}
ll.report<-sum(ll)
return(-ll.report)
}
q_start<-rep(0.05,5)
phi_start<-0.10
size_start<-1.2
shape_start<-2.9
scale_start<-0.5
starting.values<-c(q_start,phi_start,size_start,shape_start,scale_start)
loglik.negbin.perageclass(starting.values)
res<-optim(starting.values,loglik.negbin.perageclass,method="L-BFGS-B",lower=c(0,0,0,0,0),upper=c(1,1,Inf,Inf,Inf),
control=list(trace=TRUE,maxit=500),hessian=T)
```
Does anyone have any idea whether there is a solution for this?
Thank you very much!
Kim
| Box-constrained optimization using optim() in R | CC BY-SA 3.0 | 0 | 2011-07-25T14:59:32.373 | 2011-07-25T15:37:55.460 | 2011-07-25T15:37:55.460 | null | 5533 | [
"r",
"optimization"
] |
13463 | 2 | null | 13458 | 7 | null | This sounds like a hierarchical model (or multi-level model), but not explained in those terms. Firstly, note that a Poisson "residual" is not really the same thing as you would find in the case of a normal error term. The reason is that a Poisson random variable cannot be separated out easily into a "systematic part" and "random part". One standard way is to use the "deviance residuals", defined by:
$$d_{i}=sgn(Y_{i}-\hat{Y}_{i})\sqrt{2Y_{i}\log\left(\frac{Y_{i}}{\hat{Y}_{i}}\right)-2(Y_{i}-\hat{Y}_{i})}$$
These quantities are analogues to the residuals in OLS regression, and can be easily obtained from `residuals(glm.object, "deviance")` function in R. You can show that the sum of squared deviance residuals has an approximate chi-square distribution, but I wouldn't know how to figure out the exact distribution.
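A small simulated illustration (not the original data) of extracting them, together with the defining identity that the squared deviance residuals sum to the model deviance:

```
# Simulated Poisson GLM; deviance residuals via residuals(..., "deviance")
set.seed(42)
x <- runif(100)
y <- rpois(100, lambda = exp(1 + 2 * x))
fit <- glm(y ~ x, family = poisson)

d <- residuals(fit, type = "deviance")
all.equal(sum(d^2), deviance(fit))   # TRUE: sum of squared deviance residuals
```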
An alternative is to add further structure to the Poisson GLM by introducing "random effects" into the model equation. For the Poisson model with canonical log-link function we have $\log(\mu_i)=x_{i}^{T}\beta$. In your case $x_{i}^{T}$ is a row of a "design matrix" (all zeros and ones) which picks out the relevant beta coefficient for your site effect and year effect. This addition of random effects basically says that the above relationship only holds "on the average" rather than deterministically (can think of this as a "softer" link function). So that we actually have $E\left[\log(\mu_i)\right]=x_{i}^{T}\beta$. The "actual link" is given by:
$$\log(\mu_i)=x_{i}^{T}\beta+z_{i}^{T}u_{i}$$
Where $z_{i}^{T}$ is a known set of covariates, and $u_i$ is a random effect for the $i$th observation with $E[u_{i}]=0$. Now if you were to set $z_{i}^{T}=1$ then you have a good definition of the residuals, given by the random effects $u_i$. All that is needed to do now is to assign a probability distribution to the random effects and do the maths/coding work to either give point estimates or posterior probability distributions for $u_{i}$ to use in your subsequent model. These are often called Generalised linear mixed models, and $u_i$ is often chosen to have a normal distribution.
The nlme package should be able to do this kind of modelling.
| null | CC BY-SA 3.0 | null | 2011-07-25T15:20:46.183 | 2011-07-25T15:20:46.183 | null | null | 2392 | null |
13464 | 2 | null | 13456 | 2 | null | Simulating new data with the same mean/sd as your data and doing analysis on that is effectively the same as arbitrarily increasing your sample size while keeping your point estimates the same, thus increasing power, all while ignoring the fact that the mean/sd you've matched was based on only 25 observations - not a very diligent practice.
First off, if your data is normally distributed, then the $p$-values from ANOVA/regression are valid, since the sampling distribution of your estimates will still be normal, regardless of the sample size. More generally, you could consider non-parametric bootstrap resampling to get confidence intervals:
[http://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29](http://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29)
Essentially, you would re-sample from your data with and re-estimate a parameter (e.g. regression coefficients) repeatedly (say 1000 times) and treat these parameter estimates as draws from the sampling distribution of $\hat{\beta}$. You can then use the empirical quantiles (e.g. using the 2.5th and 97.5th percentiles) of this sample to construct confidence intervals.
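A minimal sketch of that case-resampling bootstrap for a slope coefficient, with a simulated stand-in for the data:

```
# Case-resampling bootstrap for a regression slope (simulated data)
set.seed(7)
n <- 25
dat <- data.frame(x = runif(n))
dat$y <- 1 + 2 * dat$x + rexp(n) - 1   # deliberately non-normal errors

boot_slope <- replicate(1000, {
  idx <- sample(n, replace = TRUE)               # resample rows with replacement
  coef(lm(y ~ x, data = dat[idx, ]))["x"]        # re-estimate the slope
})

quantile(boot_slope, c(0.025, 0.975))  # percentile 95% CI for the slope
```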
As a side note, if the effects are large, and you can make a rational story from them you shouldn't over-emphasize the ability to formally "prove" your claims statistically just because your data is non-normal or your sample size is insufficient to invoke the central limit theorem.
| null | CC BY-SA 3.0 | null | 2011-07-25T15:27:49.603 | 2011-07-25T15:27:49.603 | null | null | 4856 | null |
13465 | 1 | 22644 | null | 24 | 123183 | When doing a GLM and you get the "not defined because of singularities" error in the anova output, how does one counteract this error from happening?
Some have suggested that it is due to collinearity between covariates or that one of the levels is not present in the dataset (see: [interpreting "not defined because of singularities" in lm](http://r.789695.n4.nabble.com/interpreting-quot-not-defined-because-of-singularities-quot-in-lm-td882827.html))
If I wanted to see which "particular treatment" is driving the model and I have 4 levels of treatment: `Treat 1`, `Treat 2`, `Treat 3` & `Treat 4`, which are recorded in my spreadsheet as: when `Treat 1` is 1 the rest are zero, when `Treat 2` is 1 the rest are zero, etc., what would I have to do?
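For illustration, here is a simulated reproduction of the problem: feeding all four 0/1 treatment columns to `lm()` together with an intercept makes the columns linearly dependent, so one coefficient comes back `NA` ("not defined because of singularities"); letting R code the factor itself (dropping one level as baseline) avoids it:

```
# Simulated reproduction of the aliasing/singularity problem
set.seed(1)
treat <- gl(4, 10, labels = paste0("Treat", 1:4))   # 4 treatments x 10 obs
y <- rnorm(40, mean = as.numeric(treat))

d <- model.matrix(~ treat - 1)       # four 0/1 indicator columns
colnames(d) <- levels(treat)
fit_all <- lm(y ~ d)                 # intercept + all 4 dummies: rank deficient
coef(fit_all)                        # one dummy coefficient is NA (aliased)

coef(lm(y ~ treat))                  # factor coding: one level dropped, no NA
```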
| How to deal with an error such as "Coefficients: 14 not defined because of singularities" in R? | CC BY-SA 3.0 | null | 2011-07-25T16:21:17.073 | 2020-10-15T13:44:20.100 | 2011-07-25T22:02:41.133 | 930 | 5397 | [
"r",
"generalized-linear-model",
"regression-coefficients"
] |
13466 | 1 | 13467 | null | 3 | 117 | Are there R packages that offer eigenvalue de-noising methods grounded in Random Matrix Theory? Various cleansing methods include the Power Law and Krazanowski filter.
| Eigenvalue cleansing methods in R? | CC BY-SA 3.0 | null | 2011-07-25T16:23:25.453 | 2011-07-25T16:38:54.890 | 2011-07-25T16:38:54.890 | 8101 | 8101 | [
"r",
"covariance-matrix"
] |
13467 | 2 | null | 13466 | 1 | null | The [tawny](http://cran.r-project.org/web/packages/tawny/tawny.pdf) package in R comes to mind, but are there others? Rmetrics hints at a de-noising procedure using RMT but they do not provide documentation on such a function.
| null | CC BY-SA 3.0 | null | 2011-07-25T16:26:21.773 | 2011-07-25T16:26:21.773 | null | null | 8101 | null |
13468 | 1 | null | null | 7 | 4761 | I have collected 70 organisms from 4 different sites; two sites of treatment 1 and two sites of treatment 2. I also have a continuous explanatory variable (average temperature) which is different for each site. How can I test if measures of richness and diversity differ between sites or by temperature?
Richness is the number of species in each sample
Diversity is a weighted average of the proportions of each species present. In this case we are using the Shannon Index which is:
$^1\!D= \exp\left(-\sum_{i=1}^R p_i \ln p_i\right)$
where $p_i$ is the proportion of each species $i$ at that site, and $R$ is the richness of the site.
What sort of model can I use when diversity and richness are my response variables?
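For reference, the two response variables can be computed per site like this (the counts below are made up; with perfectly even species the index equals the richness):

```
# Richness and exp(Shannon entropy) for one site (made-up counts)
counts <- c(10, 10, 10, 10, 10)      # five equally abundant species
p <- counts / sum(counts)

richness <- sum(counts > 0)          # number of species present: 5
D1 <- exp(-sum(p * log(p)))          # exp of Shannon entropy: 5 when even
```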
---
This question had been abandoned by the OP without giving enough info to answer it properly; the above is an attempt to provide an answerable question in the spirit of what was asked.
Original Question:
>
I have collected ca. 70 species of organism from 4 sites. 2 sites of
Treatment 1 and 2 sites of Treatment 2. How do I test using R, whether
the richness, dominance, abundance and diversity is different between
the two if I have 6 explanatory variables?
| Species Richness, Dominance and Diversity Differences | CC BY-SA 3.0 | null | 2011-07-25T16:28:34.833 | 2022-06-04T02:29:04.037 | 2022-06-04T02:29:04.037 | 11887 | 5397 | [
"statistical-significance",
"generalized-linear-model",
"entropy",
"ecology",
"diversity"
] |
13469 | 1 | null | null | 5 | 2610 | Tools such as random forests or adaboost are powerful at solving cross-sectional binary logistic problems or prediction problems where there are many weak learners. But can these tools be adapted to solve panel regression problems?
One could naively introduce a time index as an independent variable, but all this does is provide an additional degree of freedom to the fitting algorithm. What we would like is a solution that allows information from period T-1 to have bearing on period T.
If there is not a straightforward way to do this using these algorithms, is there an alternative algorithm that can perform a panel regression making use of the information in both the cross-section and time-series?
| Random Forests / adaboost in panel regression setting | CC BY-SA 3.0 | null | 2011-07-25T16:35:55.233 | 2011-07-25T16:35:55.233 | null | null | 8101 | [
"r",
"time-series",
"panel-data",
"random-forest",
"cross-section"
] |
13470 | 1 | 13486 | null | 1 | 1738 | Sorry to keep bothering you guys with this thing, but another stupid question: Given the following data (from my previous [question](https://stats.stackexchange.com/questions/13352/how-do-you-deal-with-a-multiple-choice-observation-in-bayesian-inference-when/13363#13363)), how would one calculate the conditional probability and marginal probability (for Bayesian inference) of both the positive hypothesis and the negative hypothesis for, say, choice #3?
Suppose I have a questionnaire and I ask respondents how often they eat at McDonalds:
- Never
- Less than once a month
- At least once a month but less than once a week
- 1-3 times a week
- More than 3 times a week
I then correlate these answers with whether the respondents are wearing brown shoes.
- Brown 65 -- not brown 38
- Brown 32 -- not brown 62
- Brown 17 -- not brown 53
- Brown 10 -- not brown 48
- Brown 9 -- not brown 6
Effectively, "brown" is the hypothesis, the "brown" counts are "true positives" and the "not-brown" counts are "false positives".
When I look at this straight-on it seems relatively simple -- Brown total = 133, not brown total = 207, overall total 340 (if I did my math right)[which I didn't the first time]. So the conditional probability of brown for #3 is 17/133, and the marginal probability is (17+53)/340.
For the negative hypothesis it would seem that you can simply turn the statistics on their head and treat "not brown" as the hypothesis, so the "not brown" counts are "true positives" and the "brown" counts are "false positives". Then the conditional probability of "not brown" (the negative hypothesis) for #3 is 53/207, and the marginal probability of "not brown" is still (17+53)/340.
The thing that confuses me is if, due to a previous questionnaire response that is not independent of this one, the marginal probability for #3 must be increased by some amount. One would assume that this would reinforce the true hypothesis and weaken the false hypothesis (or vice-versa), but if the same marginal probability is used for both cases then both hypotheses are affected in the exact same fashion.
Once again, this makes my head hurt.
Am I calculating the probabilities wrong? Am I wrong in believing that both hypotheses shouldn't be affected in the same direction by a change in marginal probability due to interdependence between this question and a prior one?
## Thanks, Greg --
After thinking about it I realize that my mistake was in assuming that (aside from the obvious effect on prior probability) an effect on the marginal probability was the ONLY effect that the prior application of an inter-dependent observation had on the "presumed independent" statistics of a subsequent observation. It does affect the marginal probability, but, more importantly, moves the conceptual threshold between alternative observations. Eg, the dependent "more than 3 times a week" category might come to correspond to a "more than 2 times a week" grouping of the independent responses.
This isn't as clean an answer as I was looking for, but it does help me understand how to most appropriately perform ad-hoc "artistic" compensation for inter-dependent observations.
| How to calculate conditional & marginal probability for both the positive and negative hypotheses? | CC BY-SA 3.0 | null | 2011-07-25T16:50:45.393 | 2011-07-26T16:20:43.603 | 2020-06-11T14:32:37.003 | -1 | 5392 | [
"bayesian",
"conditional-probability"
] |
13471 | 1 | 13472 | null | 11 | 15409 | I have used `rpart.control` with `minsplit=2`, and got the following results from the `rpart()` function. In order to avoid overfitting the data, should I use the tree with 3 splits or the one with 7 splits? Shouldn't I use 7 splits? Please let me know.
Variables actually used in tree construction:
```
[1] ct_a ct_b usr_a
Root node error: 23205/60 = 386.75
n= 60
CP nsplit rel error xerror xstd
1 0.615208 0 1.000000 1.05013 0.189409
2 0.181446 1 0.384792 0.54650 0.084423
3 0.044878 2 0.203346 0.31439 0.063681
4 0.027653 3 0.158468 0.27281 0.060605
5 0.025035 4 0.130815 0.30120 0.058992
6 0.022685 5 0.105780 0.29649 0.059138
7 0.013603 6 0.083095 0.21761 0.045295
8 0.010607 7 0.069492 0.21076 0.042196
9 0.010000 8 0.058885 0.21076 0.042196
```
| How to choose the number of splits in rpart()? | CC BY-SA 3.0 | null | 2011-07-25T16:56:44.123 | 2011-07-25T21:52:01.743 | 2011-07-25T21:52:01.743 | 930 | 25133 | [
"r",
"cart",
"rpart"
] |
13472 | 2 | null | 13471 | 10 | null | The convention is to use the best tree (lowest cross-validate relative error) or the smallest (simplest) tree within one standard error of the best tree. The best tree is in row 8 (7 splits), but the tree in row 7 (6 splits) does effectively the same job (`xerror` for tree in row 7 = 0.21761, which is within (smaller than) the `xerror` of best tree plus one standard error, `xstd`, (0.21076 + 0.042196) = 0.252956) and is simpler, hence the 1 standard error rule would select it.
| null | CC BY-SA 3.0 | null | 2011-07-25T17:06:15.350 | 2011-07-25T17:06:15.350 | null | null | 1390 | null |
13473 | 2 | null | 13456 | 3 | null | The first thing you need to do is determine (or at least think about) whether the poor return rate skewed your data. Ie, is there a statistical difference between the "average" person you attempted to survey (who was presumably representative of the population you wanted to study) and the "average" person who chose to respond?
You don't want to end up [electing Alf Landon](http://historymatters.gmu.edu/d/5168/).
| null | CC BY-SA 3.0 | null | 2011-07-25T17:46:54.347 | 2011-07-26T13:29:11.893 | 2011-07-26T13:29:11.893 | 919 | 5392 | null |
13474 | 1 | 13665 | null | 3 | 553 | Note: I'm not even sure how to best title this question, so if anyone has any ideas, please edit!
Consider six independent draws from a distribution defined by a cdf $F(x)$ over 0-1. Let’s call them $X1, X2 . . . X6$.
I want to define the distribution for X1+X3, if I know the following information:
- $X1 +X4 < X2 + X6$
- $X3 < X4$
- $X5 < X6$
It may help to think of the draws as being arranged in a tree structure. The image below is my best attempt to illustrate what we know (first row) and then what distribution I'm looking for (2nd row).
Note that I don't care specifically if $X3$ is the 3rd variable drawn. More accurately I might say that, given 2 draws from $F(x)$ for the first level of the tree, and then given
4 draws for the second level of the tree, label the draws $X1$ to $X6$ such that the conditions specified are true.

This question is similar to [this question](https://stats.stackexchange.com/questions/13259/what-is-the-distribution-of-maximum-of-a-pair-of-iid-draws-where-the-minimum-is), except that sums take away independence. I am specifically interested in F(x) = x (uniform distribution) but an answer to the general case would be very interesting.
| How to define the distribution of a convolution when there is some partial order statistic information? | CC BY-SA 3.0 | null | 2011-07-25T18:05:42.707 | 2011-07-31T08:52:37.623 | 2017-04-13T12:44:21.160 | -1 | 5471 | [
"conditional-probability",
"order-statistics"
] |
13477 | 1 | null | null | 2 | 257 | Let's say I have 100 parts. I sample one part and find it defective. I then sample another five parts and do not find them defective. What statistical procedure would I use that combines this information to provide a confidence interval or failure rate?
| How to test component failure in a general way? | CC BY-SA 3.0 | null | 2011-07-25T21:01:28.370 | 2017-11-03T13:52:02.260 | 2017-11-03T13:52:02.260 | 101426 | 5537 | [
"hypothesis-testing",
"binary-data"
] |
13478 | 1 | 14087 | null | 5 | 1969 | Using R, I have developed three models:
- linear regression using lm();
- decision tree using rpart();
- k-nearest neighbor using kknn().
I would like to conduct leave-one-out cross-validation tests and compare these models. However, which error metric should I use for better representation? Does mean absolute percentage error (MAPE) or sMAPE (symmetric MAPE) look fine? Please suggest a metric.
For example, when I conducted leave-one-out CV tests on linear regression (LR) and decision tree (DT) models, the sMAPE error values were 0.16 and 0.20, while the R-squared values of LR and DT were 0.85 and 0.92 respectively. Here sMAPE is computed as `[sum (abs(predicted - actual)/((predicted + actual)/2))] / (number of data points)`. DT is a pruned regression tree. The R^2 values are computed on the full data set. There are a total of 60 data points in the set.
```
Model   R^2    sMAPE
LR      0.85   0.16
DT      0.92   0.20
```
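The sMAPE defined above, written out as a small R function (the actual/predicted values in the check are toy numbers):

```
# sMAPE as defined in the question
smape <- function(actual, predicted) {
  mean(abs(predicted - actual) / ((predicted + actual) / 2))
}

smape(c(10, 20, 30), c(12, 18, 33))   # toy check
```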
| Metric to compare models? | CC BY-SA 3.0 | null | 2011-07-25T21:20:49.893 | 2016-04-17T12:04:11.863 | 2016-04-17T12:04:11.863 | 1352 | 25133 | [
"r",
"metric",
"mape"
] |
13480 | 2 | null | 13427 | 4 | null | The loess.demo function in the TeachingDemos package replicates some of the internals in plain R (it also uses the built in C code version). You could use that function as a starting place depending on what you want to do.
| null | CC BY-SA 3.0 | null | 2011-07-25T23:01:18.317 | 2011-07-25T23:01:18.317 | null | null | 4505 | null |
13481 | 1 | null | null | 20 | 5029 | Can anybody point me to a survey paper on "Large $p$, Small $n$" results? I am interested in how this problem manifests itself in different research contexts, e.g. regression, classification, Hotelling's test, etc.
| Summary of "Large p, Small n" results | CC BY-SA 3.0 | null | 2011-07-25T23:17:00.903 | 2018-07-27T20:05:32.860 | null | null | 795 | [
"regression",
"classification",
"multivariate-analysis"
] |
13482 | 2 | null | 13477 | 3 | null | Let's assume you choose the parts at random, and that you are sampling without replacement so all 6 parts are different.
So you now know that 1 out of 6 parts sampled was defective. You can be sure that between 1 and 95 of your 100 parts are defective; you probably suspect that the actual number is closer to the bottom end.
To get a better estimate you can use Bayesian methods, with a prior belief about the number of defective parts. Perhaps you might have started by thinking that any number from 0 through to 100 are defective and each number was equally likely. If the number actually defective is $N$ out of 100, then the likelihood of 1 defective in 6 tested is proportional to ${N \choose 1}{100-N \choose 5}$ and so the posterior probability is this divided by its sum over $N$, which is ${101 \choose 7}$. This is a distribution for $N$, with a mode at $N=16$, a median at $N=22$, a mean of $24.5$, a variance of $199.75$, a standard deviation about $14.13$, and a Bayesian 95% credible interval of something like $[2,52]$.
To tighten this range you need to test more, or to have started with a tighter prior distribution for your beliefs about the number of defectives.
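These summaries are easy to check numerically:

```
# Posterior over N: uniform prior, 1 defective seen in 6 sampled from 100
N <- 0:100
like <- choose(N, 1) * choose(100 - N, 5)
post <- like / sum(like)              # sum(like) equals choose(101, 7)

N[which.max(post)]                    # posterior mode
sum(N * post)                         # posterior mean: 24.5
min(N[cumsum(post) >= 0.5])           # posterior median
sqrt(sum(N^2 * post) - sum(N * post)^2)  # posterior standard deviation
```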
| null | CC BY-SA 3.0 | null | 2011-07-25T23:37:53.883 | 2011-07-25T23:37:53.883 | null | null | 2958 | null |
13483 | 2 | null | 13481 | 12 | null | I don't know of a single paper, but I think the current book with the best survey of methods applicable to $p\gg n$ is still Friedman-Hastie-Tibshirani. It is very partial to shrinkage and lasso (I know from a common acquaintance that Vapnik was upset at the first edition of the book), but covers almost all common shrinkage methods and shows their connection to Boosting. Talking of Boosting, the [survey of Buhlmann & Hothorn](ftp://ftp.stat.math.ethz.ch/Research-Reports/Other-Manuscripts/buhlmann/STS242.pdf) also shows the connection to shrinkage.
My impression is that, while classification and regression can be analyzed using the same theoretical framework, testing for high-dimensional data is different, since it's not used in conjunction with model selection procedures, but rather focuses on family-wise error rates. Not so sure about the best surveys there. Brad Efron has a ton of papers/surveys/book on [his page](http://stat.stanford.edu/~ckirby/brad/papers/). Read them all and let me know the one I should really read...
| null | CC BY-SA 3.0 | null | 2011-07-26T00:35:08.230 | 2011-07-26T00:35:08.230 | null | null | 30 | null |
13485 | 1 | 163909 | null | 6 | 14647 | Using R's `plot()` and `plotcp()` methods, we can visualize a linear regression model (`lm`) as an equation and a decision tree model (`rpart`) as a tree. We can develop a k-nearest neighbour model using the R `kknn()` method, but I don't know how to present this model. Please suggest some R methods that produce nice graphs for kNN model visualization.
| Visualizing k-nearest neighbour? | CC BY-SA 3.0 | null | 2011-07-26T04:27:33.463 | 2015-07-30T10:56:01.163 | 2015-07-30T09:25:10.427 | 53094 | 25133 | [
"r",
"data-visualization",
"k-nearest-neighbour"
] |
13486 | 2 | null | 13470 | 1 | null | When I learned about Bayes' rule, the metaphor I was given would have me call shoe-color a "test result" and McD's consumption the "underlying condition".
The joint probabilities we would lay out like this:
```
        Brown            Other            Marginal
1       a                b                (a+b)/S = P(1)
2       c                d                (c+d)/S = P(2)
3       e                f                etc.
4       g                h                etc.
5       i                j                etc.

Marg    P(Brw) =         P(Othr) =        S = grand total count
        (a+c+e+g+i)/S    (b+d+f+h+j)/S
```
This way it's kind of easy to see what the joint and conditional probabilities are.
```
P(Brown & 1) = a/S
P(Brown | 1) = P(B&1)/P(1) = (a/S) / P(1) = a/(a+b)
P(1 | Brown) = P(B&1)/P(B) = (a/S) / P(B) = a/(a+c+e+g+i)
```
NB: S in your case is 340, not 240.
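Plugging the actual counts from the question into these formulas:

```
# Counts from the question: rows are responses 1-5
brown     <- c(65, 32, 17, 10, 9)
not_brown <- c(38, 62, 53, 48, 6)
S <- sum(brown) + sum(not_brown)          # 340

(brown[3] + not_brown[3]) / S             # marginal P(3) = 70/340
brown[3] / sum(brown)                     # P(3 | Brown) = 17/133
brown[3] / (brown[3] + not_brown[3])      # P(Brown | 3) = 17/70
not_brown[3] / sum(not_brown)             # P(3 | not Brown) = 53/207
```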
I'm having trouble answering your question though because it's not clear to me where this extra non-independent questionnaire comes in. I don't think it's quite right to think of this other data as 'changing the marginal probability of 3'. Instead, maybe you want to formulate the question in terms like:
First: what is the probability of Brown given 3 on the second questionnaire (marginalizing over all responses to the previous questionnaire)?
Second: what is the probability of Brown given 3 on the first questionnaire and 3 on the second questionnaire?
To get at those you would need to give your table another dimension, e.g.:
```
     (:,:,Q1=1)       (:,:,Q1=2)    ...  (:,:,Q1=5)
Q2   Brown  Other     Brown  Other       Brown  Other
1    a      b         g      h           m      n
2    c      d         i      j           o      p
...  ...              ...                ...
5    e      f         k      l           q      r
```
I think if you get the marginals right and pick some actual numbers to run your example, and make the comparisons more formal, it may make more sense. Hope this is somewhat helpful.
| null | CC BY-SA 3.0 | null | 2011-07-26T04:42:21.427 | 2011-07-26T04:42:21.427 | null | null | 5539 | null |
13487 | 1 | 13489 | null | 9 | 12050 | I was playing around with ggplot2 using the following commands to fit a line to my data:
```
ggplot(data=datNorm, aes(x=Num, y=Val)) + geom_point() +
stat_summary(fun.data = "mean_cl_boot", geom="errorbar", colour="red", width=0.8) +
stat_sum_single(median) +
stat_sum_single(mean, colour="blue") +
geom_smooth(level = 0.95, aes(group=1), method="lm")
```
The red dots are median values, blue are the means, and the vertical red lines show the error bars. As a final step, I used `geom_smooth` to fit a line using linear smoothing, so I used `method="lm"`. Along with the line, a dull shade was generated around it. I figured out from the documentation how to remove it; the option I used to turn it off is:
```
se: display confidence interval around smooth?
```
Can someone please tell me what I am supposed to understand from the shade around the line? Specifically, I am trying to understand how to interpret it. It is perhaps some goodness-of-fit measure for the line, but any extra information would be very useful to me. Any suggestions?

| What does this blur around the line mean in this graph? | CC BY-SA 3.0 | null | 2011-07-26T05:10:40.910 | 2011-07-26T07:50:31.570 | null | null | 2164 | [
"r",
"modeling",
"dataset",
"data-mining",
"ggplot2"
] |
13489 | 2 | null | 13487 | 7 | null | I suspect it means very little in your actual figure; you have drawn a form of stripplot/chart. But as we don't have the data or a reproducible example, I will just describe what these lines/regions show in general.
In general, the line is the fitted linear model describing the relationship $$\widehat{\mathrm{val}} = \beta_0 + \beta_1 \mathrm{Num}$$ The shaded band is a pointwise 95% confidence interval on the fitted values (the line). This confidence interval contains the true population regression line with 0.95 probability. Or, in other words, there is 95% confidence that the true regression line lies within the shaded region. It shows us the uncertainty inherent in our estimate of the true relationship between your response and the predictor variable.
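The same pointwise band can be computed directly with `predict()`; a sketch on simulated data (a stand-in for `datNorm`):

```
# Pointwise 95% confidence band for a fitted line (simulated stand-in data)
set.seed(3)
d <- data.frame(Num = 1:30)
d$Val <- 2 + 0.5 * d$Num + rnorm(30)
fit <- lm(Val ~ Num, data = d)

ci <- predict(fit, interval = "confidence", level = 0.95)
head(ci)   # columns: fit (the line), lwr and upr (the band edges)
```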
| null | CC BY-SA 3.0 | null | 2011-07-26T07:44:19.050 | 2011-07-26T07:50:31.570 | 2011-07-26T07:50:31.570 | 1390 | 1390 | null |
13490 | 2 | null | 13485 | 3 | null | kNN is just a simple interpolation of the feature space, so visualizing it is in fact equivalent to just drawing the training set in some more or less fancy manner, and unless the problem is simple this will be rather hard to decipher.
You may do this by computing the distances between training objects the way you did in kknn, then use `cmdscale` to project this onto 2D, and finally plot directly or with some smoothed scatterplot using colours to show classes or values (the smoothed regression version would probably require some hacking with hue and intensity). However, as I wrote, this would probably be a totally useless plot.
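A minimal sketch of that `cmdscale` idea, using the built-in iris data as a stand-in for a real training set:

```
# Classical MDS of feature-space distances, coloured by class
d2 <- cmdscale(dist(scale(iris[, 1:4])), k = 2)   # 150 x 2 coordinates
plot(d2, col = iris$Species, pch = 19,
     xlab = "MDS 1", ylab = "MDS 2")
```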
| null | CC BY-SA 3.0 | null | 2011-07-26T07:54:30.830 | 2011-07-26T07:54:30.830 | null | null | null | null |