Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8685 | 2 | null | 7897 | 3 | null | Unless identification of the most relevant variables is a key aim of the analysis, it is often better not to do any feature selection at all and instead use regularisation to prevent over-fitting. Feature selection is a tricky procedure, and it is all too easy to over-fit the feature selection criterion because there are so many degrees of freedom. The LASSO and elastic net are a good compromise: they achieve sparsity via regularisation rather than via direct feature selection, so they are less prone to that particular form of over-fitting.
| null | CC BY-SA 2.5 | null | 2011-03-23T18:01:20.987 | 2011-03-23T18:01:20.987 | null | null | 887 | null |
8687 | 2 | null | 8677 | 7 | null | Do you know the [qgraph](http://sites.google.com/site/qgraphproject/) project (and the related [R package](http://cran.r-project.org/web/packages/qgraph/index.html))? It aims at providing various displays for psychometric models, especially those relying on correlations. I discovered this approach for displaying correlation measures when I was reading a very nice and revolutionary article on diagnostic medicine by Denny Borsboom and coll.: [Comorbidity: A network perspective](http://sites.google.com/site/borsboomdenny/CramerEtAl2010.pdf), BBS (2010) 33: 137-193.
An oversimplified summary of their network approach to comorbidity is that comorbidity is “hypothesized to arise from direct relations between symptoms of multiple disorders”, contrary to the more classical view where it is the comorbid disorders themselves that cause their associated symptoms to correlate (as reflected in a latent variable model, like factor or item response models, where a given symptom would be used to measure a particular disorder). In fact, symptoms are part of a disorder, but they don’t measure it (this is a mereological relationship). Their figure 5 describes such a "comorbidity network" and is particularly interesting as it embeds the frequency of symptoms and the magnitude of their bivariate associations in the same picture. They were using [Cytoscape](http://www.cytoscape.org/) at that time, but the qgraph project has now reached a mature state.
Here are some examples from the on-line R help; basically, these are (1) an association graph with circular or (2) spring layout, (3) a concentration graph with spring layout, and (4) a factorial graph with spring layout (but see `help(qgraph.panel)`):

(See also `help(qgraph.pca)` for nice circular displays of an observed correlation matrix for the NEO-FFI, which is a 60-item personality inventory.)
| null | CC BY-SA 2.5 | null | 2011-03-23T18:32:52.837 | 2011-03-23T18:32:52.837 | null | null | 930 | null |
8689 | 1 | 8713 | null | 19 | 12650 | In R, if I write
```
lm(a ~ b + c + b*c)
```
would this still be a linear regression?
How do I do other kinds of regression in R? I would appreciate any recommendations for textbooks or tutorials.
| What does linear stand for in linear regression? | CC BY-SA 2.5 | null | 2011-03-23T19:48:25.340 | 2021-04-15T23:35:10.463 | 2021-01-11T22:33:33.700 | 11887 | 3870 | [
"r",
"regression",
"interaction",
"intuition"
] |
8690 | 1 | 8760 | null | 7 | 392 | Suppose I wanted to fit a model of the form
$$y_i = \beta_0 + \sum_{1 \le j \le k} \beta_j X_{i,j} + \gamma_i Z_i + \epsilon_i,$$
to some data, where the regressors $X$ and $Z$, and the regressand $y$ are observed, and where $\gamma_i$ is a Bernoulli random variable that equals one with (unknown) probability $p$ and is zero otherwise. We can assume all kinds of 'regularity': the errors $\epsilon_i$ are independent of the regressors, and are independent of $\gamma_i$, etc.
Some questions:
- What is the name for this, if there is one?
- If I just throw the $Z$ data in with the $X$ data and perform an ordinary multiple least squares, will the least squares coefficient corresponding to the $Z$ term converge to $p$ as I add more observations?
- If least squares regression is advisable for this model, what is the distribution of the least squares coefficient corresponding to the $Z$ term under the null hypothesis $p = p_0$? (For a 'deterministic' regression, the coefficient has a certain t-distribution, with parameters depending on $p_0, n, k$ and the design matrix; I am looking for the analogue for the random coefficient.)
- If least squares regression is advisable, how will the presence of the random coefficient affect the distribution of the other (sample) regression coefficients?
| Linear model with random coefficient | CC BY-SA 2.5 | null | 2011-03-23T20:11:04.610 | 2011-03-28T12:08:49.257 | 2011-03-25T10:59:58.783 | null | 795 | [
"regression"
] |
8691 | 1 | null | null | 7 | 475 | I'm comparing scores for two small groups of individuals that competed in a tournament, and I'm being told that the comparison calls for a Mann-Whitney U test. It feels wrong to me, though: my two sets of scores are fundamentally interdependent because the two groups competed against one another.
Briefly: I have two groups, a control group A (n=10 men) and an experimental group B (n=12 men). Group B received a treatment, and then the members of group A and group B were pitted against each other in a tournament. I'm interested in the degree to which A's vs. B's were successful at beating the opposite-group men they competed against.
In each day of tournament play, two A's and two B's competed. Each game was every man for himself, competing for points (the task is irrelevant, I think). On a single day, there were always 4 competitors - 2 A's and 2 B's. But if any of those individuals reached criterion that day (scored a certain number of points, essentially), he got pulled and replaced the next day with a new player from the same home group (A or B). An individual could reach criterion in up to 7 days (if you went 7 days without reaching criterion you were considered to have lost, and pulled from play). This meant that an individual (say, an A) that played and won in a single day faced only three other competitors - 1 A and 2 B's. But an A that did poorly and stayed for 7 days could face many more players - a bunch of A's and B's - as better competitors cycled through.
So I've given each of the men a rank score that reflects the percentage of opposite-group men that he beat. Let's say an A named Joe competed for two days and faced 1 other A and 3 B's, and he came in second, behind the one other A in the group but above the three B's. His score would be 1.0. If Joe had had a harder time and had taken more days to reach criterion, he would probably have faced more competitors overall, but his score would still have been 1.0 if he had outcompeted all of the B's that he met. The score attempts to measure players' effectiveness at beating opposite-group men, and to allow comparison across men that faced different numbers of competitors.
So the rank scores for the two groups look like this:
A: 1, 1, 1, 0.833333, 0.75, 0.833333, 0.5, 0.333333, 0.333333, 0.5, 0.333333, 0.125
B: 1, 0.5, 0.666667, 0.5, 0.333333, 0.5, 0.5, 0.2, 0.166667, 0
And my question is: is there a more valid way to see if there's a difference between the groups than a Mann-Whitney?
| Statistical analysis of competition data | CC BY-SA 2.5 | null | 2011-03-23T20:14:56.090 | 2011-03-24T10:34:54.860 | null | null | null | [
"hypothesis-testing",
"nonparametric"
] |
8692 | 1 | 8697 | null | 4 | 12695 | I want to transpose a data frame in R with `unstack`. Consider the two data frames, `a` and `b`:
```
> a
count state
1 199665 RSTO
2 4147 RSTR
3 31274 S1
4 1 S2
5 2522 S3
6 118009 SF
> b
count state
1 31956 RSTO
2 11689 RSTR
3 6702 S1
4 2838 S2
5 6268 S3
6 672561 SF
```
My problem is that unstacking a single one does not work:
```
> formula(a)
count ~ state
> unstack(a)
res
RSTO 199665
RSTR 4147
S1 31274
S2 1
S3 2522
SF 118009
```
However, if I concatenate `a` and `b`, `unstack` works as expected.
```
> unstack(rbind(a,b))
RSTO RSTR S1 S2 S3 SF
1 199665 4147 31274 1 2522 118009
2 31956 11689 6702 2838 6268 672561
```
Why is this happening? Do the groups (i.e., the RHS of the formula) need to repeat for `unstack` to work properly? How can I make `unstack` work with a single data frame?
| Transposing data frames in R via unstack | CC BY-SA 2.5 | null | 2011-03-23T20:16:06.923 | 2017-04-23T09:34:18.440 | null | null | 1537 | [
"r",
"data-transformation"
] |
8693 | 2 | null | 8689 | 4 | null | I would be careful in asking this as an "R linear regression" question versus a "linear regression" question. Formulas in R have rules that you may or may not be aware of. For example:
[http://wiener.math.csi.cuny.edu/st/stRmanual/ModelFormula.html](http://wiener.math.csi.cuny.edu/st/stRmanual/ModelFormula.html)
Assuming you're asking if the following equation is linear:
```
a = coeff0 + (coeff1 * b) + (coeff2 * c) + (coeff3 * (b*c))
```
The answer is yes, if you assemble a new independent variable such as:
```
newv = b * c
```
Substituting the above newv equation into the original equation probably looks like what you're expecting for a linear equation:
```
a = coeff0 + (coeff1 * b) + (coeff2 * c) + (coeff3 * newv)
```
As far as references go, Google "r regression", or whatever you think might work for you.
| null | CC BY-SA 2.5 | null | 2011-03-23T20:23:54.697 | 2011-03-23T21:48:53.517 | 2011-03-23T21:48:53.517 | 2775 | 2775 | null |
8694 | 2 | null | 8691 | 6 | null | Yes. This situation is so complicated and the results are so interdependent that the applicability of almost any standard test has to be called into question.
Why not conduct a [permutation test](http://en.wikipedia.org/wiki/Resampling_%28statistics%29#Permutation_tests)? This is a natural situation for it: the null hypothesis of no treatment effect basically says the labels are meaningless. So, keep all the results but permute the labels, always maintaining a group of 10 "controls" and a group of 12 "treatment" subjects. Compute any ranking or relative score between the groups you deem meaningful for every one of the $\binom{22}{10}$ = 646,646 permutations. (You could randomly sample the permutations to save time, but their number is small enough that this brute-force calculation is easily carried out.) That's the permutation distribution for your statistic under the null hypothesis. To determine the p-value, see where the observed value of the statistic falls on the cumulative distribution.
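(Not part of the original answer: a minimal R sketch of such a permutation test, assuming the rank scores from the question are stored in vectors `a` and `b`; the difference in group means is used as the statistic here, but any other meaningful statistic could be substituted.)
```
# a and b hold the two groups' rank scores from the question
scores <- c(a, b)
n.a    <- length(a)
grp    <- seq_along(scores) <= n.a              # TRUE marks an "A" label

stat <- function(g) mean(scores[g]) - mean(scores[!g])
obs  <- stat(grp)                               # observed statistic

# every one of the choose(22, 10) relabelings (sample them instead if this is too slow)
perm <- combn(length(scores), n.a, function(idx) {
  g <- rep(FALSE, length(scores)); g[idx] <- TRUE
  stat(g)
})

mean(abs(perm) >= abs(obs))                     # two-sided permutation p-value
```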
BTW, if the men were not chosen to compete at random (formally--not arbitrarily--using a random number generator), then one could validly suspect any apparent difference might be due to the sequence in which the men competed. No statistical test can overcome such a deficiency if it is present.
| null | CC BY-SA 2.5 | null | 2011-03-23T20:54:36.260 | 2011-03-23T21:19:11.903 | 2011-03-23T21:19:11.903 | 919 | 919 | null |
8695 | 1 | null | null | 2 | 62377 | I have some issues with an exploratory factor analysis.
Can anybody please tell me how to calculate the Average Variance Extracted (AVE) and the Composite Reliability from two factors, each with three items using SPSS? If not with SPSS, Stata might help too.
| AVE & composite reliability with SPSS | CC BY-SA 3.0 | null | 2011-03-23T20:57:27.787 | 2018-09-28T17:10:54.200 | 2014-01-21T23:07:27.623 | 7290 | null | [
"spss",
"factor-analysis",
"reliability",
"composite"
] |
8696 | 1 | 9956 | null | 10 | 3037 | UPDATE: caret now uses `foreach` internally, so this question is no longer really relevant. If you can register a working parallel backend for `foreach`, caret will use it.
---
I have the [caret](http://caret.r-forge.r-project.org/Classification_and_Regression_Training.html) package for R, and I'm interested in using the `train` function to cross-validate my models. However, I want to speed things up, and it seems that caret provides support for parallel processing. What is the best way to access this feature on a Windows machine? I have the [doSMP](http://cran.r-project.org/web/packages/doSMP/doSMP.pdf) package, but I can't figure out how to translate the `foreach` function into an `lapply` function, so I can pass it to the `train` function.
Here is an example of what I want to do, from the `train` documentation: This is exactly what I want to do, but using the `doSMP` package, rather than the `doMPI` package.
```
## A function to emulate lapply in parallel
mpiCalcs <- function(X, FUN, ...)
{
  theDots <- list(...)
  parLapply(theDots$cl, X, FUN)
}
library(snow)
cl <- makeCluster(5, "MPI")
## 50 bootstrap models distributed across 5 workers
mpiControl <- trainControl(workers = 5,
number = 50,
computeFunction = mpiCalcs,
computeArgs = list(cl = cl))
set.seed(1)
usingMPI <- train(medv ~ .,
data = BostonHousing,
"glmboost",
trControl = mpiControl)
```
Here's a version of mbq's function that uses the same variable names as the lapply documentation:
```
felapply <- function(X, FUN, ...) {
foreach(i=X) %dopar% {
FUN(i, ...)
}
}
x <- felapply(seq(1,10), sqrt)
y <- lapply(seq(1,10), sqrt)
all.equal(x,y)
```
| Parallelizing the caret package using doSMP | CC BY-SA 3.0 | null | 2011-03-23T21:04:15.040 | 2014-01-07T14:32:50.000 | 2014-01-07T14:32:50.000 | 2817 | 2817 | [
"r",
"parallel-computing"
] |
8697 | 2 | null | 8692 | 8 | null | Note that `rbind(a, b)` creates a single data frame, so that's not it. The unexpected behavior of `unstack(a)` results from the fact that you only have one observation (`count`) per factor level (`state`). To see what's going on, you have to look at the `unstack()` function.
```
# list source code for unstack()'s method for a data frame
> getS3method("unstack", "data.frame")
function (x, form, ...)
{
form <- if (missing(form))
stats::formula(x)
else stats::as.formula(form)
if (length(form) < 3)
stop("'form' must be a two-sided formula")
res <- c(tapply(eval(form[[2L]], x), eval(form[[3L]], x),
as.vector))
if (length(res) < 2L || any(diff(unlist(lapply(res, length))) !=
0L))
return(res)
data.frame(res)
}
<environment: namespace:utils>
```
The relevant bits are `c(tapply(eval(form[[2L]], x), eval(form[[3L]], x), as.vector))` and `data.frame(res)`. The first expression here is equivalent to
```
> c(tapply(a$count, a$state, as.vector))
RSTO RSTR S1 S2 S3 SF
199665 4147 31274 1 2522 118009
```
The output of `tapply()` is a 1-dimensional array if the provided function only returns one value when applied to the data in each level of the factor. `c()` strips the class, thus returning a numeric vector. `data.frame(<vector>)` creates a data frame with one variable equal to that vector. Compare this to what happens with two observations per condition:
```
> ab <- rbind(a, b)
> c(tapply(ab$count, ab$state, as.vector))
$RSTO
[1] 199665 31956
$RSTR
[1] 4147 11689
$S1
[1] 31274 6702
$S2
[1] 1 2838
$S3
[1] 2522 6268
$SF
[1] 118009 672561
```
Here, the function provided for `tapply()` returns a vector with two elements for each level of the factor. The result then is a list with as many components as you have factor levels, each component containing the result of the supplied function for that level. All components have equal length, hence `data.frame(<list>)` returns a data frame where each component becomes a variable (data frames are special lists with components of equal length).
Now, another question would be if the behavior of `unstack()` is sensible in a case like yours. If this is of practical importance, you can add an id-variable and use `reshape()` instead.
```
> a$id <- 1
> reshape(a, direction="wide", v.names="count", timevar="state", idvar="id")
id count.RSTO count.RSTR count.S1 count.S2 count.S3 count.SF
1 1 199665 4147 31274 1 2522 118009
> ab$id <- rep(1:2, each=nrow(a))
> reshape(ab, direction="wide", v.names="count", timevar="state", idvar="id")
id count.RSTO count.RSTR count.S1 count.S2 count.S3 count.SF
1 1 199665 4147 31274 1 2522 118009
11 2 31956 11689 6702 2838 6268 672561
```
| null | CC BY-SA 2.5 | null | 2011-03-23T21:15:39.913 | 2011-03-23T21:44:51.900 | 2011-03-23T21:44:51.900 | 1909 | 1909 | null |
8698 | 2 | null | 8692 | 2 | null | Caracal has answered the question. Here are some additional pieces of information.
Try this:
```
require(stats)
formula(PlantGrowth)
x = PlantGrowth[c(1,11,21),]
y = PlantGrowth[c(1,2,11,12,21,22),]
x
unstack(x)
y
unstack(y)
```
The output:
```
> x
weight group
1 4.17 ctrl
11 4.81 trt1
21 6.31 trt2
> unstack(x)
res
ctrl 4.17
trt1 4.81
trt2 6.31
> y
weight group
1 4.17 ctrl
2 5.58 ctrl
11 4.81 trt1
12 4.17 trt1
21 6.31 trt2
22 5.12 trt2
> unstack(y)
ctrl trt1 trt2
1 4.17 4.81 6.31
2 5.58 4.17 5.12
```
We just have to define a new unstack function:
```
NewUnstack <- function(x)
{
df = unstack(x)
if(dim(df)[2]==1)
{
df = t(df)
}
return(df)
}
```
Here we go:
```
> NewUnstack(x)
ctrl trt1 trt2
res 4.17 4.81 6.31
> NewUnstack(y)
ctrl trt1 trt2
1 4.17 4.81 6.31
2 5.58 4.17 5.12
```
| null | CC BY-SA 2.5 | null | 2011-03-23T21:18:52.463 | 2011-03-23T21:30:36.963 | 2011-03-23T21:30:36.963 | 1351 | 1351 | null |
8699 | 2 | null | 8669 | 0 | null | Are you sure there's a Z variable? Since you have an equation of X's versus Y's, isn't one of the X's or Y's the dependent variable? For example:
$$X_1 = - newa_2X_2... - newa_{11}X_{11} + newb_0 + newb_1Y_1 + newb_2Y_2 + new b_3Y_3 + newb_4Y_4$$
| null | CC BY-SA 2.5 | null | 2011-03-23T21:45:18.903 | 2011-03-23T21:45:18.903 | null | null | 2775 | null |
8700 | 2 | null | 8690 | 1 | null | Maybe I'm missing something here, but couldn't you rearrange the equation and do a logit or probit analysis on gamma?
| null | CC BY-SA 2.5 | null | 2011-03-23T22:14:01.047 | 2011-03-23T22:28:41.923 | 2011-03-23T22:28:41.923 | 2775 | 2775 | null |
8701 | 2 | null | 7318 | 2 | null | Non-parametric tests are likely to be less powerful than parametric tests and thus require a larger sample size. This is annoying because if you had a large sample size, sample means would be approximately normally distributed by the central limit theorem, and you thus wouldn't need non-parametric tests.
Look at generalized linear models, of which least squares and Poisson are special cases. I've never found a text that explains this particularly well; try talking to someone about it.
Look at non-parametric methods if you feel like it, but I have a hunch that they won't help you much in this case unless you're using ordinal data or a large set of very bizarrely distributed data.
| null | CC BY-SA 2.5 | null | 2011-03-23T22:47:02.427 | 2011-03-23T22:47:02.427 | null | null | 3874 | null |
8702 | 1 | null | null | 4 | 1104 | I am examining differences in a physical feature across different species of animals. Due to the nature of my experiments, I'm using a nonlinear mixed model with the following setup:
```
lme(log10(feature) ~ log10(Body.mass) + factor(Trial.Number), random = ~1 | IndividualID, data=animals, subset=Frfactor=="low", na.action=na.omit )
```
where `subset=Frfactor=="low"` refers to a specific speed range that I'm interested in.
I get great results which I'm happy about. But now I want to see how species affects my feature. Since the same conditions apply (tons of repeated effects) I've kept the lme and changed the structure to:
```
lme(log10(feature) ~ specfactor + factor(Trial.Number), random = ~1 | IndividualID, data=animals, subset=Frfactor=="low", na.action=na.omit )
```
where specfactor lists the names of the species. Looking at the p values it looks like these species are not significantly different from the intercept (which is specfactorserval). However when I create a boxplot, it certainly looks like there are some big interspecies differences!
I guess because the lme is a test for regressions, it doesn't really make sense to use when comparing the feature against categorical variables. But I still need to account for repeated effects. My question is if there's a better way to test for significance between species using the boxplot? I need the usual- p-values, confidence intervals. The "list" command seems to fall short of such comparisons. I don't know if a t-test would cut it.
Thanks!
PS I originally posted an image of my test results and of the boxplot, but I'm too new of a user to be allowed....
| Boxplot with glme | CC BY-SA 2.5 | null | 2011-03-23T23:00:11.000 | 2012-10-18T02:02:50.437 | 2011-03-24T07:07:50.403 | 449 | null | [
"r",
"repeated-measures",
"boxplot"
] |
8703 | 2 | null | 8562 | 4 | null | You might use a nested design (and a hierarchical/multilevel model). The top level would be Baseline v Non-Baseline, and the Non-Baseline would include your 2^3 factorial design.
I unfortunately can't post my pretty picture of the nesting structure because I don't have 10 reputation yet.
I'd have to read a bit to remember exactly how to do this, but I'm pretty sure that this is a reasonable way of doing it.
| null | CC BY-SA 2.5 | null | 2011-03-23T23:01:21.810 | 2011-03-23T23:01:21.810 | null | null | 3874 | null |
8704 | 1 | 8721 | null | 3 | 107 | The title defines the question. May be the concept would do...like how to go about it? Thanks.
| Given pdf of $I$ and $R$ (both $I$ and $R$ are independent RV's), how to find pdf of $W =I^2\cdot R$? | CC BY-SA 2.5 | null | 2011-03-24T00:05:13.620 | 2011-03-24T14:24:39.583 | 2011-03-24T08:52:26.383 | 2645 | null | [
"self-study",
"independence",
"density-function"
] |
8706 | 2 | null | 8689 | 35 | null | Linear refers to the relationship between the parameters that you are estimating (e.g., $\beta$) and the outcome (e.g., $y_i$). Hence, $y=e^x\beta+\epsilon$ is linear, but $y=e^\beta x + \epsilon$ is not. A linear model means that your estimate of your parameter vector can be written $\hat{\beta} = \sum_i{w_iy_i}$, where the $\{w_i\}$ are weights determined by your estimation procedure. Linear models can be solved algebraically in closed form, while many non-linear models need to be solved by numerical maximization using a computer.
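(A small illustration, not from the original answer: the model in the question is linear in its parameters, so the coefficients from `lm()` coincide with the closed-form least-squares solution. The data here are simulated.)
```
set.seed(1)
n <- 100
b <- rnorm(n); c <- rnorm(n)
a <- 1 + 2*b - 3*c + 0.5*b*c + rnorm(n)

fit <- lm(a ~ b + c + b*c)            # R expands b*c to b + c + b:c

X <- model.matrix(fit)                # columns: intercept, b, c, b:c
beta.hat <- solve(t(X) %*% X, t(X) %*% a)

cbind(coef(fit), beta.hat)            # identical up to numerical precision
```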
| null | CC BY-SA 2.5 | null | 2011-03-24T01:32:10.797 | 2011-03-24T01:32:10.797 | null | null | 401 | null |
8709 | 2 | null | 8633 | 1 | null | If you're interested in parsing a measure of fatigue from your RT data for use as a covariate, then I'd suggest computing the slope of RT as a function of time. An additional measure of "noisiness" might be the variance of RT once the effect of time has been removed.
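For instance, a minimal sketch in R (the data frame `dat` and its columns `rt`, `trial_time`, and `subject` are hypothetical placeholders):
```
fatigue <- by(dat, dat$subject, function(d) {
  fit <- lm(rt ~ trial_time, data = d)
  c(slope = unname(coef(fit)["trial_time"]),   # fatigue: drift of RT over time
    noise = var(resid(fit)))                   # RT variance with the time trend removed
})
do.call(rbind, fatigue)                        # one row of covariates per subject
```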
| null | CC BY-SA 2.5 | null | 2011-03-24T02:07:04.620 | 2011-03-24T02:07:04.620 | null | null | 364 | null |
8710 | 1 | 8730 | null | 6 | 298 | I have 2 alternative methods to solve a problem, and I was just wondering what people who know the math better than I think, and if there is a better method to use for this type of problem.
The problem: I have a list of lat/lon positions and a value for the time interval between position updates and wish to find the SOG (Speed Over Ground). However there is some uncertainty in the exact value for each time interval - as the data is retrieved over the internet, even though in a program it would (for example) be set to request an update every 60 seconds. The data comes from an ocean racing yacht simulator, so SOG will also change with varying conditions and also with changes in course (relative to wind).
One approach is to weight each new update in proportion to the length of that time interval and fold it into the current best SOG estimate. Something like this (T in seconds):
$$\text{speed} = \text{distance} / \Delta T$$
$$d = 1 - e^{-\Delta T}$$
$$\text{sog} = d * \text{speed} + (1 - d) * \text{sog}$$
And this works ok, especially if the speed is relatively consistent.
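In R, the update rule above reads literally as follows (a sketch only; computing `dist` from the lat/lon pair is left out, and note that with $\Delta T$ around 60 seconds the weight $d$ is essentially 1, so a time constant such as $e^{-\Delta T/\tau}$ may be what is really intended):
```
update_sog <- function(sog, dist, dT) {
  speed <- dist / dT          # speed over this interval
  d     <- 1 - exp(-dT)       # weight of the new observation, as in the formula above
  d * speed + (1 - d) * sog   # updated estimate
}
```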
Prior to seeing this method, the one I had come up with to estimate SOG is this:
Maintain a list of the $N$ most previous {timestamp, lat, lon} tuples.
On each update, compute the harmonic mean of all possible sub intervals from $0, N-1$:
$$ S = N(N+1)/2 $$
$$ \text{sog estimate} = S / \sum_{i=0,j=i+1}^{N-1} \text{time}_{ij} / \text{distance}_{ij} $$
Then keep a list of the $M$ most previous estimates and again take the harmonic mean. $N$ & $M$ should not be too large. And obviously if there are not yet $N$ tuples to sum sub-intervals over, just use however many are available. Since the updates occur each 60 seconds, a value of 10-15 seems appropriate for both $N$ & $M$. One possible improvement I can think of is to weight each sub interval similar to method 1.
This second method seems to be more accurate than the first, although can sometimes take longer to converge on a reasonably accurate estimate - usually approximately N updates are required before it becomes accurate for any practical use.
It also seems to handle changes in 'real' SOG better, ie slight accelerations.
Given that SOG is rarely consistent over say more than 15 or 20 minutes, ie from an increase in wind strength, or change of angle to one with higher/lower boatspeed etc, and also that direction can change (again, with a change in speed), what would be the best algorithm to compute the SOG from uncertain time intervals and change in position. Also, is there a possible way to estimate the error in estimation? Even possibly correct for this error term once computed?
I should mention all the data I have available: heading, latitude and longitude, and I keep a timestamp from the beginning of each update request.
I would like to be able to compute this speed as accurately as possible.
Thanks in advance for any assistance, and forgive my poor use of MathJaX.
Edit: I get Math processing errors from MathJax for markup that works fine on math.stackexchange (I notice a lot of them in other questions too), so I have reposted the math as plain text.
| Estimating speed from position updates with uncertain time intervals | CC BY-SA 2.5 | null | 2011-03-24T02:28:44.653 | 2011-03-24T21:42:40.373 | 2011-03-24T21:42:40.373 | 919 | null | [
"regression",
"estimation",
"functional-data-analysis",
"measurement-error"
] |
8714 | 1 | null | null | 3 | 6046 | Is it possible to perform logarithmic regression on multiple variables with Excel? If I just have a single independent variable then it's very easy to do this using the best-fit line option (it lets me switch from linear to logarithmic). But this feature does not work for multiple-variable regression, and the regression feature under the Data Analysis plugin only seems to support linear multiple regression.
However, I have a table that has 3 columns containing 3 independent variables and 1 column with the corresponding dependent variable (outcome). I'm pretty sure there's a logarithmic relationship, but I'm not sure how to use Excel to get the coefficients. Thanks!
| Performing logarithmic multiple regression with Excel? | CC BY-SA 2.5 | null | 2011-03-24T05:30:59.240 | 2011-03-24T17:34:04.710 | 2011-03-24T16:10:25.313 | null | null | [
"regression",
"excel"
] |
8715 | 2 | null | 8714 | 6 | null | If by logarithmic regression you mean the model `log(y) = m1.x1 + m2.x2 + ... + b + (Error)`, you can use `LOGEST` and `GROWTH` with multiple independent variables. Note that if you want the estimated coefficients `m1, m2, ..., b` from `LOGEST`, you'll have to enter the formula into multiple cells as an array. See Excel's online help for the steps required.
Alternatively, you can log-transform your dependent variable and use `LINEST`/`TREND` which does the same thing under the hood.
ObWarning: Excel isn't the best regression package in the world. See, for example, McCullough & Heiser (2008), On the accuracy of statistical procedures in Microsoft Excel 2007, Comp Stats & Data Analysis 52(10) pp.4570-4578.
| null | CC BY-SA 2.5 | null | 2011-03-24T06:26:38.550 | 2011-03-24T06:37:46.187 | 2011-03-24T06:37:46.187 | 1569 | 1569 | null |
8716 | 1 | null | null | 5 | 312 | I have a basket of time series (stock prices). I want to find the N (fixed or not) time series that will best replicate the basket, in the sense that a combination of them will be best cointegrated with the basket.
Besides using the N series that have the best cointegration scores (ADF test) and regressing these variables against the basket, do you know of other/better methodologies for doing that?
| Cointegration-based feature selection | CC BY-SA 2.5 | null | 2011-03-24T08:39:25.643 | 2011-04-23T20:26:08.810 | 2011-03-24T18:21:05.500 | 2116 | 3362 | [
"cointegration"
] |
8717 | 1 | null | null | 2 | 410 | Let $X_1, X_2, ..., X_n$ be a random sample from a distribution with p.d.f.,
$$f(x;\theta)=\theta^2xe^{-x\theta} ; 0<x<\infty, \theta>0$$ Obtain minimum variance unbiased estimator of $\theta$ and examine whether it is attained?
MY WORK:
Using MLE, I have found the estimator $\hat{\theta}=\frac{2}{\bar{x}}$.
Or as $$X\sim Gamma(2, \theta)$$So
$E(X)=2\theta$
$E(\frac{x}{2})=\theta$
So can I take $\frac{X}{2}$ as an unbiased estimator of $\theta$?
I'm stuck and confused and need some help.
Thank you.
| What will be minimum variance unbiased estimator? | CC BY-SA 2.5 | null | 2011-03-24T08:44:11.823 | 2011-03-24T16:11:25.250 | 2011-03-24T16:11:25.250 | null | 3846 | [
"probability",
"estimation"
] |
8718 | 1 | null | null | 13 | 12274 | I've developed a logit model to be applied to six different sets of cross-sectional data. What I'm trying to uncover is whether there are changes in the substantive effect of a given independent variable (IV) on the dependent variable (DV) controlling for other explanations at different times and across time.
My questions are:
- How do I assess increased / decreased size in the association between the IV and DV?
- Can I simply look at the different magnitudes (sizes) of the coefficients across the models or do I need to go through some other process?
- If I need to do something else, what is it and can it be done/how do I do it in SPSS?
Also, within a single model,
- Can I compare the relative size of independent variables based on unstandardised scores if all are coded 0-1 or do I need to convert them to standardised scores?
- Are there problems involved with standardised scores?
| Comparing logistic regression coefficients across models? | CC BY-SA 3.0 | null | 2011-03-24T09:06:46.867 | 2021-04-03T17:11:25.097 | 2021-04-03T17:11:25.097 | 11887 | 3883 | [
"regression",
"logistic",
"spss",
"regression-coefficients"
] |
8719 | 2 | null | 8718 | 3 | null | Are there changes across data sets? I can answer that without seeing the data! Yes. There are. How big are they? That's key. For me, the way to see is by looking. You will have odds ratios for each independent variable for each data set - are they different in ways people would find interesting? Now, it's true each will have a standard error and so on, and there are probably ways to see if they are statistically significantly different from each other, but is that really an interesting question? If it is, then one way to test it easily with software would be to combine all the studies, and include "study" as another independent variable. You could then even test interactions, if you wanted. Whether you want to do this depends on your substantive questions.
As to comparing variables within a model, the main problem with standardized scores is that they are standardized on your particular sample. So, the parameter estimates and so on are then in terms of standard deviations of the variables in your particular sample. Even if your sample is truly a random sample from some population, it will have (slightly) different standard deviations from other random samples. This makes things confusing.
The other problem is what the question of "relative size" even means. If your IVs are things that are well-understood, you can compare the ORs across ranges that mean something.
| null | CC BY-SA 2.5 | null | 2011-03-24T10:20:54.480 | 2011-03-24T10:20:54.480 | null | null | 686 | null |
8721 | 2 | null | 8704 | 3 | null | Maybe you know that if $X$ and $Y$ are independent then the pdf of $X+Y$ is given by a convolution.
You can generalize the idea of a convolution to a group other than $(+,\mathbb{R})$, and the idea that the pdf of $X\cdot Y$ (where $\cdot$ is the group operation) is given by the convolution remains valid. For convolutions on groups see [wikipedia](http://en.wikipedia.org/wiki/Convolution#Convolutions_on_groups).
For the group of multiplication $(\cdot,\mathbb{R}_+)$, the (left) Haar measure is $dt/t$ (see [here](http://en.wikipedia.org/wiki/Haar_measure#Examples)), and this gives (if $f_X$ and $f_Y$ are densities of $X$ and $Y$):
$f_{X\cdot Y}(z)=\int_{\mathbb{R}_+} f_X(t)f_Y(z/t)dt/t$
the density of the product $X\cdot Y$. It is straightforward to derive the density of $I^2\cdot R$ from that.
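For completeness, one way to spell out that last step (a sketch assuming, as in the formula above, that everything is supported on $(0,\infty)$; if $I$ can take negative values, the change of variables picks up both roots, as written here):

$f_{I^2}(u) = \frac{f_I(\sqrt{u}) + f_I(-\sqrt{u})}{2\sqrt{u}}, \quad u > 0,$

and then, with $X = I^2$ and $Y = R$,

$f_W(w) = \int_{\mathbb{R}_+} f_{I^2}(t)\, f_R(w/t)\, dt/t.$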
| null | CC BY-SA 2.5 | null | 2011-03-24T10:46:20.527 | 2011-03-24T14:24:39.583 | 2020-06-11T14:32:37.003 | -1 | 223 | null |
8723 | 2 | null | 6920 | 7 | null | You can always just perform gradient descent on the sum-of-squares cost $E$ with respect to the parameters of your model $W$. Just take the gradient, but instead of going for the closed-form solution, use it only as a search direction.
Let $E(i; W)$ be the cost of the i'th training sample given the parameters $W$. Your update for the j'th parameter is then
$$W_{j} \leftarrow W_j - \alpha \frac{\partial{E(i; W)}}{\partial{W_j}}$$
where $\alpha$ is a step rate, which you should pick via cross validation or good measure.
This is very efficient and the way neural networks are typically trained. You can process even lots of samples in parallel (say, a 100 or so) efficiently.
Of course more sophisticated optimization algorithms (momentum, conjugate gradient, ...) can be applied.
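For concreteness, a minimal sketch of this procedure for a least-squares linear model (simulated data; the step rate and the number of passes are arbitrary choices, not prescriptions):
```
set.seed(42)
n <- 1000; p <- 3
X <- cbind(1, matrix(rnorm(n * p), n, p))   # design matrix with an intercept column
beta.true <- c(1, -2, 0.5, 3)
y <- X %*% beta.true + rnorm(n)

W     <- rep(0, p + 1)                      # initial parameters
alpha <- 0.01                               # step rate (pick via cross-validation)

for (pass in 1:5) {
  for (i in sample(n)) {                    # visit the samples in random order
    err <- as.numeric(X[i, ] %*% W) - y[i]  # residual of sample i
    W   <- W - alpha * err * X[i, ]         # gradient of 0.5 * err^2 w.r.t. W
  }
}
cbind(W, beta.true)                         # estimates should be close to the truth
```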
| null | CC BY-SA 2.5 | null | 2011-03-24T10:57:37.647 | 2011-03-24T10:57:37.647 | null | null | 2860 | null |
8724 | 1 | 8727 | null | 2 | 186 | I have a database with many attributes. I would like to know which attributes have the minimum variation in the data. Is there some standard technique? It should be like clustering, but without splitting the records into clusters: I would like to know what the records in a particular cluster have in common.
I was going to compute the mean ($\bar{x}$) and st.d. ($s$) for each continuous attribute $x$. After computing the coefficient of variation $CV=\frac{s}{\bar{x}}$ I would say that attributes with $CV\leq0.1$ are the similar ones. For categorical ones I would choose attributes with more than $90\%$ relative frequency for the mode.
Is there some standard technique?
| Exploring data attributes | CC BY-SA 2.5 | null | 2011-03-24T11:04:28.350 | 2011-03-24T12:42:28.827 | null | null | 2719 | [
"clustering"
] |
8725 | 1 | null | null | 4 | 1346 | I tried to use the Kernel Density plot method from the [Hayfield and Racine (2008)](http://www.jstatsoft.org/v27/i05/paper/) [np package](http://cran.r-project.org/web/packages/np/vignettes/np.pdf) for my own data, but somehow ended up with a different type of plot, and I have no idea what the difference is between my data and the example data provided by the package.
I used the Italy GDP example that is provided in the np package and described in [Racine's primer](http://socserv.mcmaster.ca/racine/ECO0301.pdf). The example itself worked well and I ended up with a nice 3D plot.
When I used it with my own data I just got two 2D plots instead of one 3D plot. Why is that?
Am I missing something content-wise (that's what I think, and it's the reason why I guess this is a CV rather than an SO question), or is it a syntax problem?
```
# get a bandwith object
bw2 <- npcdensbw(formula=mydf$values~ordered(mydf$datefield), tol=.1, ftol=.1)
str(mydf)
# returns
'data.frame': 780 obs. of 2 variables:
$ datefield: Ord.factor w/ 104 levels "1984-04-01"<"1984-07-01"<..: 1 1 1 1 1 1 1 1 1 1 ...
$ values : num 50.7 58.5 56.1 55.5 62.7 ...
table(mydf$datefield)
# shows that I have 13 entries per quarter.
quartz()
plot(bw2)
npplot(bw2)
# same 2D plot for both :(
```
All of these properties are exactly in line with the example data. Still, I get only 2D plots instead of a 3D plot. I do have three dimensions: quarters, values, and conditional density. What's wrong?
| Conditional kernel density plot with R's np package | CC BY-SA 3.0 | null | 2011-03-24T11:50:53.710 | 2015-04-23T05:57:15.093 | 2015-04-23T05:57:15.093 | 9964 | 704 | [
"r",
"nonparametric",
"conditional-probability",
"kernel-smoothing"
] |
8726 | 2 | null | 8663 | 0 | null | $\text{p-value}=Pr(E||H)$ where E is the "data at least as extreme as what was observed" event, and $H$ is the hypothesis, usually of the form "some set of the parameters are zero". I have used the double lines $||$ to indicate that it is not a conditional probability per se, rather a probability based on the assumption that the null hypothesis is true.
You have not explicitly said what the null hypothesis was in both examples. If you take the time to carefully state exactly what the null hypothesis is in each case, you may find that it makes perfect sense for them to give different p-values. You may also find that the event $E$ is not the same in the two cases either. If the "extreme event" is different, then you would in general expect to see different p-values.
p-values are usually used to "reject" models rather than compare models. At least that's how I use them. A small p-value usually means (at least to me anyways) "there is a better explanation for the observed data than the null hypothesis". I usually take the size of this p-value to be a rough measure of "how much easier" such an explanation would be to find. For me, p-values of the size $0.4$ and $0.8$ just indicate that the data is fairly consistent with either model.
I'm not so sure that it says that the "binned" model is better than the "line" model. This is because there is no explicit comparison or reference to the other model made when either of these two p-values is calculated. The reason I say this is that, generally, there is an implicit class of alternative hypotheses to which a p-value refers. If both models are in this implicit class when calculating the p-value based on the other, then it does indicate that the binned model is better. But determining this implicit class is not always straightforward, unless the p-value is based on a sufficient statistic for the null hypothesis.
| null | CC BY-SA 2.5 | null | 2011-03-24T12:20:57.903 | 2011-03-24T12:20:57.903 | null | null | 2392 | null |
8727 | 2 | null | 8724 | 2 | null | It reminds me of what is implemented in the [caret](http://caret.r-forge.r-project.org/) package for data pre-processing. It is fully described in one of the accompanying vignettes, namely [Data Sets and Miscellaneous Functions in the caret Package](http://cran.r-project.org/web/packages/caret/vignettes/caretMisc.pdf). What is actually done is to identify predictors that have low variance in the full dataset, as you described, whether it be a continuous or a categorical feature. They compute:
- the frequency of the most prevalent value over the second most frequent value (termed "frequency ratio"),
- the proportion of unique values (subject-wise),
considering that
>
If the frequency ratio is less than a pre-specified threshold and the unique value percentage is less than a threshold, we might consider a predictor to be near zero-variance. (p. 5, emphasis is mine)
The rationale is that near-zero variance predictors may have exact zero variance when using cross-validation, or induce model instability. They also address the problem of collinearity, but then this really is a matter of statistical modeling (some models, like classical regression models, don't accommodate correlated predictors well because they inflate the standard errors of the regression coefficients; others don't care about that).
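A minimal sketch of this screening step (assuming a data frame `mydata` of candidate predictors; the helper and the thresholds below are the ones documented in the vignette cited above, so check them against your caret version):
```
library(caret)
nzv <- nearZeroVar(mydata, freqCut = 95/5, uniqueCut = 10, saveMetrics = TRUE)
nzv                                            # frequency ratio, % unique, and nzv flag
filtered <- mydata[, !nzv$nzv, drop = FALSE]   # drop the near-zero-variance predictors
```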
Besides screening those low informative predictors, you can also use a hierarchical clustering method (by variables, not by individuals) to see how it goes. This is often used for studying missing data patterns (i.e., where we are interested in examining which variables are consistently showing an increased number of missing responses across all samples, or a particular subgroup).
| null | CC BY-SA 2.5 | null | 2011-03-24T12:42:28.827 | 2011-03-24T12:42:28.827 | null | null | 930 | null |
8728 | 2 | null | 8695 | 5 | null | The following is shamelessly extracted from the following [link](http://findarticles.com/p/articles/mi_qa3713/is_199704/ai_n8770931/pg_10/).
>
\begin{shamelesscopyandpaste}
Finally, the "average variance extracted" measures the amount of variance that is captured by the construct in relation to the amount of variance due to measurement error and can be calculated using the following formula: (summation of squared factor loadings)/(summation of squared factor loadings) (summation of error variances) (Fornell & Larcker). If the average variance extracted is less than .50, then the variance due to measurement error is greater than the variance due to the construct. In this case, the convergent validity of the construct is questionable.
\end{shamelesscopyandpaste}
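In symbols, the quoted formula (together with the composite reliability the question also asks about; the CR expression is the usual Fornell & Larcker/Raykov one, not taken from the quoted source) reads, for standardized loadings $\lambda_i$ and error variances $\theta_i$ of the items loading on one factor:
$$\mathrm{AVE}=\frac{\sum_i \lambda_i^2}{\sum_i \lambda_i^2 + \sum_i \theta_i}, \qquad \mathrm{CR}=\frac{\left(\sum_i \lambda_i\right)^2}{\left(\sum_i \lambda_i\right)^2 + \sum_i \theta_i}.$$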
I haven't used SPSS in some time, and I don't remember seeing an option to perform these calculations, but you can certainly do it using the syntax.
[These](http://www.ats.ucla.edu/stat/spss/seminars/spss_syntax08/default08.htm) two [links](http://www.ats.ucla.edu/stat/spss/seminars/spss_syntax08/default08_part2.htm) give you an introduction to SPSS syntax. What I would do from here is examine the syntax SPSS gives you when you perform an FA, and use the variables named to compute the average variance explained.
Sorry for the lack of a definitive answer, but I felt that some response was definitely better than none.
| null | CC BY-SA 3.0 | null | 2011-03-24T13:52:21.373 | 2014-01-21T23:11:57.833 | 2014-01-21T23:11:57.833 | 7290 | 656 | null |
8729 | 1 | null | null | 6 | 4391 | I'm trying to gain a better understanding of kmeans clustering and am still unclear about colinearity and scaling of data. To explore colinearity, I made a plot of all five variables that I am considering shown in the figure below, along with a correlation calculation.

I started off with a larger number of parameters, and excluded any that had a correlation higher than 0.6 (an assumption I made). The five I chose to include are shown in this diagram.
Then, I scaled the data using the `R` function `scale(x)` before applying the `kmeans()` function. However, I'm not sure whether `center = TRUE` and `scale = TRUE` should also be included, as I don't understand the difference that these arguments make. (The `scale()` description is given as `scale(x, center = TRUE, scale = TRUE)`.)
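For reference, the pipeline just described might look roughly like this in R (`df` is a hypothetical data frame of the numeric candidate variables; `findCorrelation()` from the caret package is just one convenient way to do the correlation screen):
```
library(caret)                                    # for findCorrelation(); optional helper
drop <- findCorrelation(cor(df), cutoff = 0.6)    # indices of variables to remove
if (length(drop) > 0) df <- df[, -drop]
x  <- scale(df)      # center = TRUE subtracts each column's mean,
                     # scale = TRUE then divides by each column's standard deviation
km <- kmeans(x, centers = 3, nstart = 25)         # the choice of k needs its own justification
```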
Is the process that I describe an appropriate way of identifying clusters?
| Colinearity and scaling when using k-means | CC BY-SA 2.5 | null | 2011-03-24T14:51:27.800 | 2014-01-08T01:52:15.047 | 2011-03-24T16:12:33.673 | null | 2635 | [
"r",
"clustering"
] |
8730 | 2 | null | 8710 | 4 | null | Because you trust the GPS positions (and therefore the distances computed from them) but the times have errors, regress the times against the cumulative distances.
To account for acceleration and deceleration, consider a model of the form
$$\text{Time} = t = \beta_0 + \beta_1 X + \beta_2 X^2 + \varepsilon$$
where $X$ is distance and $\varepsilon$ represents the time errors. After fitting this you can estimate $\widehat{dt/dX} = \hat{\beta}_1 + 2 \hat{\beta}_2 X$, whence
$$\text{Speed estimate} = \widehat{dX/dt} = 1/\widehat{dt/dX} = \frac{1}{\hat{\beta}_1 + 2 \hat{\beta}_2 X}.$$
This will likely work best for times near the middle of your dataset. Thus, for instance, you could maintain a circular buffer of $k$ observations which at any moment would span times $t_1, t_2, \ldots, t_k$ and use them to estimate speeds near the mean position $\frac{1}{k}\sum_{i=1}^{k}X_k$.
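In R this might look like the following sketch (`t` and `X` being the $k$ buffered times and cumulative distances in the current window):
```
fit <- lm(t ~ X + I(X^2))
b   <- coef(fit)
speed_at <- function(x) 1 / (b[2] + 2 * b[3] * x)   # 1 / (dt/dX)
speed_at(mean(X))                                   # estimate near the middle of the window
```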
As an example I generated distances according to the formula $X(t) = 15 + i - i^2/25$ for times $t=1, 2, \ldots, 10$. This represents a period of uniform deceleration from a unit speed. Note that it is not the same as the postulated model: the variation in time is given by the quadratic formula. I then varied those times by independent standard normal variates. Compared to the nominal sampling interval this is a huge error: you can't even be sure that two successive times are actually in the right order. This is a fairly severe test of the method.
Here is a plot of a typical realization, with the red dots showing the true values, the blue ones showing the observed values (i.e., the true values with time jittering), and the ordinary least squares fit of the time. I used $k=10$.

Note that the time measurements are so awful, the boat seems to reverse its course several times, especially near times 2 and 4.
Unlike many of the realizations I looked at, this fit doesn't look terribly good in the center: the line departs from the red dots. Be that as it may, let's look at how the estimated and actual speeds varied during this period:

(Note that you can estimate a speed at any time, not just a measured time, because the fitted coefficients give a formula for the speed.)
In this plot of speed versus position (not time), the blue curve is the estimate and the red curve is the actual speed. Obviously they are not very close near the ends, but for the middle times the estimate is excellent. (The middle times are around positions 19-20, as the first plot indicates.) Note that you would run into severe difficulties estimating speeds directly from the sequence of positions, because (due to the errors in the times) the boat seems to be jumping around forwards and backwards. If you treat the backwards jumps as true reversals you would grossly overestimate the speeds, but if you treat them as negative speeds you would run into severe problems when averaging. The moral here is that it's best to fit a model of position versus time and only then attempt to estimate speeds; don't attempt to estimate speeds directly.
The larger $k$ is, the more accurate you can expect to be. Use the machinery of OLS to estimate the [prediction error](http://www.sas.com/resources/whitepaper/wp_4430.pdf) of the time for any desired position; from that you can [propagate the error](https://stats.stackexchange.com/questions/tagged/error-propagation) to the speed estimate.
The speed estimates might jump around a little due to the addition of new values and dropping of old values as you go along. You could get fancier and use weighted regression, decreasing the weights smoothly to zero near the extreme positions. In doing so, a plot of estimated speeds versus position would be a little smoother. (This technique is akin to the recently popularized ["Geographically Weighted Regression"](http://ncg.nuim.ie/ncg/GWR/) of Fotheringham, Charlton, and Brunsdon.)
| null | CC BY-SA 2.5 | null | 2011-03-24T15:33:25.113 | 2011-03-24T15:33:25.113 | 2017-04-13T12:44:53.777 | -1 | 919 | null |
8732 | 1 | null | null | 8 | 5746 | I am new to time series analysis, and would appreciate any suggestions on how best to approach the following time-series regression problem: I have hourly temperature measurements at approximately 20 locations across one site over three years, along with static ancillary information (slope, elevation, aspect, canopy cover). The site is several hectares in size, and the temperature recording devices are spread across the site along a couple of transects, at ~20-50 m intervals. About 1 km away, I have hourly data from a weather station, which also provides measurements of wind speed, wind direction, humidity, solar illumination, etc.
I would like to be able to predict the temperature (min,max,mean) at the site (in general) using only the data from the weatherstation; it is in place semi-permanently, whereas the temperature recorders at the site were only in place for 3 years. So in essence I have multiple independent variables (temperature, humidity, wind, etc) at one location (the weatherstation), but a single dependent variable (temperature) at multiple locations, each of which also has several time-invariant attributes: slope, elevation, aspect, etc.
I am most interested in predicting the daily lows and highs at the site in general, rather than hourly temperatures at each temperature recording location in the site. Although, those hourly predictions would certainly be of value.
My initial approach has been to compute daily average, minimum, and maximums from the temperatures at the site, and use these as dependent variables in simple linear regressions, using the measurements available at the weatherstation as independent variables. This works reasonably well (R2 > 0.50 with 2 predictors), but seems rather too simplistic for many reasons, and I imagine there must be more sophisticated (and powerful) ways to do this.
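In rough R terms, that initial approach is something like this (all object and column names here are placeholders):
```
# hourly site temperatures -> daily min/mean/max
site_daily <- aggregate(temp ~ date, data = site,
                        FUN = function(x) c(min = min(x), mean = mean(x), max = max(x)))
site_daily <- do.call(data.frame, site_daily)     # flatten the matrix column

d <- merge(site_daily, station, by = "date")      # join with daily weather-station data
fit_min <- lm(temp.min ~ station_tmin + station_wind, data = d)
summary(fit_min)
```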
For one, I'm not doing anything explicit about the time-series nature of the daily values in the regression, and although the min or average temp from one day to the next may not be as correlated as it is from one hour to the next, I wonder about issues with the independence of these daily data (or certainly hourly, if I were trying to predict hourly temperatures). Second, due to concerns with having multiple somewhat-correlated temperature measurements across the site (they are much more similar among themselves than any are to the weather station data), I am simply using the mean or min or max of all measurements across the site, versus including the data from each individual measurement location directly. But this also prevents me from using the time-invariant ancillary information from each temperature measurement location (slope, elevation, aspect, canopy cover), which presumably will explain a good part of the differences in temperatures between locations at the site. Third, due to concerns with the regression being dominated by the very strong diurnal cycle in temperatures, I'm only looking at daily values instead of hourly.
Any suggestions on better ways to go about this (especially in R), or where to start looking, would be most appreciated! I realize there are a lot of R packages that deal with time series, but I'm having trouble finding the best place to start with this type of problem, as none of the examples I've seen really seem to reflect the situation I'm trying to model here.
Update: thinking about this a bit more, it is not clear to me whether time-series models are really appropriate here because I am not interested in predicting what will happen at some future specific point in time. Rather, I'm simply interested in how temperatures at the site are related to temperatures (and other environmental variables) at the weatherstation. I thought that perhaps time-series analysis would be of value because I was concerned that subsequent temperature measurements might not be sufficiently independent. Certainly, one hour's temperature depends a great deal on the previous hour, but the dependence is weaker for daily data. In either case, is the time-correlation/non-independence of time-series data a valid concern that should be addressed if one is not interested in a time-series prediction?
| How to model time-series temperature data at multiple sites as a function of data at one site? | CC BY-SA 2.5 | null | 2011-03-24T18:26:43.530 | 2022-08-24T18:36:34.137 | 2011-03-25T15:08:06.370 | null | null | [
"time-series",
"regression",
"multivariate-analysis",
"spatio-temporal"
] |
8733 | 1 | null | null | 3 | 1407 | Most repeated measures ANOVAs have time as the repeated measure; I was wondering about using a repeated measure that is not time.
Say we fed two groups of animals different diets. At the end of the experiment, we sample the tissues, and measure ~30 different compounds (e.g. different fatty acids [FA]). Animals are sampled but once. Each FA is not independent of each other, as some of these compounds are converted from one form to another. As such, they are frequently moderately correlated with each other.
Would it be fair to treat each compound as a repeated measure, with diet as a between subjects factor? Thus, the interaction between Diet X FA would tell me if the FA content differed among diets?
I note that in many papers, researchers would perform 25-30 separate ANOVAs; one on each compound. Yet, these compounds are not independent of each other, as they are each measured simultaneously on the same animals.
Thanks for any pointers.
| Repeated measures with correlated measures (not time) | CC BY-SA 2.5 | null | 2011-03-24T19:02:00.630 | 2011-03-24T21:27:29.747 | 2011-03-24T21:27:29.747 | 485 | 3886 | [
"pca",
"repeated-measures",
"manova"
] |
8734 | 1 | 8740 | null | 13 | 15689 | Is there a well founded rule for the number of significant figures to publish?
Here are some specific examples / questions:
- Is there any way to relate the number of significant figures to the coefficient of variation? For example, if the estimate is 12.3 and the CV is 50%, does that mean that the information represented by '.3' approaches zero?
- If a confidence interval has a range of orders of magnitude, should they still have the same number of significant figures, e.g.:
12.3 (1.2, 123.4) vs 12 (1.2, 120)
- Should the number of significant figures in an error estimate be the same or less than the number of significant figures in a mean?
| Number of significant figures to put in a table? | CC BY-SA 2.5 | null | 2011-03-24T19:15:34.247 | 2011-03-26T23:21:47.630 | 2011-03-26T23:21:47.630 | 1381 | 1381 | [
"tables"
] |
8736 | 2 | null | 8734 | 0 | null | I'd suggest 12 (1.2, 123.4). Omit the .3 since it's nearly meaningless, but many people when they see (1.2, 120) will assume that the last '0' in 120 is significant.
| null | CC BY-SA 2.5 | null | 2011-03-24T19:47:02.827 | 2011-03-24T19:47:02.827 | null | null | 2658 | null |
8737 | 2 | null | 8733 | 0 | null | It's interesting... how do you compare the compounds with each other? Repeated measures don't have to be necessarily over time, but they need to be measures of the same thing or things that can be treated as if they are the same. You could certainly measure the same compound in different areas of the body and that would be a repeated measure.
I guess I'm asking the first question out of ignorance of your field. If you can compare the compounds one to another then they are repeated measures grouped by individual animal.
I suppose one could even consider multi-level modelling beyond simple repeated measures because you could take the comparable body part as one level and the animal itself as another.
| null | CC BY-SA 2.5 | null | 2011-03-24T20:08:10.450 | 2011-03-24T20:08:10.450 | null | null | 601 | null |
8738 | 1 | 8741 | null | 9 | 832 | It's pretty tough to search the Web for info on something when you don't know what words are commonly used to describe it. In this case, I'm wondering what it's called when you include another predictor in a time series.
As an example, say I'm modeling a variable $X$ using AR(3):
$ X_t = \varphi_1 X_{t-1} + \varphi_2 X_{t-2} + \varphi_3 X_{t-3} + \varepsilon_t $
I want my model to include the effects of another variable—say, $Y$—so my model is now described as:
$ X_t = \varphi_1 X_{t-1} + \varphi_2 X_{t-2} + \varphi_3 X_{t-3} + \beta_1 Y_{t-1} + \beta_2 Y_{t-2} + \beta_3 Y_{t-3} + \varepsilon_t $
What would the term (or terms) be that distinguishes the former model from the latter?
| What is the term for a time series regression having more than one predictor? | CC BY-SA 2.5 | null | 2011-03-24T20:25:16.957 | 2011-04-07T21:01:14.247 | null | null | 1583 | [
"time-series",
"terminology"
] |
8739 | 2 | null | 8733 | 1 | null | Have a look in a Multivariate text for MANOVA--multivariate ANOVA. Here is a website...
[http://faculty.chass.ncsu.edu/garson/PA765/manova.htm](http://faculty.chass.ncsu.edu/garson/PA765/manova.htm)
Though, that's a lot of dependent variables and it could be hard to interpret. It might be simpler to do some sort of data reduction first, like PCA among your set of fatty acids. I suppose it depends on how much overlap there is and how many PCs you'd need to extract to account for the 30 vars.
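A minimal sketch of the multivariate approach (the data frame `tissue` and its columns `fa1`, `fa2`, ..., and `diet` are placeholders; with all 30 compounds and relatively few animals the MANOVA may be singular, which is another argument for the PCA-first route):
```
Y   <- as.matrix(tissue[, c("fa1", "fa2", "fa3")])   # ... up to however many FAs you keep
fit <- manova(Y ~ diet, data = tissue)
summary(fit, test = "Pillai")    # overall multivariate test of the diet effect
summary.aov(fit)                 # follow-up univariate ANOVAs, if desired
```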
| null | CC BY-SA 2.5 | null | 2011-03-24T20:29:46.690 | 2011-03-24T21:05:57.457 | 2011-03-24T21:05:57.457 | 485 | 485 | null |
8740 | 2 | null | 8734 | 19 | null | I doubt there's a universal rule so I'm not going to make any up. I can share these thoughts and the reasons behind them:
- When summaries reflect the data themselves--max, min, order statistics, etc.--use the same number of significant figures used to record the data in the first place. This provides a consistent representation throughout the document concerning the precision of the data.
- When summaries have higher precision than the data, write the values in a way that reflects that extra precision. For instance, a mean of $n$ values has $\sqrt{n}$ times the precision of the individual values: roughly, include one extra significant figure for $3 \le n \le 30$, two for $30 \lt n \le 300$, etc. (This is rounding on a log-10 scale, obviously.)
- Note that the CV does not provide useful information in this regard.
- Some estimates can be obtained with great precision. They don't have to be rounded to match something else. For instance, the mean of 1,000,000 integers might be 10.977 with a standard error of 0.00301. My decision to write the mean to three decimal places (and 4-5 sig figs) was based on the order of magnitude of the SE, which indicates the last digit is partially reliable. The decision to write the SE to three sig figs (five decimal places) is more arbitrary: two sig figs would work; one probably would not; four sig figs would also work and be consistent with the 4-5 sig figs in the mean; more than four sig figs would be overkill. (One could estimate the standard error of the SE itself in terms of the fourth moment of the data, and use that to determine an appropriate amount of rounding, but most of us don't go to such trouble...)
- Signal the reader when you are doing substantial rounding. Be especially careful when the report is discussing the statistical test itself. The reason is that people may use your work to check their own calculations. Sometimes even a slight difference can reveal an error. You don't want to cause trouble because you rounded 123 to 120 and someone else, checking the work, obtains 123 and suspects one of you has erred.
- Be consistent. You might lose some readers if you list a value as 123 at one point and later reference it as 120.
- Don't be ridiculous. (I automatically suspect incompetence when I encounter reports that give statistical results to 15 sig figs when the data have only two sig figs, for instance.)
| null | CC BY-SA 2.5 | null | 2011-03-24T20:37:17.667 | 2011-03-24T20:37:17.667 | null | null | 919 | null |
8741 | 2 | null | 8738 | 7 | null | ARIMAX (Box-Tiao) is what it is called when you add covariates to ARIMA models; it is basically ARIMA + X.
[http://www.r-bloggers.com/the-arimax-model-muddle/](http://www.r-bloggers.com/the-arimax-model-muddle/)
Also search for Panel data or TSCS: 'Time-series–cross-section (TSCS) data consist of comparable time series data observed on a variety of units'
See:
[http://as.nyu.edu/docs/IO/2576/beck.pdf](http://as.nyu.edu/docs/IO/2576/beck.pdf)
or [https://stat.ethz.ch/pipermail/r-sig-mixed-models/2010q4/004530.html](https://stat.ethz.ch/pipermail/r-sig-mixed-models/2010q4/004530.html)
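In case it helps, here is a minimal sketch of "ARIMA + X" with base R's `arima()` on simulated data (the series, coefficient values and forecast regressors below are all made up); note that the r-bloggers post above explains that `xreg` actually fits a regression with ARMA errors rather than a textbook ARMAX, so treat this only as an illustration:
```r
set.seed(2)
x <- rnorm(120)                                      # an exogenous covariate
y <- arima.sim(list(ar = 0.5), n = 120) + 0.8 * x    # AR(1) series plus a covariate effect
fit <- arima(y, order = c(1, 0, 0), xreg = x)
fit                                                  # the xreg coefficient should be near 0.8
predict(fit, n.ahead = 5, newxreg = rnorm(5))        # forecasting needs future values of x
```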
| null | CC BY-SA 2.5 | null | 2011-03-24T20:41:18.543 | 2011-03-24T20:54:54.370 | 2011-03-24T20:54:54.370 | 1893 | 1893 | null |
8742 | 1 | 8745 | null | 2 | 2800 | There is a bayesian network Asia:

I am computing based on
```
A (visit to Asia)
S (smoker)
T (tuberculosis)
L (lung cancer)
B (bronchitis)
E (tuberculosis versus lung cancer/bronchitis)
D (dyspnoea)
X (chest X-ray)
P(A)=0.01
P(S)=0.50
P(T)=0.0104
P(L)=0.055
P(B)=0.45
P(E)=0.064828
P(D)=0.4393105
P(X)=0.11029004
```
How would you compute the probabilities when you assign truth values to certain observable variables, as in the following?
```
p(X=yes|A=no, S=yes)
p(D=yes|L=no, B=yes)
p(E=yes|L=yes, T=no)
p(D=yes|B=yes, T=yes)
```
| Bayes Network computing conditional probabilities | CC BY-SA 2.5 | null | 2011-03-24T21:12:58.117 | 2011-03-27T01:30:17.867 | 2011-03-27T01:30:17.867 | 3681 | 3681 | [
"bayesian",
"conditional-probability"
] |
8743 | 2 | null | 8742 | 1 | null | You have to compute [joint probabilities](http://en.wikipedia.org/wiki/Joint_probability) first, and then the [marginal probabilities](http://en.wikipedia.org/wiki/Marginal_distribution) you are interested in, thanks to [sum-product](http://en.wikipedia.org/wiki/Belief_propagation).
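To make that concrete, here is a small R sketch of the brute-force version on a hypothetical three-node fragment (the conditional probabilities below are invented for illustration and are not the actual Asia CPTs): build the full joint with the chain rule, then restrict to the evidence and renormalise.
```r
p_B <- 0.45      # P(B = yes), taken from the question's marginals
p_T <- 0.0104    # P(T = yes)
p_D_given_BT <- c("yes.yes" = 0.9, "yes.no" = 0.8,   # P(D = yes | B, T): assumed values
                  "no.yes"  = 0.7, "no.no"  = 0.1)

joint <- expand.grid(B = c("yes", "no"), T = c("yes", "no"), D = c("yes", "no"))
joint$p <- with(joint, {
  pb <- ifelse(B == "yes", p_B, 1 - p_B)
  pt <- ifelse(T == "yes", p_T, 1 - p_T)
  pd_yes <- p_D_given_BT[paste(B, T, sep = ".")]
  pb * pt * ifelse(D == "yes", pd_yes, 1 - pd_yes)
})

# P(D = yes | B = yes): joint probability of query-and-evidence over probability of evidence
sum(joint$p[joint$D == "yes" & joint$B == "yes"]) / sum(joint$p[joint$B == "yes"])
```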
| null | CC BY-SA 2.5 | null | 2011-03-24T22:58:02.023 | 2011-03-25T15:18:57.543 | 2011-03-25T15:18:57.543 | 1351 | 1351 | null |
8744 | 1 | null | null | 32 | 23591 | I have some points $X=\{x_1,...,x_n\}$ in $R^p$, and I want to cluster the points so that:
- Each cluster contains an equal number of elements of $X$. (Assume that the number of clusters divides $n$.)
- Each cluster is "spatially cohesive" in some sense, like the clusters from $k$-means.
It's easy to think of a lot of clustering procedures that satisfy one or the other of these, but does anyone know of a way to get both at once?
| Clustering procedure where each cluster has an equal number of points? | CC BY-SA 3.0 | null | 2011-03-24T23:07:21.220 | 2018-05-07T02:46:47.837 | 2018-01-14T21:04:04.177 | 7828 | 3891 | [
"machine-learning",
"clustering",
"k-means",
"unsupervised-learning"
] |
8745 | 2 | null | 8742 | 3 | null | For your four example questions, it looks rather easy, if I am reading this correctly.
The first asks for the probability the individual has tuberculosis or cancer given not having tuberculosis and not having cancer. That should be 0.
The second, third and fourth ask for the probability the individual has tuberculosis or cancer given some combination of having them. That should be 1 each time.
I suspect that these are not quite what you are trying to ask. For example the proportion of individuals who have both tuberculosis and cancer might be $0.0104 + 0.055 - 0.064828 = 0.000572$. This is also suggested if the lack of an arrow between tuberculosis and cancer means they are independent and $0.0104 \times 0.055 = 0.000572$.
But there is no such clarity for example with the relationship between a visit to Asia and tuberculosis. This is not independent as there is an arrow, but more individuals have tuberculosis than visited Asia. Are we supposed to assume that everyone who visited Asia now has tuberculosis? It does not say that.
| null | CC BY-SA 2.5 | null | 2011-03-24T23:54:20.790 | 2011-03-24T23:54:20.790 | null | null | 2958 | null |
8746 | 2 | null | 8738 | 5 | null | This is called a Transfer Function Model. It has also been referred to as a Dynamic Regression Model.
| null | CC BY-SA 2.5 | null | 2011-03-24T23:59:50.460 | 2011-03-24T23:59:50.460 | null | null | 3382 | null |
8747 | 1 | null | null | 4 | 304 | I am working on something to test advertisements. I have 3 independent variables I want to test (with mixed numbers of variations of each variable), and I would like to find the best combination of the three by looking at their effect on a single dependent variable (the percentage who purchase).
```
TITLE IMAGE DESCRIPTION (Percentage who purchased)
Title1 Image1 Description1 .05%
Title2 Image2 Description1 .02%
Title3 Image? Description1 .08%
Title4 Image? Description1 .02%
```
There can be any number of variations of each variable, sometimes there will be 10 images, other times there will only be one, etc.
My goal is to find out what Titles/Images/Descriptions affect the purchase percentage.
- How can I test and analyze something like this?
- Is it possible to not have to test every single variation and still find the best combination?
(I've read about chi-square tests but I don't know if they are the proper fit)
| A test for assessing advertisement efficiency | CC BY-SA 3.0 | null | 2011-03-25T01:15:45.307 | 2011-04-12T04:34:57.197 | 2011-04-12T04:34:57.197 | 183 | null | [
"chi-squared-test",
"conjoint-analysis"
] |
8748 | 1 | 8758 | null | 8 | 272 | I work in a hospital processing infection data, and have started to read more and more articles on regression and statistics; I have realized that my mathematics background is not sufficient to handle all the maths in these articles. I plan to do some self-study.
I have seen from [here](http://www.biostat.jhsph.edu/academics/courses/651_FAQ.pdf) that calculus and linear algebra are needed for going further in biostatistics, so I am thinking of finding some textbooks for that, preferably free textbooks online.
And one more question, which one should I start first? Calculus or algebra?
I know this may not be the most relevant place to ask, but can anyone kindly give me some suggestions on where to start my reading? Thanks.
| Introduction to maths for a junior in epidemiology | CC BY-SA 2.5 | null | 2011-03-25T02:55:34.180 | 2011-03-25T12:14:55.213 | 2011-03-25T09:29:26.643 | null | 588 | [
"references"
] |
8749 | 1 | 8928 | null | 22 | 1322 | I need to implement a program that will classify records into 2 categories (true/false) based on some training data, and I was wondering which algorithm/methodology I should be looking at. There seem to be a lot of them to choose from -- Artificial Neural Network, Genetic Algorithm, Machine Learning, Bayesian Optimization etc. etc., and I wasn't sure where to start. So, my question is:
How should I choose the learning algorithm to use for my problem?
If this helps, here is the problem I need to solve.
---
The training data:
The training data consists of many rows like this:
```
Precursor1, Precursor2, Boolean (true/false)
```
The run
I will be given a bunch of precursors.
Then,
- I choose an algorithm A from different algorithms (or dynamically generate an algorithm), apply it to every possible combination of these precursors, and collect the "record"s that are emitted. The "record" consists of several key-value pairs*.
- I apply some awesome algorithm and classify these records into 2 categories (true/false).
- I will generate a table that has the same format as the training data: Precursor1, Precursor2, Boolean
And the whole program is scored based on how many true/false I got right.
*:"Record"s will look like this (hope this makes sense)
```
Record [1...*] Score
-Precursor1 -Key
-Precursor2 -Value
```
There are only a finite number of possible Keys. Records contain different subsets of these keys (some records have key1, key2, key3... other records have key3, key4... etc.).
I actually need 2 learning steps. One is for step 1: I need a module that looks at the Precursor pairs etc. and decides what algorithm to apply in order to emit a record for the comparison. Another is for step 2: I need a module that analyzes the collection of records and categorizes them into the 2 categories (true/false).
Thank you in advance!
| How to choose between learning algorithms | CC BY-SA 2.5 | null | 2011-03-25T03:04:24.757 | 2011-03-29T15:52:01.003 | 2011-03-25T13:48:01.460 | 919 | 800 | [
"machine-learning",
"bayesian",
"optimization",
"genetic-algorithms"
] |
8750 | 1 | 8751 | null | 13 | 23587 | If I have an arima object like `a`:
```
set.seed(100)
x1 <- cumsum(runif(100))
x2 <- c(rnorm(25, 20), rep(0, 75))
x3 <- x1 + x2
dummy = c(rep(1, 25), rep(0, 75))
a <- arima(x3, order=c(0, 1, 0), xreg=dummy)
print(a)
```
.
```
Series: x3
ARIMA(0,1,0)
Call: arima(x = x3, order = c(0, 1, 0), xreg = dummy)
Coefficients:
dummy
17.7665
s.e. 1.1434
sigma^2 estimated as 1.307: log likelihood = -153.74
AIC = 311.48 AICc = 311.6 BIC = 316.67
```
How do I calculate the R-squared of this regression?
| How can I calculate the R-squared of a regression with arima errors using R? | CC BY-SA 2.5 | null | 2011-03-25T03:29:12.967 | 2017-11-12T17:20:50.220 | 2017-11-12T17:20:50.220 | 11887 | 179 | [
"r",
"regression",
"time-series",
"arima",
"r-squared"
] |
8751 | 2 | null | 8750 | 24 | null | Once you have ARMA errors, it is not a simple linear regression any more. So you would have to define what you mean by $R^2$. Perhaps the squared correlation of fitted to actuals? In that case:
```
cor(fitted(a),x3)^2
```
The `fitted()` function will only work if you have loaded the `forecast` package, but it looks like you have already done that judging from the output in your question.
In your case, you don't have ARMA errors, but you do have differencing. So it is equivalent to the linear model
```
b <- lm(diff(x3) ~ diff(dummy) - 1)
summary(b)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
diff(dummy) 17.766 1.149 15.46 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.149 on 98 degrees of freedom
Multiple R-squared: 0.7092, Adjusted R-squared: 0.7062
F-statistic: 239 on 1 and 98 DF, p-value: < 2.2e-16
```
Of course, that is a very different value of $R^2$ than just using the correlations as above because it is now being computed on the differences.
You will need to define what you mean by $R^2$, and what you want to use it for. Once you move away from the usual regression set up with an intercept and iid errors, $R^2$ ceases to be uniquely defined and is not particularly useful.
| null | CC BY-SA 2.5 | null | 2011-03-25T05:19:58.357 | 2011-03-25T05:19:58.357 | null | null | 159 | null |
8752 | 1 | null | null | 9 | 385 | I want to perform quadrat count analysis on several point processes (or one marked point process), to then apply some dimensionality reduction techniques.
The marks are not identically distributed, i.e., some marks appear quite often, and some are pretty rare. Thus, I cannot simply divide my 2D space into a regular grid, because the more frequent marks will "overwhelm" the less frequent ones, masking their appearance.
Thus, I tried to build my grid such that each cell has at most N points in it (to do so, I simply divide each cell into four smaller (and equally sized) cells, recursively, until no cell has more than N points in it).
What do you think of this "normalization" technique? Is there a standard way to do such things?
| How to construct quadrats for point processes that differ greatly in frequency? | CC BY-SA 3.0 | 0 | 2011-03-25T06:00:09.103 | 2017-06-29T16:53:05.470 | 2015-12-20T18:52:58.673 | 7290 | 3699 | [
"multivariate-analysis",
"normalization",
"ecology",
"point-process"
] |
8753 | 2 | null | 8747 | 4 | null | In marketing-oriented statistics the analysis you need is called conjoint analysis. You construct a number of product "scenarios", which are various "mixes" of attributes such as your 3 attributes, each being allowed to vary across some "levels". The conjoint analysis will tell you which scenario is best and what the "utility" coefficients are for each of your attributes - these allow you to predict the popularity of scenarios which you didn't test. Algorithmically, conjoint analysis could be based on regression, multidimensional scaling or other techniques. Look for software performing conjoint analysis.
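As a very rough sketch of the regression flavour of this idea (not a substitute for proper conjoint software), you could code each attribute as a factor and fit a binomial GLM to purchases out of views; all the data values and level names below are made up:
```r
ads <- data.frame(
  title       = c("T1", "T2", "T3", "T1", "T2", "T3"),
  image       = c("I1", "I1", "I1", "I2", "I2", "I2"),
  description = c("D1", "D2", "D1", "D2", "D1", "D2"),
  purchases   = c(50, 20, 80, 30, 25, 60),
  views       = rep(1e5, 6)
)
fit <- glm(cbind(purchases, views - purchases) ~ title + image + description,
           family = binomial, data = ads)
summary(fit)   # the coefficients play the role of attribute "utilities"

# predicted purchase rate for every combination, including untested ones
grid <- expand.grid(title = unique(ads$title), image = unique(ads$image),
                    description = unique(ads$description))
grid$rate <- predict(fit, newdata = grid, type = "response")
grid[order(-grid$rate), ]
```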
| null | CC BY-SA 2.5 | null | 2011-03-25T07:09:32.060 | 2011-03-25T07:09:32.060 | null | null | 3277 | null |
8754 | 1 | 8768 | null | 24 | 5040 | An interim analysis is an analysis of the data at one or more time points prior to the official close of the study with the intention of, e.g., possibly terminating the study early.
According to Piantadosi, S. ([Clinical trials - a methodologic perspective](http://eu.wiley.com/WileyCDA/WileyTitle/productCd-0471727814.html)):
"The estimate of a treatment effect will be biased when a trial is terminated at an early stage. The earlier the decision, the larger the bias."
Can you explain this claim to me? I can easily understand that the accuracy is going to be affected, but the claim about the bias is not obvious to me...
| Why is bias affected when a clinical trial is terminated at an early stage? | CC BY-SA 2.5 | null | 2011-03-25T07:45:24.537 | 2011-03-25T16:18:10.147 | null | null | 3019 | [
"clinical-trials",
"bias"
] |
8755 | 1 | 8773 | null | 6 | 35580 | I have some data to analyze where $y$ is dependent on $x$ - a linear regression was used.
It's a question from an exam, so I think it should be solvable. The regression was used to estimate the mean miles per gallon (response) from the amount of miles driven (predictor).
I have the following statistics available:
- Correlation coefficient (0.117)
- Standard deviation (0.482)
- Number of observations (101)
An ANOVA of this regression yields (Regression and residuals, respectively):
- df: 1, 99
- SS: 0.319, 22.96
- MS: 0.319, 0.232
- F-value: 1.374, critical F-value: 0.244
The regression itself (Intercept and Slope, respectively):
- Coefficients: 6.51, -0.00024
- Standard deviations: 0.186, 0.0002
- t-Values: 34.90, -1.17
- p-Values: 1.93E-57, 0.2439
Also, the "upper and lower 95% and 99%" are given for the above regression (although I'm not sure what that means).
Now, I am asked to calculate the mean $y$ for several values $x$, that's relatively easy, I just use the coefficients. So for example, I can calculate the mean miles per gallon for 500 miles driven.
The part where I'm stuck: I need to calculate the 99% confidence interval for the mean of $y$. Obviously, this is what the example is all about - the introduction states that the mileage of a car should be estimated.
My question: How can I find out the mean of $y$ using the data provided above? (And, subsequently, the 99% confidence interval, although I seem to have the standard deviation, so that shouldn't be the problem)
| Calculating the mean using regression data | CC BY-SA 2.5 | null | 2011-03-25T08:22:34.893 | 2011-04-29T00:56:50.313 | 2011-04-29T00:56:50.313 | 3911 | 1205 | [
"regression",
"self-study",
"mean"
] |
8757 | 2 | null | 8692 | 3 | null | To get unstacking of `a` similar to `rbind(a,b)` you can simply transpose the result:
```
> t(unstack(a))
RSTO RSTR S1 S2 S3 SF
res 199665 4147 31274 1 2522 118009
```
You will get a `matrix` instead of `data.frame` though.
It is also possible to use the `cast` function from package reshape:
```
> cast(~state,data=a,value="count")
value RSTO RSTR S1 S2 S3 SF
1 (all) 199665 4147 31274 1 2522 118009
```
For the `rbind(a,b)` with added `id` column the cast will give you the following:
```
> ab <- rbind(a,b)
> ab$id=rep(1:2,each=nrow(a))
> cast(id~state,data=ab,value="count")
id RSTO RSTR S1 S2 S3 SF
1 1 199665 4147 31274 1 2522 118009
2 2 31956 11689 6702 2838 6268 672561
```
| null | CC BY-SA 2.5 | null | 2011-03-25T09:15:16.203 | 2011-03-25T09:15:16.203 | null | null | 2116 | null |
8758 | 2 | null | 8748 | 5 | null | Jeff Gill has a good book, on Essential Mathematics for Social and Political Research: [http://www.amazon.com/Essential-Mathematics-Political-Research-Analytical/dp/052168403X/ref=sr_1_2?ie=UTF8&s=books&qid=1301047912&sr=8-2](http://rads.stackoverflow.com/amzn/click/052168403X)
I found it quite useful for getting a good overview of linear algebra and calculus. He only assumes knowledge of basic algebra (i.e. x+y=2 etc).
Despite the name, it's a good read for anyone interested in bringing their maths up to the level required for reading journal articles and multivariate textbooks.
| null | CC BY-SA 2.5 | null | 2011-03-25T10:13:50.523 | 2011-03-25T10:13:50.523 | null | null | 656 | null |
8759 | 2 | null | 8754 | 1 | null | Well, my knowledge on this comes from the Harveian oration in 2008 [http://bookshop.rcplondon.ac.uk/details.aspx?e=262](http://bookshop.rcplondon.ac.uk/details.aspx?e=262)
Essentially, to the best of my recollection, the results will be biased because stopping early usually means that the treatment appeared either more or less effective than one hoped, and if this difference is positive, then you may be capitalising on chance.
I believe that p values are calculated on the basis of the planned sample size (but I could be wrong on this), and also, if you are constantly checking your results to see if any effects have been shown, you need to correct for multiple comparisons in order to ensure that you are not merely finding a chance effect.
For example, if you check 20 times for p values below .05 then statistically speaking, you are almost certain to find one significant result.
| null | CC BY-SA 2.5 | null | 2011-03-25T10:18:25.167 | 2011-03-25T10:18:25.167 | null | null | 656 | null |
8760 | 2 | null | 8690 | 7 | null | I will try to answer questions 2 to 4. Suppose that we observe a sample $(y_i,\mathbf{x}_i,z_i,\gamma_i,\varepsilon_i)$. Suppose that our model is
$$y_i=\mathbf{x}_i\beta+\gamma_iz_i+\varepsilon_i$$
and
$$E(\varepsilon_i|\mathbf{x}_i,z_i,\gamma_i)=0.$$
The least squares estimate of the regression will be
\begin{align}
(\hat{\beta},\hat{\gamma})'=\left(\sum_{i=1}^n
\begin{bmatrix} \mathbf{x}_i'\\
z_i
\end{bmatrix}
[\mathbf{x}_i,z_i]\right)^{-1}\sum_{i=1}^n\begin{bmatrix} \mathbf{x}_i'\\
z_i
\end{bmatrix}y_i
\end{align}
Now since we assumed a random sample, due to the law of large numbers we get that
\begin{align}
\frac{1}{n}\sum_{i=1}^n
\begin{bmatrix} \mathbf{x}_i'\\
z_i
\end{bmatrix}
[\mathbf{x}_i,z_i]\to
\begin{bmatrix}
E\mathbf{x}_1'\mathbf{x}_1 & E\mathbf{x_1}'z_1\\
E\mathbf{x}_1z_1 & Ez_1^2
\end{bmatrix}
\end{align}
Now
\begin{align}
\sum_{i=1}^n\begin{bmatrix} \mathbf{x}_i'\\
z_i
\end{bmatrix}y_i=\sum_{i=1}^n\begin{bmatrix} \mathbf{x}_i'\\
z_i
\end{bmatrix}\varepsilon_i+\sum_{i=1}^n
\begin{bmatrix}
\mathbf{x}_i'\mathbf{x}_i\beta +\mathbf{x}_i'z_i\gamma_i \\
\mathbf{x}_iz_i\beta + z_i^2\gamma_i
\end{bmatrix}
\end{align}
Due to the law of large numbers and our conditional expectation condition we get that
\begin{align}
\frac{1}{n}\sum_{i=1}^n\begin{bmatrix} \mathbf{x}_i'\\
z_i
\end{bmatrix}\varepsilon_i\to 0.
\end{align}
Now comes the part where we need more assumptions. Assume that $(\mathbf{x}_i,z_i)$ is independent of the $\gamma_i$. Then due to the law of large numbers
\begin{align}
\frac{1}{n}\sum_{i=1}^n\mathbf{x}_iz_i\gamma_i\to E\mathbf{x}_1z_1\gamma_1=pE\mathbf{x}_1z_1
\end{align}
where $p=P(\gamma_i=1)$. Similarly
\begin{align}
\frac{1}{n}\sum_{i=1}^nz_i^2\gamma_i\to Ez_1^2\gamma_1=pEz_1^2
\end{align}
Gathering all the results we get
\begin{align}
(\hat{\beta},\hat{\gamma})'\to \left(\begin{bmatrix}
E\mathbf{x}_1'\mathbf{x}_1 & E\mathbf{x_1}'z_1\\
E\mathbf{x}_1z_1 & Ez_1^2
\end{bmatrix}\right)^{-1}\begin{bmatrix}
E\mathbf{x}_1'\mathbf{x}_1\beta + pE\mathbf{x_1}'z_1\\
E\mathbf{x}_1z_1\beta + pEz_1^2
\end{bmatrix}=(\beta,p)'
\end{align}
So the answer to second question is yes. Simple experiment in R confirms this:
```
> g<-sample(0:1,1000,prob=c(1/3,1-1/3),replace=TRUE)
> z <- rnorm(1000)
> y<-1+g*z+rnorm(1000)/3
> dt <- data.frame(y=y,z=z)
> lm(y~z,data=dt)
Call:
lm(formula = y ~ z, data = dt)
Coefficients:
(Intercept) z
1.0050 0.6595
```
Now let us proceed to question 3. Introduce notation $X_i=(\mathbf{x}_i,z_i)$. Then
\begin{align}
\sqrt{n}\begin{bmatrix}
\hat{\beta}-\beta\\
\hat{\gamma}-p
\end{bmatrix}=\left(\frac{1}{n}\sum_{i=1}^nX_i'X_i\right)^{-1}\frac{1}{\sqrt{n}}\left(\sum_{i=1}^nX_i'\varepsilon_i+\sum_{i=1}^n
\begin{bmatrix}
\mathbf{x}_i'z_i\\\
z_i^2
\end{bmatrix}(\gamma_i-p)\right)
\end{align}
Introduce notation $Z_i=(\mathbf{x}_iz_i,z_i^2)$ and $C_i=(X_i\varepsilon_i,Z_i(\gamma_i-p))$. We have that $EC_i=0$ and since we have a random sample we can apply the multivariate central limit theorem to $C_i$:
\begin{align}
\frac{1}{\sqrt{n}}\sum_{i=1}^nC_i'\to N(0,\Sigma_C)
\end{align}
where
\begin{align}
\Sigma_C=EC_i'C_i&=
\begin{bmatrix}
EX_1'X_1\varepsilon_1^2 & EX_1'Z_1\varepsilon_1(\gamma_1-p)\\
EZ_1'X_1\varepsilon_1(\gamma_1-p) & EZ_1'Z_1(\gamma_1-p)^2
\end{bmatrix}\\
&=\begin{bmatrix}
EX_1'X_1\varepsilon_1^2 & 0\\
0 & EZ_1'Z_1(\gamma_1-p)^2
\end{bmatrix}
\end{align}
due to condition $E(\varepsilon_i|\mathbf{x}_i,z_i,\gamma_i)=0$.
Now assume further that $E(\varepsilon_i^2|\mathbf{x}_i,z_i,\gamma_i)=\sigma^2$. Denote
\begin{align}
\mathbf{A}=EX_1'X_1, \quad \mathbf{B}=EZ_1'Z_1
\end{align}
Then we get
\begin{align}
\left(\frac{1}{n}\sum_{i=1}^nX_i'X_i\right)^{-1}\frac{1}{\sqrt{n}}\left(\sum_{i=1}^nX_i'\varepsilon_i+\sum_{i=1}^n
\begin{bmatrix}
\mathbf{x}_i'z_i\\\
z_i^2
\end{bmatrix}(\gamma_i-p)\right)\to N(0,\sigma^2A+p(1-p)B)
\end{align}
so
\begin{align}
\sqrt{n}\begin{bmatrix}
\hat{\beta}-\beta\\
\hat{\gamma}-p
\end{bmatrix}\to N(0,\sigma^2A^{-1}+p(1-p)A^{-1}BA^{-1})
\end{align}
This should give an idea how to construct the test for testing $H_0:p=p_0$. I am not happy about the presence of $p$ in the variance matrix, this could pose some problems.
Given the above, the answer to the fourth question is that the random coefficient influences only the covariance matrix of the other coefficients. If there is bias, it vanishes asymptotically.
I should note that the way I derived these results is a pretty straightforward application of the LLN and CLT. If there is some elegant way to avoid this I would really like to know.
| null | CC BY-SA 2.5 | null | 2011-03-25T10:26:13.193 | 2011-03-28T12:08:49.257 | 2011-03-28T12:08:49.257 | 2116 | 2116 | null |
8762 | 2 | null | 8744 | 2 | null | I suggest the recent paper [Discriminative Clustering by Regularized Information Maximization](http://las.ethz.ch/files/gomes10discriminative.pdf) (and references therein). Specifically, Section 2 talks about class balance and cluster assumption.
| null | CC BY-SA 2.5 | null | 2011-03-25T11:07:50.053 | 2011-03-26T09:01:37.207 | 2011-03-26T09:01:37.207 | 3785 | 3785 | null |
8763 | 2 | null | 8754 | 1 | null | I would disagree with that claim, unless by "bias" Piantadosi means that part of the accuracy which is commonly called bias. The inference won't be "biased" because you chose to stop per se: it will be "biased" because you have less data. The so called "likelihood principle" states that inference should only depend on data that was observed, and not on data that might have been observed, but was not. The LP says
$$P(H|D,S,I)=P(H|D,I)$$
Where $H$ stands for the hypothesis you are testing (in the form of a proposition, such as "the treatment was effective"), $D$ stands for the data you actually observed, and $S$ stands for the proposition "the experiment was stopped early", and $I$ stands for the prior information (such as a model). Now suppose your stopping rule depends on the data $D$ and on the prior information $I$, so you can write $S=g(D,I)$. Now an elementary rule of logic is $AA=A$ - saying that A is true twice is the same thing as saying it once.
This applies here because $S=g(D,I)$ will be true whenever $D$ and $I$ are true. So in "boolean algebra" we have $D,S,I = D,g(D,I),I = D,I$. This proves the above equation of the likelihood principle. It is only if your stopping rule depends on something other than the data $D$ or the prior information $I$ that it matters.
| null | CC BY-SA 2.5 | null | 2011-03-25T11:15:05.500 | 2011-03-25T11:15:05.500 | null | null | 2392 | null |
8764 | 2 | null | 8748 | 3 | null | As far as learning the information on slide 6 in the slideshow you linked to, I would suggest [A Mathematical Primer for Social Statistics](http://www.sagepub.com/books/Book232153) by John Fox (not free but cheap, [Google book link](http://books.google.com/books?id=S4_VhrdIKS4C)). All of those sage green books are aimed at individuals with only a very brief statistics background.
If you are interested in taking that specific class I would also suggest you ask the professor for a syllabus and maybe some example problems. Although the professor did not state any preferred reference mathematical book I would imagine if pressed they could give some recommendations.
| null | CC BY-SA 2.5 | null | 2011-03-25T12:14:55.213 | 2011-03-25T12:14:55.213 | null | null | 1036 | null |
8766 | 2 | null | 8380 | 1 | null | This sounds very similar to a method I have seen from Jerry Reiter, using multiple imputation for missing data. However, I can't quite remember the name of the paper. But these terms will probably be able to get you in the right(er) direction (pardon the pun).
So basically you have three variables $X$, $Y$, and $Z$. the variable $Z$ is your "gold standard" variable, and the variable $X$ is the "bronze standard" variable. You would prefer to have observed $Y$ and $Z$ together. But unfortunately, you only observe $X$ and $Y$ together, and $X$ and $Z$ together.
you can set up the model as follows. If you knew $Z$, then $X$ would be irrelevant to making inference about $Y$ (why use bronze when you've got gold?). So you have:
$$P(Y|X,Z)=P(Y|Z)$$
But you also have a model relating $X$ and $Z$, $P(Z|X)$. Now use the law of total probability, the product rule, and the above equation to expand $P(Y|X)$:
$$P(Y|X)=\int P(Y,Z|X)dZ=\int P(Y|Z,X)P(Z|X) dZ=\int P(Y|Z)P(Z|X)dZ$$
And so you have a "weighted average" of the "gold model" $P(Y|Z)$, where the weights depend on the "error model" $P(Z|X)$ (i.e. how well the bronze standard predicts the gold standard). If the $Z$ can be quite well predicted, then $P(Z|X)$ will resemble a "delta function" and the model will essentially be a "plug in" model:
$$P(Y|X)\approx P(Y|Z)\big|_{Z=X}$$
If $Z$ is poorly estimated from $X$, then $P(Z|X)$ will be quite "flat" and this procedure will "spread out its bets" over many different models.
Jerry Reiter's method should give some more details about the actual implementation of this.
| null | CC BY-SA 2.5 | null | 2011-03-25T13:18:27.827 | 2011-03-25T13:18:27.827 | null | null | 2392 | null |
8767 | 2 | null | 8754 | 0 | null | There will be bias (in the "statistical sense") if termination of studies is not random.
In a set of experiments run to conclusion, the "early on" results of (a) some experiments that ultimately find "no effect" will show some effect (as a result of chance) and (b) some experiments that ultimately do find an effect will show "no effect" (likely as a result of lack of power). In a world in which you terminate trials, if you stop (a) more often than (b), you'll end up, across runs of studies, with bias in favor of finding an effect. (The same logic applies for effect sizes; terminating studies that show a "bigger than expected" effect early on more often than ones that show "as expected or lower" will inflate the count of findings of "big effect.")
If in fact medical trials are terminated when early results show a positive effect -- in order to make treatment available to subjects on placebo or others -- but not when early results are inconclusive, then there will be more type 1 error in such testing than there would be if all experiments were run to conclusion. But that doesn't mean the practice is wrong; the cost of type 1 error, morally speaking, might be lower than the cost of withholding treatment longer than necessary for treatments that really would be shown to work at the end of a full trial.
| null | CC BY-SA 2.5 | null | 2011-03-25T13:53:13.750 | 2011-03-25T13:58:46.677 | 2011-03-25T13:58:46.677 | 11954 | 11954 | null |
8768 | 2 | null | 8754 | 13 | null | First of all, you have to note the context: this only applies when the trial was stopped early due to interim monitoring showing efficacy/futility, not for some random outside reason. In that case the estimate of the effect size will be biased in a completely statistical sense. If you stopped for efficacy, the estimated effect will be too high (assuming it is positive), if you stopped for futility, it will be too low.
Piantadosi does give an intuitive explanation as well (Sec 10.5.4 in my edition). Suppose the true difference in two means is 1 unit. When you run a lot of trials, and look at them at your interim analysis time, some of them will have observed effect sizes much above 1, some much below 1, and most around 1 - the distribution will be wide, but symmetric. The estimated effect size at this point would not be very accurate, but would be unbiased. However you only stop and report an effect size if the difference is significant (adjusted for multiple testing), that is, the estimate is on the high side. In all other cases you keep going and don't report an estimate. That means that conditional on having stopped early, the distribution of the effect size is not symmetric, and its expected value is above the true value.
The fact that this effect is more severe early on comes from the larger hurdle for stopping the trial, thus a larger part of the distribution being thrown away during the conditioning.
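To see this conditioning effect numerically, here is a small simulation sketch (the effect size, interim sample size and stopping boundary below are arbitrary choices, purely for illustration): the true effect is 1, we look at the data at an interim point, and we only "report" the estimate when the interim test crosses a strict significance boundary.
```r
set.seed(1)
interim_estimate <- replicate(20000, {
  x <- rnorm(20, mean = 1, sd = 3)              # interim data, true effect = 1
  z <- mean(x) / (sd(x) / sqrt(20))
  if (z > qnorm(0.995)) mean(x) else NA         # report only if we stop for efficacy
})
mean(!is.na(interim_estimate))                  # fraction of trials stopped early
mean(interim_estimate, na.rm = TRUE)            # conditional estimate: well above 1
```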
| null | CC BY-SA 2.5 | null | 2011-03-25T14:10:28.553 | 2011-03-25T14:56:53.367 | 2011-03-25T14:56:53.367 | 279 | 279 | null |
8769 | 1 | null | null | 4 | 373 | I am taking a course in data mining. I am not sure how a nonlinear SVM problem, when transformed to a high-dimensional feature space, becomes a linear classification problem. It would be good if someone could provide me an intuition on this.
| Linear behaviour of nonlinear SVM in higher dimensional space | CC BY-SA 2.5 | null | 2011-03-25T14:54:47.357 | 2012-06-04T21:12:22.990 | 2011-03-25T14:58:14.440 | null | 3897 | [
"svm"
] |
8770 | 2 | null | 8769 | 2 | null | You should look at the problem the other way around. SVM algorithms solve linear classification problems in feature space. Depending on the kernel you use, the boundaries in the original space might not be linear.
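A small made-up illustration of that point in R: points labelled by whether they fall inside a circle are not linearly separable in the original coordinates, but after the explicit quadratic feature map $\phi(x)=(x_1^2,\sqrt{2}x_1x_2,x_2^2)$ a single plane separates them perfectly.
```r
set.seed(1)
x <- matrix(runif(400, -1, 1), ncol = 2)        # 200 points in the square [-1, 1]^2
y <- ifelse(rowSums(x^2) < 0.5, 1, -1)          # class depends only on the radius
phi <- cbind(x[, 1]^2, sqrt(2) * x[, 1] * x[, 2], x[, 2]^2)   # explicit feature map
# in feature space, the plane z1 + z3 = 0.5 separates the classes exactly
table(truth = y, predicted = ifelse(phi[, 1] + phi[, 3] < 0.5, 1, -1))
```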
| null | CC BY-SA 2.5 | null | 2011-03-25T15:09:20.313 | 2011-03-25T15:09:20.313 | null | null | 3362 | null |
8771 | 1 | null | null | 1 | 1673 | Given a factor with a number of levels, say for example, versions of a banner advertisement on a web page, where the measurement of interest is the click through rate (# clicks / # of times banner add was viewed), is there a principled way to determine the best performing ad, controlling for multiple testing?
Hsu's [Multiple Comparisons with the Best](https://onlinecourses.science.psu.edu/stat502/node/34) appears like the perfect solution (but it works for normally distributed continuous data). Does anyone know of an analog for binary response data? Preferably in R and preferably in a glm context where covariates could be controlled for. This last piece is not vital however as the experiment will be randomized and a one-way situation should be adequate.
| Multiple comparisons with binary data: Hsu's MCB method | CC BY-SA 2.5 | null | 2011-03-25T15:10:02.870 | 2011-03-26T17:18:46.097 | 2011-03-26T17:18:46.097 | null | 2040 | [
"r",
"multiple-comparisons",
"binary-data"
] |
8772 | 2 | null | 8742 | 1 | null | @darkcminor: I wonder if the following short tutorial would help you (look especially at the chain rule and the section on inference). I have not looked at these for a long time, but I believe with a few principles you can figure out the values of any query. Some of them will just be onerous done by hand.
[http://www.cs.ubc.ca/~murphyk/Bayes/bnintro.html](http://www.cs.ubc.ca/~murphyk/Bayes/bnintro.html)
| null | CC BY-SA 2.5 | null | 2011-03-25T15:13:51.963 | 2011-03-25T15:13:51.963 | null | null | 2040 | null |
8773 | 2 | null | 8755 | 7 | null | Contrary to @whuber's claim, the means of $x$ and $y$ are contained in the information given.
Okay, so you have the line equation
$$y_i=\alpha +x_i\beta + e_i$$
estimates $\hat{\beta}=r\frac{s_y}{s_x}$ and $\hat{\alpha}=\overline{y}-\hat{\beta}\overline{x}$.
where $r$ is the correlation. The question doesn't state whether the standard deviation (0.482) is $s_y$ or $s_x$ (the MLE standard deviation, with divisor $n$). Either way, you can work out the other from the info given, for their ratio must satisfy:
$$\frac{\hat{\beta}}{r}=\frac{s_y}{s_x}$$
The slope can't be negative if the correlation is positive, so I have assumed that you have done something incorrectly (for you have correlation of 0.117, and slope of -0.00024; this is impossible). This will affect the numbers, but not the general method. So I will assume the standard deviations are both known, but not write in the specific values. The same goes for the rest of the actual numbers.
Now the variance of $\hat{\beta}$ is given by:
$$var(\hat{\beta})=s_e^2(X^TX)^{-1}_{22}=\frac{s_e^2 (X^TX)_{11}}{|X^TX|}$$
Note that $(X^TX)_{11}=n$ and $s_e^2$ is the "mean square error". The variance of $\alpha$ is given by:
$$var(\hat{\alpha})=s_e^2(X^TX)^{-1}_{11}=\frac{s_e^2 (X^TX)_{22}}{|X^TX|}$$
Now $(X^TX)_{22}=\sum_i x_i^2 = n(s_x^2+\overline{x}^2)$
And dividing these two variances gives:
$$\frac{var(\hat{\alpha})}{var(\hat{\beta})}=\frac{(X^TX)_{22}}{(X^TX)_{11}}=\frac{n(s_x^2+\overline{x}^2)}{n}=s_x^2+\overline{x}^2$$
Now all quantities in the equation are known, except for the mean $\overline{x}$. So we can re-arrange this equation and solve for the mean:
$$\overline{x}=\pm\sqrt{\frac{var(\hat{\alpha})}{var(\hat{\beta})}-s_x^2}$$
But we know from the start that $x_i>0$ - you can't drive "negative miles". So only the positive square root is to be taken. The rest is straight-forward CI stuff. The estimate of the mean $\hat{\overline{y}}$ is given by:
$$\hat{\overline{y}}=\hat{\alpha}+\hat{\beta}\overline{x}=\hat{\alpha}+\hat{\beta}\sqrt{\frac{var(\hat{\alpha})}{var(\hat{\beta})}-s_x^2}=\overline{y}$$
And the variance is given by:
$$var(\hat{\overline{y}})=var(\hat{\alpha})+\overline{x}^2 var(\hat{\beta})+2\overline{x}cov(\hat{\alpha},\hat{\beta})$$
Now the covariance is equal to:
$$cov(\hat{\alpha},\hat{\beta})=s_e^2(X^TX)^{-1}_{21}=-\frac{s_e^2 (X^TX)_{21}}{|X^TX|}=-\frac{s_e^2 n\overline{x}}{n^2s_x^2}=-\frac{s_e^2 \overline{x}}{ns_x^2}$$
And so the variance is given by:
$$var(\hat{\overline{y}})=var(\hat{\alpha})+\overline{x}^2 var(\hat{\beta})-2\frac{s_e^2 \overline{x}^2}{ns_x^2}=var(\hat{\alpha})+\left(\frac{var(\hat{\alpha})}{var(\hat{\beta})}-s_x^2\right)\left(var(\hat{\beta})-2\frac{s_e^2}{ns_x^2}\right)$$
So you construct your $100(1-P)$% confidence interval by choosing $T_{1-P/2}^{(n-2)}$ as the $1-P/2$ quantile of the standard T distribution with $n-2$ degrees of freedom (which is effectively equal to the standard normal, as $n-2=99$), and you have:
$$CI=\overline{y}\pm T_{1-P/2}^{(n-2)}\sqrt{var(\hat{\overline{y}})}$$
And all quantities are calculable, given the information.
| null | CC BY-SA 2.5 | null | 2011-03-25T15:28:23.537 | 2011-03-25T15:28:23.537 | null | null | 2392 | null |
8774 | 1 | null | null | 5 | 3350 | What is the difference between `independence.test` in R and CATT (Cochran and Armitage) tests?
How these tests are calculated?
Where and how do we define the scores x = 0.0, 0.5, 1.0 (as used in genetic studies) for each of these tests?
| What is the difference between independence.test in R and the Cochran and Armitage trend test? | CC BY-SA 3.0 | null | 2011-03-25T16:15:37.153 | 2012-03-15T15:11:13.880 | 2012-03-15T15:11:13.880 | 930 | 3870 | [
"ordinal-data",
"genetics",
"association-measure"
] |
8775 | 2 | null | 8754 | 3 | null | Here is an illustration of how bias might arise in conclusions, and why it may not be the full story. Suppose you have a sequential trial of a drug which is expected to have a positive (+1) effect but may have a negative effect (-1). Five guinea pigs are tested one after the other. The unknown probability of a positive outcome in a single case is in fact $\frac{3}{4}$ and a negative outcome $\frac{1}{4}$.
So after five trials the probabilities of the different outcomes are
```
Outcome Probability
+5-0 = +5 243/1024
+4-1 = +3 405/1024
+3-2 = +1 270/1024
+2-3 = -1 90/1024
+1-4 = -3 15/1024
+0-5 = -5 1/1024
```
so the probability of a positive outcome overall is 918/1024 = 0.896, and the mean outcome is +2.5. Dividing by the 5 trials, this is an average of a +0.5 outcome per trial.
It is the unbiased figure, as it is also $+1\times\frac{3}{4}-1\times\frac{1}{4}$.
Suppose that in order to protect guinea pigs, the study will be terminated if at any stage the cumulative outcome is negative. Then the probabilities become
```
Outcome Probability
+5-0 = +5 243/1024
+4-1 = +3 324/1024
+3-2 = +1 135/1024
+2-3 = -1 18/1024
+1-2 = -1 48/1024
+0-1 = -1 256/1024
```
so the probability of a positive outcome overall is 702/1024 = 0.6855, and the mean outcome is +1.953. If we looked at the mean value of outcome per trial, as in the previous calculation, i.e. using $\frac{+5}{5}$, $\frac{+3}{5}$, $\frac{+1}{5}$, $\frac{-1}{5}$, $\frac{-1}{3}$ and $\frac{-1}{1}$, then we would get +0.184.
These are the senses in which there is bias by stopping early in the second scheme, and the bias is in the predicted direction. But it is not the full story.
Why do whuber and probabilityislogic think stopping early should produce unbiased results? We know the expected outcome of the trials in the second scheme is +1.953. The expected number of trials turns out to be 3.906. So dividing one by the other we get +0.5, exactly as before and what was described as unbiased.
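For anyone who wants to check these figures numerically, here is a small simulation sketch of the second scheme (success probability 3/4, stopping as soon as the cumulative outcome goes negative):
```r
set.seed(7)
one_run <- function() {
  total <- 0
  for (i in 1:5) {
    total <- total + sample(c(1, -1), 1, prob = c(3/4, 1/4))
    if (total < 0) return(c(outcome = total, trials = i))
  }
  c(outcome = total, trials = 5)
}
sims <- replicate(1e5, one_run())
mean(sims["outcome", ])                          # about +1.95
mean(sims["trials", ])                           # about 3.91
mean(sims["outcome", ]) / mean(sims["trials", ]) # about +0.5, the unbiased per-trial rate
```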
| null | CC BY-SA 2.5 | null | 2011-03-25T16:18:10.147 | 2011-03-25T16:18:10.147 | null | null | 2958 | null |
8776 | 2 | null | 8718 | 1 | null | Another tool that may be useful is the standardized regression coefficient, or at least a rough-and-ready pseudo-version. You can obtain one such version by multiplying your obtained coefficient by the standard deviation of the predictor. (There are other versions and some debate about the best one, e.g. see Menard 2002, Applied Logistic Regression Analysis ([Google books](http://books.google.com/books?id=EAI1QmUUsbUC&printsec=frontcover&dq=menard%20logistic&source=bl&ots=4SBKH0mVJS&sig=br7joGc43N7NEDIAIU7ItVCTPpU&hl=en&ei=rcOMTb6ZBI6y0QHD24W3Cw&sa=X&oi=book_result&ct=result&resnum=3&ved=0CC0Q6AEwAg))). This will give you a way to assess the strength of the effect across studies.
| null | CC BY-SA 3.0 | null | 2011-03-25T16:37:41.953 | 2013-10-02T16:54:52.130 | 2013-10-02T16:54:52.130 | 7290 | 2669 | null |
8777 | 1 | null | null | 25 | 21498 | In [genome-wide association studies](https://en.wikipedia.org/wiki/Genome-wide_association_study) (GWAS):
- What are the principal components?
- Why are they used?
- How are they calculated?
- Can a genome-wide association study be done without using PCA?
| In genome-wide association studies, what are principal components? | CC BY-SA 4.0 | null | 2011-03-25T16:39:10.923 | 2020-09-14T18:12:48.993 | 2018-11-13T13:12:32.697 | 28666 | 3870 | [
"pca",
"genetics",
"gwas"
] |
8778 | 2 | null | 8380 | 1 | null | One approach would be to use the second dataset to make a secondary model that predicts the exact variables as a function of the noisy variables and use this in operation to provide the inputs for the primary model trained on the first dataset (which predicts the target given the exact variables). However, to do this properly you would want to propagate the uncertainty of the secondary model through the primary model when making predictions. Say you used a multivariate regression model (as the uncertainties in the conditional estimates of the exact variables are unlikely to be uncorrelated), you would have a multi-variate normal distribution for the plausible values of the exact variables conditioned on the noisy ones. This could then be sampled a few thousand times and the output of the primary model averaged over that sample to get a more robust estimate for the target. I suspect if you kept everything linear and normal there would be an analytic solution so you didn't have to sample.
The regression may at least help with the bias of the noise, if nothing else.
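A minimal sketch of that idea in R (all variable names and the linear/Gaussian choices here are assumptions; with a linear primary model the average collapses to the plug-in prediction, but the same machinery works for nonlinear models):
```r
set.seed(42)
dat2 <- data.frame(x = rnorm(200))                      # noisy variable observed alongside exact one
dat2$z <- dat2$x + rnorm(200, sd = 0.5)
dat1 <- data.frame(z = rnorm(300))                      # exact variable observed alongside target
dat1$y <- 2 * dat1$z + rnorm(300)

primary   <- lm(y ~ z, data = dat1)                     # model for the target given exact z
secondary <- lm(z ~ x, data = dat2)                     # model for exact z given noisy x

predict_y <- function(x_new, ndraw = 2000) {
  mu    <- predict(secondary, newdata = data.frame(x = x_new))
  zdraw <- rnorm(ndraw, mu, summary(secondary)$sigma)   # plausible exact values given x_new
  mean(predict(primary, newdata = data.frame(z = zdraw)))
}
predict_y(1.3)
```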
| null | CC BY-SA 2.5 | null | 2011-03-25T16:48:20.410 | 2011-03-25T16:48:20.410 | null | null | 887 | null |
8779 | 1 | null | null | 20 | 11452 | This problem is actually about fire detection, but it is strongly analogous to some radioactive decay detection problems. The phenomenon being observed is both sporadic and highly variable; thus, a time series will consist of long strings of zeroes interrupted by variable values.
The objective is not merely capturing events (breaks in the zeroes), but quantitative characterization of the events themselves. However, the sensors are limited, and thus will sometimes record zero even if the "reality" is non-zero. For this reason, zeroes must be included when comparing sensors.
Sensor B might be more sensitive than Sensor A, and I would like to be able to describe that statistically. For this analysis, I do not have "truth," but I do have a Sensor C, which is independent of Sensors A&B. Thus my expectation is that better agreement between A/B and C indicates better agreement with "truth." (This may seem shaky, but you'll have to trust me-- I'm on solid ground here, based on what is known from other studies about the sensors).
The problem, then, is how to quantify "better agreement of time series." Correlation is the obvious choice, but will be affected by all those zeroes (which cannot be left out), and of course disproportionately affected by the maximum values. RMSE could also be calculated, but would be strongly weighted toward the behavior of the sensors in the near-zero case.
Q1: What is the best way to apply a logarithmic scaling to non-zero values that will then be combined with zeroes in a time-series analysis?
Q2: What "best practices" can you recommend for a time-series analysis of this type, where behavior at non-zero values is the focus, but zero values dominate and cannot be excluded?
| Analysis of time series with many zero values | CC BY-SA 2.5 | null | 2011-03-25T18:35:44.733 | 2017-11-08T11:21:00.140 | 2017-11-08T11:21:00.140 | 1352 | 3898 | [
"time-series",
"correlation",
"crostons-method",
"intermittent-time-series"
] |
8780 | 2 | null | 8777 | 30 | null | In this particular context, PCA is mainly used to account for population-specific variations in alleles distribution on the SNPs (or other DNA markers, although I'm only familiar with the SNP case) under investigation. Such "population substructure" mainly arises as a consequence of varying frequencies of minor alleles in genetically distant ancestries (e.g. japanese and black-african or european-american). The general idea is well explained in [Population Structure and Eigenanalysis](http://www.genome.duke.edu/education/seminars/journal-club/documents/PopStructure.pdf), by Patterson et al. (PLoS Genetics 2006, 2(12)), or the Lancet's special issue on genetic epidemiology (2005, 366; most articles can be found on the web, start with Cordell & Clayton, [Genetic Association Studies](http://www.molmed.nl/uploads/abstracts/127/Lancet%20genetic%20epi%203.pdf)).
The construction of principal axes follows from the classical approach to PCA, which is applied to the scaled matrix (individuals by SNPs) of observed genotypes (AA, AB, BB; say B is the minor allele in all cases), to the exception that an additional normalization to account for population drift might be applied. It all assumes that the frequency of the minor allele (taking value in {0,1,2}) can be considered as numeric, that is we work under an additive model (also called allelic dosage) or any equivalent one that would make sense. As the successive orthogonal PCs will account for the maximum variance, this provides a way to highlight groups of individuals differing at the level of minor allele frequency. The software used for this is known as [Eigenstrat](https://reich.hms.harvard.edu/software). It is also available in the `egscore()` function from the [GenABEL](http://cran.r-project.org/web/packages/GenABEL/) R package (see also [GenABEL.org](http://www.genabel.org/)). It is worth to note that other methods to detect population substructure were proposed, in particular model-based cluster reconstruction (see references at the end). More information can be found by browsing the [Hapmap](http://hapmap.ncbi.nlm.nih.gov/) project, and available tutorial coming from the [Bioconductor](http://www.bioconductor.org) project. (Search for Vince J Carey or David Clayton's nice tutorials on Google).
Apart from clustering subpopulations, this approach can also be used for detecting outliers which might arise in two cases (AFAIK): (a) genotyping errors, and (b) when working with an homogeneous population (or assumed so, given self-reported ethnicity), individuals exhibiting unexpected genotype. What is usually done in this case is to apply PCA in an iterative manner, and remove individuals whose scores are below $\pm 6$ SD on at least one of the first 20 principal axes; this amounts to "whiten" the sample, in some sense. Note that any such measure of genotype distance (this also holds when using Multidimensional Scaling in place of PCA) will allow to spot relatives or siblings. The [plink](http://pngu.mgh.harvard.edu/%7Epurcell/plink/) software provides additional methods, see the section on [Population stratification](http://pngu.mgh.harvard.edu/%7Epurcell/plink/strat.shtml) in the on-line help.
Considering that eigenanalysis allows to uncover some structure at the level of the individuals, we can use this information when trying to explain observed variations in a given phenotype (or any distribution that might be defined according to a binary criterion, e.g. disease or case-control situation). Specifically, we can adjust our analysis with those PCs (i.e., the factor scores of individuals), as illustrated in [Principal components analysis corrects for stratification in genome-wide association studies](http://www.biostat.jhsph.edu/%7Eiruczins/teaching/misc/gwas/papers/price2006.pdf), by Price et al. (Nature Genetics 2006, 38(8)), and later work (there was a nice picture showing axes of genetic variation in Europe in [Genes mirror geography within Europe; Nature 2008](https://www.nature.com/nature/journal/v456/n7218/full/nature07331.html); Fig 1A reproduced below). Note also that another solution is to carry out a stratified analysis (by including ethnicity in an GLM)--this is readily available in the [snpMatrix](http://www.bioconductor.org/packages/2.3/bioc/html/snpMatrix.html) package, for example.

References
- Daniel Falush, Matthew Stephens, and Jonathan K Pritchard (2003). Inference of population structure using multilocus genotype data: linked loci and correlated allele frequencies. Genetics, 164(4): 1567–1587.
- B Devlin and K Roeder (1999). Genomic control for association studies. Biometrics, 55(4): 997–1004.
- JK Pritchard, M Stephens, and P Donnelly (2000). Inference of population structure using multilocus genotype data. Genetics, 155(2): 945–959.
- Gang Zheng, Boris Freidlin, Zhaohai Li, and Joseph L Gastwirth (2005). Genomic control for association studies under various genetic models. Biometrics, 61(1): 186–92.
- Chao Tian, Peter K. Gregersen, and Michael F. Seldin1 (2008). Accounting for ancestry: population substructure and genome-wide association studies. Human Molecular Genetics, 17(R2): R143-R150.
- Kai Yu, Population Substructure and Control Selection in Genome-wide Association Studies.
- Alkes L. Price, Noah A. Zaitlen, David Reich and Nick Patterson (2010). New approaches to population stratification in genome-wide association studies, Nature Reviews Genetics
- Chao Tian, et al. (2009). European Population Genetic Substructure: Further Definition of Ancestry Informative Markers for Distinguishing among Diverse European Ethnic Groups, Molecular Medicine, 15(11-12): 371–383.
| null | CC BY-SA 4.0 | null | 2011-03-25T19:32:21.563 | 2020-09-14T18:12:48.993 | 2020-09-14T18:12:48.993 | 930 | 930 | null |
8781 | 2 | null | 4276 | 1 | null | Since there is quite some variation in how the steps look, you could try a statistical approach, which could, for example, be done in the following steps:
- Generate the feature vector.
Filter the signal with a number of filters, each having a different frequency response. A set of (Haar) wavelets might be a reasonable starting point. If your original signal has N samples and you have K filters, this filtering should result in an N-by-K matrix. Take the element-wise square to determine the energy in each of the signals.
- Generate ground truth.
Write down the sample numbers which mark the start of each step, store them in vector S.
Use this to make ground truth output data: Y = zeros(N,1); Y(S) = 1.
- Train your classifier.
Now you can apply a generic classification algorithm (e.g. LDA or logistic regression) to the results of step 1 and 2. Matlab implementations should not be hard to find.
- Apply you classifier on new data.
For new data, repeat step 1. This can then be used as input for the classifier resulting from step 3. It might be necessary to post-process this output, for example by low-pass filtering it. Setting a threshold with some hysteresis should then give you the start of each step.
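If it helps, here is a rough R sketch of steps 1-3 on a synthetic signal (every number and name below is made up purely to show the shape of the procedure, not a tuned detector):
```r
set.seed(5)
N <- 1000
step_starts <- sort(sample(100:900, 8))                 # "true" step start samples
sig <- rnorm(N, sd = 0.2)
for (s in step_starts) sig[s:(s + 20)] <- sig[s:(s + 20)] + sin(seq(0, pi, length.out = 21))

# step 1: signal energy in a few smoothing windows (a crude stand-in for a filter bank)
energy <- sapply(c(5, 11, 21), function(k) stats::filter(sig^2, rep(1/k, k), sides = 2))
energy[is.na(energy)] <- 0

# step 2: ground truth, 1 at the start of each step
y <- rep(0, N); y[step_starts] <- 1

# step 3: a simple classifier
fit <- glm(y ~ energy, family = binomial)
plot(fitted(fit), type = "l"); abline(v = step_starts, col = "red")  # high scores near true steps
```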
| null | CC BY-SA 2.5 | null | 2011-03-25T20:20:58.920 | 2011-03-25T20:20:58.920 | null | null | 3867 | null |
8782 | 2 | null | 8779 | 12 | null | To restate your question: “How does the analyst deal with long periods of no demand that follow no specific pattern?”
The answer to your question is Intermittent Demand Analysis or Sparse Data Analysis. This arises normally when you have "lots of zeros" relative to the number of non-zeros. The issue is that there are two random variables: the time between events and the expected size of the event. As you said, the autocorrelation (acf) of the complete set of readings is meaningless due to the sequence of zeroes falsely enhancing the acf. You can pursue threads like "Croston's method” which is a model-based procedure rather than a data-based procedure. Croston's method is vulnerable to outliers and changes/trends/level shifts in the rate of demand i.e. the demand divided by the number of periods since the last demand. A much more rigorous approach might be to pursue "Sparse Data - Unequally Spaced Data" or searches like that. A rather ingenious solution was suggested to me by Prof. Ramesh Sharda of OSU and I have been using it for a number of years in my consulting practice.
If a series has time points where sales arise and long periods of time where no sales arise it is possible to convert sales to sales per period by dividing the observed sales by the number of periods of no sales thus obtaining a rate. It is then possible to identify a model between rate and the interval between sales culminating in a forecasted rate and a forecasted interval. You can find out more about this at autobox.com and google "intermittent demand"
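For what it's worth, Croston's method is implemented in R's forecast package; a minimal sketch on a simulated intermittent series (the data below are made up) would be:
```r
library(forecast)
set.seed(3)
y <- rpois(60, lambda = 0.3) * round(runif(60, 1, 5))   # mostly zeros, occasional demands
fc <- croston(y, h = 10)
fc$mean    # forecast demand rate per period
```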
| null | CC BY-SA 2.5 | null | 2011-03-25T23:55:45.633 | 2011-03-25T23:55:45.633 | null | null | 3382 | null |
8783 | 1 | null | null | 3 | 3949 | I am wondering that how one can calculate [KL-divergence](http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) on two probability distributions. For example, if we have
```
t1 = 0.4, 0.2, 0.3, 0.05, 0.05
t2 = 0.23, 0, 0.14, 0.17
```
The formula is a bit complicated for me :(
| KL divergence calculation | CC BY-SA 2.5 | null | 2011-03-26T00:33:57.720 | 2011-04-08T20:44:20.170 | 2011-04-08T20:44:20.170 | 919 | 3900 | [
"distributions",
"machine-learning",
"distance-functions",
"information-retrieval"
] |
8784 | 1 | null | null | 14 | 4303 | This is a follow-up question from [the one I asked a couple of days ago](https://stats.stackexchange.com/questions/8718/comparing-logistic-regression-coefficients-across-models/). I feel it puts a different slant on the issue, so I have listed it as a new question.
The question is: can I compare the magnitude of coefficients across models with different dependent variables? For example, on a single sample say I want to know whether the economy is a stronger predictor of votes in the House of Representatives or for President. In this case, my two dependent variables would be the vote in the House (coded 1 for Democrat and 0 for Republican) and vote for President (1 for Democrat and 0 for Republican) and my independent variable is the economy. I'd expect a statistically significant result in both offices, but how do I assess whether it has a 'bigger' effect in one more than the other? This might not be a particularly interesting example, but I'm curious about whether there is a way to compare. I know one can't just look at the 'size' of the coefficient. So, is comparing coefficients on models with different dependent variables possible? And, if so, how can it be done?
If any of this doesn't make sense, let me know. All advice and comments are appreciated.
| Comparing logistic coefficients on models with different dependent variables? | CC BY-SA 2.5 | null | 2011-03-26T01:07:46.037 | 2011-04-28T00:05:39.210 | 2017-04-13T12:44:23.203 | -1 | 3883 | [
"regression",
"logistic"
] |
8785 | 2 | null | 8783 | 4 | null | Using brute force and the first formula [here](http://en.wikipedia.org/wiki/Jensen%E2%80%93Shannon_divergence) based on the first formula for the [Kullback-Leibler divergence](http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence), you are starting from two multisets each with 5 values, 3 of which are shared between them. So the combination of them is the multiset
$$M=\{0, 0.05, 0.05, 0.1, 0.2, 0.2, 0.3, 0.3, 0.4, 0.4\}$$
so using $D_{\mathrm{KL}}(P\|Q) = \sum_i P(i) \log \frac{P(i)}{Q(i)}$
$$JSD(t_1 \parallel t_2)= \frac{1}{2}D_{\mathrm{KL}}(t_1 \parallel M)+\frac{1}{2}D_{\mathrm{KL}}(t_2 \parallel M)$$
$$=\frac{1}{2}\left(1\cdot\frac{2}{5} \log\left(\frac{2/5}{2/10}\right) +3\cdot\frac{1}{5} \log\left(\frac{1/5}{2/10}\right)\right) $$
$$+\frac{1}{2}\left(2\cdot\frac{1}{5} \log\left(\frac{1/5}{1/10}\right) +3\cdot\frac{1}{5} \log\left(\frac{1/5}{2/10}\right)\right) $$
$$= \dfrac{2}{5}\log(2) \approx 0.277$$
though you may want to check this. Other calculations, such as using Shannon entropy should produce the same result.
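If it helps to see the formula as code, here is a small R helper treating the two lists as probability vectors over a common support (this requires padding/renormalising t2 so it has the same length and sums to 1, which is an assumption on my part); it also shows why the symmetrised Jensen-Shannon version is often preferred when one distribution has zeros:
```r
kl_div <- function(p, q) sum(ifelse(p > 0, p * log(p / q), 0))

p <- c(0.40, 0.20, 0.30, 0.05, 0.05)
q <- c(0.23, 0.00, 0.14, 0.17, 0.46)     # t2 padded with 0.46 so it sums to 1 (assumed)

kl_div(p, q)                             # infinite: q gives probability 0 where p does not
m <- (p + q) / 2
0.5 * kl_div(p, m) + 0.5 * kl_div(q, m)  # Jensen-Shannon divergence, always finite
```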
| null | CC BY-SA 2.5 | null | 2011-03-26T01:09:49.170 | 2011-03-26T01:09:49.170 | null | null | 2958 | null |
8786 | 2 | null | 8749 | 8 | null | I would use probability theory to start with, and then pick whichever algorithm best calculates what probability theory tells you to do. So you have training data $T$, and some new precursors $X$, and an object to classify $Y$, as well as your prior information $I$.
So you want to know about $Y$. Then probability theory says, just calculate its probability, conditional on all the information you have available to you.
$$P(Y|T,X,I)$$
Now we can use any of the rules of probability theory to manipulate this into things that we do know how to calculate. So using Bayes theorem, you get:
$$P(Y|T,X,I)=\frac{P(Y|T,I)P(X|Y,T,I)}{P(X|T,I)}$$
Now $P(Y|T,I)$ is usually easy - unless your prior information can tell you something about $Y$ beyond the training data (e.g. correlations), it is given by the rule of succession - or basically the observed fraction of times $Y$ was true in the training data set.
For the second term $P(X|Y,T,I)$ - this is your model, and where most of your work will go, and where different algorithms will do different things. $P(X|T,I)$ is a bit of a vicious beast to calculate, so we do the following trick to avoid having to do this: take the odds of $Y$ against $\overline{Y}$ (i.e. not $Y$). And we get:
$$O(Y|T,X,I)=\frac{P(Y|T,X,I)}{P(\overline{Y}|T,X,I)}=\frac{P(Y|T,I)}{P(\overline{Y}|T,I)}\frac{P(X|Y,T,I)}{P(X|\overline{Y},T,I)}$$
Now you basically need a decision rule - when the odds/probability is above a certain threshold, you will classify $Y$ as "true", otherwise you will classify it as "false". Now nobody can really help you with this - it is a decision which depends on the consequences of making right and wrong decisions. This is a subjective exercise, and only the proper context can answer this. Of course the "subjectivity" will only matter if there is high uncertainty (i.e. you have a "crap" model/data which can't distinguish the two very well).
The second quantity - the model $P(X|Y,T,I)$ is a "predictive" model. Suppose the prior information indicates a single model which depends on parameter $\theta_{Y}$. Then the quantity is given by:
$$P(X|Y,T,I)=\int P(X,\theta_{Y}|Y,T,I) d\theta_{Y} = \int P(X|\theta_{Y},Y,T,I)P(\theta_{Y}|Y,T,I) d\theta_{Y}$$
Now if your model is of the "iid" variety, then $P(X|\theta_{Y},Y,T,I)=P(X|\theta_{Y},Y,I)$. But if you have a dependent model, such as an autoregressive one, then $T$ may still matter. And $P(\theta_{Y}|Y,T,I)$ is the posterior distribution for the parameters in the model - this is the part that the training data would determine. And this is probably where most of the work will go.
But what if the model is not known with certainty? Well, it just becomes another nuisance parameter to integrate out, just as was done for $\theta_{Y}$. Call the ith model $M_i$ and its set of parameters $\theta^{(i)}_{Y}$, and the equation becomes:
$$P(X|Y,T,I)= \sum_{i}P(M_{i}|Y,T,I)\int P(X|\theta_{Y}^{(i)},M_{i},Y,T,I)P(\theta_{Y}^{(i)}|M_{i},Y,T,I) d\theta_{Y}^{(i)}$$
Where
$$P(M_{i}|Y,T,I)=P(M_{i}|Y,I)\int P(\theta_{Y}^{(i)}|M_{i},Y,I)P(T|\theta_{Y}^{(i)},M_{i},Y,I) d\theta_{Y}^{(i)}$$
(NOTE: $M_i$ is a proposition of the form "the ith model is the best in the set that is being considered". and no improper priors allowed if you are integrating over models - the infinities do not cancel out in this case, and you will be left with non-sense)
Now, up to this point, all results are exact and optimal (this is option 2 - apply some awesome algorithm to the data). But this is a daunting task to undertake. In the real world, the mathematics required may not be feasible to do in practice - so you will have to compromise. You should always "have a go" at doing the exact equations, for any maths that you can simplify will save you time at the PC. However, this first step is important, because this sets "the target", and it makes it clear what is to be done. Otherwise you are left (as you seem to be) with a whole host of potential options with nothing to choose between them.
Now at this stage, we are still in "symbolic logic" world, where nothing really makes sense. So you need to link these to your specific problem:
- $P(M_{i}|Y,I)$ is the prior probability for the ith model - generally will be equal for all i.
- $P(\theta_{Y}^{(i)}|M_{i},Y,I)$ is the prior for the parameters in the ith model (must be proper!)
- $P(T|\theta_{Y}^{(i)},M_{i},Y,I)$ is the likelihood function for the training data, given the ith model
- $P(\theta_{Y}^{(i)}|T,M_{i},Y,I)$ is the posterior for the parameters in the ith model, conditional on the training data.
- $P(M_{i}|Y,T,I)$ is the posterior for the ith model conditional on the training data
There will be another set of equations for $\overline{Y}$
Note that the equations will simplify enormously if a) one model is a clear winner, so that $P(M_{j}|Y,T,I)\approx 1$ and b) within this model, its parameters are very accurate, so the integrand resembles a delta function (and integration is very close to substitution or plug-in estimates). If both these conditions are met you have:
$$P(X|Y,T,I)\approx P(X|\theta_{Y}^{(j)},M_{j},Y,T,I)_{\theta_{Y}^{(j)}=\hat{\theta}_{Y}^{(j)}}$$
Which is the "standard" approach to this kind of problem.
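As a toy illustration of that plug-in approximation (my sketch, not part of the derivation above: it assumes a single Gaussian model for a one-dimensional precursor within each class, and made-up training data):
```
set.seed(1)
train_x <- c(rnorm(50, 0), rnorm(50, 2))       # hypothetical training precursors X
train_y <- rep(c(FALSE, TRUE), each = 50)      # hypothetical training labels Y

prior_odds <- mean(train_y) / mean(!train_y)   # P(Y|T,I) / P(not Y|T,I)

# plug-in ("hat") estimates of the class-conditional parameters
m1 <- mean(train_x[train_y]);  s1 <- sd(train_x[train_y])
m0 <- mean(train_x[!train_y]); s0 <- sd(train_x[!train_y])

x_new <- 1.5
lik_ratio <- dnorm(x_new, m1, s1) / dnorm(x_new, m0, s0)
posterior_odds <- prior_odds * lik_ratio
posterior_odds > 1                             # decision rule at even odds
```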
| null | CC BY-SA 2.5 | null | 2011-03-26T05:43:34.727 | 2011-03-27T00:34:48.983 | 2011-03-27T00:34:48.983 | 2392 | 2392 | null |
8787 | 1 | null | null | 5 | 548 | I'm looking for any reference on the Generalized Linear Latent and Mixed Model (GLLAMM) for crossed factors involving both the measurement model and the structural model of GLLAMM (see the problem below). Any help in this regard will be highly appreciated.
Problem: A researcher observed four responses Y1, Y2, Y3, and Y4 along with three covariates X1, X2, and X3 from an experiment involving a × b treatment combinations from a fixed factor A with a levels and a random factor B with b levels. Based on past experience, it is assumed that the four responses are correlated and that Y1 is also influenced by the other three (Y2, Y3, and Y4). This data set can be analyzed with [GLLAMM](http://www.gllamm.org/) in Stata.
| Generalized linear latent and mixed model (GLLAMM) for crossed factors | CC BY-SA 2.5 | null | 2011-03-26T05:58:44.600 | 2012-09-20T00:48:23.793 | 2012-09-20T00:31:58.377 | 5739 | 3903 | [
"mixed-model",
"stata",
"gllamm"
] |
8788 | 1 | null | null | 2 | 2733 | I have a problem when I run the Kolmogorov-Smirnov test.
I have two samples of daily prices whose distributions I estimated with density(). Now I would like to compare these two distributions with each other.
data.1:
```
Date price
01.01.2010 1.2
02.01.2010 1.5
etc.
```
data.2:
```
Date price
01.01.2009 0.1
02.01.2009 0.05
etc.
```
For the probability density, I calculated
```
density.1 <- density(data.1$price)
density.2 <- density(data.2$price)
```
Now I wanted to run the KS-test:
```
ks <- ks.test(density.1$x, density.2$x)
```
and got the result p=1, which would suggest that the two distributions are the same. However, it is already observable by eye that they differ quite heavily from each other.
Where is my mistake?
Thank you, Dani
| Goodness-of-fit test using Kolmogorov-Smirnov | CC BY-SA 2.5 | null | 2011-03-25T11:57:14.800 | 2011-04-08T20:44:41.933 | 2011-04-08T20:44:41.933 | 919 | null | [
"r",
"distributions"
] |
8789 | 2 | null | 8788 | 5 | null | ks.test receives values, not densities. So you don't need to call density().
Probably what you should do is simply:
```
ks.test(data.1$price, data.2$price)
```
and the reason why you get p=1 is that you passed `density.1$x` instead of `density.1$y`.
`density(foo)$x` gives the n coordinates of the points at which the density is estimated.
| null | CC BY-SA 2.5 | null | 2011-03-25T12:23:49.427 | 2011-03-26T14:54:51.527 | 2011-03-26T14:54:51.527 | 919 | 2280 | null |
8790 | 2 | null | 8788 | 7 | null | First of all, you don't calculate the KS statistic on an estimated density, as the KS test works on the empirical cumulative distribution function (ecdf). So you should pass the raw data:
```
ks.test(data.1$price, data.2$price)
```
Second, `$x` is not the density, but the evenly spaced grid of evaluation points constructed by the density function. So of course the two grids are rather alike if the ranges of the data are alike.
```
x <- rnorm(100,3)
y <- runif(100,min(x),max(x))
xx <- density(x)$x
yy <- density(y)$x
ks.test(xx,yy)
qqplot(xx,yy)
ks.test(x,y)
qqplot(x,y)
```
Last, please read up on a test before using it. Many mistakes in statistics are made by people who have no clue what they're actually doing. I don't say this to be rude, I just see things like this happen on an almost daily basis...
| null | CC BY-SA 2.5 | null | 2011-03-25T12:29:53.250 | 2011-03-25T12:29:53.250 | null | null | 1124 | null |
8791 | 1 | null | null | 6 | 104 | The best way I can think to describe this question is by example: Imagine there is a ship sailing around the Pacific Ocean on an unknown path (possibly random). Other ships passing by sometimes see this ship and radio in its location to me. Some of these scout ships have better instruments or a more trustworthy crew than others, so I assign an accuracy weight to each of them. If the ship were static, it would be a simple problem to collect all of the reports and calculate an area where the ship is located (with high probability). How can I adapt this for when the ship is constantly moving (assuming a fixed speed)? Obviously newer reports need to be given more weight and older reports need to be "faded out" so the ship's calculated location changes over time.
I started trying to design something like this but I think I'm making it too complicated.
Is there a name for this sort of problem? Any suggestions for a good method to solve it? Thanks!
| Weighted discrete measurements of a value changing over time | CC BY-SA 2.5 | null | 2011-03-26T01:54:14.677 | 2011-03-30T19:16:40.193 | 2011-03-30T19:16:40.193 | 919 | 180918 | [
"time-series",
"predictive-models"
] |
8792 | 2 | null | 8791 | 2 | null | Sounds like you might want to look at [(Weighted) Moving Average](http://en.wikipedia.org/wiki/Moving_average).
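(My illustrative sketch of that idea in R, not part of the original suggestion: combine an exponential down-weighting of older reports with the per-scout reliability weights; `half_life` and all the numbers below are made up.)
```
est_position <- function(t_now, report_t, report_pos, reliability, half_life) {
  age_w <- 0.5 ^ ((t_now - report_t) / half_life)  # newer reports count more
  w <- age_w * reliability
  sum(w * report_pos) / sum(w)                     # weighted average (one coordinate)
}

# e.g. three reports at times 1, 5 and 9; estimate the position at time 10
est_position(10, c(1, 5, 9), c(140.2, 141.0, 141.9), c(0.5, 1.0, 0.8), half_life = 3)
```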
| null | CC BY-SA 2.5 | null | 2011-03-26T01:59:01.030 | 2011-03-26T01:59:01.030 | null | null | null | null |
8793 | 2 | null | 8784 | 1 | null | Let us say the interest lies in comparing two groups of people: those with $X_{1} = 1$ and those with $X_{1} = 0$.
The exponential of $\beta_{1}$, the corresponding coefficient, is interpreted as the ratio of the odds of success for those with $X_{1} = 1$ over the odds of success for those with $X_{1} = 0$, conditional on the other variables in the model.
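(A hypothetical illustration in R, not from the original argument — `dat`, `y`, `x1` and `x2` are made-up names for a data frame, a binary outcome and two predictors:)
```
fit <- glm(y ~ x1 + x2, family = binomial, data = dat)
exp(coef(fit))["x1"]   # odds ratio for x1 = 1 vs x1 = 0, holding x2 fixed
```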
So, if you have two models with different dependent variables, then the interpretation of $\beta_{1}$ changes since it is not conditioned upon the same set of variables. As a consequence, the comparison is not direct...
| null | CC BY-SA 2.5 | null | 2011-03-26T10:34:21.970 | 2011-03-26T10:34:21.970 | null | null | 3019 | null |
8794 | 2 | null | 8784 | 2 | null | I assume that by "my independent variable is the economy" you're using shorthand for some specific predictor.
At one level, I see nothing wrong with making a statement such as
> X predicts Y1 with an odds ratio of _ and a 95% confidence interval of [ _ , _ ],
> while
> X predicts Y2 with an odds ratio of _ and a 95% confidence interval of [ _ , _ ].
@dmk38's recent suggestions look very helpful in this regard.
You might also want to standardize the coefficients to facilitate comparison.
At another level, beware of taking inferential statistics (standard errors, p-values, CIs) literally when your sample constitutes a nonrandom sample of the population of years to which you might want to generalize.
| null | CC BY-SA 3.0 | null | 2011-03-26T12:21:03.737 | 2011-04-23T15:38:48.887 | 2011-04-23T15:38:48.887 | 2669 | 2669 | null |
8795 | 1 | 8809 | null | 9 | 602 | If you flip a coin and get 268 heads and 98 tails, you can test whether the coin is fair in several ways. A simple, heuristic observation would most likely conclude that such a coin is unfair. I've calculated the p-value in R with:
```
> coin <- pbinom(98, 366, 0.5)
> coin*2
[1] 2.214369e-19
```
This value is smaller than .05, ergo we reject the hypothesis that it's a fair coin.
But what if you were told that the same coin landed on its side 676 times during the trial? Heuristically you'll likely come to the same conclusion, but would the typical fair-coin tests still be valid?
Here is a graph to illustrate the problem:

What are valid methods to test the hypothesis that there is equal probability that an event occurs in the shaded areas?
NOTE: there are 629 positive moves (413 negative) in the graph illustration.
R code that generates the data:
```
require("quantmod")
ticker <- getSymbols("SLV")[,6]
change <- (ticker - lag(ticker, 24)) / lag(ticker, 24)
change <- na.locf(change, na.rm=TRUE)
# some other calculations
dens <- density(change)
plot(dens)
# some formatting stuff
```
| Can a fair coin test be applied to a coin that often lands on its edge? | CC BY-SA 2.5 | null | 2011-03-26T13:29:21.417 | 2011-03-27T23:31:20.117 | 2011-03-27T23:31:20.117 | 3306 | 3306 | [
"probability"
] |
8796 | 2 | null | 5997 | 1 | null | You can try playing with spam filtering, that's quite a common use of Naive Bayesian Classifiers.
| null | CC BY-SA 2.5 | null | 2011-03-26T14:30:20.593 | 2011-03-26T14:30:20.593 | null | null | 3442 | null |
8797 | 1 | null | null | 2 | 1265 | I've made a little questionnaire where participants can rate each answer between 1 and 5. I calculated the mean, the median, and the standard deviation.
Now I was asking myself whether it is possible to calculate a confidence interval for these results and, if so, whether it would tell me anything. So I just tried it and used Excel to calculate a 95% confidence interval.
Here are the values:
```
Arithmetic average: 4.60
Median: 5.00
Max: 5.00
Min: 3.00
Standard deviation: 0.63
95% Confidence interval: 0.32
```
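(For reference, a minimal R sketch of how such a 95% margin is typically obtained; the ratings below are made up, since the raw data and sample size are not shown above.)
```
x <- c(5, 5, 4, 5, 3, 5, 5, 4, 5, 5)                  # hypothetical ratings, 1-5 scale
n <- length(x)
margin <- qt(0.975, df = n - 1) * sd(x) / sqrt(n)     # half-width of the 95% CI
c(lower = mean(x) - margin, upper = mean(x) + margin)
```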
But what is this value telling me? Does it mean I can be 32% sure that the values aren't random? Or is a confidence interval useless for these kinds of questions?
| Understanding confidence interval | CC BY-SA 2.5 | null | 2011-03-26T15:37:23.837 | 2011-03-28T13:54:58.437 | 2011-03-26T17:16:41.450 | null | 3908 | [
"confidence-interval"
] |
8798 | 1 | 14644 | null | 11 | 6363 | In the absence of good a priori guesses about the number of components to request in Independent Components Analysis, I'm looking to automate a selection process. I think that a reasonable criterion might be the number that minimizes the global evidence for correlation amongst the computed components. Here's pseudocode of this approach:
```
for each candidate number of components, n:
run ICA specifying n as requested number of components
for each pair (c1,c2) of resulting components:
compute a model, m1: lm(c1 ~ 1)
compute a model, m2: lm(c1 ~ c2)
compute log likelihood ratio ( AIC(m2)-AIC(m1) ) representing the relative likelihood of a correlation between c1 & c2
compute mean log likelihood ratio across pairs
Choose the final number of components as that which minimizes the mean log likelihood of component relatedness
```
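For concreteness, a literal R transcription of the above might look like the sketch below (assumptions: the fastICA package, a numeric data matrix X, candidate counts 2:10, and "evidence of relatedness" taken as AIC(m1) − AIC(m2), so that larger values mean more apparent correlation and "minimize" has the intended direction — adjust the sign if that is not what was meant):
```
library(fastICA)

candidate_ns <- 2:10
mean_evidence <- sapply(candidate_ns, function(n) {
  S <- fastICA(X, n.comp = n)$S            # estimated components, one per column
  pair_idx <- combn(n, 2)
  ev <- apply(pair_idx, 2, function(p) {
    c1 <- S[, p[1]]
    c2 <- S[, p[2]]
    AIC(lm(c1 ~ 1)) - AIC(lm(c1 ~ c2))     # > 0 when c2 helps predict c1
  })
  mean(ev)
})
chosen_n <- candidate_ns[which.min(mean_evidence)]
```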
I figure this should automatically penalize candidates larger than the "true" number of components because ICAs resulting from such candidates should be forced to distribute information from single true components across multiple estimated components, increasing the average evidence of correlation across pairs of components.
Does this make sense? If so, is there a faster way of achieving an aggregate metric of relatedness across estimated components than the mean log likelihood approach suggested above (which can be rather slow computationally)? If this approach doesn't make sense, what might a good alternative procedure look like?
| How do I select the number of components for independent components analysis? | CC BY-SA 2.5 | null | 2011-03-26T15:55:40.037 | 2016-08-31T17:10:38.983 | 2011-03-26T17:16:12.450 | null | 364 | [
"independent-component-analysis"
] |
8799 | 1 | null | null | 4 | 17943 | I have made a very simple questionnaire that asks questions that are independent of each other. Every question can be answered with a rating between 1 and 5, where 1 means "I strongly disagree" and 5 means "I strongly agree".
Now I am wondering which statistical methods I could use to evaluate the results. I know I probably should have thought about that before running the survey, but now it's too late and I have to get the most out of it.
Currently I just calculate the following values for each question (a minimal R sketch is shown after the list):
- arithmetic mean
- median
- max value
- min value
- standard deviation
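(A minimal R sketch of these summaries for one question, with made-up ratings:)
```
x <- c(5, 4, 5, 3, 5, 5, 4)                 # hypothetical ratings for one question
mean(x); median(x); max(x); min(x); sd(x)   # the summaries listed above
table(factor(x, levels = 1:5))              # frequency of each rating category
```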
But are there any other good indicators I could use to analyze the answers?
| How to evaluate a simple questionnaire with statistical methods? | CC BY-SA 2.5 | null | 2011-03-26T15:57:41.113 | 2011-03-26T19:04:16.593 | 2011-03-26T16:46:53.343 | 930 | 3908 | [
"survey"
] |