Dataset schema (one entry per column):

- Id: string, length 1–6
- PostTypeId: 7 classes
- AcceptedAnswerId: string, length 1–6
- ParentId: string, length 1–6
- Score: string, length 1–4
- ViewCount: string, length 1–7
- Body: string, length 0–38.7k
- Title: string, length 15–150
- ContentLicense: 3 classes
- FavoriteCount: 3 classes
- CreationDate: string, length 23
- LastActivityDate: string, length 23
- LastEditDate: string, length 23
- LastEditorUserId: string, length 1–6
- OwnerUserId: string, length 1–6
- Tags: list
9478
1
null
null
4
21420
I have samples, each of which has n features. How can I normalize these features so that the values lie in the interval [-1, 1]? Please give a formula.
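For a concrete illustration (this formula is not part of the original question, but it is the standard min-max answer): $x' = 2(x - \min)/(\max - \min) - 1$. A quick Python sketch:

```python
def scale_to_unit_interval(values):
    """Min-max scale a list of numbers to [-1, 1].

    Formula: x' = 2 * (x - min) / (max - min) - 1, so the smallest
    value maps to -1 and the largest to +1.
    """
    lo, hi = min(values), max(values)
    if hi == lo:  # constant feature: map everything to 0
        return [0.0 for _ in values]
    return [2.0 * (x - lo) / (hi - lo) - 1.0 for x in values]

print(scale_to_unit_interval([0, 5, 10]))  # [-1.0, 0.0, 1.0]
```

Apply it per feature (per column), not across the whole sample matrix.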
How to normalize data so that each feature lies in [-1, 1]?
CC BY-SA 3.0
null
2011-04-12T17:04:06.680
2015-04-01T09:19:18.187
2011-04-12T19:50:34.097
null
2141
[ "normalization" ]
9479
2
null
9315
22
null
I'd try 'folding in'. This refers to taking one new document, adding it to the corpus, and then running Gibbs sampling just on the words in that new document, keeping the topic assignments of the old documents the same. This usually converges fast (maybe 5-10-20 iterations), and you don't need to sample your old corpus, so it also runs fast. At the end you will have the topic assignment for every word in the new document. This will give you the distribution of topics in that document. In your Gibbs sampler, you probably have something similar to the following code:

```
// This will initialize the matrices of counts, N_tw (topic-word matrix) and N_dt (document-topic matrix)
for doc = 1 to N_Documents
    for token = 1 to N_Tokens_In_Document
        Assign current token to a random topic, updating the count matrices
    end
end

// This will do the Gibbs sampling
for doc = 1 to N_Documents
    for token = 1 to N_Tokens_In_Document
        Compute probability of current token being assigned to each topic
        Sample a topic from this distribution
        Assign the token to the new topic, updating the count matrices
    end
end
```

Folding-in is the same, except you start with the existing matrices, add the new document's tokens to them, and do the sampling for only the new tokens. I.e.:

```
Start with the N_tw and N_dt matrices from the previous step

// This will update the count matrices for folding-in
for token = 1 to N_Tokens_In_New_Document
    Assign current token to a random topic, updating the count matrices
end

// This will do the folding-in by Gibbs sampling
for token = 1 to N_Tokens_In_New_Document
    Compute probability of current token being assigned to each topic
    Sample a topic from this distribution
    Assign the token to the new topic, updating the count matrices
end
```

---

If you do standard LDA, it is unlikely that an entire document was generated by one topic. So I don't know how useful it is to compute the probability of the document under one topic. But if you still wanted to do it, it's easy. 
From the two matrices you get you can compute $p^i_w$, the probability of word $w$ in topic $i$. Take your new document; suppose the $j$'th word is $w_j$. The words are independent given the topic, so the probability is just $$\prod_j p^i_{w_j}$$ (note that you will probably need to compute it in log space).
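To make that last computation concrete, here is a small Python sketch (with hypothetical variable names) of $\sum_j \log p^i_{w_j}$ computed from a topic-word count matrix, with add-one smoothing so unseen words do not produce $\log 0$:

```python
import math

def doc_logprob_per_topic(n_tw, doc_words):
    """Log-probability of a document's words under each topic.

    n_tw: dict topic -> dict word -> count (the N_tw matrix)
    doc_words: list of words in the new document
    Returns dict topic -> sum_j log p(word_j | topic).
    """
    scores = {}
    for topic, counts in n_tw.items():
        total = sum(counts.values())
        logp = 0.0
        for w in doc_words:
            # add-one smoothing keeps unseen words from giving log(0)
            p = (counts.get(w, 0) + 1.0) / (total + len(counts))
            logp += math.log(p)
        scores[topic] = logp
    return scores
```

The topic with the largest score is the single topic most likely to have generated the whole document, with all the caveats above.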
null
CC BY-SA 3.0
null
2011-04-12T17:17:44.110
2011-04-13T18:39:20.870
2011-04-13T18:39:20.870
3369
3369
null
9480
2
null
9477
22
null
If you're willing to use [ggplot](http://had.co.nz/ggplot2/), you can try the following code.

With a continuous predictor

```
library(ggplot2)
gp <- ggplot(data=br, aes(x=tangle, y=gtangles))
gp + geom_point() + stat_smooth(method="lm", fullrange=T) + facet_grid(. ~ up)
```

for a facetted interaction plot

![enter image description here](https://i.stack.imgur.com/ihhbW.png)

For a standard interaction plot (like the one produced by `interaction.plot()`), you just have to remove the facetting.

```
gp <- ggplot(data=br, aes(x=tangle, y=gtangles, colour=factor(up)))
gp + geom_point() + stat_smooth(method="lm")
```

![enter image description here](https://i.stack.imgur.com/UUHNu.png)

With a discrete predictor

Using the `ToothGrowth` dataset (see `help(ToothGrowth)`),

```
ToothGrowth$dose.cat <- factor(ToothGrowth$dose, labels=paste("d", 1:3, sep=""))
df <- with(ToothGrowth, aggregate(len, list(supp=supp, dose=dose.cat), mean))
df$se <- with(ToothGrowth, aggregate(len, list(supp=supp, dose=dose.cat),
              function(x) sd(x)/sqrt(10)))[,3]
opar <- theme_update(panel.grid.major = theme_blank(),
                     panel.grid.minor = theme_blank(),
                     panel.background = theme_rect(colour = "black"))
gp <- ggplot(df, aes(x=dose, y=x, colour=supp, group=supp))
gp + geom_line(aes(linetype=supp), size=.6) +
     geom_point(aes(shape=supp), size=3) +
     geom_errorbar(aes(ymax=x+se, ymin=x-se), width=.1)
theme_set(opar)
```

![enter image description here](https://i.stack.imgur.com/ZDfXy.png)
null
CC BY-SA 3.0
null
2011-04-12T18:24:25.537
2011-04-20T20:02:55.400
2011-04-20T20:02:55.400
930
930
null
9481
2
null
7057
15
null
In my experience, LARS is faster for small problems, very sparse problems, or very 'wide' problems (many more features than samples). Indeed, its computational cost is limited by the number of features selected, if you don't compute the full regularization path. On the other hand, for big problems glmnet (coordinate descent optimization) is faster. Amongst other things, coordinate descent has a good data access pattern (memory-friendly) and it can benefit from redundancy in the data on very large datasets, as it converges with partial fits. In particular, it does not suffer on heavily correlated datasets. The conclusion that we (the core developers of [scikit-learn](http://scikit-learn.sourceforge.net/)) have come to is that, if you do not have strong a priori knowledge of your data, you should rather use glmnet (or coordinate descent optimization, to talk about an algorithm rather than an implementation). Interesting benchmarks can be found in Julien Mairal's thesis: [https://lear.inrialpes.fr/people/mairal/resources/pdf/phd_thesis.pdf](https://lear.inrialpes.fr/people/mairal/resources/pdf/phd_thesis.pdf), Section 1.4, in particular 1.4.5 (page 22). Julien comes to slightly different conclusions, although his analysis of the problem is similar. I suspect this is because he was very much interested in very wide problems.
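To illustrate the coordinate-descent idea glmnet is built on (a toy sketch, not glmnet's actual implementation), each coefficient is updated in turn by soft-thresholding its univariate least-squares solution:

```python
def soft_threshold(z, g):
    """Soft-thresholding operator S(z, g)."""
    if z > g:
        return z - g
    if z < -g:
        return z + g
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Toy lasso via cyclic coordinate descent.

    Minimizes (1/2n) * ||y - X b||^2 + lam * ||b||_1.
    X: list of rows; assumes columns are roughly standardized.
    """
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding feature j
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            b[j] = soft_threshold(rho, lam) / z
    return b
```

Each coordinate update touches one column at a time, which is the memory-friendly access pattern mentioned above.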
null
CC BY-SA 4.0
null
2011-04-12T19:43:01.033
2021-09-24T05:02:58.787
2021-09-24T05:02:58.787
274906
1265
null
9482
1
null
null
2
248
I am currently working with large datasets that have missing data points. My data ranges over the years 1979-2009. I want to choose, by year, the maximum value from that year. Missing values are replaced with a ".". I am currently using the formula:

{=MAX((E13:E2863=1979)*(H13:H2863))}

This array formula returns the max value where the appropriate data is available. However, when I use the array over the entire dataset, a #VALUE error occurs within the cell. My question is how do you overcome this issue? Is there a way to develop a formula that will not take the missing data points (i.e. cells with ".") into account? Below I will post a screenshot representing my spreadsheet/issue.

```
B     C     D   E   F      =MAX((B6:B18=1979)*(E6:E18))
1979  0     53  67  38
1979  0     57  72  41
1979  0     63  .   47
1979  0     64  79  49
1979  0     65  78  52
1979  0     57  71  43
1980  0.37  43  52  33
1980  0     47  60  34
1980  0     55  70  39
1980  0     64  79  48
1980  0     66  82  50
1980  0     69  87  51
1981  0     71  85  57
```

Any suggestions?
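Outside Excel, the filtering logic being asked for is easy to state: keep only the numeric cells for the target year, then take the max. A Python sketch with made-up rows in the same shape as the spreadsheet:

```python
def year_max(rows, year, col):
    """Max of column `col` among rows for `year`, skipping '.' placeholders.

    rows: list of tuples whose first element is the year.
    """
    vals = [r[col] for r in rows
            if r[0] == year and r[col] != "."]
    return max(vals) if vals else None

rows = [
    (1979, 53, 67), (1979, 57, 72), (1979, 63, "."),  # '.' = missing
    (1980, 43, 52), (1980, 47, 60),
]
print(year_max(rows, 1979, 2))  # 72 -- the '.' row is ignored
```

The Excel equivalent is to filter out the non-numeric cells before the MAX, rather than letting "." enter the arithmetic.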
How do you choose MAX or MIN values from a range of incomplete data?
CC BY-SA 3.0
null
2011-04-12T20:24:51.287
2011-04-13T07:57:05.360
2011-04-13T07:57:05.360
null
null
[ "modeling", "excel" ]
9483
1
9495
null
9
12011
I have two datasets from genome-wide association studies. The only information available is the odds ratios and their 95% confidence intervals for each genotyped SNP. I want to generate a forest plot comparing these two odds ratios, but I can't find a way to calculate the combined confidence intervals needed to visualize the summary effects. I used the program [PLINK](http://pngu.mgh.harvard.edu/~purcell/plink/) to perform the meta-analysis using fixed effects, but the program did not show these confidence intervals. - How can I calculate such confidence intervals? The data available are: - Odds ratios for each study, - 95% confidence intervals, and - Standard errors.
How to calculate confidence intervals for pooled odds ratios in meta-analysis?
CC BY-SA 3.0
null
2011-04-12T20:56:40.753
2016-06-26T16:46:35.647
2011-04-13T08:25:08.687
930
4137
[ "confidence-interval", "meta-analysis", "genetics", "odds-ratio" ]
9484
2
null
9483
0
null
This is a comment (I don't have enough rep. points). If you know the sample size (#cases and #controls) in each study, and the odds ratio for a SNP, you can reconstruct the 2x2 table of case/control by a/b (where a and b are the two alleles) for each of the two studies. Then you can just add those counts to get a table for the meta-study, and use this to compute the combined odds ratio and confidence intervals.
null
CC BY-SA 3.0
null
2011-04-12T23:00:57.143
2011-04-12T23:00:57.143
null
null
3036
null
9485
2
null
9365
2
null
An article with early impact regarding statistical bioinformatics research: Jelizarow et al. [Over-optimism in bioinformatics: an illustration](http://bioinformatics.oxfordjournals.org/content/26/16/1990.abstract). Bioinformatics, 2010 It makes for an interesting discussion on bias sources, overfitting, and fishing for significance.
null
CC BY-SA 3.0
null
2011-04-13T01:35:39.070
2011-04-13T01:35:39.070
null
null
3770
null
9487
2
null
9482
2
null
Using a Pivot Table isn't as quick or dirty, but may give you some more options for further analysis that make this approach worthwhile. I honestly have no idea how to convey in words how to use Excel's pivot tables, but if you highlight all of your data of interest and click on "insert --> pivot table" and then adjust the row labels and values, you can eventually get what you are after. Here's a screenshot: ![enter image description here](https://i.stack.imgur.com/uJH0l.jpg)
null
CC BY-SA 3.0
null
2011-04-13T02:31:59.070
2011-04-13T02:31:59.070
null
null
696
null
9488
1
9489
null
0
171
### Question:

- Is it possible to get log-likelihood values for my stepwise GLMs?

### Context:

I am able to get a log-likelihood value using lmer with the following model. My study involves unbalanced repeated females, two sites (females don't exchange between sites), 8 predictors, and a response.

```
(glmfit1 <- lmer(Response ~ 1 + SITE + (1|SITE:FEMALE) + VariableA + VariableB + VariableC, data = data))
```

Removing the repeated females and taking out the Site factor gives me this formula.

```
(lrfit1 <- glm(Response ~ 1 + VariableA + VariableB + VariableC, data = data))
summary(lrfit1)  # just gives me p-values for my variables and an AIC but NO LOGLIK
```
How to get log-likelihood value from logistic regression in R
CC BY-SA 3.0
null
2011-04-13T03:35:57.670
2011-04-13T03:51:10.630
2011-04-13T03:49:37.880
183
4027
[ "r", "likelihood-ratio" ]
9489
2
null
9488
2
null
Have you tried `logLik(lrfit1)`?
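For intuition about what `logLik()` returns in the logistic case the title asks about, it is the Bernoulli log-likelihood evaluated at the fitted probabilities; a hand-rolled Python check (with hypothetical fitted values):

```python
import math

def bernoulli_loglik(y, p):
    """Log-likelihood sum(y*log(p) + (1-y)*log(1-p)) for binary outcomes."""
    return sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
               for yi, pi in zip(y, p))

# e.g. three observations with fitted probabilities from some model
ll = bernoulli_loglik([1, 0, 1], [0.8, 0.3, 0.6])
```

(Note that the `glm` call in the question has no `family=` argument, so it fits a Gaussian model, whose log-likelihood is computed differently.)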
null
CC BY-SA 3.0
null
2011-04-13T03:51:10.630
2011-04-13T03:51:10.630
null
null
1569
null
9490
1
9524
null
10
4948
The Singular Value Decomposition (SVD) of a matrix is $$A_{m\times n} = U_{m\times m}\Lambda_{m\times n} V_{n\times n}'$$ where $U$ and $V$ are orthogonal matrices and $\Lambda$ has $(i, i)$ entry $\lambda_i \geq 0$ for $i = 1, 2, \cdots, \min(m, n)$ and the other entries are zero. Then the left singular vectors $U$ for the rows of the matrix and the right singular vectors $V$ for the columns of the matrix can be plotted on the same graph, called a biplot. I'm wondering how to do the SVD of a three-dimensional array and plot the singular vectors on the same graph like a biplot. Thanks
Singular value decomposition of a three-dimensional array
CC BY-SA 3.0
null
2011-04-13T05:49:33.487
2011-04-13T15:51:38.890
2011-04-13T07:51:09.623
null
3903
[ "svd" ]
9491
1
null
null
15
548
I came across the article by [Hervé Abdi](http://www.utdallas.edu/~herve/Abdi-SVD2007-pretty.pdf) about generalized SVD. The author mentioned: > The generalized SVD (GSVD) decomposes a rectangular matrix and takes into account constraints imposed on the rows and the columns of the matrix. The GSVD gives a weighted generalized least square estimate of a given matrix by a lower rank matrix and therefore, with an adequate choice of the constraints, the GSVD implements all linear multivariate techniques (e.g., canonical correlation, linear discriminant analysis, correspondence analysis, PLS-regression). I'm wondering how the GSVD is related to all linear multivariate techniques (e.g., canonical correlation, linear discriminant analysis, correspondence analysis, PLS-regression).
Does GSVD implement all linear multivariate techniques?
CC BY-SA 3.0
null
2011-04-13T06:10:47.197
2014-04-17T13:04:54.810
2011-09-22T08:42:55.030
3903
3903
[ "multivariate-analysis", "svd" ]
9492
2
null
9477
12
null
There's also Fox and Hong's effects package in R. See the J. Stat. Soft. papers [here](http://www.jstatsoft.org/v08/i15/paper) and [here](http://www.jstatsoft.org/v32/i01/paper) for examples with confidence intervals and generating R code. It's not quite as pretty as a ggplot solution, but quite a bit more general, and a lifesaver for moderately complex GLMs.
null
CC BY-SA 3.0
null
2011-04-13T08:57:54.630
2011-04-13T08:57:54.630
null
null
1739
null
9493
1
null
null
3
135
I have a software application which uses a queue and multiple processors to process those jobs. Jobs get re-run on a daily basis for customers, but we also have new customers signing up regularly. The problem is that customers generally sign up during office hours, and the daily re-running of the jobs simply schedules each job to run at the same time every day. This means that we have built up quite large spikes of work during the day, with very few jobs running overnight. I would like to even out the spikes in this job queue, but I also want to minimise how far each individual job is moved from its original time slot. In other words, I would rather move 100 jobs by 5 minutes each than move 1 job by an hour to achieve an even distribution of jobs over the whole 24-hour period.
How can I even out a random distribution while minimising how far each data point is moved?
CC BY-SA 3.0
null
2011-04-13T09:42:14.907
2011-04-13T10:32:35.167
2011-04-13T10:32:35.167
449
4144
[ "smoothing", "randomness", "queueing" ]
9494
1
null
null
8
8436
Currently I am working with text mining, which includes sentiment identification and assigning corresponding business categories, using the open source tool R. I found these two documents which helped me to some extent: - http://www.jstatsoft.org/v25/i05/ - http://epub.wu.ac.at/1923/1/document.pdf My approach is to tokenize the text and then look up sentiment and business category. To do this I require positive and negative word libraries for sentiment mining, plus a category file that maps words to categories. I was able to get positive and negative words but was unable to get a categorization library. - Where can I get a categorization library? - Is the above approach appropriate? Is there a better way to do this?
How to perform text mining, sentiment mining, and business category identification, and where to obtain a categorization library
CC BY-SA 3.0
null
2011-04-13T10:01:19.413
2017-02-16T18:01:52.683
2011-04-13T11:34:00.200
183
4145
[ "r", "text-mining" ]
9495
2
null
9483
10
null
In most meta-analyses of odds ratios, the standard errors $se_i$ are based on the log odds ratios $\log(OR_i)$. So, do you happen to know how your $se_i$ have been estimated (and what metric they reflect: $OR$ or $\log(OR)$)? Given that the $se_i$ are based on $\log(OR_i)$, the pooled standard error (under a fixed effect model) can be easily computed. First, let's compute the weights for each effect size: $w_i = \frac{1}{se_i^2}$. Second, the pooled standard error is $se_{FEM} = \sqrt{\frac{1}{\sum w}}$. Furthermore, let $\log(OR_{FEM})$ be the common effect (fixed effect model). Then, the ("pooled") 95% confidence interval is $\log(OR_{FEM}) \pm 1.96 \cdot se_{FEM}$.

Update

Since BIBB kindly provided the data, I am able to run the 'full' meta-analysis in R.

```
library(meta)
or <- c(0.75, 0.85)
se <- c(0.0937, 0.1029)
logor <- log(or)
(or.fem <- metagen(logor, se, sm = "OR"))

> (or.fem <- metagen(logor, se, sm = "OR"))
     OR           95%-CI %W(fixed) %W(random)
1  0.75 [0.6242; 0.9012]     54.67      54.67
2  0.85 [0.6948; 1.0399]     45.33      45.33

Number of trials combined: 2

                         OR          95%-CI       z  p.value
Fixed effect model   0.7938 [0.693; 0.9092] -3.3335   0.0009
Random effects model 0.7938 [0.693; 0.9092] -3.3335   0.0009

Quantifying heterogeneity:
tau^2 < 0.0001; H = 1; I^2 = 0%

Test of heterogeneity:
    Q d.f. p.value
 0.81    1  0.3685

Method: Inverse variance method
```

References

See, e.g., [Lipsey/Wilson (2001: 114)](http://books.google.com/books?id=G-PnRSMxdIoC&lpg=PP1&dq=lipsey%20wilson%20meta&hl=de&pg=PA114#v=onepage&q&f=false)
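The inverse-variance arithmetic is short enough to verify by hand; a Python sketch using the same two studies reproduces the fixed-effect numbers from the R output:

```python
import math

def pooled_or_fixed(ors, ses):
    """Fixed-effect (inverse-variance) pooling on the log-OR scale.

    ors: study odds ratios; ses: standard errors of log(OR).
    Returns (pooled OR, lower 95% CI, upper 95% CI).
    """
    logors = [math.log(o) for o in ors]
    w = [1.0 / s ** 2 for s in ses]
    pooled = sum(wi * lo for wi, lo in zip(w, logors)) / sum(w)
    se_fem = math.sqrt(1.0 / sum(w))  # se_FEM = sqrt(1 / sum(w))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_fem),
            math.exp(pooled + 1.96 * se_fem))

print(pooled_or_fixed([0.75, 0.85], [0.0937, 0.1029]))
# roughly (0.794, 0.693, 0.909)
```

Everything is done on the log scale and only exponentiated at the end, matching the convention described above.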
null
CC BY-SA 3.0
null
2011-04-13T10:17:32.607
2011-04-14T12:44:16.130
2011-04-14T12:44:16.130
307
307
null
9496
2
null
9493
1
null
This is a problem in optimal control. But you only need a few tools to get a practical solution for your problem. 1) a way to estimate the time distribution of jobs from new customers. Plotting the hourly, daily, and weekly averages for instances of jobs from new customers will give you a feel for the periodicity of the distribution. Then you can fit the distribution to a time-dependent Poisson process, with terms for minute/hour/day/week. 2) quantification of your "loss function." Based on your statement I would imagine that your loss function will include a term penalizing concentrations of jobs in short time periods and a term penalizing time delays on jobs in a nonlinear fashion. Supposing $t_1,\ldots,t_m$ represent the times at which jobs are executed, and $d_1,\ldots,d_m$ represent the delays for job 1, job 2, etc., your loss function might look like $$ L = k_1 \sum_{i=1}^m \left(\sum_{j=1}^m I(|t_i - t_j| < T)\right)^2 + k_2 \sum_{i=1}^m d_i^2 $$ where $k_1$ and $k_2$ are positive constants, and $T$ is a 'critical' time period (like 5 minutes). 3) simulation of various control policies (scheduling of recurring jobs) to find their expected loss. Given a policy of running daily jobs at times $x_1,\ldots,x_l$, you would draw from your distribution of new jobs to get times of new jobs $y_1,\ldots,y_m$ (where $m$ is random). You can estimate the expected loss by doing the simulation many times and averaging the loss function over the runs.
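A minimal Python sketch of that loss function, reading the indicator as counting jobs within $T$ minutes of job $i$ (the constants $k_1$, $k_2$, $T$ below are illustrative defaults, not prescribed values):

```python
def schedule_loss(times, delays, k1=1.0, k2=1.0, T=5.0):
    """Loss L = k1 * sum_i (crowding_i)^2 + k2 * sum_i d_i^2.

    crowding_i counts jobs within T minutes of job i (including itself),
    penalizing concentrations; delays are penalized quadratically.
    """
    crowding = k1 * sum(
        sum(1 for tj in times if abs(ti - tj) < T) ** 2
        for ti in times)
    delay = k2 * sum(d ** 2 for d in delays)
    return crowding + delay
```

In the simulation step, one would evaluate this loss over many draws of new-job times and average.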
null
CC BY-SA 3.0
null
2011-04-13T10:23:21.593
2011-04-13T10:23:21.593
null
null
3567
null
9497
2
null
4802
4
null
Queue models depend on a few key distributions: the distribution of the time gaps between incoming jobs (inter-arrival times) and the distribution of service times (how long it takes to process a job). Some commonly used models for these distributions are the exponential, gamma, and Weibull distributions. To find out which distribution is appropriate for your situation, you will need to collect data and do some model selection. You may also want to model the parameters of the distributions as time-varying, but this makes things much more complicated. EDIT: "I found that each query took about 5-15ms with a few taking 1ms and a few taking 600+ms." The Weibull distribution I mentioned before is often used to model such "extreme" events.
null
CC BY-SA 3.0
null
2011-04-13T10:38:55.930
2011-04-13T10:38:55.930
null
null
3567
null
9499
1
9514
null
2
791
I've just started playing with the R forecast package and found I must be doing something wrong, because I can't get a decent prediction for a simple sine wave.

```
library(forecast)
weightData <- data.frame(weight = sin(seq(1:100)), week = 1:100)
weight <- as.numeric(weightData$weight)
predicted <- forecast(weight, h = 3, level = 95)
# see the predicted values by forecast
predicted
myplot <- forecast(weight, h = 10, level = 95)
plot(myplot)
```

And I get a flat prediction. I understand the generic forecast method selects the best model for my data. Isn't that true? Am I missing something? Thanks in advance!
Forecast R package producing flat predictions
CC BY-SA 3.0
null
2011-04-13T10:55:07.987
2011-04-13T14:14:58.703
null
null
4134
[ "r", "forecasting" ]
9500
1
9522
null
24
28584
You can use the decathlon dataset {FactoMineR} to reproduce this. The question is why the computed eigenvalues differ from those of the covariance matrix. Here are the eigenvalues using `princomp`:

```
> library(FactoMineR); data(decathlon)
> pr <- princomp(decathlon[1:10], cor=F)
> pr$sd^2
      Comp.1       Comp.2       Comp.3       Comp.4       Comp.5       Comp.6
1.348073e+02 2.293556e+01 9.747263e+00 1.117215e+00 3.477705e-01 1.326819e-01
      Comp.7       Comp.8       Comp.9      Comp.10
6.208630e-02 4.938498e-02 2.504308e-02 4.908785e-03
```

And the same using `PCA`:

```
> res <- PCA(decathlon[1:10], scale.unit=FALSE, ncp=5, graph = FALSE)
> res$eig
          eigenvalue percentage of variance cumulative percentage of variance
comp 1  1.348073e+02           79.659589641                          79.65959
comp 2  2.293556e+01           13.552956464                          93.21255
comp 3  9.747263e+00            5.759799777                          98.97235
comp 4  1.117215e+00            0.660178830                          99.63252
comp 5  3.477705e-01            0.205502637                          99.83803
comp 6  1.326819e-01            0.078403653                          99.91643
comp 7  6.208630e-02            0.036687700                          99.95312
comp 8  4.938498e-02            0.029182305                          99.98230
comp 9  2.504308e-02            0.014798320                          99.99710
comp 10 4.908785e-03            0.002900673                         100.00000
```

Can you explain to me why the directly computed eigenvalues differ from those? (The eigenvectors are the same.)

```
> eigen(cov(decathlon[1:10]))$values
 [1] 1.381775e+02 2.350895e+01 9.990945e+00 1.145146e+00 3.564647e-01
 [6] 1.359989e-01 6.363846e-02 5.061961e-02 2.566916e-02 5.031505e-03
```

Also, the alternative `prcomp` method gives the same eigenvalues as the direct computation:

```
> prc <- prcomp(decathlon[1:10])
> prc$sd^2
 [1] 1.381775e+02 2.350895e+01 9.990945e+00 1.145146e+00 3.564647e-01
 [6] 1.359989e-01 6.363846e-02 5.061961e-02 2.566916e-02 5.031505e-03
```

Why do `PCA`/`princomp` and `prcomp` give different eigenvalues?
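A likely source of such a discrepancy (stated here as something to check against the docs, not a definitive diagnosis): `princomp` uses divisor $n$ when estimating the covariance, while `cov`/`prcomp` use $n-1$, so every eigenvalue is scaled by $(n-1)/n$. The effect is easy to mimic in Python with plain variances, since covariance eigenvalues scale the same way:

```python
def variance(xs, ddof):
    """Sample variance with divisor len(xs) - ddof."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - ddof)

xs = [10.0, 12.0, 9.0, 15.0, 14.0]
n = len(xs)
v_n  = variance(xs, ddof=0)  # divisor n   (princomp-style)
v_n1 = variance(xs, ddof=1)  # divisor n-1 (cov / prcomp style)
print(v_n / v_n1)  # (n-1)/n = 0.8 here
```

The eigenvalue ratios shown in the question (about 134.81/138.18 ≈ 0.9756) are consistent with such a constant factor.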
Why do the R functions 'princomp' and 'prcomp' give different eigenvalues?
CC BY-SA 3.0
null
2011-04-13T10:57:05.660
2016-10-30T15:08:10.357
2015-02-07T21:46:36.967
28666
339
[ "r", "pca" ]
9501
1
9576
null
12
10870
First, by "analytically integrate" I mean: is there an integration rule to solve this, as opposed to numerical methods (such as the trapezoidal, Gauss-Legendre, or Simpson's rules)? I have a function $\newcommand{\rd}{\mathrm{d}}f(x) = x g(x; \mu, \sigma)$ where $$ g(x; \mu, \sigma) = \frac{1}{\sigma x \sqrt{2\pi}} e^{-\frac{1}{2\sigma^2}(\log(x) - \mu)^2} $$ is the probability density function of a lognormal distribution with parameters $\mu$ and $\sigma$. Below, I'll abbreviate the notation to $g(x)$ and use $G(x)$ for the cumulative distribution function. I need to calculate the integral $$ \int_{a}^{b} f(x) \,\rd x \>. $$ Currently, I'm doing this with numerical integration using the Gauss-Legendre method. Because I need to run this a large number of times, performance is important. Before I look into optimizing the numerical analysis/other pieces, I would like to know if there are any integration rules to solve this. I tried applying the integration-by-parts rule, and I got to this, where I'm stuck again, - $\int u \,\mathrm{d}v = u v - \int v \,\mathrm{d}u$. - $u=x \implies \rd u = \rd x$ - $\rd v = g(x) \rd x \implies v = G(x)$ - $u v - \int v \,\rd u = x G(x) - \int G(x) \rd x$ I'm stuck, as I can't evaluate $\int G(x) \rd x$. This is for a software package I'm building.
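For what it's worth, this particular integral does have a well-known closed form (the lognormal "partial expectation"): $\int_a^b x\, g(x)\,\mathrm{d}x = e^{\mu+\sigma^2/2}\left[\Phi\!\left(\frac{\ln b - \mu - \sigma^2}{\sigma}\right) - \Phi\!\left(\frac{\ln a - \mu - \sigma^2}{\sigma}\right)\right]$, where $\Phi$ is the standard normal CDF. A Python sketch checking it against a crude trapezoidal integration:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def partial_expectation(a, b, mu, sigma):
    """Closed form of the integral of x * lognormal_pdf(x) over [a, b]."""
    scale = math.exp(mu + 0.5 * sigma ** 2)
    return scale * (phi((math.log(b) - mu - sigma ** 2) / sigma)
                    - phi((math.log(a) - mu - sigma ** 2) / sigma))

def lognorm_pdf(x, mu, sigma):
    return (math.exp(-((math.log(x) - mu) ** 2) / (2 * sigma ** 2))
            / (x * sigma * math.sqrt(2 * math.pi)))

def trapezoid(a, b, mu, sigma, n=20000):
    """Brute-force check of the same integral."""
    h = (b - a) / n
    ys = [(a + i * h) * lognorm_pdf(a + i * h, mu, sigma)
          for i in range(n + 1)]
    return h * (0.5 * ys[0] + sum(ys[1:-1]) + 0.5 * ys[-1])
```

The closed form follows from the substitution $y = (\ln x - \mu)/\sigma$ and completing the square, which is the step the integration-by-parts route misses.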
Is it possible to analytically integrate $x$ multiplied by the lognormal probability density function?
CC BY-SA 3.0
null
2011-04-13T11:30:07.083
2011-04-15T02:50:14.760
2011-04-15T01:07:53.763
2970
4146
[ "distributions", "lognormal-distribution" ]
9503
1
null
null
7
2993
I am running a regression equation in which I want to enter 12 independent variables, then stepwise-enter 7 more independent variables, and have no origin (intercept). - The DV is shfl. - I want to enter the following 12 independent dummy variables: ajan bfeb cmar dapr emay fjun gjul haug isep joct knov ldec - Then I want to enter the following in a stepwise fashion: slag6 slag7 slag8 slag9 slag10 slag11 slag12 - And finally, I want there to be no origin. I've done simple regression, but nothing quite like this that enters the primary variables directly and stepwise-enters several more. - How can such a model be specified using R?
Multiple regression with no origin and mix of directly entered and stepwise entered variables using R
CC BY-SA 3.0
null
2011-04-13T11:41:57.880
2011-04-13T13:43:37.057
2011-04-13T11:53:01.557
183
4148
[ "r", "stepwise-regression" ]
9505
1
9543
null
3
297
I am working on a real-time recommendation engine. At one step, I have a feature that resembles a string-encoded item set, so I took Jaccard on the tokenized string to get a good similarity result. Frankly, Jaccard offers great results, but it takes far too much time at runtime. My current prototype environment for this is written in Java and I am already using the StringTokenizer to quickly transform my string into a token set, but the set operations thereafter still take a large amount of time - so my idea was to skip the tokenizing part and work on the string directly, but I am not sure what metric to use for this purpose. Does anyone have any ideas or experience?
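For reference, Jaccard on token sets is just $|A \cap B| / |A \cup B|$; a minimal Python version (the Java prototype in the question would be analogous):

```python
def jaccard(a, b):
    """Jaccard similarity of two token collections: |A & B| / |A | B|."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # convention: two empty sets are identical
    return len(a & b) / len(a | b)

print(jaccard("red shirt cotton".split(), "blue shirt cotton".split()))
# 2 shared tokens out of 4 distinct -> 0.5
```

When exact Jaccard is too slow at scale, approximate schemes such as MinHash are the usual trade of exactness for near-constant-time comparisons.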
Item set similarity for real-time calculation purposes?
CC BY-SA 3.0
null
2011-04-13T12:07:18.300
2013-08-19T14:23:01.063
2013-08-19T14:23:01.063
22468
1158
[ "recommender-system", "java" ]
9506
1
9516
null
28
27697
I am new to R and to time series analysis. I am trying to find the trend of a long (40-year) daily temperature time series and have tried two different approaches. The first is a simple linear regression and the second is Seasonal Decomposition of Time Series by Loess (STL). In the latter, the seasonal component appears to be greater than the trend. But how do I quantify the trend? I would like a single number telling me how strong that trend is.

```
Call:
stl(x = tsdata, s.window = "periodic")

Time.series components:
    seasonal               trend            remainder
 Min.   :-8.482470191   Min.   :20.76670   Min.   :-11.863290365
 1st Qu.:-5.799037090   1st Qu.:22.17939   1st Qu.: -1.661246674
 Median :-0.756729578   Median :22.56694   Median :  0.026579468
 Mean   :-0.005442784   Mean   :22.53063   Mean   : -0.003716813
 3rd Qu.: 5.695720249   3rd Qu.:22.91756   3rd Qu.:  1.700826647
 Max.   : 9.919315613   Max.   :24.98834   Max.   : 12.305103891

IQR:
   STL.seasonal STL.trend STL.remainder data
        11.4948    0.7382        3.3621 10.8051
 %        106.4       6.8          31.1   100.0

Weights: all == 1

Other components: List of 5
 $ win  : Named num [1:3] 153411 549 365
 $ deg  : Named int [1:3] 0 1 1
 $ jump : Named num [1:3] 15342 55 37
 $ inner: int 2
 $ outer: int 0
```

![enter image description here](https://i.stack.imgur.com/jwCSr.png)
STL trend of time series using R
CC BY-SA 3.0
null
2011-04-13T12:07:56.007
2015-09-21T18:24:48.010
2011-04-13T12:15:51.427
183
4147
[ "r", "time-series", "trend" ]
9507
1
9564
null
5
5932
As part of my studies, I'm trying to cluster co-occurrences of URLs and tags in data from Delicious. I found a promising method for this in a paper called "[Emergent Semantics from Folksonomies: A Quantitative Study](http://www.lambda.ee/images/9/93/Semanticsfolksonomies.pdf)" (pages 6-13). It used a Separable Mixture Model (SMM, described in the paper "[Statistical Models for Co-occurrence Data](http://dspace.mit.edu/bitstream/handle/1721.1/7253/AIM-1625.pdf?sequence=2)" pages 2-4) to model the data and an adapted EM-algorithm to fit the known data to the model. I coded the algorithm in Java and ran it on a little piece of real data from Delicious. Unfortunately, the results did not seem correct. They showed that each tag had the same probability of belonging to every concept (although this probability varied from tag to tag). Now, while this problem could have come from me simply coding the adapted EM-algorithm incorrectly, I would also like to rule out the possibility of incorrectly initialized variables. This time, since I didn't know any better way to do it, I simply initialized all the $R_{r\alpha}$ (variables that denote the probability that co-occurrence $r$ arose from concept $\alpha$) to be equal, $1/K$ ($K$ being the number of concepts). My question is two-fold. Could the flat results come from the flat variable initialization? How should I initialize the variables of the EM-algorithm in this case?
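On the initialization point specifically, a common symmetry-breaking device (an illustrative sketch, not taken from either paper) is to initialize each responsibility row with random positive numbers normalized to sum to 1, instead of the uniform $1/K$:

```python
import random

def init_responsibilities(n_cooccurrences, k, seed=42):
    """Random (non-uniform) initialization of R[r][alpha].

    Each row is a random point on the probability simplex, so EM does
    not start at the symmetric configuration that uniform 1/K creates,
    where every concept receives identical updates forever.
    """
    rng = random.Random(seed)
    R = []
    for _ in range(n_cooccurrences):
        row = [rng.random() + 1e-6 for _ in range(k)]
        s = sum(row)
        R.append([v / s for v in row])
    return R
```

With a uniform start, symmetric EM updates can keep all concepts identical, which would produce exactly the flat posteriors described.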
How to initialize EM-algorithm when trying to fit data to a separable mixture model?
CC BY-SA 3.0
null
2011-04-13T12:39:58.633
2011-04-20T06:52:07.723
2011-04-15T08:15:01.263
null
4141
[ "expectation-maximization" ]
9508
1
null
null
4
728
I'm analyzing the results of a hormone manipulation experiment. I measured a number of variables at three time points in three groups. The groups are different sizes and not all individuals were measured every time, so I'm using GLMM rather than a repeated-measures ANOVA. I created the model and then tested the significance of the terms (time, treatment, and time x treatment) with ANOVA. I'm quite new to GLMM, but after doing the tests, further reading suggests that my approach may be inappropriate, particularly with small data sets (I have ~seven animals per group). It seems that there is disagreement about what the degrees of freedom should be. This leads me to three questions: 1) Is this method acceptable? 2) If so, what would be an appropriate method for post-hoc analyses to determine which groups differ? 3) As with the fake data, I have a number of negative results. Specifically, I often see significant time effects, but no effect of treatment or treatment x time. If I stick with this method, how can I calculate effect sizes and/or confidence intervals for such tests? Here are some fake data:

```
library(nlme)
datums <- data.frame(id=rep(1:20, each=3),
                     var1=runif(60,4,6), var2=runif(60,25,30),
                     var3=runif(60,0,1), var4=runif(60,10,15),
                     var.time=rep(1:3, times=20),
                     var.treatment=rep(c('a','b','c'), each=20))
datums$var.time <- as.factor(datums$var.time)
datums$id <- as.factor(datums$id)

# and now the GLMMs on each variable - I'll show just two here
var1.glmm <- lme(var1 ~ var.time + var.treatment + var.time*var.treatment,
                 data=datums, random = ~1|id)
var2.glmm <- lme(var2 ~ var.time + var.treatment + var.time*var.treatment,
                 data=datums, random = ~1|id)
summary(var1.glmm)
anova(var1.glmm)
```

I'm aware that the place for me to go is probably the Pinheiro and Bates book, but I don't have access to it at this time. Thanks in advance for any advice.
Some doubts about using GLMM
CC BY-SA 3.0
null
2011-04-13T13:09:15.407
2011-07-17T18:18:19.500
2011-07-17T18:18:19.500
null
124
[ "r", "confidence-interval", "mixed-model", "repeated-measures", "effect-size" ]
9509
2
null
9483
3
null
Actually, you could use software like [METAL](http://bioinformatics.oxfordjournals.org/content/26/17/2190.full), which is specifically designed for meta-analyses in a GWA context. It's awkward that PLINK doesn't give the confidence interval. However, you can get the CI because you have the final OR (take $\log(\text{OR})$) and the $p$-value (hence the $z$) for the fixed effect. Bernd's method is even more precise. Beware: I would be more worried about the effect direction, as it looks like you only have summary stats for each study but nothing to confirm which allele the OR refers to - unless you know it is computed on the same allele. Christian
null
CC BY-SA 3.0
null
2011-04-13T13:20:03.047
2011-04-13T15:26:01.423
2011-04-13T15:26:01.423
930
3946
null
9510
1
9513
null
55
38208
If I wanted to get the probability of 9 successes in 16 trials with each trial having a probability of 0.6 I could use a binomial distribution. What could I use if each of the 16 trials has a different probability of success?
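The distribution being asked about is known as the Poisson binomial distribution, and its PMF can be computed exactly with a simple dynamic program over the trials:

```python
def poisson_binomial_pmf(probs):
    """Exact PMF of the number of successes in independent trials.

    probs: success probability of each trial.
    Returns a list pmf where pmf[k] = P(exactly k successes).
    """
    pmf = [1.0]  # zero trials: certainly zero successes
    for p in probs:
        new = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            new[k] += q * (1 - p)   # this trial fails
            new[k + 1] += q * p     # this trial succeeds
        pmf = new
    return pmf

# with all probabilities equal it reduces to the ordinary binomial
pmf = poisson_binomial_pmf([0.6] * 16)
print(round(pmf[9], 4))  # ~0.1889, i.e. C(16,9) * 0.6^9 * 0.4^7
```

The DP runs in O(n^2) for n trials, far cheaper than enumerating all $\binom{16}{9}$ success patterns.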
Probability distribution for different probabilities
CC BY-SA 3.0
null
2011-04-13T13:34:06.930
2015-07-02T04:33:14.487
2011-04-14T07:16:48.973
null
4150
[ "distributions", "probability", "binomial-distribution" ]
9511
2
null
9503
7
null
I think you can set up your base model, that is the one with your 12 IVs and then use `add1()` with the remaining predictors. So, say you have a model `mod1` defined like `mod1 <- lm(y ~ 0+x1+x2+x3)` (`0+` means no intercept), then ``` add1(mod1, ~ .+x4+x5+x6, test="F") ``` will add and test one predictor after the other on top of the base model. More generally, if you know in advance that a set of variables should be included in the model (this might result from prior knowledge, or whatsoever), you can use `step()` or `stepAIC()` (in the `MASS` package) and look at the `scope=` argument. Here is an illustration, where we specify a priori the functional relationship between the outcome, $y$, and the predictors, $x_1, x_2, \dots, x_{10}$. We want the model to include the first three predictors, but let the selection of other predictors be done by stepwise regression: ``` set.seed(101) X <- replicate(10, rnorm(100)) colnames(X) <- paste("x", 1:10, sep="") y <- 1.1*X[,1] + 0.8*X[,2] - 0.7*X[,5] + 1.4*X[,6] + rnorm(100) df <- data.frame(y=y, X) # say this is one of the base model we think of fm0 <- lm(y ~ 0+x1+x2+x3+x4, data=df) # build a semi-constrained stepwise regression fm.step <- step(fm0, scope=list(upper = ~ 0+x1+x2+x3+x4+x5+x6+x7+x8+x9+x10, lower = ~ 0+x1+x2+x3), trace=FALSE) summary(fm.step) ``` The results are shown below: ``` Coefficients: Estimate Std. Error t value Pr(>|t|) x1 1.0831 0.1095 9.888 2.87e-16 *** x2 0.6704 0.1026 6.533 3.17e-09 *** x3 -0.1844 0.1183 -1.558 0.123 x6 1.6024 0.1142 14.035 < 2e-16 *** x5 -0.6528 0.1029 -6.342 7.63e-09 *** --- Signif. 
codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 1.004 on 95 degrees of freedom Multiple R-squared: 0.814, Adjusted R-squared: 0.8042 F-statistic: 83.17 on 5 and 95 DF, p-value: < 2.2e-16 ``` You can see that $x_3$ has been retained in the model, even if it proves to be non-significant (well, the usual caveats with univariate tests in multiple regression setting and model selection apply here -- at least, its relationship with $y$ was not specified).
null
CC BY-SA 3.0
null
2011-04-13T13:43:37.057
2011-04-13T13:43:37.057
null
null
930
null
9512
1
null
null
7
4518
I have 30 years of time series data and found that ARIMA(0,1,1) was the best model among those I tried. I have used the simulate.Arima function (forecast package) to simulate the series into the future. ``` library(forecast) series <- ts(seq(25,55), start=c(1976,1)) arima_s <- Arima(series, c(0,1,1)) simulate(arima_s, nsim=50, future=TRUE) ``` Later on, I found the updated value for the first forecasted year (i.e. series[31] <- 65). Now I want to simulate the series with this updated value. I am wondering how to do this in R.
How to update ARIMA forecast in R?
CC BY-SA 3.0
null
2011-04-13T13:48:56.380
2014-08-24T14:22:54.270
2011-04-14T11:54:52.443
null
3084
[ "r", "time-series", "forecasting", "arima" ]
9513
2
null
9510
31
null
This is the sum of 16 (presumably independent) Binomial trials. The assumption of independence allows us to multiply probabilities. Whence, after two trials with probabilities $p_1$ and $p_2$ of success the chance of success on both trials is $p_1 p_2$, the chance of no successes is $(1-p_1)(1-p_2)$, and the chance of one success is $p_1(1-p_2) + (1-p_1)p_2$. That last expression owes its validity to the fact that the two ways of getting exactly one success are mutually exclusive: at most one of them can actually happen. That means their probabilities add. By means of these two rules--independent probabilities multiply and mutually exclusive ones add--you can work out the answers for, say, 16 trials with probabilities $p_1, \ldots, p_{16}$. To do so, you need to account for all the ways of obtaining each given number of successes (such as 9). There are $\binom{16}{9} = 11440$ ways to achieve 9 successes. One of them, for example, occurs when trials 1, 2, 4, 5, 6, 11, 12, 14, and 15 are successes and the others are failures. The successes had probabilities $p_1, p_2, p_4, p_5, p_6, p_{11}, p_{12}, p_{14},$ and $p_{15}$ and the failures had probabilities $1-p_3, 1-p_7, \ldots, 1-p_{13}, 1-p_{16}$. Multiplying these 16 numbers gives the chance of this particular sequence of outcomes. Summing this number along with the 11,439 remaining such numbers gives the answer. Of course you would use a computer. With many more than 16 trials, there is a need to approximate the distribution. Provided none of the probabilities $p_i$ and $1-p_i$ get too small, a [Normal approximation](http://en.wikipedia.org/wiki/Binomial_distribution#Normal_approximation) tends to work well. With this method you note that the expectation of the sum of $n$ trials is $\mu = p_1 + p_2 + \cdots + p_n$ and (because the trials are independent) the variance is $\sigma^2 = p_1(1-p_1) + p_2(1-p_2) + \cdots + p_n(1-p_n)$. 
You then pretend the distribution of sums is Normal with mean $\mu$ and standard deviation $\sigma$. The answers tend to be good for computing probabilities corresponding to a proportion of successes that differs from $\mu$ by no more than a few multiples of $\sigma$. As $n$ grows large this approximation gets ever more accurate and works for even larger multiples of $\sigma$ away from $\mu$.
null
CC BY-SA 3.0
null
2011-04-13T14:03:30.150
2014-10-08T14:34:02.737
2014-10-08T14:34:02.737
919
919
null
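The exact calculation described in the answer above — accounting for all $\binom{16}{9}$ success patterns — does not require enumerating the patterns explicitly: convolving in one trial at a time yields the entire distribution (the Poisson binomial distribution) in $O(n^2)$ operations. A minimal Python sketch; the probabilities here all equal 0.6, matching the question's example, so the result must agree with the plain binomial:

```python
import math

def poisson_binomial_pmf(ps):
    """Exact distribution of the number of successes in independent
    trials with success probabilities ps, by iterative convolution."""
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            new[k] += q * (1 - p)      # this trial fails
            new[k + 1] += q * p        # this trial succeeds
        pmf = new
    return pmf

ps = [0.6] * 16
pmf = poisson_binomial_pmf(ps)
exact = pmf[9]
# with all p = 0.6 this must agree with Binomial(16, 0.6)
binom = math.comb(16, 9) * 0.6**9 * 0.4**7
```

With unequal probabilities the same function applies unchanged, and the mean of the resulting distribution equals $p_1 + \cdots + p_n$, as the answer's normal approximation assumes.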
9514
2
null
9499
3
null
No automated forecasting method can produce good forecasts for all data. If that were the case, the field of statistics would be, for want of a better word, dead. The forecast package picks the best exponential smoothing model for the given data. It works best with a certain class of time series data, which is not the case for yours. The precise reason for the failure is evident if you try this (ets is the workhorse behind forecast): ``` > ets(weight,"AMM") Erreur dans ets(weight, "AMM") : Nonseasonal data ``` Since you do not supply time series attributes, the forecasting algorithm cannot pick up the cyclic component, so the end result is a flat line.
null
CC BY-SA 3.0
null
2011-04-13T14:14:58.703
2011-04-13T14:14:58.703
null
null
2116
null
9515
2
null
8562
1
null
Riffing off of Thomas' suggestion, I think a multilevel model for this data would look something like this: ``` yhat = TrialTypeA + TrialTypeB + ModalityA + ModalityB + TaskOrder + TrialTypeA:ModalityA + TrialTypeB:ModalityB + TrialTypeA:TaskOrder + TrialTypeB:TaskOrder ``` At a deeper layer in the model, individual differences in each of these variables can be eliminated with respect to the TrialType variable. The As and Bs for TrialType are dummy codes, where a value of 0 means the baseline condition and a value of 1 denotes membership in a condition. E.g. ModalityA might be 0 for VM but 1 for AV, and ModalityB would be 0 for VM but 1 for HM. In R I would specify this model (having loaded the package lme4) as ``` lmer(y ~ TrialTypeA + TrialTypeB + ModalityA + ModalityB + TaskOrder + TrialTypeA:ModalityA + TrialTypeB:ModalityB + TrialTypeA:TaskOrder + TrialTypeB:TaskOrder + (1|SubjectID), data=mydata) ``` Good luck.
null
CC BY-SA 3.0
null
2011-04-13T14:22:25.767
2011-04-13T14:22:25.767
null
null
196
null
9516
2
null
9506
21
null
I wouldn't bother with `stl()` for this - the bandwidth for the lowess smoother used to extract the trend is far, far too small, resulting in the small-scale fluctuations you see. I would use an additive model. Here is an example using data and model code from Simon Wood's book on GAMs: ``` require(mgcv) require(gamair) data(cairo) cairo2 <- within(cairo, Date <- as.Date(paste(year, month, day.of.month, sep = "-"))) plot(temp ~ Date, data = cairo2, type = "l") ``` ![cairo temperature data](https://i.stack.imgur.com/ZxNHU.png) Fit a model with trend and seasonal components --- warning, this is slow: ``` mod <- gamm(temp ~ s(day.of.year, bs = "cc") + s(time, bs = "cr"), data = cairo2, method = "REML", correlation = corAR1(form = ~ 1 | year), knots = list(day.of.year = c(0, 366))) ``` The fitted model looks like this: ``` > summary(mod$gam) Family: gaussian Link function: identity Formula: temp ~ s(day.of.year, bs = "cc") + s(time, bs = "cr") Parametric coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 71.6603 0.1523 470.7 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Approximate significance of smooth terms: edf Ref.df F p-value s(day.of.year) 7.092 7.092 555.407 < 2e-16 *** s(time) 1.383 1.383 7.035 0.00345 ** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 R-sq.(adj) = 0.848 Scale est.
= 16.572 n = 3780 ``` and we can visualise the trend and seasonal terms via ``` plot(mod$gam, pages = 1) ``` ![Cairo fitted trend and seasonal](https://i.stack.imgur.com/7B6iy.png) and if we want to plot the trend on the observed data we can do that with prediction via: ``` pred <- predict(mod$gam, newdata = cairo2, type = "terms") ptemp <- attr(pred, "constant") + pred[,2] plot(temp ~ Date, data = cairo2, type = "l", xlab = "year", ylab = expression(Temperature ~ (degree*F))) lines(ptemp ~ Date, data = cairo2, col = "red", lwd = 2) ``` ![Cairo fitted trend](https://i.stack.imgur.com/hH4EN.png) Or the same for the actual model: ``` pred2 <- predict(mod$gam, newdata = cairo2) plot(temp ~ Date, data = cairo2, type = "l", xlab = "year", ylab = expression(Temperature ~ (degree*F))) lines(pred2 ~ Date, data = cairo2, col = "red", lwd = 2) ``` ![Cairo fitted model](https://i.stack.imgur.com/15BYc.png) This is just an example, and a more in-depth analysis might have to deal with the fact that there are a few missing data, but the above should be a good starting point. As to your point about how to quantify the trend - well that is a problem, because the trend is not linear, neither in your `stl()` version nor the GAM version I show. If it were, you could give the rate of change (slope). If you want to know by how much has the estimated trend changed over the period of sampling, then we can use the data contained in `pred` and compute the difference between the start and the end of the series in the trend component only: ``` > tail(pred[,2], 1) - head(pred[,2], 1) 3794 1.756163 ``` so temperatures are, on average, 1.76 degrees warmer than at the start of the record.
null
CC BY-SA 3.0
null
2011-04-13T14:26:48.023
2011-04-13T15:43:20.187
2011-04-13T15:43:20.187
1390
1390
null
9517
1
9521
null
4
562
I work for a commission-based company that gives accounts to employees for about a month. The employees try to resolve the account; if they are successful they get a commission, otherwise the account goes to another employee to try to resolve. We are running some reports to measure performance characteristics for the company, and we have years of data that we can analyze. My question is: what summary (median or mean) can best describe the central tendency of the length of time to resolve an account per employee? Almost every employee's median is greater than their mean; some are further apart than others. One thing that we are trying to find out is: if we give an employee an account, how long should it take for him or her to resolve it? It seems to me that the median is a better summary.
Whether to use mean or median to summarise the central tendency of length of time to perform a task
CC BY-SA 3.0
null
2011-04-13T13:02:01.697
2011-04-13T15:19:35.777
2011-04-13T14:52:39.297
183
4170
[ "mean" ]
9519
1
9546
null
4
267
I am creating a term to describe a design that may not be the proper term ("staggered within-subjects design"). What I mean by this is one can imagine a within-subjects design with 3 levels. Each participant provides data for two levels, and between participants the order in which the data is collected is counter-balanced. For example (condition order is in the cells of the table below; sorry for the table formatting, I did my best, but I think you'll get the gist): ``` Participant Condition A Condition B Condition C 1 1 2 2 2 1 3 1 2 4 2 1 5 1 2 6 2 1 7 1 2 8 2 1 9 1 2 10 2 1 11 1 2 12 2 1 ``` Does this sort of design have a name? What are the consequences of such a design? How would one analyze it? I understand that I could probably use a mixed model to analyze this data, but I'm curious about the application of ANOVA/t-test style analyses to this sort of data, as these statistical approaches are more familiar to many in my field. The rationale for such a design might be that you have limited time with each participant, so you can't run them through all three conditions, yet each participant must be rewarded the same whether they participate in 1 or 2 conditions. It seems like, with special consideration, this data could be run as a between-subjects ANOVA (which is equivalent to between-subjects t-tests) or a within-subjects ANOVA. I will assert some points, and I would appreciate it if the person answering this question would accept or reject them. - I could conduct a within-subjects ANOVA on this data comparing any two conditions, crossing condition with trial order as a within-subjects factor (I expect that the trial order effects will be non-significant). - If there were main effects of order, it would influence the validity of my estimate of the exact size of the effect demonstrated by each condition, but would not influence subsequent analyses.
- If there were interactions between a condition and order I should not conduct any between subjects analyses on data coming from order 2. - I could run two between subjects t-tests for each pair of conditions. One for the data from order 1 and the other from order 2 (as using data from both in a single between subjects analysis would violate the assumption of independence). If I could run all of these statistics, using the same data I would have 3 statistical tests. Each of the between subjects tests would be a full replication of the comparison between condition A and B. The within subjects tests would validate my hypothesis that trial order does not matter and allow the between subjects tests to be applied. If I fail to find significance in my between subjects analysis my within subjects analysis may provide additional power which will allow me to evaluate whether my hypothesis is sufficiently supported to run an additional study or whether I'm barking up the wrong tree.
Is there literature on staggered within subjects designs? What are the consequences of such a design?
CC BY-SA 3.0
null
2011-04-13T14:57:38.620
2011-04-14T06:50:16.403
null
null
196
[ "anova", "repeated-measures" ]
9520
2
null
4473
4
null
You use the example "January, February, and March", and I hope that's more for illustration and it isn't literally all the data you have. For monthly data, you really should have 3+ years of data, and places like the Census Bureau won't touch a monthly series with less than 7 years of data. Also, you don't mention exactly what data you have for each month, but it really makes a difference. Simply taking a single sales number for each month and fitting a straight line through them (lm) isn't really much of a model. It might give you an idea of the overall trend, but not realistic predictions. If you have monthly data on other factors that are primary drivers of sales, it might work. Then, as chl says, if you want to have an idea if your approach will work, you need to look at the residuals (`resid (lm_model)`) with commands like `acf` and `qqnorm` to see if the errors from your model are basically random -- which is what `lm` assumes -- or if they have a pattern to them, in which case your model is missing something important. Obviously, the ultimate test is how well your model predicts on data that it has not seen, over the long run, but with small amounts of data or the wrong data you can fool yourself with an initial lucky "prediction" and really get embarrassed down the road when the data and approach inevitably fail. Usual models for univariate time series (such as monthly sales figures and no other data) would involve more complicated models that take some experience to fit, such as `arima` or the `forecast` package.
null
CC BY-SA 3.0
null
2011-04-13T15:01:29.147
2011-04-13T15:01:29.147
null
null
1764
null
9521
2
null
9517
5
null
You should use the median, not the mean. However, you'll need to use methods appropriate for [time-to-event (survival)](http://en.wikipedia.org/wiki/Survival_analysis) data that deal appropriately with [censoring](http://en.wikipedia.org/wiki/Censoring_%28statistics%29): if the account was handled to another employee without being resolved you know only that the time this employee would have taken to resolve it is greater than or equal to the observed time for which they handled the account, so the observed time is right-censored. The appropriate method would be to construct the [Kaplan–Meier estimator](http://en.wikipedia.org/wiki/Kaplan-Meier_estimator) of the survival function. If you want a single number for each employee, you could use this method to obtain the median 'survival' time, which gives you the median time taken to resolve an account. You can also get confidence intervals for the median if you so wish. It's possible that the median may not be estimable for some employees though if they fail to resolve more than half the accounts before they are re-assigned, in which case you could consider switching to some other percentile that is estimable for all (or at least the great majority of) employees. The Kaplan-Meier estimator isn't difficult -- it's perfectly possible, if somewhat tedious, to construct by hand, and straightforward to program. Confidence intervals are a bit more tricky, but are available in any decent statistical software package.
null
CC BY-SA 3.0
null
2011-04-13T15:19:35.777
2011-04-13T15:19:35.777
null
null
449
null
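The Kaplan-Meier estimator recommended in the answer above is, as it says, straightforward to program. A minimal language-agnostic sketch in Python, specialised to the median resolution time; the handling times and resolved/re-assigned flags below are made up for illustration:

```python
def km_median(times, resolved):
    """Median time to resolve an account, from the Kaplan-Meier
    estimate of the survival function.  Re-assigned accounts are
    right-censored: resolved[i] is False for them."""
    event_times = sorted({t for t, d in zip(times, resolved) if d})
    s = 1.0
    for t in event_times:
        at_risk = sum(1 for u in times if u >= t)
        events = sum(1 for u, d in zip(times, resolved) if d and u == t)
        s *= 1.0 - events / at_risk        # product-limit step
        if s <= 0.5:                       # survival first drops to 1/2
            return t
    return None  # median not estimable: too many censored accounts

# made-up handling times (days); False = account was re-assigned
times    = [3, 5, 6, 8, 10, 12, 15]
resolved = [True, True, False, True, True, False, True]
median_days = km_median(times, resolved)
```

The `None` return corresponds exactly to the caveat in the answer: if an employee fails to resolve more than half their accounts before re-assignment, the median is not estimable and a lower percentile would have to be used instead.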
9522
2
null
9500
18
null
As pointed out in the comments, it's because `princomp` uses $N$ for the divisor, but `prcomp` and the direct calculation using `cov` both use $N-1$ instead of $N$. This is mentioned in both the Details section of `help(princomp)`: > Note that the default calculation uses divisor 'N' for the covariance matrix. and the Details section of `help(prcomp)`: > Unlike princomp, variances are computed with the usual divisor N - 1. You can also see this in the source. For example, the snippet of `princomp` source below shows that $N$ (`n.obs`) is used as the denominator when calculating `cv`. ``` else if (is.null(covmat)) { dn <- dim(z) if (dn[1L] < dn[2L]) stop("'princomp' can only be used with more units than variables") covmat <- cov.wt(z) n.obs <- covmat$n.obs cv <- covmat$cov * (1 - 1/n.obs) cen <- covmat$center } ``` You can avoid this multiplication by specifying the `covmat` argument instead of the `x` argument. ``` princomp(covmat = cov(iris[,1:4]))$sd^2 ``` --- Update regarding PCA scores: You can set `cor = TRUE` in your call to `princomp` in order to perform PCA on the correlation matrix (instead of the covariance matrix). This will cause `princomp` to $z$-score the data, but it will still use $N$ for the denominator. As as result, `princomp(scale(data))$scores` and `princomp(data, cor = TRUE)$scores` will differ by the factor $\sqrt{(N-1)/N}$.
null
CC BY-SA 3.0
null
2011-04-13T15:31:24.723
2016-10-30T15:08:10.357
2016-10-30T15:08:10.357
1657
1657
null
9523
2
null
9190
1
null
[A Tour through the Visualization Zoo](http://queue.acm.org/detail.cfm?id=1805128) (Heer et al., Visualization 8(5) 2010) offers a particularly interesting overview of "innovative" and interactive techniques for displaying data. On a related point, a good software for data visualization, including the aforementioned gallery, is [Protovis](http://vis.stanford.edu/protovis/), which comes with a lot of [examples](http://vis.stanford.edu/protovis/ex/).
null
CC BY-SA 3.0
null
2011-04-13T15:39:52.810
2011-04-13T15:39:52.810
null
null
930
null
9524
2
null
9490
10
null
There are several notions of decomposition of such a tensor. Last year I asked [essentially the same question](http://www.mapleprimes.com/questions/97377-Decompose-A-Third-Order-Tensor-Into-Pure-Tensors) on the MaplePrimes site, answered it myself by referring to [wikipedia](http://en.wikipedia.org/wiki/Higher-order_singular_value_decomposition), and provided an implementation for one of those notions (the CANDECOMP/PARAFAC decomposition) in a [follow-up post](http://www.mapleprimes.com/posts/97861-CANDECOMPPARAFAC-Images-As-Tensors-Of-Order-3) (applied to decomposing the $3\times m \times n$ tensor given by the R,G,B entries of an image).
null
CC BY-SA 3.0
null
2011-04-13T15:51:38.890
2011-04-13T15:51:38.890
null
null
2898
null
9525
2
null
6949
5
null
Fraud detection is a rare-class problem. [Chapter Six of Charles Elkan's Notes](http://cseweb.ucsd.edu/~elkan/291/) for his Graduate Course in Data Mining and Predictive Analytics at UCSD walks you through the prediction of a rare class, and the pitfalls and proper ways to evaluate the success of such a model. The methods he specifically uses are isotonic regression and univariate logistic regression. The software he uses in the class is Rapidminer, but I prefer R. If you choose to use R, you can fit both of these models using the `isoreg` and `glm` functions. Many people also like to use SVMs in fraud detection, but part of the model selection criterion should be the speed with which you need to validate the transactions. If, for example, it's the swipe of a credit card, SVMs are wholly infeasible because they will take far too long to process. This is why, in production environments, variants on regression models are typically used for fraud detection.
null
CC BY-SA 3.0
null
2011-04-13T17:16:56.247
2011-04-13T17:16:56.247
null
null
2166
null
9526
1
null
null
2
106
I'm working on a project that will convert $USD to other currencies. The system will use average monthly exchange rates instead of daily because of pre-existing limitations. I need to calculate an estimate of the amount of error that will result from using monthly average. My idea is to - Download historical daily exchange rates for several years - Calculate the monthly average for each month - Subtract the monthly average from each daily amount that month - Find the standard deviation for the differences I would report the standard deviation as the expected error amount. Is this a valid approach?
Calculate error for monthly exchange rates
CC BY-SA 3.0
null
2011-04-13T17:52:53.233
2011-04-15T08:18:51.293
2011-04-15T08:18:51.293
null
4154
[ "estimation", "standard-deviation" ]
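The four steps listed in the question can be sketched directly. The daily rates below are simulated stand-ins for the downloaded history (step 1), so only the structure — not the numbers — is meaningful:

```python
import numpy as np

# step 1 (stand-in): ~21 trading days of rates per month for one year;
# in practice these come from the downloaded historical data
rng = np.random.default_rng(7)
daily_rates = {month: 1.30 + 0.01 * month + rng.normal(0.0, 0.01, size=21)
               for month in range(1, 13)}

# steps 2-3: monthly average, then each day's deviation from it
deviations = []
for rates in daily_rates.values():
    deviations.extend(rates - rates.mean())

# step 4: standard deviation of the deviations = reported error estimate
error_sd = float(np.std(deviations, ddof=1))
```

One caveat worth noting when interpreting `error_sd`: pooling deviations across months assumes the within-month variability is similar from month to month; if some months are much more volatile, a per-month spread may be more informative.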
9527
2
null
9474
1
null
Ha. It [looks like](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1308339) this is an unsolved problem in applied statistics. The best MRC estimators execute in $n^2 \log(n)$ time or worse, so of course no code is available.
null
CC BY-SA 3.0
null
2011-04-13T18:07:07.070
2011-04-13T18:07:07.070
null
null
4110
null
9528
2
null
9431
2
null
You can either estimate a parametric model of the error using MLE, or you can use a semi-paramteric approach based on something like the maximal rank correlation (MRC) estimator. Computationally, MRC is prohibitive for large samples, so it looks like MLE is the right approach for me. Thanks to GaBorgulya for some good, prompt direction, especially on the term "misclassification error." Here are some good sources on the topic: [The basic model, exactly as described in the original problem](http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6VC0-3V72S8B-B&_user=99318&_coverDate=12%2F31%2F1998&_rdoc=1&_fmt=high&_orig=gateway&_origin=gateway&_sort=d&_docanchor=&view=c&_searchStrId=1714202295&_rerunOrigin=scholar.google&_acct=C000007678&_version=1&_urlVersion=0&_userid=99318&md5=0db67a1d6b1fb454e3ffb9d6c4afb75a&searchtype=a) [Ungated version of the same](http://faculty.smu.edu/millimet/classes/eco7377/papers/hausman%20et%20al.pdf) [A more complicated, but more general model](http://www.jstor.org/stable/20076198) [A nice overview](http://www.jstor.org/stable/2696516)
null
CC BY-SA 3.0
null
2011-04-13T18:13:24.987
2011-04-13T18:13:24.987
null
null
4110
null
9532
1
13155
null
6
381
This is a statistical version of my [Math.SE ](https://math.stackexchange.com/questions/30731/will-two-convex-hulls-overlap) post. Given natural numbers $b$ and $r$, uniformly randomly choose $b+r$ points within a unit square. Call the $b$ points the blue points and the $r$ points red points. Let $p(b,r)$ be the probability that $H_r$, the convex hull of red points, overlaps with $H_b$, that of the blue points. How do I efficiently estimate $p(b,r)$ via Monte Carlo (or perhaps better) methods? I can think of averaging the probability over a large number of randomly chosen test cases, but how can I sample the search space in a way that gives me error bounds?
Monte Carlo estimation of convex hull overlap probability
CC BY-SA 3.0
0
2011-04-13T21:09:57.260
2011-07-17T18:15:16.630
2017-04-13T12:19:38.853
-1
4011
[ "monte-carlo" ]
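A sketch of the plain Monte Carlo approach to the question above: each trial draws fresh point sets and tests whether the two hulls overlap, and since each trial is an independent Bernoulli draw, the binomial standard error $\sqrt{\hat p(1-\hat p)/n}$ provides the error bound asked for. The overlap test here exploits the fact that two convex hulls intersect iff the point sets are not linearly separable, checked with a small linear program via scipy's `linprog`; this is one possible test, not the only one, and it handles degenerate hulls (points, segments) without special cases:

```python
import numpy as np
from scipy.optimize import linprog

def hulls_overlap(blue, red):
    """Convex hulls of two planar point sets intersect iff the sets are
    not linearly separable.  Maximise the margin t of a line w.x + c = 0
    with |w_i| <= 1: the sets are separable iff the optimal t > 0."""
    A_ub, b_ub = [], []
    for px, py in blue:   # w.p + c >= t   ->  -w.p - c + t <= 0
        A_ub.append([-px, -py, -1.0, 1.0]); b_ub.append(0.0)
    for qx, qy in red:    # w.q + c <= -t  ->   w.q + c + t <= 0
        A_ub.append([qx, qy, 1.0, 1.0]); b_ub.append(0.0)
    res = linprog(c=[0.0, 0.0, 0.0, -1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(-1, 1), (-1, 1), (None, None), (None, None)],
                  method="highs")
    return -res.fun <= 1e-9   # optimal t ~ 0  =>  not separable

def estimate_p(b, r, n_trials=500, seed=0):
    """Monte Carlo estimate of p(b, r) with its binomial standard error."""
    rng = np.random.default_rng(seed)
    hits = sum(hulls_overlap(rng.random((b, 2)), rng.random((r, 2)))
               for _ in range(n_trials))
    p_hat = hits / n_trials
    se = (p_hat * (1.0 - p_hat) / n_trials) ** 0.5
    return p_hat, se
```

Trials are i.i.d., so a ~95% interval is simply `p_hat ± 2*se`, shrinking as $1/\sqrt{n}$.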
9533
1
null
null
6
498
I have 150 observations, 500 features, and I am interested in novelty detection (outlier detection): given a new observation (let's say a 'patient') I want to know if it is different from the previous ones (let's call them 'controls'). If I had a lot of data, I would probably be using statistical testing at the univariate parameter level, but, because of multiple testing issues, I end up exploring the tails of the control distribution to achieve significance, and I do not have enough data for non-parametric tests at such small p-values. I am using one-class SVMs, which alleviate this issue by learning a global decision strategy. The limitations of this approach are: - it is very 'blackboxy' - it works poorly if the data is very 'anisotropic', i.e. the marginal distributions of the controls are very dissimilar in different directions. A trick to work around problem 2 is to center and norm the univariate parameters (this is often called creating 'Z-scores'). Ideally, one would like to whiten the data using the control covariance, but there is not enough data to compute it. The values fed into the OC-SVM can then be interpreted as a univariate test statistic (under a normal null distribution for the controls). In my case, I can see from the histograms that the controls' distribution is heavy-tailed. I would like to learn a univariate transform making it closer to a standard normal. By the way, I have no reference for such practices. I have learned them empirically and from lab discussions. Any pointer would be welcome, even if it doesn't directly answer my question.
Learning a univariate transform (kernel?) for novelty detection
CC BY-SA 3.0
null
2011-04-13T21:38:03.710
2015-04-19T20:50:08.983
2015-04-19T20:50:08.983
9964
1265
[ "machine-learning", "outliers", "kernel-trick" ]
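One standard choice for the transform sought in the question above — consistent with the 'Z-score' trick it describes but robust to heavy tails — is a rank-based inverse normal transform: pass each control value through the empirical CDF, then through the standard normal quantile function, and interpolate for new observations. A sketch using only numpy and the standard library; the 0.375/0.25 offsets are the Blom plotting-position convention, one of several in use:

```python
import numpy as np
from statistics import NormalDist

def fit_gaussianizer(controls):
    """Learn a monotone map sending the empirical distribution of the
    control sample to (approximately) a standard normal."""
    x = np.sort(np.asarray(controls, dtype=float))
    n = len(x)
    u = (np.arange(1, n + 1) - 0.375) / (n + 0.25)  # Blom positions
    z = np.array([NormalDist().inv_cdf(p) for p in u])

    def transform(new):
        # piecewise-linear interpolation; values outside the control
        # range are clipped to the extreme learned z-scores
        return np.interp(new, x, z)

    return transform

# heavy-tailed control sample (Student t with 1 d.f., i.e. Cauchy)
rng = np.random.default_rng(0)
controls = rng.standard_t(df=1, size=500)
to_z = fit_gaussianizer(controls)
z_scores = to_z(controls)
```

The clipping at the extremes is deliberate for novelty detection: a new value far beyond the control range still maps to a finite extreme z-score, so the transform cannot extrapolate tail behaviour it never observed.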
9534
2
null
9512
10
null
Update: It turns out that the Arima function has an argument for supplying the old model: ``` adj_s <- Arima(series, model=arima_s) ``` The end result might be the same for both approaches, but I would advise using the second one, because it is clearly tested more thoroughly. **Old answer:** As it happens, I encountered a similar problem recently. Here is a function which takes an existing ARIMA model and applies it to new data. ``` adjust.amodel <- function(object,extended) { km.mod <- makeARIMA(object$model$phi,object$model$theta,object$model$Delta) km.res <- KalmanRun(extended,km.mod) object$x <- extended object$residuals <- ts(km.res$resid,start=start(extended),end=end(extended),frequency=frequency(extended)) object$model <- km.mod object } ``` In your case here is how you should use it: ``` series[31] <- 65 adj_arima_s <- adjust.amodel(arima_s,series) simulate(adj_arima_s, nsim=50, future=TRUE) ``` The usual caveats apply, though. You need good reasons to do this. If the data changes, the model should change too, so what you are doing is ignoring the new information and sticking to the old model, which might be the wrong one. You can compare this to producing the model fit by plucking the coefficients out of thin air. Although the coefficients have statistical validation, it comes from the old data, so interpretation of the results should take this into consideration.
null
CC BY-SA 3.0
null
2011-04-13T21:54:17.267
2014-08-24T14:22:54.270
2014-08-24T14:22:54.270
2116
2116
null
9535
1
9538
null
33
22830
I've run across a couple guides suggesting that I use R's nlm for maximum likelihood estimation. But none of them (including [R's documentation](http://stat.ethz.ch/R-manual/R-devel/library/stats/html/nlm.html)) gives much theoretical guidance for when to use or not use the function. As far as I can tell, nlm is just doing gradient descent along the lines of Newton's method. Are there principles for when it's reasonable to use this approach? What alternatives are available? Also, are there limits on the size of the arrays, etc. one can pass to nlm?
When should I *not* use R's nlm function for MLE?
CC BY-SA 3.0
null
2011-04-14T00:40:22.830
2012-10-11T20:41:35.183
null
null
4110
[ "r", "maximum-likelihood" ]
9536
1
9554
null
4
3096
I am sick of using the examples in the book. Is there an easy place to find data for which z-score/percentile/normal distribution stuff would be easy to see?
Where can I find good publicly available data that I could use to teach z-scores to my college students?
CC BY-SA 3.0
null
2011-04-14T01:33:55.987
2013-11-07T12:41:24.940
null
null
1490
[ "dataset" ]
9537
2
null
9535
17
null
When to use and not to use any particular method of maximization depends to a great extent on the type of data you have. `nlm` will work just fine if the likelihood surface isn't particularly "rough" and is everywhere differentiable. `nlminb` provides a way to constrain parameter values to particular bounding boxes. `optim`, which is probably the most-used optimizer, provides a few different optimization routines; for example, BFGS, L-BFGS-B, and simulated annealing (via the SANN option), the latter of which might be handy if you have a difficult optimizing problem. There are also a number of optimizers available on CRAN. `rgenoud`, for instance, provides a genetic algorithm for optimization. `DEoptim` uses a different genetic optimization routine. Genetic algorithms can be slow to converge, but are usually guaranteed to converge (in time) even when there are discontinuities in the likelihood. I don't know about `DEoptim`, but `rgenoud` is set up to use `snow` for parallel processing, which helps somewhat. So, a probably somewhat unsatisfactory answer is that you should use `nlm` or any other optimizer if it works for the data you have. If you have a well-behaved likelihood, any of the routines provided by `optim` or `nlm` will give you the same result. Some may be faster than others, which may or may not matter, depending on the size of the dataset, etc. As for the number of parameters these routines can handle, I don't know, though it's probably quite a few. Of course, the more parameters you have, the more likely you are to run into problems with convergence.
null
CC BY-SA 3.0
null
2011-04-14T02:27:08.703
2011-04-14T02:27:08.703
null
null
3265
null
9538
2
null
9535
47
null
There are a number of general-purpose optimization routines in base R that I'm aware of: `optim`, `nlminb`, `nlm` and `constrOptim` (which handles linear inequality constraints, and calls `optim` under the hood). Here are some things that you might want to consider in choosing which one to use. - optim can use a number of different algorithms including conjugate gradient, Newton, quasi-Newton, Nelder-Mead and simulated annealing. The last two don't need gradient information and so can be useful if gradients aren't available or not feasible to calculate (but are likely to be slower and require more parameter fine-tuning, respectively). It also has an option to return the computed Hessian at the solution, which you would need if you want standard errors along with the solution itself. - nlminb uses a quasi-Newton algorithm that fills the same niche as the "L-BFGS-B" method in optim. In my experience it seems a bit more robust than optim in that it's more likely to return a solution in marginal cases where optim will fail to converge, although that's likely problem-dependent. It has the nice feature, if you provide an explicit gradient function, of doing a numerical check of its values at the solution. If these values don't match those obtained from numerical differencing, nlminb will give a warning; this helps to ensure you haven't made a mistake in specifying the gradient (easy to do with complicated likelihoods). - nlm only uses a Newton algorithm. This can be faster than other algorithms in the sense of needing fewer iterations to reach convergence, but has its own drawbacks. It's more sensitive to the shape of the likelihood, so if it's strongly non-quadratic, it may be slower or you may get convergence to a false solution. The Newton algorithm also uses the Hessian, and computing that can be slow enough in practice that it more than cancels out any theoretical speedup.
null
CC BY-SA 3.0
null
2011-04-14T03:00:07.563
2011-04-14T03:00:07.563
null
null
1569
null
9539
1
null
null
1
123
I'm familiar with the diagnostics required for OLS, however I'm in new territory with a model I'm fitting to data in R, using Poisson regression with GLM. What are the standard methods for evaluating a WLS model?
How do I go about conducting model diagnostics on WLS?
CC BY-SA 3.0
null
2011-04-14T03:20:16.033
2021-05-14T16:57:27.933
2021-05-14T16:57:27.933
11887
1965
[ "r", "modeling", "poisson-regression", "diagnostic" ]
9540
2
null
7164
3
null
The generalization of the change of variable formula to the non-bijective case is generally hard to write out explicitly; see [http://en.wikipedia.org/wiki/Probability_density_function#Multiple_variables](http://en.wikipedia.org/wiki/Probability_density_function#Multiple_variables), which essentially formalizes mpiktas's suggestion.
null
CC BY-SA 3.0
null
2011-04-14T04:00:01.310
2011-04-14T04:00:01.310
null
null
null
null
9541
1
null
null
1
1563
I am trying to determine model significance using: ``` 1-pchisq(null deviance-residual deviance, null df- residual df) ``` I have 5 models: - Four models were estimated with GLMs, which gave me null and residual DFs in the summary. - The fifth model was estimated with the lmer function because of the nested structure of my data. This does not give me degrees of freedom for null or residuals. How do I get my null and residual deviances and degrees of freedom? Thanks!
How can I obtain null and residual deviance/degrees of freedom for assessing model significance?
CC BY-SA 3.0
null
2011-04-14T04:13:36.247
2011-06-18T15:00:41.607
2011-04-19T14:30:05.650
930
4027
[ "r", "statistical-significance", "degrees-of-freedom" ]
9542
1
14883
null
30
22400
I know that in a regression situation, if you have a set of highly correlated variables this is usually "bad" because of the instability in the estimated coefficients (variance goes toward infinity as the determinant goes towards zero). My question is whether this "badness" persists in a PCA situation. Do the coefficients/loadings/weights/eigenvectors for any particular PC become unstable/arbitrary/non-unique as the covariance matrix becomes singular? I am particularly interested in the case where only the first principal component is retained, and all others are dismissed as "noise" or "something else" or "unimportant". I don't think that it does, because you will just be left with a few principal components which have zero, or close to zero, variance. It's easy to see this isn't the case in the simple extreme case with 2 variables - suppose they are perfectly correlated. Then the first PC will be the exact linear relationship, and the second PC will be perpendicular to the first PC, with all PC values equal to zero for all observations (i.e. zero variance). I'm wondering if it's more general.
Is PCA unstable under multicollinearity?
CC BY-SA 3.0
null
2011-04-14T04:51:15.753
2011-08-27T17:07:07.827
2011-04-14T07:19:10.440
null
2392
[ "pca", "multicollinearity" ]
9543
2
null
9505
2
null
You can use [MinHashing](http://en.wikipedia.org/wiki/MinHash) to get a fast approximate Jaccard similarity match for your current item set against a database of existing item sets. You might use a few min hashes to quickly find candidate recommendations, and then do the full Jaccard computation against only the candidates found via min hashing.
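As a sketch of why this works: each seeded hash function plays the role of a random permutation, and the probability that two sets share the same minimum hash equals their Jaccard similarity. A minimal stdlib Python illustration (the hash construction and the example sets are my own assumptions, not from the answer):

```python
import hashlib

def _h(seed, item):
    # Deterministic 64-bit hash of (seed, item); one seed per "permutation"
    digest = hashlib.blake2b(f"{seed}:{item}".encode(), digest_size=8).digest()
    return int.from_bytes(digest, "big")

def minhash_signature(items, num_hashes=256):
    # For each seeded hash function, keep only the minimum hash over the set
    return [min(_h(seed, it) for it in items) for seed in range(num_hashes)]

def estimated_jaccard(sig_a, sig_b):
    # The chance that the minima agree equals the Jaccard similarity, so the
    # fraction of agreeing signature positions estimates that similarity.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

# Two item sets with true Jaccard similarity 30 / 90 = 1/3
set_a = {f"item{i}" for i in range(60)}
set_b = {f"item{i}" for i in range(30, 90)}
sig_a, sig_b = minhash_signature(set_a), minhash_signature(set_b)
```

With 256 hash functions the standard error of the estimate is roughly 0.03, so candidates can be filtered cheaply before computing the exact Jaccard similarity on the survivors.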
null
CC BY-SA 3.0
null
2011-04-14T05:53:49.233
2011-04-14T05:53:49.233
null
null
4164
null
9544
1
9563
null
7
1404
Let $A\in\mathbb{R}^{n \times n}$ be a dense symmetric positive-definite matrix (the $X^TX$ from [here](https://stats.stackexchange.com/questions/9341/regularized-fit-from-summarized-data)) and $b$ a vector in $\mathbb{R}^n$. I need to compute $A^{-1}b$. Two questions: - Could you recommend an efficient and numerically stable algorithm for computing $A^{-1}b$ for $n \approx 1000$? - Let $\tilde{A_i}$ denote the matrix obtained from $A$ by removing its $i$-th row and $i$-th column. Is there an algorithm that, having been allowed to pre-process $A$ in some way, would enable me to quickly compute $\tilde{A_i}^{-1}\tilde b$ for any $i \in \{1,2,\ldots,n\}$ and any $\tilde b \in \mathbb{R}^{n-1}$?
Computing $(X^TX)^{-1}X^Ty$ in OLS
CC BY-SA 3.0
null
2011-04-14T05:58:16.167
2011-05-12T12:01:23.310
2017-04-13T12:44:33.357
-1
439
[ "regression", "least-squares", "matrix-inverse" ]
9545
2
null
9510
15
null
One alternative to @whuber's normal approximation is to use "mixing" probabilities, or a hierarchical model. This would apply when the $p_i$ are similar in some way, and you can model this by a probability distribution $p_i\sim Dist(\theta)$ with a density function $g(p|\theta)$ indexed by some parameter $\theta$. You get an integral equation: $$Pr(s=9|n=16,\theta)={16 \choose 9}\int_{0}^{1} p^{9}(1-p)^{7}g(p|\theta)dp $$ The binomial probability comes from setting $g(p|\theta)=\delta(p-\theta)$; the normal approximation comes from (I think) setting $g(p|\theta)=g(p|\mu,\sigma)=\frac{1}{\sigma}\phi\left(\frac{p-\mu}{\sigma}\right)$ (with $\mu$ and $\sigma$ as defined in @whuber's answer) and then noting that the "tails" of this PDF fall off sharply around the peak. You could also use a beta distribution, which would lead to a simple analytic form, and which need not suffer from the "small p" problem that the normal approximation does, as the beta is quite flexible. Use a $beta(\alpha,\beta)$ distribution with $\alpha,\beta$ set by the solutions to the following equations (these are the "minimum KL divergence" estimates): $$\psi(\alpha)-\psi(\alpha+\beta)=\frac{1}{n}\sum_{i=1}^{n}log[p_{i}]$$ $$\psi(\beta)-\psi(\alpha+\beta)=\frac{1}{n}\sum_{i=1}^{n}log[1-p_{i}]$$ where $\psi(.)$ is the digamma function, closely related to the harmonic series. We get the "beta-binomial" compound distribution: $${16 \choose 9}\frac{1}{B(\alpha,\beta)}\int_{0}^{1} p^{9+\alpha-1}(1-p)^{7+\beta-1}dp ={16 \choose 9}\frac{B(\alpha+9,\beta+7)}{B(\alpha,\beta)}$$ This distribution converges towards a normal distribution in the case that @whuber points out, and it should give reasonable answers for small $n$ and skewed $p_i$ - but not for multimodal $p_i$, as the beta distribution has only one peak. You can easily fix this, though, by simply using $M$ beta distributions for the $M$ modes.
You break up the integral from $0<p<1$ into $M$ pieces so that each piece has a unique mode (and enough data to estimate parameters), and fit a beta distribution within each piece. Then add up the results, noting that with the change of variables $p=\frac{x-L}{U-L}$ for $L<x<U$, the beta integral transforms to: $$B(\alpha,\beta)=\int_{L}^{U}\frac{(x-L)^{\alpha-1}(U-x)^{\beta-1}}{(U-L)^{\alpha+\beta-1}}dx$$
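The closed-form ratio of beta functions above is straightforward to evaluate through log-gamma functions, which avoids overflow for larger counts. A small stdlib Python check (the $\alpha,\beta$ values below are arbitrary illustrations, not estimates from data):

```python
from math import comb, exp, lgamma

def log_beta(a, b):
    # log B(a, b) = log Gamma(a) + log Gamma(b) - log Gamma(a + b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binomial_pmf(k, n, alpha, beta):
    # P(K = k) for the beta-binomial compound distribution:
    # C(n, k) * B(alpha + k, beta + n - k) / B(alpha, beta)
    return comb(n, k) * exp(log_beta(alpha + k, beta + n - k) - log_beta(alpha, beta))

# Probability of 9 successes in 16 trials when p ~ beta(2, 3)
p9 = beta_binomial_pmf(9, 16, 2.0, 3.0)
```

As a sanity check, with $\alpha=\beta=1$ (uniform mixing distribution) every count $k$ is equally likely, so each probability is $1/(n+1)$.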
null
CC BY-SA 3.0
null
2011-04-14T06:48:46.820
2013-12-01T03:18:23.493
2013-12-01T03:18:23.493
2392
2392
null
9546
2
null
9519
2
null
I'd probably call this a [cross-over design](http://en.wikipedia.org/wiki/Crossover_design) with three treatments and two periods, although you could also think of it as a [balanced incomplete block design](http://en.wikipedia.org/wiki/Incomplete_block_design) with three treatments and 'participant' as the blocking factor with a block size of 2. Not my area of expertise so I can't accept or reject all your points but I'm sure you can analyse it with ANOVA. One point that comes to mind is, as for any designed experiment, don't forget to randomize, and [conceal allocation](http://en.wikipedia.org/wiki/Allocation_concealment#Allocation_concealment)! Two standard textbooks on crossover trials are: - Jones, Byron; Kenward, Michael G. (2003). Design and Analysis of Cross-Over Trials (2nd ed.). Chapman and Hall. - Senn, Stephen (2002). Cross-over trials in clinical research (2nd ed.). Wiley.
null
CC BY-SA 3.0
null
2011-04-14T06:50:16.403
2011-04-14T06:50:16.403
null
null
449
null
9547
1
null
null
2
1329
- When measuring the quantization error of a clustering, should the distance between samples/centroid be squared or not? I found both variants in the literature. - Furthermore, is (squared) quantization error not the same as squared sum of errors?
Measuring quantization error for clustering - squared or not?
CC BY-SA 3.0
null
2011-04-14T07:54:22.887
2011-04-14T13:47:33.077
2011-04-14T07:59:37.813
183
null
[ "clustering" ]
9548
2
null
9536
2
null
One good website where you can find time series data from different disciplines is [Rob Hyndman's Time Series Data Library](http://robjhyndman.com/TSDL/).
null
CC BY-SA 3.0
null
2011-04-14T09:38:51.977
2011-04-14T10:48:29.207
2011-04-14T10:48:29.207
183
3084
null
9549
1
null
null
8
3922
Assume you have a class of approximately 800 students and following a set of assessments each student has a raw grade. - How should these raw grades be converted into a final grade? - Is it a good idea to scale the raw grades to a normal distribution?
Should grades be assigned to students based on a normal distribution?
CC BY-SA 3.0
null
2011-04-14T09:43:45.847
2014-08-12T09:16:02.363
2011-04-14T10:41:27.343
183
4167
[ "normal-distribution" ]
9550
1
null
null
1
497
I've got more than 20 (10-point Likert scale) variables with more than 1000 entries each. What I want to do is compare the means of the answers to the questions. A one-way ANOVA seems suitable for this, but you can only categorize by the values of a variable. I want to categorize by question, the variable itself. Is there any way to do this without having to put all answers beneath each other and having another variable saying which question it is? Just to be clear, I've got: variable: question1 --> 1000 entries ranging from 1 to 10; variable: question2 --> 1000 entries ranging from 1 to 10; ... variable: question20 --> 1000 entries ranging from 1 to 10. I want to compare the means of the different questions with a one-way ANOVA, but I can't choose to factor by question.
Compare means of different variables
CC BY-SA 3.0
null
2011-04-14T10:10:03.263
2011-04-14T10:34:50.687
null
null
4168
[ "anova", "spss" ]
9551
2
null
9544
5
null
The standard answer to your first question is Cholesky decomposition. To quote [the Wikipedia article](http://en.wikipedia.org/wiki/Cholesky_decomposition#Applications): > If $A$ is symmetric and positive definite, then we can solve $Ax = b$ [for $x$] by first computing the Cholesky decomposition $A = LL^\mathrm{T}$, then solving $Ly = b$ for $y$, and finally solving $L^\mathrm{T} x = y$ for $x$. I'm not really clear what you're after in your second question. Doesn't a solution to the first question also provide a solution to the second? Surely it's not difficult computationally to remove a row and column from a matrix, and surely that's all the 'pre-processing' required?
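A bare-bones Python sketch of the quoted recipe, spelling out the factorization and the forward/back substitutions (for real problems at $n \approx 1000$ you would call a tuned LAPACK routine rather than roll your own):

```python
def cholesky(A):
    # Lower-triangular L with A = L L^T, for symmetric positive-definite A
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (A[i][i] - s) ** 0.5
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def solve_spd(A, b):
    # Solve A x = b: factor A = L L^T, forward-solve L y = b,
    # then back-solve L^T x = y.
    L = cholesky(A)
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x
```

The two triangular solves are each $O(n^2)$ once the $O(n^3)$ factorization is done, which is why the factorization is worth caching when solving against many right-hand sides.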
null
CC BY-SA 3.0
null
2011-04-14T10:32:32.233
2011-04-14T10:41:05.347
2011-04-14T10:41:05.347
449
449
null
9552
2
null
9550
4
null
- If the same 1000 participants answered each question: consider using paired samples t-tests and repeated measures ANOVAs - If a different 1000 participants answered each question: then it sounds like you need to restructure your data file to conform to the expectations of the statistical package you are using.
null
CC BY-SA 3.0
null
2011-04-14T10:34:50.687
2011-04-14T10:34:50.687
null
null
183
null
9553
2
null
9542
3
null
PCA is often a means to an end, leading up either to inputs for a multiple regression or to use in a cluster analysis. I think in your case, you are talking about using the results of a PCA to perform a regression. In that case, your objective in performing a PCA is to get rid of multicollinearity and obtain orthogonal inputs to a multiple regression; not surprisingly, this is called Principal Components Regression. Here, if all your original inputs were orthogonal, then doing a PCA would give you another set of orthogonal inputs. Therefore, if you are doing a PCA, one would assume that your inputs have multicollinearity. Given the above, you would want to do PCA to get a few input variables from a problem that has a number of inputs. To determine how many of those new orthogonal variables you should retain, a scree plot is often used (Johnson & Wichern, 2001, p. 445). If you have a large number of observations, then you could also use the rule of thumb that, with $\hat{ \lambda_{i} }$ as the $i^{th}$ largest estimated eigenvalue, you only use up to and including those values where $\frac{ \hat{ \lambda_{i} } }{p}$ is greater than or equal to one (Johnson & Wichern, 2001, p. 451). References Johnson & Wichern (2001). Applied Multivariate Statistical Analysis (6th Edition). Prentice Hall.
null
CC BY-SA 3.0
null
2011-04-14T10:44:19.680
2011-04-14T10:57:59.350
2011-04-14T10:57:59.350
930
3805
null
9554
2
null
9536
3
null
You may wish to read answers to this existing question on [freely available datasets](https://stats.stackexchange.com/questions/7/locating-freely-available-data-samples). In general, I imagine that you'd want a dataset with some interesting metric variables. In psychology research methods classes that I've taught, we've often looked at datasets with intelligence or personality test scores. If you want a personality example, I have some [personality data and metadata on github](https://github.com/jeromyanglim/Sweave_Personality_Reports) based on the [IPIP](http://ipip.ori.org/), a public domain measure of the Big 5 factors of personality. - github repository home - data - metadata - David Smith's summary
null
CC BY-SA 3.0
null
2011-04-14T10:55:13.367
2011-04-14T10:55:13.367
2017-04-13T12:44:40.807
-1
183
null
9555
2
null
9544
6
null
To add to @onestop's answer, another efficient way is to use [QR decomposition](http://en.wikipedia.org/wiki/QR_decomposition). The added benefit is that the QR decomposition can be applied directly to $X$, and not to $X^TX$. I think the QR decomposition can be made to work for your second question; it is definitely straightforward for column removal. However, the question is why you need it. There are [readily available libraries](http://en.wikipedia.org/wiki/Automatically_Tuned_Linear_Algebra_Software) which perform these operations very efficiently. Do you want to reimplement them? As far as I understand, there is a lot of non-trivial fine-tuning of code even when the algorithm is basically the best for the job, so chances are pretty high that you might not get the full benefits by implementing an algorithm which is supposedly theoretically superior. Here is the [link to the chapter in a book](http://books.google.com/books?id=ZecsDBMz5-IC&pg=PA132) which discusses the modifications needed for solving the second question in the case of QR decomposition. Although it states that the methods apply to least squares problems, they can be applied to systems of linear equations. I am pretty sure that this should be a standard problem, so maybe someone will give a more suitable reference.
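For illustration only, here is the QR route in pure Python via modified Gram-Schmidt, applied directly to $X$ rather than to $X^TX$ (as the answer stresses, in practice you should rely on the tuned library implementations, not hand-rolled code like this):

```python
def qr_least_squares(X, y):
    # Modified Gram-Schmidt QR on X (m x n, m >= n), then back-substitute
    # R b = Q^T y to solve the least-squares problem min ||X b - y||.
    m, n = len(X), len(X[0])
    V = [row[:] for row in X]                 # working copy of the columns of X
    Q = [[0.0] * n for _ in range(m)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        R[j][j] = sum(V[i][j] ** 2 for i in range(m)) ** 0.5
        for i in range(m):
            Q[i][j] = V[i][j] / R[j][j]
        for k in range(j + 1, n):
            R[j][k] = sum(Q[i][j] * V[i][k] for i in range(m))
            for i in range(m):
                V[i][k] -= R[j][k] * Q[i][j]  # remove the component along Q_j
    qty = [sum(Q[i][j] * y[i] for i in range(m)) for j in range(n)]
    b = [0.0] * n
    for j in reversed(range(n)):              # back substitution on R b = Q^T y
        b[j] = (qty[j] - sum(R[j][k] * b[k] for k in range(j + 1, n))) / R[j][j]
    return b
```

Because $X^TX$ is never formed, the condition number of the problem is not squared, which is the main numerical argument for QR over the normal equations.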
null
CC BY-SA 3.0
null
2011-04-14T10:57:09.693
2011-05-12T12:01:23.310
2011-05-12T12:01:23.310
830
2116
null
9556
1
null
null
3
463
I have 2 genes (tf1 and tf2) which affect a third gene (tg). By "affecting" a gene I mean that changes in the values of tf1 and tf2 change the value of tg; this value is what we measure in the whole experiment. We want to see if the values of tf1 and tf2 are dependent (which would mean they regulate each other) and if (tf1, tg) and (tf2, tg) are dependent. The responses of tf1, tf2, and tg differ under different conditions, so my 3 factors are tf1, tf2, and conditions. - First I want to make sure that the design of my 3-way ANOVA is correct. One dimension is the conditions. If we see the second and third factors (tf1 and tf2) as a matrix, I have designed it like this:
| tg  | tf1           |
| tf2 | func(tf1,tf2) |
So in the first column there is no tf1, and in the second column we always see its effect. In the first row there is no tf2, but in the second row we see its effect. I thought it would be a good idea to fill the cell in the first row, first column with tg, and the last cell has the effect of both tf1 and tf2 (I combined the values of tf1 and tf2 with a function like the average). I'd appreciate your thoughts on this design. - If a:tf1, b:tf2, and c:condition, which of the SSs shows whether there is a dependency between tf1 and tf2? Is it SSab? The reason I'm asking is that I don't see much difference between the histogram of SSab for samples in which there is an interaction between a and b and for samples in which there is no interaction. - Also, what is the interpretation of SSabc for my particular problem?
Interpretation of 3-way ANOVA
CC BY-SA 3.0
null
2011-04-14T11:20:00.523
2017-11-30T12:50:00.897
2011-04-28T17:27:47.293
2885
2885
[ "anova", "genetics", "interpretation", "bioinformatics", "biostatistics" ]
9557
1
9558
null
10
8988
I was recently exposed to some statistical hypothesis testing methods (e.g. the Friedman test) at work, and I would like to increase my knowledge of the topic. Can you suggest a good introduction to statistical significance / statistical hypothesis testing for a computer scientist? I am thinking of a PDF book or similar, but any other kind of help is welcome. Edit: I've already found [this](http://www.jerrydallal.com/LHSP/LHSP.HTM) website, but I was looking preferably for something that is easily printable. Thank you Tunnuz
What is a good introduction to statistical hypothesis testing for computer scientists?
CC BY-SA 3.0
null
2011-04-14T12:07:36.113
2011-06-29T19:35:06.853
2011-04-14T12:17:05.553
4169
4169
[ "hypothesis-testing", "statistical-significance", "p-value" ]
9558
2
null
9557
6
null
[http://greenteapress.com/thinkstats/](http://greenteapress.com/thinkstats/) This seems like it would be useful for you. Full disclosure: I have not read it, but I am working my way through the Think Like a Computer Scientist in Java, and am finding that extremely useful.
null
CC BY-SA 3.0
null
2011-04-14T12:19:03.930
2011-04-14T12:19:03.930
null
null
656
null
9559
2
null
9549
6
null
Why should grades be normally distributed? Sometimes they are, but if the grades are not normally distributed, then the bell curve grading system, where the middle (say) 70% get C's, is probably not a good one to base grades on. Although that grading is pretty harsh, few instructors would actually do it. Use distributions to describe the data; don't transform data to fit a particular distribution (although transformations can be helpful at times). Suppose you use the bell curve grading system and, in the extreme case, everyone aces the class. How do you decide grades? Here is how I would decide final grades:
- 90-100%: A
- 80-90%: B
- ...
null
CC BY-SA 3.0
null
2011-04-14T12:54:08.067
2011-04-14T12:54:08.067
null
null
2310
null
9560
2
null
9547
3
null
The point of the squared error is that it results from the underlying assumption that your data are distributed with a Gaussian random component (like noise on your measurements, e.g.). The sum of squares error comes from the log probability. Say your points are distributed according to $p$; then you want to pick your parameters $\theta$ (for K-Means the centroids, e.g.) to maximize the probability of the data: $$p(D) = \Pi_i p(x_i|\theta)$$ However, maximization of products is hard, so we maximize the log instead, which turns the product into a sum: $$argmax_{\theta} \sum_i \log p(x_i|\theta)$$ Now, the density of the Gaussian distribution is something like $\frac{1}{\sqrt{2\pi\sigma^2}}\,e^{ -\frac{(x-\mu)^2}{2\sigma^2} }$. If you take the log of this, you are left with what you find in the exponent plus some constant terms. You ignore the variance in most cases. And voilà, that's where the squared distance comes from. Since the Gaussian punishes outliers very much (because of the square), it is sometimes more robust to assume a distribution that has heavier tails, like Student's t or the Laplace. If you only measure the absolute distance, this comes from a Laplace assumption. Thus, it is a question that the statistician has to answer - it's more a part of the model. Of course, you can use a model selection method (like cross validation) that does this part of the job.
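The practical consequence of the distributional assumption is easy to demonstrate: under a squared (Gaussian) loss the optimal single center of 1-D data is the mean, while under an absolute (Laplace) loss it is the median, which resists outliers. A brute-force Python sketch with made-up data:

```python
def best_center(data, loss):
    # Brute-force the single center c minimizing sum_i loss(x_i - c) over a grid
    lo, hi = min(data), max(data)
    grid = [lo + (hi - lo) * i / 2000 for i in range(2001)]
    return min(grid, key=lambda c: sum(loss(x - c) for x in data))

data = [1.0, 2.0, 3.0, 4.0, 15.0]                 # one outlier at 15
center_sq = best_center(data, lambda r: r ** 2)   # Gaussian assumption: pulled to the mean, 5.0
center_abs = best_center(data, lambda r: abs(r))  # Laplace assumption: stays at the median, 3.0
```

The squared-loss center is dragged toward the outlier; the absolute-loss center ignores it, which is exactly the robustness trade-off described above.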
null
CC BY-SA 3.0
null
2011-04-14T13:47:33.077
2011-04-14T13:47:33.077
null
null
2860
null
9561
1
9569
null
35
38670
This is an elementary question, but I wasn't able to find the answer. I have two measurements: n1 events in time t1 and n2 events in time t2, both produced (say) by Poisson processes with possibly-different lambda values. This is actually from a news article, which essentially claims that since $n_1/t_1\neq n_2/t_2$ the two are different, but I'm not sure that the claim is valid. Suppose that the time periods were not chosen maliciously (to maximize the events in one or the other). Can I just do a t-test, or would that not be appropriate? The number of events is too small for me to comfortably call the distributions approximately normal.
Checking if two Poisson samples have the same mean
CC BY-SA 3.0
null
2011-04-14T14:26:53.683
2014-11-11T13:41:53.107
null
null
1378
[ "hypothesis-testing", "poisson-distribution" ]
9562
2
null
9533
4
null
Your setting is pretty hard. I have no solution, but a couple of points. - Energy-based models can give you a scalar corresponding to a "grade of belief" that an input is generated by the distribution of your data. It comes down to choosing a model and a good loss function. Check out Yann LeCun's tutorial on energy-based models. Also, there is Ranzato's energy-based unsupervised framework paper where they use sparse autoencoders. Sparseness is generally desirable given your tiny dataset, I guess. - A restricted Boltzmann machine might work. You can train your RBM on the data with fewer than 25 hidden features (which you might do anyway because of your lack of data), enabling you to write down the probability that any new input belongs to the distribution given by your data. Actually, an RBM is an energy-based model as well. - I have a feeling that an SVM with kernels might be too complex a model for what you are doing. Do you get acceptable scores on a test set?
null
CC BY-SA 3.0
null
2011-04-14T14:28:33.387
2011-04-14T14:28:33.387
null
null
2860
null
9563
2
null
9544
5
null
Regarding your second question, here is a way to do this for $n=i$, for simplicity of notation. Let $\alpha = A_{nn}, \, a = (A_{1n},\dots, A_{n-1,n})^T$ and therefore $$ A = \begin{pmatrix} \tilde A & a \\ a^T & \alpha \end{pmatrix} $$ Also let $A^{-1}$ be partitioned in the same way, $$ A^{-1} = \begin{pmatrix} \tilde C & c \\ c^T & \gamma \end{pmatrix} $$ I understand you want $\tilde A^{-1}$ and have already computed $A^{-1}$. Set $d = \tilde A^{-1}a$ and $\delta = c^Ta$. These quantities already appear in $A^{-1}$, since $$ A^{-1} = \begin{pmatrix} \tilde A^{-1} + \frac{1}{\alpha - \delta}dd^T & -\frac{1}{\alpha - \delta} d \\ -\frac{1}{\alpha - \delta} d^T & \frac{1}{\alpha - \delta} \end{pmatrix} $$ as a direct calculation shows. Therefore, $$ \tilde A^{-1} = \tilde C - \frac{1}{\alpha - \delta} dd^T = \tilde C - \frac{1}{\gamma}cc^T $$ which is an $O(n^2)$ update of $A^{-1}$.
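This block-inverse update is easy to verify numerically. A stdlib Python sketch using a small hand-computed example (the matrix is my own illustration, chosen so the exact inverse is known):

```python
def drop_last_inverse(A_inv):
    # Given A^{-1} (as a list of rows), return the inverse of A with its last
    # row and column removed, via the block identity derived above:
    #   tilde(A)^{-1} = C - (1/gamma) c c^T,
    # where C, c, gamma are the corresponding blocks of A^{-1}.
    n = len(A_inv)
    gamma = A_inv[n - 1][n - 1]
    c = [A_inv[i][n - 1] for i in range(n - 1)]
    return [[A_inv[i][j] - c[i] * c[j] / gamma for j in range(n - 1)]
            for i in range(n - 1)]

# A = [[2,1,0],[1,2,1],[0,1,2]] has inverse (1/4)[[3,-2,1],[-2,4,-2],[1,-2,3]];
# dropping the last row/column, inv([[2,1],[1,2]]) = (1/3)[[2,-1],[-1,2]].
A_inv = [[0.75, -0.5, 0.25], [-0.5, 1.0, -0.5], [0.25, -0.5, 0.75]]
tilde_inv = drop_last_inverse(A_inv)
```

Only the outer product $cc^T$ needs computing, so the update is $O(n^2)$ rather than the $O(n^3)$ cost of inverting from scratch.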
null
CC BY-SA 3.0
null
2011-04-14T14:34:05.890
2011-04-14T14:34:05.890
null
null
4062
null
9564
2
null
9507
5
null
Whenever I implement a new algorithm, I get myself an easy, interpretable dataset on which I can try it out. This has several advantages, for example runtime (use a small dataset) or visualization (make the things you want to plot have dimension 2 or 3). Of course the behavior you see could result from the dataset; it feels unlikely to me, however. A standard way to initialize variables is to just randomize them uniformly in an interval like [-1, 1], [-0.1, 0.1], or something like that. I have seen standard normal initialization as well. I am not sure if it works for SMMs, but for Gaussian Mixture Models and PCA mixtures it makes sense to run a few iterations of K-Means before you go into EM (and use the centers as responsibilities). Maybe you want to try that.
null
CC BY-SA 3.0
null
2011-04-14T14:35:38.870
2011-04-14T14:35:38.870
null
null
2860
null
9566
1
null
null
11
2105
I have the total number of calls received each week and have plotted them on a chart going back nearly 3 years. By eye, it seems that there was a massive drop over Christmas that doesn't appear to have recovered; it looks as if there has been a step change in requests. Is there a test I can do that can quantify this difference? Cheers, Ben
Determining if change in a time series is statistically significant
CC BY-SA 3.0
null
2011-04-14T14:48:57.507
2011-04-16T01:50:04.197
2011-04-15T03:54:14.787
183
4171
[ "time-series", "statistical-significance", "change-point" ]
9567
2
null
9561
11
null
You're looking for a quick and easy check. Under the null hypothesis that the rates (lambda values) are equal, say to $\lambda$, then you could view the two measurements as observing a single process for time $t = t_1+t_2$ and counting the events during the interval $[0, t_1]$ ($n_1$ in number) and the events during the interval $[t_1, t_1+t_2]$ ($n_2$ in number). You would estimate the rate as $$\hat{\lambda} = \frac{n_1+n_2}{t_1+t_2}$$ and from that you can estimate the distribution of the $n_i$: they are Poisson of intensity near $t_i\hat{\lambda}$. If one or both $n_i$ are situated on tails of this distribution, most likely the claim is valid; if not, the claim may be relying on chance variation.
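This quick check takes only a few lines of code. A stdlib Python sketch (the counts and time windows below are invented for illustration):

```python
from math import exp, factorial

def poisson_upper_tail(n, mean):
    # P(N >= n) for N ~ Poisson(mean)
    return 1.0 - sum(exp(-mean) * mean ** k / factorial(k) for k in range(n))

def pooled_tail_probs(n1, t1, n2, t2):
    # Pool both windows under H0 (equal rates), then ask how surprising
    # each observed count is relative to its expected Poisson mean.
    lam_hat = (n1 + n2) / (t1 + t2)
    return poisson_upper_tail(n1, lam_hat * t1), poisson_upper_tail(n2, lam_hat * t2)

# Hypothetical data: 12 events in one window, 3 in another of equal length
p1, p2 = pooled_tail_probs(n1=12, t1=2.0, n2=3, t2=2.0)
```

Small tail probabilities for the larger count (here `p1`) suggest the claim of different rates is plausible; values well inside the distribution point to chance variation.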
null
CC BY-SA 3.0
null
2011-04-14T14:56:48.807
2011-04-14T14:56:48.807
null
null
919
null
9569
2
null
9561
29
null
To test the Poisson mean, the conditional method was proposed by Przyborowski and Wilenski (1940). The conditional distribution of X1 given X1+X2 follows a binomial distribution whose success probability is a function of the ratio of the two lambdas. Therefore, hypothesis testing and interval estimation procedures can be readily developed from the exact methods for making inferences about the binomial success probability. Usually, two methods are considered for this purpose: - C-test - E-test You can find the details of these two tests in this paper: [A more powerful test for comparing two Poisson means](http://www.ucs.louisiana.edu/~kxk4695/JSPI-04.pdf)
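The conditional construction can be sketched directly: given the total $N = n_1 + n_2$, $n_1$ is binomial with success probability $t_1/(t_1+t_2)$ under the null of equal rates. Below is a minimal two-sided version in stdlib Python (a simplified illustration; see the linked paper for the exact C-test and E-test definitions):

```python
from math import comb

def conditional_poisson_test(n1, t1, n2, t2):
    # Under H0 (equal rates), n1 | (n1 + n2 = N) ~ Binomial(N, t1 / (t1 + t2)).
    N = n1 + n2
    p0 = t1 / (t1 + t2)
    pmf = [comb(N, k) * p0 ** k * (1 - p0) ** (N - k) for k in range(N + 1)]
    # Two-sided p-value: total probability of all outcomes no more likely
    # than the one observed (small tolerance guards against rounding ties).
    return sum(p for p in pmf if p <= pmf[n1] + 1e-12)
```

With equal observation times and equal counts the observed outcome is the most likely one, so the p-value is 1; heavily unbalanced counts drive it toward 0.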
null
CC BY-SA 3.0
null
2011-04-14T14:59:25.827
2011-04-14T14:59:25.827
null
null
3084
null
9570
2
null
9566
11
null
A very similar example is used in the tutorial of PyMC. If you assume that the daily amount of requests was constant until some point in time (maybe exactly Christmas) and after that it was constant again, all you need to do is substitute the numbers in the example: [http://pymc.googlecode.com/svn/doc/tutorial.html](http://pymc.googlecode.com/svn/doc/tutorial.html) As this is the Bayesian approach you won't (easily) get p values. However, the size of the step down and its [credible interval](http://en.wikipedia.org/wiki/Credibility_interval) (this is a Bayesian interval, similar to a confidence interval) may be equally useful.
null
CC BY-SA 3.0
null
2011-04-14T19:29:37.493
2011-04-16T01:50:04.197
2011-04-16T01:50:04.197
3911
3911
null
9571
2
null
9561
5
null
I would be more interested in a confidence interval than a p-value; here is a bootstrap approximation. Calculating the lengths of the intervals first, and a check:

```
Lrec = as.numeric(as.Date("2010-07-01") - as.Date("2007-12-02"))  # Length of recession
Lnrec = as.numeric(as.Date("2007-12-01") - as.Date("2001-12-01")) # Length of non-recession period
(43/Lrec)/(50/Lnrec)
[1] 2.000276
```

This check gives a slightly different result (100.03% increase) than the one of the publication (101% increase). Go on with the bootstrap (do it twice):

```
N = 100000
k = (rpois(N, 43)/Lrec)/(rpois(N, 50)/Lnrec)
c(quantile(k, c(0.025, .25, .5, .75, .975)), mean=mean(k), sd=sd(k))
     2.5%       25%       50%       75%     97.5%      mean        sd
1.3130094 1.7338545 1.9994599 2.2871373 3.0187243 2.0415132 0.4355660
     2.5%       25%       50%       75%     97.5%      mean        sd
1.3130094 1.7351970 2.0013578 2.3259023 3.0173868 2.0440240 0.4349706
```

The 95% confidence interval of the increase is 31% to 202%.
null
CC BY-SA 3.0
null
2011-04-14T20:09:55.573
2011-04-16T01:47:49.140
2011-04-16T01:47:49.140
3911
3911
null
9572
2
null
8007
1
null
If you are interested in the amount of time it takes to complete an order, it seems that a duration analysis (aka survival or event history analysis) would be most appropriate. See the Wikipedia entry for an overview: [http://en.wikipedia.org/wiki/Survival_analysis](http://en.wikipedia.org/wiki/Survival_analysis) This introduction, which covers issues such as censoring, looks relevant and accessible: [Survival Analysis Introduction](http://www.amstat.org/chapters/northeasternillinois/pastevents/presentations/summer05_Ibrahim_J.pdf) And if you are so inclined, R has a task view dedicated to survival analysis: [R Survival Analysis Task View](http://cran.r-project.org/web/views/Survival.html) Since you know pretty well what steps go into the production of each item, and because you seem interested in forecasting, you may begin by estimating a parametric model, such as a Weibull or log-logistic/log-normal. Most software capable of estimating these models will also provide the tools to forecast average time-to-completion for different orders. You should also be able to produce plots of estimated durations.
null
CC BY-SA 3.0
null
2011-04-14T20:58:55.363
2011-04-14T20:58:55.363
null
null
3265
null
9573
1
9575
null
114
195079
Long ago I learnt that a normal distribution was necessary in order to use a two-sample t-test. Today a colleague told me that she learnt that for N > 50 a normal distribution was not necessary. Is that true? If true, is that because of the central limit theorem?
T-test for non normal when N>50?
CC BY-SA 3.0
null
2011-04-14T21:55:43.747
2022-07-18T22:26:17.110
2021-07-29T11:58:51.467
301448
4176
[ "hypothesis-testing", "normal-distribution", "t-test", "inference", "central-limit-theorem" ]
9574
2
null
4111
4
null
Mobile phone providers can count the number of phones in the area. Given an estimate of the mean number of phones per person, a good approximation can be calculated. This looks simple, so I assume it is in practice too.
null
CC BY-SA 3.0
null
2011-04-14T23:19:26.933
2011-04-14T23:19:26.933
null
null
3911
null
9575
2
null
9573
121
null
Normality assumption of a t-test

Consider a large population from which you could take many different samples of a particular size. (In a particular study, you generally collect just one of these samples.) The t-test assumes that the means of the different samples are normally distributed; it does not assume that the population is normally distributed. By the central limit theorem, means of samples from a population with finite variance approach a normal distribution regardless of the distribution of the population. Rules of thumb say that the sample means are basically normally distributed as long as the sample size is at least 20 or 30. For a t-test to be valid on a sample of smaller size, the population distribution would have to be approximately normal. The t-test is invalid for small samples from non-normal distributions, but it is valid for large samples from non-normal distributions.

Small samples from non-normal distributions

As Michael notes below, the sample size needed for the distribution of means to approximate normality depends on the degree of non-normality of the population. For approximately normal populations, you won't need as large a sample as for a very non-normal population. Here are some simulations you can run in R to get a feel for this. First, here are a couple of population distributions.

```
curve(dnorm,xlim=c(-4,4)) # Normal
curve(dchisq(x,df=1),xlim=c(0,30)) # Chi-square with 1 degree of freedom
```

Next are some simulations of samples from the population distributions. In each of these lines, "10" is the sample size, "100" is the number of samples and the function after that specifies the population distribution. They produce histograms of the sample means.

```
hist(colMeans(sapply(rep(10,100),rnorm)),xlab='Sample mean',main='')
hist(colMeans(sapply(rep(10,100),rchisq,df=1)),xlab='Sample mean',main='')
```

For a t-test to be valid, these histograms should be normal.

```
require(car)
qqp(colMeans(sapply(rep(10,100),rnorm)),xlab='Sample mean',main='')
qqp(colMeans(sapply(rep(10,100),rchisq,df=1)),xlab='Sample mean',main='')
```

Utility of a t-test

I have to note that all of the knowledge I just imparted is somewhat obsolete; now that we have computers, we can do better than t-tests. As Frank notes, you probably want to use [Wilcoxon tests](http://en.wikipedia.org/wiki/Wilcoxon_signed-rank_test) anywhere you were taught to run a t-test.
null
CC BY-SA 3.0
null
2011-04-15T01:07:12.330
2015-11-25T04:03:52.120
2015-11-25T04:03:52.120
7290
3874
null
9576
2
null
9501
16
null
Short answer: No, it is not possible, at least in terms of elementary functions. However, very good (and reasonably fast!) numerical algorithms exist to calculate such a quantity, and they should be preferred over any numerical integration technique in this case.

Quantity of interest in terms of the normal CDF

The quantity you are interested in is actually closely related to the conditional mean of a lognormal random variable. That is, if $X$ is distributed as a lognormal with parameters $\mu$ and $\sigma$, then, using your notation, $$ \newcommand{\e}{\mathbb{E}}\renewcommand{\Pr}{\mathbb{P}}\newcommand{\rd}{\mathrm{d}} \int_a^b f(x) \rd x = \int_a^b \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{1}{2\sigma^2}(\log(x) - \mu)^2} \rd x = \Pr(a \leq X \leq b) \e(X \mid a \leq X \leq b) \>. $$ To get an expression for this integral, make the substitution $z = (\log(x) - (\mu + \sigma^2))/\sigma$. This may at first appear a bit unmotivated. But, note that using this substitution, $x = e^{\mu + \sigma^2} e^{\sigma z}$ and, by a simple change of variables, we get $$ \int_a^b f(x) \rd x = e^{\mu + \frac{1}{2}\sigma^2} \int_{\alpha}^{\beta} \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2} z^2} \rd z \> , $$ where $\alpha = (\log(a) - (\mu + \sigma^2))/\sigma$ and $\beta = (\log(b) - (\mu + \sigma^2))/\sigma$. Hence, $$ \int_a^b f(x) \rd x = e^{\mu + \frac{1}{2}\sigma^2} \big( \Phi(\beta) - \Phi(\alpha) \big) \>, $$ where $\Phi(x) = \int_{-\infty}^x \frac{1}{\sqrt{2\pi}} e^{-z^2/2} \rd z$ is the standard normal cumulative distribution function.

Numerical approximation

It is often stated that no known closed-form expression for $\Phi(x)$ exists. However, a [theorem of Liouville](http://en.wikipedia.org/wiki/Liouville%27s_theorem_%28differential_algebra%29) from the early 1800's asserts something stronger: There is no closed form expression for this function.
(For the proof in this particular case, see [Brian Conrad's writeup](http://www.claymath.org/programs/outreach/academy/LectureNotes05/Conrad.pdf).) Thus, we are left to use a numerical algorithm to approximate the desired quantity. This can be done to within IEEE double-precision floating point via an algorithm of W. J. Cody's. It is the standard algorithm for this problem, and utilizing rational expressions of a fairly low order, it's pretty efficient, too. Here is a reference that discusses the approximation: > W. J. Cody, Rational Chebyshev Approximations for the Error Function, Math. Comp., 1969, pp. 631--637. It is also the implementation used in both MATLAB and $R$, among others, in case those make it easier to obtain example code. [Here](https://stats.stackexchange.com/questions/7200/evaluate-definite-interval-of-normal-distribution) is a related question, in case you're interested.
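For what it's worth, the closed form in terms of $\Phi$ is easy to check numerically. Below is a small sketch (in Python, purely as an illustration — the question isn't tied to a language) comparing $e^{\mu + \sigma^2/2}\big(\Phi(\beta) - \Phi(\alpha)\big)$ against straightforward Simpson's-rule quadrature; the values of $\mu$, $\sigma$, $a$ and $b$ are made up:

```python
import math

def phi(x):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def closed_form(a, b, mu, sigma):
    """e^{mu + sigma^2/2} * (Phi(beta) - Phi(alpha)), as derived above."""
    alpha = (math.log(a) - (mu + sigma ** 2)) / sigma
    beta = (math.log(b) - (mu + sigma ** 2)) / sigma
    return math.exp(mu + 0.5 * sigma ** 2) * (phi(beta) - phi(alpha))

def integrand(x, mu, sigma):
    """The integrand from the question (note: no 1/x factor)."""
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

mu, sigma, a, b = 0.5, 1.2, 1.0, 5.0
exact = closed_form(a, b, mu, sigma)
numeric = simpson(lambda x: integrand(x, mu, sigma), a, b)
print(exact, numeric)  # the two values agree to many decimal places
```

The agreement is the point: once the integral is reduced to $\Phi$, a good $\Phi$ approximation (like Cody's) does all the work, with no generic quadrature needed.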
null
CC BY-SA 3.0
null
2011-04-15T01:36:14.320
2011-04-15T02:50:14.760
2017-04-13T12:44:20.730
-1
2970
null
9577
1
null
null
10
1383
I have fit two generalized estimating equation (GEE) models to my data:

1) Model 1: Outcome is a longitudinal Yes/No variable (A) (years 1,2,3,4,5) with a longitudinal continuous predictor (B) for years 1,2,3,4,5.
2) Model 2: Outcome is the same longitudinal Yes/No variable (A), but now with my predictor (B) fixed at its year 1 value, i.e. forced to be time invariant.

Due to missing measurements in my longitudinal predictor at a few time points for different cases, the number of data points in model 2 is higher than in model 1. I would like to know what comparisons I can validly make between the odds ratios, p-values and fit of the two models, e.g.:

- If the OR for predictor B is bigger in model 1, can I validly say that the association between A and B is stronger in model 1?
- How can I assess which is the better model for my data? Am I correct in thinking that QIC/AIC and pseudo-R-squareds should not be compared across models if the number of observations is not the same?

Any help would be greatly appreciated.
How can I assess GEE/logistic model fit when covariates have some missing data?
CC BY-SA 3.0
null
2011-04-15T02:39:43.933
2016-10-11T17:26:46.970
2011-04-15T08:17:00.480
null
4054
[ "logistic", "generalized-estimating-equations" ]
9578
2
null
9573
6
null
In my experience with just the one-sample t-test, I have found that the skew of the distributions is more important than the kurtosis, say. For non-skewed but fat-tailed distributions (a t with 5 degrees of freedom, a Tukey h-distribution with $h=0.24999$, etc), I have found that 40 samples have always been sufficient to get an empirical type I rate near the nominal. When the distribution is very skewed, however, you may need many, many more samples.

For example, suppose you were playing the lottery. With probability $p = 10^{-4}$ you will win 100 thousand dollars, and with probability $1-p$ you will lose one dollar. If you perform a t-test for the null that the mean return is zero based on a sample of one thousand draws of this process, I don't think you are going to achieve the nominal type I rate.

edit: duh, per @whuber's catch in the comment, the example I gave did not have mean zero, so testing for mean zero has nothing to do with the type I rate. Because the lottery example often has a sample standard deviation of zero, the t-test chokes. So instead, I give a code example using Goerg's [Lambert W x Gaussian](http://cran.r-project.org/web/packages/LambertW/) distribution. The distribution I use here has a skew of around 1355.

```
#hey look! I'm learning R!
library(LambertW)

Gauss_input = create_LambertW_input("normal", beta=c(0,1))
params = list(delta = c(0), gamma = c(2), alpha = 1)
LW.Gauss = create_LambertW_output(input = Gauss_input, theta = params)

#get the moments of this distribution
moms <- mLambertW(beta=c(0,1), distname=c("normal"), delta = 0, gamma = 2, alpha = 1)

test_ttest <- function(sampsize) {
  samp <- LW.Gauss$rY(params)(n=sampsize)
  tval <- t.test(samp, mu = moms$mean)
  return(tval$p.value)
}

#to replicate randomness
set.seed(1)

pvals <- replicate(1024, test_ttest(50))
#how many rejects at the 0.05 level?
print(sum(pvals < 0.05) / length(pvals))

pvals <- replicate(1024, test_ttest(250))
#how many rejects at the 0.05 level?
print(sum(pvals < 0.05) / length(pvals))

pvals <- replicate(1024, test_ttest(1000))
#how many rejects at the 0.05 level?
print(sum(pvals < 0.05) / length(pvals))

pvals <- replicate(1024, test_ttest(2000))
#how many rejects at the 0.05 level?
print(sum(pvals < 0.05) / length(pvals))
```

This code gives the empirical reject rate at the nominal 0.05 level for different sample sizes. For a sample of size 50, the empirical rate is 0.40 (!); for sample size 250, 0.29; for sample size 1000, 0.21; for sample size 2000, 0.18. Clearly the one-sample t-test suffers from skew.
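A footnote on the lottery example: the reason the t-test "chokes" there is easy to quantify. The probability that all one thousand draws are losses — in which case the sample is constant, the sample standard deviation is exactly zero, and the t statistic is undefined — is a one-line calculation (sketched here in Python just for illustration):

```python
# probability that every one of 1000 draws is a loss of one dollar,
# so the sample standard deviation is zero and the t statistic blows up
p_win = 1e-4
p_all_lose = (1 - p_win) ** 1000
print(p_all_lose)  # about 0.905
```

So roughly nine times out of ten the t statistic cannot even be computed, before skew enters the picture at all.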
null
CC BY-SA 3.0
null
2011-04-15T03:20:49.997
2011-04-15T18:30:54.513
2011-04-15T18:30:54.513
795
795
null
9579
2
null
8271
2
null
I am guessing that you have your data in wide format. This seems to be causing you confusion because in wide format it is less obvious that "time" is your second independent variable. One option in SPSS for doing what it sounds like you want to do:

- Analyze -- GLM -- Repeated Measures
- Enter time as a repeated measures factor with two levels
- Enter Group as a between subjects factor
- Enter the dependent variable (presumably represented in SPSS as two variables, one for each time point)
- Enter your covariate in the covariate box

For more information, see [Andy Field's SPSS Repeated Measures tutorial](http://www.statisticshell.com/repeatedmeasures.pdf). However, given that repeated measures ANOVA already controls for individual differences in some respect, you may want to think about the meaning of including a time-invariant covariate in such a model. See here for a [discussion of the issues related to covariates in repeated measures ANOVA](http://www.psyc.bbk.ac.uk/research/DNL/stats/Repeated_Measures_ANCOVA.html).
null
CC BY-SA 3.0
null
2011-04-15T03:40:14.243
2011-04-15T03:50:06.243
2011-04-15T03:50:06.243
183
183
null
9580
2
null
9573
21
null
See my previous answer to a question on the [robustness of the t-test](https://stats.stackexchange.com/questions/1386/robust-t-test-for-mean/1391#1391). In particular, I recommend playing around with the [onlinestatsbook applet](http://onlinestatbook.com/stat_sim/robustness/index.html). The image below is based on the following scenario:

- null hypothesis is true
- fairly severe skewness
- same distribution in both groups
- same variance in both groups
- sample size per group 5 (i.e., much less than 50 as per your question)
- I pressed the 10,000 simulations button about 100 times to get up to over one million simulations.

The simulation obtained suggests that instead of getting a 5% Type I error rate, I was only getting 4.5% Type I errors. Whether you consider this robust depends on your perspective.

![enter image description here](https://i.stack.imgur.com/gOXMq.png)
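If you'd rather script the applet's scenario than click the simulate button a hundred times, here is a rough equivalent — a sketch only, in Python with nothing but the standard library. The exponential distribution stands in for "fairly severe skewness", and 2.306 is the two-sided 5% critical value of the t distribution with 5 + 5 - 2 = 8 degrees of freedom:

```python
import random

random.seed(1)

def two_sample_t(x, y):
    """Pooled-variance two-sample t statistic."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    ssx = sum((v - mx) ** 2 for v in x)
    ssy = sum((v - my) ** 2 for v in y)
    sp2 = (ssx + ssy) / (nx + ny - 2)  # pooled variance
    return (mx - my) / (sp2 * (1 / nx + 1 / ny)) ** 0.5

CRIT = 2.306  # two-sided 5% critical value, t distribution with 8 df
n_sims = 20000
rejects = 0
for _ in range(n_sims):
    # null is true: both groups come from the same skewed (exponential) distribution
    x = [random.expovariate(1.0) for _ in range(5)]
    y = [random.expovariate(1.0) for _ in range(5)]
    if abs(two_sample_t(x, y)) > CRIT:
        rejects += 1
rate = rejects / n_sims
print(rate)
```

On runs like this the empirical rejection rate lands near the nominal 5%, in the same spirit as the applet result above.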
null
CC BY-SA 3.0
null
2011-04-15T07:23:36.947
2015-10-30T09:19:25.873
2017-04-13T12:44:37.583
-1
183
null
9581
1
null
null
1
3627
I have a huge dataset which contains 20 columns and many rows. I have done clustering in SAS, Knime and SPSS, but I am new to R. I have to do clustering on my dataset. I have imported my data into R.

- What are some suggestions for getting started with cluster analysis in R?
Getting started with cluster analysis in R
CC BY-SA 3.0
null
2011-04-15T11:07:26.733
2011-04-15T14:11:57.157
2011-04-15T11:45:53.927
183
null
[ "r", "clustering" ]
9583
2
null
9581
4
null
A lot of people coming from SAS or SPSS to R find the [Quick-R website](http://www.statmethods.net/) useful. There is a page on [cluster analysis](http://www.statmethods.net/advstats/cluster.html) which you may find useful in getting you started.
null
CC BY-SA 3.0
null
2011-04-15T11:41:37.817
2011-04-15T14:11:57.157
2011-04-15T14:11:57.157
183
183
null
9584
2
null
9581
16
null
```
## dummy data
require(MASS)
set.seed(1)
dat <- data.frame(mvrnorm(100, mu = c(2,6,3),
                          Sigma = matrix(c(10, 2, 4, 2, 3, 0.5, 4, 0.5, 2), ncol = 3)))
```

So my data are in object `dat`; you have read your data in and called it something. Use that object instead of `dat` in the code below: [@sridher - the codes below are the three lines I mentioned!]

```
set.seed(2) ## *k*-means uses a random start
klust <- kmeans(scale(dat, center = TRUE, scale = TRUE), centers = 3)
klust
```

The first line (`set.seed(2)`) fixes the random number generator at a given starting point so the results are reproducible. We do this because `kmeans()`, if not given the starting cluster centres, will randomly choose `centers` samples from your data as the cluster centres. The second line calls `kmeans()` on the standardised data (all the variables in my data set are in different units, so scaling them to zero mean and unit variance would seem appropriate). We ask for 3 groups by specifying `centers = 3`. The third line prints the fitted k-means object to the screen, showing the output from the function.

This is just an example though. Why three groups? I don't even do any subsequent analysis of the clustering solution. Furthermore, you probably want to run the `kmeans()` code several times to make sure you get similar clusterings each time, but using different random starts --- set a different seed for each run. There is a lot more to clustering than just throwing your data at an algorithm!
You can automate that bit to some extent using the `cascadeKM()` function in package `vegan`:

```
require(vegan)
fit <- cascadeKM(scale(dat, center = TRUE, scale = TRUE), 1, 4, iter = 100)
plot(fit, sortg = TRUE)
```

which suggests 2 groups is the best solution for these data:

![cascadeKM output](https://i.stack.imgur.com/yqk8o.png)

but we know the data generation process had three groups, and as such k-means and the summary stats of the results have not been able to correctly identify the presence of three groups in this small sample of data. With some real data this time, using the famous Iris data set:

```
fit2 <- cascadeKM(iris[,1:4], 1, 4, iter = 100)
plot(fit2, sortg = TRUE)
```

this clearly favours 3 groups:

![iris cascadeKM output](https://i.stack.imgur.com/lLbdn.png)

which is good, seeing as there really are three species in the data set:

```
> with(iris, unique(Species))
[1] setosa     versicolor virginica
Levels: setosa versicolor virginica
```
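An aside for readers following along outside R: the `scale(dat, center = TRUE, scale = TRUE)` call above is just a per-column z-score (subtract the column mean, divide by the column sample standard deviation). A minimal sketch of that step in Python/numpy, with made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
# made-up data: 100 samples, 3 variables on quite different scales
X = rng.normal(loc=[2.0, 6.0, 3.0], scale=[3.0, 1.7, 1.4], size=(100, 3))

# centre each column and divide by its sample standard deviation;
# ddof=1 gives the n-1 denominator, matching R's scale()
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
```

k-means is sensitive to the scales of the variables, which is why this step matters whenever the columns are in different units.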
null
CC BY-SA 3.0
null
2011-04-15T12:10:43.937
2011-04-15T12:10:43.937
null
null
1390
null
9586
1
null
null
10
1924
Excuse what may be an obvious question about bootstrapping. I got sucked into the Bayesian world early and never really explored bootstrapping as much as I should have. I ran across an analysis in which the authors were interested in a survival analysis related to some time-to-failure data. They had about 100 points and used regression to fit a Weibull distribution to the data. As a result, they obtained estimates of the scale and shape parameters. A very traditional approach. However, they next used bootstrapping to sample from the original data set and, for each new sample, performed a regression and came up with a new Weibull distribution. The results of the bootstrapping were then used to construct confidence intervals on the survival distribution. My intuition is a bit conflicted. I'm familiar with bootstrapping confidence intervals on parameters, but I have not seen it used for constructing distribution confidence intervals. Can anyone point me toward a reference/source that might provide some insight? Thanks in advance.
Bootstrap confidence intervals on parameters or on distribution?
CC BY-SA 3.0
null
2011-04-15T14:47:27.237
2012-02-24T19:41:14.713
null
null
3591
[ "confidence-interval", "bootstrap" ]
9587
1
null
null
2
975
Adapted from my [previous, unanswered question](https://stats.stackexchange.com/questions/9508/glmm-questions-of-appropriateness-post-hoc-tests-effects-size-and-confidence):

I'm analyzing the results of a hormone manipulation experiment. I measured a number of variables at three times in three groups. The groups are different sizes and not all individuals were measured every time, so I'd like to use GLMM rather than a repeated-measures ANOVA. I created the model then tested the significance of the terms (time, treatment, and time x treatment) with ANOVA. I'm quite new to GLMM, but after doing the tests, further reading suggests that my approach may be inappropriate, particularly with small data sets (I have ~seven animals per group). One reason given against using this method is that it seems that there is disagreement about what the degrees of freedom should be. Is this method of testing the significance of factors in the model appropriate? I'm aware that the place for me to go is probably the Pinheiro and Bates book, but I don't have access to it at this time. Thanks in advance for any advice.

```
library(nlme)

datums <- data.frame(id=rep(1:20, each=3),
                     var1=runif(60, 4, 6), var2=runif(60, 25, 30),
                     var3=runif(60, 0, 1), var4=runif(60, 10, 15),
                     var.time=rep(1:3, times=20),
                     var.treatment=rep(c('a','b','c'), each=20))
datums$var.time <- as.factor(datums$var.time)
datums$id <- as.factor(datums$id)

#and now the GLMMs on each variable - I'll show just one here
var1.glmm <- lme(var1 ~ var.time + var.treatment + var.time*var.treatment,
                 data=datums, random = ~1|id)
summary(var1.glmm)
anova(var1.glmm)
```
GLMM - test of significance
CC BY-SA 3.0
null
2011-04-15T15:13:03.293
2011-07-17T10:19:58.217
2017-04-13T12:44:29.013
-1
124
[ "r", "mixed-model" ]
9588
1
null
null
5
303
I'm not really a statistician, but rather in need of statistical guidance, so I hope this is not too much of an off-topic question. I'm writing a master's thesis (computational linguistics/NLP), and I've got several result sets I'm comparing. Now, I didn't really formulate a null and an alternative hypothesis before I ran the experiments, which I understand means that ideally I shouldn't really go about using the T-test on my datasets. But some of the different results are so close that I've given in to the temptation and T-tested them. How unclean is this? Is it bad enough that I should leave it out of my thesis entirely, or is it less severe? In case it matters, the data are the error rates of different language models, by 10-fold cross-validation.
After the fact hypothesis testing
CC BY-SA 3.0
null
2011-04-15T15:32:24.263
2011-04-16T00:10:23.233
null
null
4185
[ "hypothesis-testing" ]
9589
2
null
9588
2
null
It's not necessarily a problem that you didn't formulate hypotheses before running the study, but you may be doing a post-hoc analysis, which would be relevant. Also, consider what your tests mean.

The population

I feel the need to point out that the population is sort of bizarre in this situation. If I understand your study correctly, you have 10 error rates for each model. If you were just interested in the performance for this particular partitioning scheme, you would not need to use statistical tests; these 10 error rates would be the population. The entire corpus would interest you more. There are (N choose N/10) ways you could partition the data into 90% training set and 10% test set, and you could run the models on a random sample of these partitioning schemes. It seems that this approach is called repeated random sub-sampling validation.

Differences between models

If I understand your dataset, the tests that I am about to describe may not really be valid because the 10 error rates are sort of dependent on each other; they are all taken from the same partitioning scheme. But here we go anyway! I assume that you are trying to see whether any of the models perform significantly differently from the others. This is a valid hypothesis, but you'll need to use something like ANOVA because you have more than two models. On the other hand, if you are just trying to tell the difference between two models because they are the two best models, you have to account for how you decided to compare these two after the fact. Look at post-hoc tests and p-value adjustments.
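To make that workflow concrete, here is a sketch (in Python with numpy/scipy, purely as an illustration; the three sets of ten error rates are simulated, not real, and the dependence caveat above still applies): run the omnibus ANOVA first, and only then do pairwise comparisons with an explicit multiple-comparison adjustment — plain Bonferroni here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# made-up 10-fold error rates for three hypothetical models
errors = {
    "model_a": rng.normal(0.20, 0.02, 10),
    "model_b": rng.normal(0.21, 0.02, 10),
    "model_c": rng.normal(0.30, 0.02, 10),
}

# omnibus test: do any of the models differ at all?
f_stat, p_anova = stats.f_oneway(*errors.values())

# post-hoc pairwise t-tests, Bonferroni-adjusted for the 3 comparisons
pairs = [("model_a", "model_b"), ("model_a", "model_c"), ("model_b", "model_c")]
adjusted = {}
for m1, m2 in pairs:
    _, p = stats.ttest_ind(errors[m1], errors[m2])
    adjusted[(m1, m2)] = min(1.0, p * len(pairs))

print(p_anova, adjusted)
```

The adjustment is what guards against the "comparing the two best models after the fact" problem: without it, the pairwise p-values overstate significance.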
null
CC BY-SA 3.0
null
2011-04-15T17:22:47.937
2011-04-15T17:22:47.937
null
null
3874
null