Dataset schema (column: type, length range or number of classes):

Id: string, 1-6 chars
PostTypeId: string, 7 classes
AcceptedAnswerId: string, 1-6 chars
ParentId: string, 1-6 chars
Score: string, 1-4 chars
ViewCount: string, 1-7 chars
Body: string, 0-38.7k chars
Title: string, 15-150 chars
ContentLicense: string, 3 classes
FavoriteCount: string, 3 classes
CreationDate: string, 23 chars
LastActivityDate: string, 23 chars
LastEditDate: string, 23 chars
LastEditorUserId: string, 1-6 chars
OwnerUserId: string, 1-6 chars
Tags: list
11914
2
null
2099
1
null
While commercial, Macrofocus provides a free 30-day evaluation of its [TreeMap](http://www.macrofocus.com/public/products/treemap/) software.
null
CC BY-SA 3.0
null
2011-06-15T00:01:29.807
2011-06-15T00:01:29.807
null
null
5021
null
11915
2
null
11899
2
null
[FactoMineR](http://factominer.free.fr/) is a nice package for Factor Analysis on mixed variables.
null
CC BY-SA 3.0
null
2011-06-15T01:22:29.520
2011-06-15T01:22:29.520
null
null
3903
null
11917
1
null
null
3
553
I have [Case Fatality Rates](http://en.wikipedia.org/wiki/Case_fatality_rate) (deaths per 100 cases) for 2 different states receiving different treatments over 17 years. What is the best statistical method to compare case fatality rates? The data look like this:

```
Year  St.1 Cases  St.1 Deaths  St.1 CFR  St.2 Cases  St.2 Deaths  St.2 CFR
1994  1836        383          20.86     583         121          20.75
1995  1246        257          20.63     1126        227          20.16
1996  1450        263          18.14     896         179          19.98
1997  2953        407          13.78     351         76           21.65
1998  1161        149          12.83     1061        195          18.55
1999  2924        434          14.84     1371        275          20.06
2000  1729        169           9.77     1170        253          21.62
2001  1888        275          14.57     1005        199          19.80
2002  919         178          19.37     604         133          22.02
2003  865         142          16.42     1124        237          21.09
2004  1543        131           8.49     1030        228          22.14
2005  2887        336          11.64     6061        1500         24.75
2006  1484        108           7.28     2320        528          22.76
2007  1592        75            4.71     3024        645          21.33
2008  1920        53            2.76     3012        537          17.83
2009  1477        40            2.71     3073        556          18.09
2010  1534        26            1.69     3540        494          13.95
```
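One candidate method, sketched here by the editor rather than taken from the thread: compare the two states' CFRs year by year with a two-proportion z-test. The sketch below is in Python for illustration (the thread itself uses R), using the 1994 and 2010 rows of the table:

```python
import math

def two_prop_z(d1, n1, d2, n2):
    """Two-proportion z statistic for deaths d out of cases n in two states."""
    p1, p2 = d1 / n1, d2 / n2
    p = (d1 + d2) / (n1 + n2)                      # pooled CFR under the null
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 1994 row: St.1 383/1836 (CFR 20.86), St.2 121/583 (CFR 20.75) -- nearly equal
z_1994 = two_prop_z(383, 1836, 121, 583)
# 2010 row: St.1 26/1534 (CFR 1.69), St.2 494/3540 (CFR 13.95) -- far apart
z_2010 = two_prop_z(26, 1534, 494, 3540)
```

One test per year ignores the time trend; a model that pools the years (for example a binomial GLM with state and year terms) would use the data more fully.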
What is the best statistical method to compare case fatality rates of 17 years for 2 different states?
CC BY-SA 3.0
null
2011-06-15T01:59:12.173
2011-06-23T23:54:50.307
2011-06-20T10:43:08.843
930
2956
[ "epidemiology", "odds-ratio", "methodology" ]
11918
2
null
11580
1
null
```
# I found the answer myself:
# a generalized additive model using gam
library(gam)
fitgam <- gam(y ~ lo(x), family = binomial)
plot(fitgam)
```
null
CC BY-SA 4.0
null
2011-06-15T03:01:59.773
2022-09-20T01:25:29.720
2022-09-20T01:25:29.720
-1
4887
null
11919
1
11921
null
5
6526
I have to draw a polygon joining the four vertices on the plot in R. For this I need, e.g., the minimum value of X that has the minimum Y value.

```
X <- c(-62, -40, 9, 13, 26, 27, 27)
Y <- c(  7, -14, 10,  9, -8, -16, 12)

plot(x = X, y = Y)
abline(h = 0, v = 0, lty = 2, col = "green", lwd = 2)
```

![enter image description here](https://i.stack.imgur.com/G0sv8.png)

I'm having a tough time finding the four vertices in R:

- minimum X and minimum Y (which is (-40, -14))
- minimum X and maximum Y (which is (-62, 7))
- maximum X and minimum Y (which is (27, -16))
- maximum X and maximum Y (which is (27, 12))
Convex Hull in R
CC BY-SA 3.0
null
2011-06-15T03:10:23.180
2018-12-14T18:29:01.810
2011-06-15T05:48:28.270
3903
3903
[ "r" ]
11921
2
null
11919
6
null
I think you want the convex hull of your data. Try this:

```
library(grDevices)  # load grDevices package

df <- data.frame(X = c(-62, -40, 9, 13, 26, 27, 27),
                 Y = c(  7, -14, 10,  9, -8, -16, 12))  # store X, Y together

con.hull.pos <- chull(df)  # find positions of convex hull
con.hull <- rbind(df[con.hull.pos, ], df[con.hull.pos[1], ])  # get coordinates for convex hull

plot(Y ~ X, data = df)  # plot data
lines(con.hull)         # add lines for convex hull
```

EDIT

If you want to add a line from the origin to each side of the convex hull such that each line is perpendicular to the convex hull, then try this:

```
getPerpPoints <- function(mat) {
    # mat: 2x2 matrix with first row corresponding to the first point
    #      on the line and second row corresponding to the second
    #      point on the line
    #
    # output: two points which define the line going from the side
    #         to the origin

    # store the inputs more conveniently
    x <- mat[, 1]
    y <- mat[, 2]

    # define a new matrix to hold the output
    out <- matrix(0, nrow = 2, ncol = 2)

    # handle special case of vertical line
    if (diff(x) == 0) {
        xnew <- x[1]
    } else {
        # find point on original line
        xnew <- (diff(y) / diff(x)) * x[1] - y[1]
        xnew <- xnew / (diff(y) / diff(x) + diff(x) / diff(y))
    }
    ynew <- -(diff(x) / diff(y)) * xnew

    # put new point in second row of matrix
    out[2, ] <- c(xnew, ynew)

    return(out)
}
```

After you've plotted the initial points, as well as the convex hull of the data, run the above code and the following:

```
for (i in 1:4) {
    lines(getPerpPoints(con.hull[i:(i + 1), ]))
}
```

Keep in mind that some of the lines going from the origin to each side will not terminate within the interior of the convex hull of the data. Here is what I got as output:

![enter image description here](https://i.stack.imgur.com/oFMDV.jpg)
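For readers outside R, the same hull can be recovered with a short language-agnostic routine. The sketch below is an editor-added Python illustration (not part of the original answer) of Andrew's monotone-chain convex hull algorithm, run on the question's points:

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 means a counter-clockwise turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    # Andrew's monotone-chain algorithm; returns hull vertices counter-clockwise
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

pts = [(-62, 7), (-40, -14), (9, 10), (13, 9), (26, -8), (27, -16), (27, 12)]
hull = convex_hull(pts)  # → [(-62, 7), (-40, -14), (27, -16), (27, 12)]
```

Note the hull has exactly the four vertices the question asked for; the interior points (9, 10), (13, 9) and (26, -8) are discarded.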
null
CC BY-SA 3.0
null
2011-06-15T03:40:29.380
2011-06-15T18:17:07.670
2011-06-15T18:17:07.670
4812
4812
null
11922
2
null
11919
4
null
I'm not 100% sure I'm following what you are trying to do with `abline`, but maybe this will move you in the right direction. You can use the functions `which.min()` and `which.max()` to return the index of the minimum or maximum value of a vector. You can combine that with the `[` operator to index a second vector with that condition. For example:

```
X[which.min(Y)]
X[which.max(Y)]
```

EDIT to address additional details in the question

Instead of indexing the X vector with the min/max value of the Y vector, you can index the Y vector itself... and the X vector with the X vector:

```
c(X[which.min(X)], Y[which.min(Y)])
c(X[which.min(X)], Y[which.max(Y)])
c(X[which.max(X)], Y[which.min(Y)])
c(X[which.max(X)], Y[which.max(Y)])
```

EDIT # 2:

You want to find the convex hull of your data. Here's how you go about doing that:

```
# Make a data.frame out of your vectors
dat <- data.frame(X = X, Y = Y)

# Compute the convex hull. This returns the row indices of the X,Y coordinates
c.hull <- chull(dat)

# You need five points to draw four line segments, so we add the first point again at the end
c.hull <- c(c.hull, c.hull[1])

# Extract the points of the convex hull. Note we are using the row indices again.
dat[c.hull, ]

# Make a pretty plot
with(dat, plot(X, Y))
lines(dat[c.hull, ], col = "pink", lwd = 3)

### Note: if you wanted the bounding box
library(spatstat)
box <- bounding.box.xy(dat)
plot(box, add = TRUE, lwd = 3)

# Retrieve bounding box points
with(box, expand.grid(xrange, yrange))
```

And as promised, your pretty plot:

![enter image description here](https://i.stack.imgur.com/CCeaM.png)
null
CC BY-SA 3.0
null
2011-06-15T03:50:23.223
2011-06-15T05:20:56.503
2011-06-15T05:20:56.503
696
696
null
11923
1
null
null
5
23554
Should covariates be included in regression analyses if they are correlated with the dependent variable, or if they are correlated with the predictor variable(s)? Alternatively, should they be included because past (fairly robust) findings show that they are significantly related to the outcome and/or predictor variables?
Covariates in regression models
CC BY-SA 3.0
null
2011-06-15T09:34:41.810
2014-08-14T15:54:06.393
null
null
4716
[ "regression" ]
11924
1
11928
null
23
63046
How can I add a new variable to a data frame that is the percentile rank of one of the existing variables? I can do this easily in Excel, but I really want to do it in R. Thanks
Computing percentile rank in R
CC BY-SA 3.0
null
2011-06-15T09:45:06.003
2019-07-17T22:44:51.283
2019-07-17T22:44:51.283
3277
333
[ "r", "quantiles" ]
11925
2
null
11872
5
null
There is a rank-one QR update function in MATLAB, [here](http://www.mathworks.com/help/techdoc/ref/qrupdate.html), that saves you a factor of $p$ in the complexity of updating the coefficients of a $p$-variate linear regression. Despite searching for days a couple of months ago, I've not been able to find an equivalent in R (beware: there are many qr.update functions on CRAN, but when you look under the hood they are not the real thing -- i.e. they all just call `lm.update`).

Update: try the source of the package 'leaps'. In the R source you will find a function 'leaps.forward', which calls a FORTRAN routine 'forwrd', located in the /src directory of the package, which seems to implement a rank-1 QR update.
null
CC BY-SA 3.0
null
2011-06-15T10:03:53.660
2011-06-16T10:57:42.790
2011-06-16T10:57:42.790
603
603
null
11926
2
null
11924
9
null
If your original data.frame is called `dfr` and the variable of interest is called `myvar`, you can use `dfr$myrank <- rank(dfr$myvar)` for ordinary ranks, or `dfr$myrank <- rank(dfr$myvar)/length(dfr$myvar)` for percentile ranks. Oh well. If you really want it the Excel way (may not be the simplest solution, but I had some fun using new (to me) functions and avoiding loops):

```
percentilerank <- function(x) {
    rx <- rle(sort(x))
    smaller <- cumsum(c(0, rx$lengths))[seq(length(rx$lengths))]
    larger <- rev(cumsum(c(0, rev(rx$lengths))))[-1]
    rxpr <- smaller / (smaller + larger)
    rxpr[match(x, rx$values)]
}
```

so now you can use `dfr$myrank <- percentilerank(dfr$myvar)`. HTH.
null
CC BY-SA 3.0
null
2011-06-15T10:06:10.363
2011-06-15T11:50:44.800
2011-06-15T11:50:44.800
4257
4257
null
11927
1
11930
null
19
31660
Could someone tell me what the term 'persistence' means in time series analysis? It comes up in econometrics and applied regression.
Persistence in time series
CC BY-SA 3.0
null
2011-06-15T10:08:56.057
2019-01-23T15:11:08.963
2018-09-04T15:20:26.720
128677
5023
[ "regression", "time-series", "econometrics", "terminology" ]
11928
2
null
11924
36
null
Given a vector of raw data values, a simple function might look like

```
perc.rank <- function(x, xo) length(x[x <= xo]) / length(x) * 100
```

where `xo` is the value for which we want the percentile rank, given the vector `x`, as suggested on [R-bloggers](http://www.r-bloggers.com/r-tutorial-series-summary-and-descriptive-statistics/). However, it might easily be vectorized as

```
perc.rank <- function(x) trunc(rank(x)) / length(x)
```

which has the advantage of not having to pass each value separately. So, here is an example of use:

```
my.df <- data.frame(x = rnorm(200))
my.df <- within(my.df, xr <- perc.rank(x))
```
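The same idea ported outside R, as an editor-added illustrative sketch (not part of the original answer): for tie-free data, the percentile rank of each value is simply the share of observations less than or equal to it, which is what `perc.rank` above computes.

```python
def perc_rank(values):
    # percentile rank: share of observations <= each value (0 to 1 scale)
    n = len(values)
    return [sum(v <= x for v in values) / n for x in values]

print(perc_rank([10, 40, 20, 30]))  # → [0.25, 1.0, 0.5, 0.75]
```

With ties, this matches R's maximum-rank convention (`ties.method = "max"`) rather than the default average ranks.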
null
CC BY-SA 3.0
null
2011-06-15T10:30:37.863
2011-06-15T11:13:21.743
2011-06-15T11:13:21.743
930
930
null
11929
1
null
null
4
393
I am starting to use R's [dynlm](http://cran.r-project.org/web/packages/dynlm/index.html) package. Currently I just look at the fit and eyeball which choice of lags might be best. Is there a standard way or strategy to determine the best k parameter for `L()`? What I often see is ridiculously high lags, like k = 10 in a quarterly series, delivering the best fit. What could be the reason for that?
How to optimize the k parameters in dynamic linear regression?
CC BY-SA 3.0
null
2011-06-15T10:42:30.187
2012-03-07T20:48:41.547
2011-06-15T10:45:37.457
930
704
[ "r", "regression", "time-series" ]
11930
2
null
11927
17
null
Roughly speaking, the term persistence in a time series context is related to the memory properties of the series. Put another way, a time series process is persistent if the effect of an infinitesimally small shock influences the future predictions of the series for a very long time: the longer the influence lasts, the longer the memory and the stronger the persistence. You may consider an integrated I(1) process as an example of a highly persistent process (information that comes from the shocks never dies out), though fractionally integrated (ARFIMA) processes are more interesting examples of persistent processes. It would probably be useful to read G. Kapetanios's article "Measuring Conditional Persistence in Time Series".
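To make the memory notion concrete (an editor-added sketch, not part of the original answer): in an AR(1) process $y_t = \phi y_{t-1} + \varepsilon_t$, a unit shock at time 0 still contributes $\phi^h$ at horizon $h$. So $\phi$ close to 1 means long memory, and $\phi = 1$ (the I(1) random-walk case) means the shock never dies out:

```python
def impulse_response(phi, horizons):
    # contribution of a unit shock after h periods in an AR(1) with coefficient phi
    return [phi ** h for h in range(horizons)]

print(impulse_response(0.5, 4))  # → [1.0, 0.5, 0.25, 0.125]   (shock decays fast)
print(impulse_response(1.0, 4))  # → [1.0, 1.0, 1.0, 1.0]      (shock persists forever)
```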
null
CC BY-SA 4.0
null
2011-06-15T10:53:03.040
2019-01-23T15:11:08.963
2019-01-23T15:11:08.963
2645
2645
null
11931
2
null
11923
5
null
Correlation with the dependent variable is a definite plus (especially for linear regression, where there are close links between the coefficients and covariance with the dependent variable). Correlation with the other covariates/predictors is somewhat more subtle and depends on your goal. Generally, it is considered good practice to include as many variables as feasible at first (especially ones that you have some reason to believe could be relevant, from previous research or the like), and then to trim the model with some criterion (e.g. AIC or simple likelihood ratio tests) or some optimizing algorithm (LASSO etc.). I make an exception for perfectly correlated variables: there is no use in leaving them both in. You should always be careful about leaving covariates out, though, as leaving the wrong one(s) out can bias your coefficient estimates! Maybe you can ask your question somewhat more explicitly? If you specify the goals of your research, we may be able to give more specific advice.
null
CC BY-SA 3.0
null
2011-06-15T11:00:06.003
2011-06-15T11:00:06.003
null
null
4257
null
11933
2
null
11088
5
null
By looking at the Wikipedia article, I've written a function to generate random variables from the Laplace distribution. Here it is:

```
function x = laplacernd(mu,b,sz)
%LAPLACERND Generate Laplacian random variables
%
%   x = LAPLACERND(mu,b,sz) generates random variables from a Laplace
%   distribution having parameters mu and b. sz stands for the size of the
%   returned random variables. See [1] for Laplace distribution.
%
%   [1] http://en.wikipedia.org/wiki/Laplace_distribution
%
%   by Ismail Ari, 2011

if nargin < 1 % Equal to exponential distribution scaled by 1/2
    mu = 0;
end
if nargin < 2
    b = 1;
end
if nargin < 3
    sz = 1;
end

u = rand(sz) - 0.5;
x = mu - b*sign(u) .* log(1-2*abs(u));
```

And here is a code snippet to use it:

```
clc, clear
mu = 30;
b = 2;
sz = [50000 1];

x = laplacernd(mu,b,sz);
hist(x,100)
```

![Laplace distribution](https://i.stack.imgur.com/19Wma.jpg)
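The same inverse-CDF trick in Python, as an editor-added sketch (a port of the MATLAB function above, not part of the original answer). A uniform draw `u` on (0, 1) is mapped through the inverse Laplace CDF:

```python
import math
import random

def laplace_sample(mu=0.0, b=1.0, u=None):
    # invert the Laplace CDF: u ~ U(0,1) maps to mu - b*sign(u-1/2)*log(1-2|u-1/2|)
    if u is None:
        u = random.random()
    v = u - 0.5
    sign = (v > 0) - (v < 0)
    return mu - b * sign * math.log(1 - 2 * abs(v))
```

For example, `u = 0.5` returns the location `mu` exactly, and `u = 0.75` with `mu = 0, b = 1` returns `log(2)`, the Laplace upper quartile.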
null
CC BY-SA 3.0
null
2011-06-15T11:14:09.953
2011-06-15T12:35:29.040
2011-06-15T12:35:29.040
5025
5025
null
11934
2
null
2806
28
null
I've implemented the randomized SVD as given in "Halko, N., Martinsson, P. G., Shkolnisky, Y., & Tygert, M. (2010). An algorithm for the principal component analysis of large data sets. Arxiv preprint arXiv:1007.5510, 0526. Retrieved April 1, 2011, from [http://arxiv.org/abs/1007.5510](http://arxiv.org/abs/1007.5510)." If you want a truncated SVD, it really works much, much faster than the svd variants in MATLAB. You can get it here:

```
function [U,S,V] = fsvd(A, k, i, usePowerMethod)
% FSVD Fast Singular Value Decomposition
%
%   [U,S,V] = FSVD(A,k,i,usePowerMethod) computes the truncated singular
%   value decomposition of the input matrix A up to rank k using i levels
%   of the Krylov method as given in [1], p. 3.
%
%   If usePowerMethod is given as true, then only exponent i is used (i.e.
%   as the power method). See [2] p. 9, Randomized PCA algorithm, for
%   details.
%
%   [1] Halko, N., Martinsson, P. G., Shkolnisky, Y., & Tygert, M. (2010).
%       An algorithm for the principal component analysis of large data
%       sets. Arxiv preprint arXiv:1007.5510, 0526. Retrieved April 1,
%       2011, from http://arxiv.org/abs/1007.5510.
%
%   [2] Halko, N., Martinsson, P. G., & Tropp, J. A. (2009). Finding
%       structure with randomness: Probabilistic algorithms for
%       constructing approximate matrix decompositions. Arxiv preprint
%       arXiv:0909.4061. Retrieved April 1, 2011, from
%       http://arxiv.org/abs/0909.4061.
%
%   See also SVD.
%
%   Copyright 2011 Ismail Ari, http://ismailari.com.

    if nargin < 3
        i = 1;
    end

    % Take (conjugate) transpose if necessary. It makes H smaller, thus
    % leading the computations to be faster
    if size(A,1) < size(A,2)
        A = A';
        isTransposed = true;
    else
        isTransposed = false;
    end

    n = size(A,2);
    l = k + 2;

    % Form a real n×l matrix G whose entries are iid Gaussian r.v.s of
    % zero mean and unit variance
    G = randn(n,l);

    if nargin >= 4 && usePowerMethod
        % Use only the given exponent
        H = A*G;
        for j = 2:i+1
            H = A * (A'*H);
        end
    else
        % Compute the m×l matrices H^{(0)}, ..., H^{(i)}
        % Note that this is done implicitly in each iteration below.
        H = cell(1,i+1);
        H{1} = A*G;
        for j = 2:i+1
            H{j} = A * (A'*H{j-1});
        end

        % Form the m×((i+1)l) matrix H
        H = cell2mat(H);
    end

    % Using the pivoted QR-decomposition, form a real m×((i+1)l) matrix Q
    % whose columns are orthonormal, s.t. there exists a real
    % ((i+1)l)×((i+1)l) matrix R for which H = QR.
    % XXX: This is the variant without column pivoting.
    [Q,~] = qr(H,0);

    % Compute the n×((i+1)l) product matrix T = A^T Q
    T = A'*Q;

    % Form an SVD of T
    [Vt, St, W] = svd(T,'econ');

    % Compute the m×((i+1)l) product matrix
    Ut = Q*W;

    % Retrieve the leftmost m×k block U of Ut, the leftmost n×k block V of
    % Vt, and the leftmost uppermost k×k block S of St. The product
    % U S V^T then approximates A.

    if isTransposed
        V = Ut(:,1:k);
        U = Vt(:,1:k);
    else
        U = Ut(:,1:k);
        V = Vt(:,1:k);
    end
    S = St(1:k,1:k);
end
```

To test it, just put an image in the same folder (it is just used as a big matrix; you can create the matrix yourself):

```
% Example code for fast SVD.

clc, clear

%% TRY ME
k = 10;              % # dims
i = 2;               % # power
COMPUTE_SVD0 = true; % Comment out if you do not want to spend time with builtin SVD.

% A is the m×n matrix we want to decompose
A = im2double(rgb2gray(imread('test_image.jpg')))';

%% DO NOT MODIFY
if COMPUTE_SVD0
    tic
    % Compute SVD of A directly
    [U0, S0, V0] = svd(A,'econ');
    A0 = U0(:,1:k) * S0(1:k,1:k) * V0(:,1:k)';
    toc
    display(['SVD Error: ' num2str(compute_error(A,A0))])
    clear U0 S0 V0
end

% FSVD without power method
tic
[U1, S1, V1] = fsvd(A, k, i);
toc
A1 = U1 * S1 * V1';
display(['FSVD HYBRID Error: ' num2str(compute_error(A,A1))])
clear U1 S1 V1

% FSVD with power method
tic
[U2, S2, V2] = fsvd(A, k, i, true);
toc
A2 = U2 * S2 * V2';
display(['FSVD POWER Error: ' num2str(compute_error(A,A2))])
clear U2 S2 V2

subplot(2,2,1), imshow(A'),  title('A (orig)')
if COMPUTE_SVD0, subplot(2,2,2), imshow(A0'), title('A0 (svd)'), end
subplot(2,2,3), imshow(A1'), title('A1 (fsvd hybrid)')
subplot(2,2,4), imshow(A2'), title('A2 (fsvd power)')
```

![Fast SVD](https://i.stack.imgur.com/T1n9m.jpg)

When I run it on my desktop for an image of size 635×483, I get

```
Elapsed time is 0.110510 seconds.
SVD Error: 0.19132
Elapsed time is 0.017286 seconds.
FSVD HYBRID Error: 0.19142
Elapsed time is 0.006496 seconds.
FSVD POWER Error: 0.19206
```

As you can see, for low values of `k` it is more than 10 times faster than using the MATLAB SVD. By the way, you may need the following simple function for the test script:

```
function e = compute_error(A, B)
% COMPUTE_ERROR Compute relative error between two arrays
e = norm(A(:)-B(:)) / norm(A(:));
end
```

I didn't add the PCA method, since it is straightforward to implement using the SVD. You may [check this link](https://math.stackexchange.com/questions/3869/what-is-the-intuitive-relationship-between-svd-and-pca) to see their relationship.
null
CC BY-SA 3.0
null
2011-06-15T12:23:55.193
2012-01-16T16:18:36.543
2017-04-13T12:19:38.800
-1
5025
null
11935
1
null
null
1
4372
I already posted about exploratory factor analysis to understand the difference with PCA. Now I have carried out an exploratory factor analysis on my data set using R's `psych::fa` function. I have some perplexities about the interpretation of the results listed below. A is my data matrix, with 16 rows and 6 columns.

```
fa(a, nfactors=3, rotate="varimax")
In fa, too many factors requested for this number of variables to use SMC
for communality estimates, 1s are used instead
Factor Analysis using method = minres
Call: fac(r = r, nfactors = nfactors, n.obs = n.obs, rotate = rotate,
    scores = scores, residuals = residuals, SMC = SMC, missing = FALSE,
    impute = impute, min.err = min.err, max.iter = max.iter,
    symmetric = symmetric, warnings = warnings, fm = fm, alpha = alpha)
Standardized loadings based upon correlation matrix
     MR1   MR3   MR2   h2    u2
V1 -0.02  0.38  0.06 0.15 0.848
V2  0.14  0.50  0.14 0.29 0.711
V3  0.97  0.06  0.24 1.00 0.005
V4 -0.03 -0.05 -0.47 0.22 0.779
V5  0.67  0.74  0.03 1.00 0.005
V6  0.46  0.39  0.79 1.00 0.005

                MR1  MR3  MR2
SS loadings    1.63 1.10 0.92
Proportion Var 0.27 0.18 0.15
Cumulative Var 0.27 0.45 0.61

Test of the hypothesis that 3 factors are sufficient.
The degrees of freedom for the null model are 15 and the objective
function was 2.15 with Chi Square of 26.17
The degrees of freedom for the model are 0 and the objective function
was 0.07
The root mean square of the residuals is 0.03
The number of observations was 16 with Chi Square = 0.67 with prob < NA

Tucker Lewis Index of factoring reliability = -Inf
Fit based upon off diagonal values = 0.98

Measures of factor score adequacy
                                               MR1  MR3  MR2
Correlation of scores with factors            1.00 0.99 0.99
Multiple R square of scores with factors      0.99 0.99 0.99
Minimum correlation of possible factor scores 0.99 0.97 0.98
```

I cannot tell from the chi-square value whether I can reject the null hypothesis that three factors provide a good fit. I have seen some examples on the web for this function, but I could not find anything similar. Thanks.
Interpreting R output from exploratory factor analysis regarding rejection of null hypothesis of goodness of fit
CC BY-SA 3.0
null
2011-06-15T12:27:13.230
2011-06-26T15:25:27.820
2011-06-15T15:12:52.760
183
4903
[ "r", "factor-analysis", "small-sample" ]
11936
1
11939
null
3
251
I have data on a variable spanning multiple years. I want to see how its distribution has evolved over the years. Is there an easy way to produce a density plot for each year in a single plot, with a different colour for each year?
How to create a density plot for data from multiple years with each year represented by a different colour?
CC BY-SA 3.0
null
2011-06-15T12:41:59.350
2011-06-15T13:16:42.637
2011-06-15T13:16:42.637
183
333
[ "data-visualization", "histogram" ]
11937
2
null
11936
1
null
I'll assume you are using R (judging from your previous questions). Note that this also makes your question better suited for StackOverflow.com. As it happens, I remembered an answer there that could be of service to you: [Q6030684](https://stackoverflow.com/questions/6030684/histogram-without-vertical-lines).
null
CC BY-SA 3.0
null
2011-06-15T12:52:45.697
2011-06-15T12:52:45.697
2017-05-23T12:39:26.150
-1
4257
null
11938
2
null
8399
3
null
The problem may stem from the fitness ratio you use in your code. The inverse of the distances may give very large differences in the resampling probabilities of the new generation of samples. For example, let `x` stand for the `rms` values in your design and let them be

```
x = [0.1 1 1 3 10 50]
```

When you use the inverses, that leads to

```
y = (1./x) ./ norm(1./x)

y =
    0.9896    0.0990    0.0990    0.0330    0.0099    0.0020
```

As you see, the first sample dominates the next generation. It may converge to wrong proposals very easily. Alternatively, you may use a sigmoid-like function. For example,

```
z = exp(-x) ./ norm(exp(-x))

z =
    0.8659    0.3521    0.3521    0.0476    0.0000    0.0000
```

Now the particles in the next generation will probably include some samples similar to the second and third ones, so the generation scheme will be more robust to erroneous proposals.

Edit: Moreover, after dividing by the norm, the result does not sum to 1. You might use `sum` instead:

```
y = (1./x) ./ sum(1./x)

y =
    0.8030    0.0803    0.0803    0.0268    0.0080    0.0016

z = exp(-x) ./ sum(exp(-x))

z =
    0.5353    0.2176    0.2176    0.0295    0.0000    0.0000
```
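A quick numeric check of the two sum-normalized weighting schemes from the Edit above, as an editor-added sketch (Python rather than MATLAB, purely illustrative):

```python
import math

x = [0.1, 1, 1, 3, 10, 50]

def normalize(ws):
    # divide by the sum so the weights form a proper probability vector
    s = sum(ws)
    return [w / s for w in ws]

inv  = normalize([1 / v for v in x])           # inverse-distance weights
soft = normalize([math.exp(-v) for v in x])    # sigmoid-like (softmax of -x) weights
```

Both vectors sum to 1, but the inverse weights put about 80% of the mass on the first sample, while the exponential weights spread mass over the first three, matching the answer's point about robustness.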
null
CC BY-SA 3.0
null
2011-06-15T12:53:37.533
2011-06-15T13:00:42.117
2011-06-15T13:00:42.117
5025
5025
null
11939
2
null
11936
3
null
Without seeing the data, I would suggest trying [trellis displays](http://cm.bell-labs.com/stat/project/trellis/). If you are using R, this is very easy to do with the [lattice](http://cran.r-project.org/web/packages/lattice/index.html) (even [latticeExtra](http://cran.r-project.org/web/packages/latticeExtra/index.html)) or [ggplot2](http://cran.r-project.org/web/packages/ggplot2/index.html) packages.

```
> my.df <- data.frame(x=rnorm(300), year=gl(3, 100, 300, labels=2000:2002))
> head(my.df)
           x year
1 -0.3260365 2000
2  0.5524619 2000
3 -0.6749438 2000
4  0.2143595 2000
5  0.3107692 2000
6  1.1739663 2000
> library(lattice)
> densityplot(~ x, data=my.df, groups=year)
```

which gives

![enter image description here](https://i.stack.imgur.com/UJ4gF.png)

Compare to `densityplot(~ x | year, data=my.df, layout=c(3,1))` (for a facetted display).
null
CC BY-SA 3.0
null
2011-06-15T12:58:54.110
2011-06-15T12:58:54.110
null
null
930
null
11940
1
null
null
0
166
Assume there are a number of customers, each of whom has placed one or more orders. You now see a sample/subset of orders (let's say 1,000,000 out of 10,000,000) but do not know the total number of customers. How can I estimate the total number of customers in the full dataset, based on the sample? I can assume that the number of orders per customer follows a power law, but it would be better if I could make the estimate without this assumption and base it purely on the sample.
Estimating the total number of customers based on a subset of orders
CC BY-SA 3.0
null
2011-06-15T13:24:52.863
2012-06-10T03:32:12.813
null
null
5026
[ "estimation" ]
11943
1
11954
null
1
1934
I am using `auto.arima()` for forecasting. When I use built-in data such as "AirPassengers" it captures seasonality, but if I enter data in any other format (as a vector, or from an Excel sheet) it does not detect seasonality. Is there a specific format in which it detects seasonality, or am I doing something wrong? Does the data have to be entered in a specific format?
Problem using auto.arima() in R
CC BY-SA 3.0
null
2011-06-15T13:47:16.973
2011-06-22T19:04:40.963
2011-06-16T08:30:43.427
103
5028
[ "r", "forecasting" ]
11945
1
null
null
1
2094
I have a lin-log regression model like $$Y = b_0 + b_1 \log(x_1 + 1) + e.$$ The distribution of $x_1$ is very skewed, so I use the natural logarithm to get a more Gaussian-like distribution. Because 3 out of 100 values are zero, I add a constant $c$, in my case 1, to avoid $-\infty$. The resulting estimate of $b_1$ is about $-0.14$. Without the constant the interpretation is clear: a 1% change in $x$ results in a $0.01 \cdot b_1$ change in $y$. I struggle with the constant: how can I account for it in my interpretation? If I change the value of $c$ I get, of course, other estimates. I chose $c = 1$ because this results in positive log values (the values of $x_1$ are originally positive too). Or should I instead add a small value just to the three zeros? Many thanks in advance and, please, a non-mathematical answer ;-) Marco
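One way to see the effect of the constant, added here for clarity (not part of the original post) and assuming the model above: differentiate with respect to $x_1$,

$$\frac{\partial y}{\partial x_1} = \frac{b_1}{x_1 + 1}, \qquad \Delta y \approx b_1 \cdot \frac{x_1}{x_1 + 1} \cdot \frac{\Delta x_1}{x_1},$$

so a 1% change in $x_1$ changes $y$ by roughly $0.01 \, b_1 \, x_1/(x_1+1)$. For large $x_1$ this reduces to the usual lin-log reading of $0.01 \, b_1$, while near zero the shift by $c = 1$ attenuates the effect.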
Interpretation lin-log regression where the covariate is log(x1 + 1) transformed
CC BY-SA 3.0
null
2011-06-15T14:14:47.230
2015-02-12T22:22:14.470
2011-06-16T06:38:21.240
2116
5029
[ "regression", "data-transformation" ]
11946
2
null
2516
4
null
I think it's a problem of most significance tests having some general, undefined class of implicit alternatives to the null, which we never know. Often these classes contain some sort of "sure thing" hypothesis, which the data fit perfectly (i.e. a hypothesis of the form $H_{ST}:d_{1}=1.23,d_{2}=1.11,\dots$, where $d_{i}$ is the $i$th data point). The value of the log likelihood is an example of a significance test which has this property. But one is usually not interested in these sure thing hypotheses.

If you think about what you actually want to do with the hypothesis test, you will soon recognise that you should only reject the null hypothesis if you have something better to replace it with. Even if your null does not explain the data, there is no use in throwing it out unless you have a replacement. Now, would you always replace the null with the "sure thing" hypothesis? Probably not, because you can't use these "sure thing" hypotheses to generalise beyond your data set. It's not much more than printing out your data.

So, what you should do is specify the hypotheses that you would actually be interested in acting on if they were true. Then do the appropriate test for comparing those alternatives to each other - and not to some irrelevant class of hypotheses which you know to be false or unusable.

Take the simple case of testing a normal mean. Now the true difference may be small, but adopting a position similar to that in @keith's answer, we simply test the mean at various discrete values that are of interest to us. So, for example, we could have $H_{0}:\mu=0$ vs $H_{1}:\mu\in\{\pm 1,\pm 2,\pm 3,\pm 4,\pm 5,\pm 6\}$. The problem then transfers to deciding at what level we want to do these tests. This is related to the idea of effect size: at what level of graininess would the difference influence your decision making? This may call for steps of size $0.5$ or $100$ or something else, depending on the meaning of the test and of the parameters. For instance, if you were comparing the average wealth of two groups, would anyone care if there was a difference of two dollars, even if it was 10,000 standard errors away from zero? I know I wouldn't.

The conclusion is basically that you need to specify your hypothesis space - those hypotheses that you are actually interested in. It seems that with big data this becomes a very important thing to do, simply because your data has so much resolving power. It also seems important to compare like with like - point with point, compound with compound - to get well behaved results.
null
CC BY-SA 3.0
null
2011-06-15T14:16:06.043
2011-06-15T14:16:06.043
null
null
2392
null
11947
1
11948
null
23
68978
I have 2 variables, both of class "numeric":

```
> head(y)
[1] 0.4651804 0.6185849 0.3766175 0.5489810 0.3695258 0.4002567
> head(x)
[1]  59.32820  68.46436  80.76974 132.90824 216.75995 153.25551
```

I plotted them, and now I would like to fit an exponential model to the data (and add it to the plot), but I cannot find any info on fitting models to multivariate data in R - only to univariate data. Can somebody help? I don't even know where to start... Thanks!
Fitting an exponential model to data
CC BY-SA 3.0
null
2011-06-15T11:35:00.780
2011-06-15T15:09:39.317
null
null
5034
[ "r" ]
11948
2
null
11947
23
null
I am not completely sure what you're asking, because your lingo is off. But assuming that your variables aren't independent of one another (if they were, there'd be no relation to find), I'll give it a try. If `x` is your independent (or predictor) variable and `y` is your dependent (or response) variable, then this should work:

```
# generate data
beta <- 0.05
n <- 100
temp <- data.frame(y = exp(beta * seq(n)) + rnorm(n),
                   x = seq(n))

# plot data
plot(temp$x, temp$y)

# fit non-linear model
mod <- nls(y ~ exp(a + b * x), data = temp, start = list(a = 0, b = 0))

# add fitted curve
lines(temp$x, predict(mod, list(x = temp$x)))
```
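An alternative rough-and-ready approach, sketched here by the editor (not part of the original answer): when all y values are positive, ordinary least squares on log(y) gives starting values for - or a quick substitute for - `nls`. Note that it weights errors multiplicatively, unlike `nls` on the original scale.

```python
import math

def fit_loglinear(xs, ys):
    # OLS of log(y) on x: returns (a, b) such that y ≈ exp(a + b x)
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ly) / n
    b = sum((x - mx) * (l - my) for x, l in zip(xs, ly)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# noiseless check: data generated with a = 0.3, b = 0.05 should be recovered
xs = list(range(1, 101))
ys = [math.exp(0.3 + 0.05 * x) for x in xs]
a, b = fit_loglinear(xs, ys)
```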
null
CC BY-SA 3.0
null
2011-06-15T12:52:01.850
2011-06-15T12:52:01.850
null
null
1445
null
11949
1
null
null
2
79
In stage 1 we test many SNPs; in stage 2 we try to confirm those SNPs. Usually we start with a smaller sample, but then it's hard to detect a signal. For example, if we can afford 3000 samples, how should we allocate these 3000 people between the two stages?
How to allocate sample size to gain most power on a two stage analysis?
CC BY-SA 3.0
null
2011-06-15T15:13:07.057
2012-09-05T12:45:23.920
2012-09-05T12:45:23.920
930
5030
[ "sample-size", "genetics", "statistical-power" ]
11950
1
11951
null
8
5266
I have a simple linear regression model. What I want to calculate is how "important" each of my input variables is, i.e. to make a statement something like this: "60% of the predictive power in this model comes from variable var1, whereas var2 and var3 contribute 30% and 10% respectively." What do I need to do to calculate these percentages?
Explanatory power of a variable
CC BY-SA 3.0
null
2011-06-15T15:29:45.667
2019-03-10T06:27:38.167
2019-03-10T06:27:38.167
11887
333
[ "regression", "importance" ]
11951
2
null
11950
10
null
The [relaimpo](http://cran.r-project.org/web/packages/relaimpo/index.html) R package does exactly what you want to do, and it also provides bootstrap CIs when assessing relative contribution of individual predictor to the overall $R^2$. An example of use can be found at the end of this tutorial: [Getting Started with a Modern Approach to Regression](http://www.unt.edu/benchmarks/archives/2008/june08/rss.htm).
null
CC BY-SA 3.0
null
2011-06-15T15:53:37.903
2011-06-15T15:53:37.903
null
null
930
null
11953
1
11962
null
1
173
Does anybody know how to interpret a whole bunch of effects (main and interaction) in a clever way? Or does anybody have a good example where it's shown? To be more precise: Assume that you have a lot of effects in your model (main and interaction effects) and you know that the standard errors and coefficients are biased (as they are in most empirical studies). But you want to present results; what would you do? Because of the bias the p-values are wrong, so there is no valid way to say which coefficients are significant and which are not. So there has to be another way to present the information. The idea of working with plots seems to be a good one. But I'm still curious how you handle such problems!
Interpreting a lot of effects
CC BY-SA 3.0
null
2011-06-15T17:15:42.137
2011-06-16T07:48:04.480
2011-06-16T07:48:04.480
4496
4496
[ "regression", "interpretation" ]
11954
2
null
11943
2
null
As @Dmitrij Celov commented, make sure your data is a ts() object, with the proper frequency. For example, if you have a vector of quarterly data, x = c(4,3,2,1,4,3,2,1), create y=ts(x,frequency=4). Use frequency=12 for monthly data, etc.
null
CC BY-SA 3.0
null
2011-06-15T17:16:51.263
2011-06-15T17:16:51.263
null
null
2817
null
11955
1
null
null
7
5301
I have a bunch of documents (66 quarterly reports on grievances and complaints on health care services provided and growing) and a list of words that I'd like to follow over time. What is the easiest way to do this? I have played with R's text mining library and have got rather frustrated. I have also tried to use RapidMiner and it chokes on three documents (ran out of memory). I would greatly appreciate any suggestions, ideas etc...
Determining trends in text
CC BY-SA 3.0
null
2011-06-15T18:05:17.453
2015-01-16T10:24:21.677
2011-06-15T19:50:14.933
930
5037
[ "dataset", "text-mining" ]
11956
1
11958
null
6
1113
From the pdf of the Poisson distribution I would expect $\Pr(x=1)$ to be $$\lambda dt \cdot \exp(-\lambda dt)$$ I can see that as $dt$ gets very small, $\exp(-\lambda dt)$ becomes close to $1$, and so suggests $\lambda dt$, but I don't see why in the limit as $dt \to 0$ of $\lambda dt \cdot \exp(-\lambda dt)$, doesn't that make the $\lambda dt$ term zero also?
Why is the first postulate of the Poisson process that $\lambda dt$ is the probability of exactly one event in $[t,t+dt]$?
CC BY-SA 3.0
null
2011-06-15T20:23:02.830
2012-04-10T07:10:29.973
2011-06-15T22:01:15.863
919
5038
[ "poisson-distribution" ]
11957
1
13940
null
2
164
I have a dataset with 1500 patients with data on recurrence of a disease. Follow-up time varies between 1 and 15 years. Approx 10% have recurrence. What I'd like to do is create a predictive model for recurrence into n similar groups that share the same temporal recurrence risk. This so that I could advise on an optimal control scheme. I can create the predictive model quite all right, but I'm uncertain as to what technique I can use to optimise the interval for controls. I guess a brute force simulation approach could be all right, but I'm sure there are other ways to do this. Advice or pointers to literature would be very welcome. EDIT: By control I mean how often they should be seen and evaluated for recurrence in the outpatient clinic. @rolando: 1) From what is already known about recurrence of this disease, the patients can be broken down into high, medium and low recurrence risks. This also makes the most sense with respect to advising on a control scheme for the recurrence. Some patients recur within one year (2-3%), some within 3-5 years and some after 10 years. Clearly it would be cost-saving to have the latter patients screened for recurrence at 10 years instead of yearly, as would be indicated in the high risk group. However, exactly how I can utilise a prediction model and create the different groups so that they are homogeneous with respect to time of recurrence is one of my problems. 2) n is the number of different groups with the same recurrence risk - see 1. 3) By optimising the control routines (see 1): a patient with low recurrence risk should not be seen as often as a patient with high recurrence risk. How can the optimal time-interval for controls in each group be calculated? //M
Optimising control routines after creating a predictive model
CC BY-SA 3.0
null
2011-06-15T21:26:41.960
2011-08-08T02:24:53.093
2011-08-04T07:30:57.503
1291
1291
[ "predictive-models", "optimization" ]
11958
2
null
11956
6
null
In fact, [Leibniz' notation](http://en.wikipedia.org/wiki/Leibniz%27s_notation) for infinitesimal increments can be confusing. One has to be careful here to keep all terms of the same order: $e^{-\lambda dt}$ must be approximated to first order in $dt$ (not zeroth order, i.e. without any terms in $dt$), i.e. $e^{-\lambda dt}$ is approximately $1 - \lambda dt$ plus terms which are at least quadratic in $dt$ (and thus go to zero faster than $dt$ itself). Then one has: $$ \lambda dt \cdot e^{-\lambda dt} \approx \lambda dt \cdot (1 - \lambda dt) $$ and then for $dt \rightarrow 0$ one can ignore the non-leading terms ($dt^2$) and is left with $\lambda dt$.
null
CC BY-SA 3.0
null
2011-06-15T21:39:17.763
2011-06-15T21:51:27.543
2011-06-15T21:51:27.543
961
961
null
11959
1
null
null
4
1478
In R, the `princomp()` and `factanal()` functions are somewhat similar. At least their output looks pretty similar. I learned that this is not surprising since the print function of `princomp` comes from `factanal`. I understand that SS loadings do not make much sense for `princomp` as it is bounded to `1` anyway. Moreover, as Joris stated on nabble, the proportion of variance is only printed because of the common print function, but does not contain valuable information when princomp is used. What I do not understand is rather not an R question but more a multivariate stats question: what is the conceptual difference between these PCA and Factor Analysis functions as they are used in R? This question relates particularly to the scores (let's assume "regression" scores for FA), specifically the difference between scores in the two concepts. What should I rather use when I want to use the resulting scores in a regression model (for example in order to circumvent multicollinearity)? I also understand that PCA has a fixed number of components while FA has fewer factors than variables. richiemorrisroe's answer in the thread suggested by Rob Hyndman might go into that direction.
What is the difference between scores in Princomp vs. factanal?
CC BY-SA 3.0
null
2011-06-15T21:46:08.337
2011-06-16T12:00:00.687
2011-06-16T08:08:18.657
704
704
[ "r", "pca", "factor-analysis" ]
11960
1
null
null
3
2368
Consider the following survey question: --- Q: Choose one or more of the following 5 items: A B C D E --- - How can one test which items are more frequently chosen for a sample with 100 individuals? Is it advisable to fit a distribution to the data?
Analyzing multiple choice survey questions
CC BY-SA 3.0
null
2011-06-15T22:43:19.320
2011-07-16T03:52:15.417
null
null
6245
[ "hypothesis-testing" ]
11961
2
null
11923
2
null
This really depends on the scientific question being asked. If you are interested in whether there is a relationship between x1 and y, then do the regression between x1 and y. If you are interested in whether x1 helps predict y above and beyond the effects of x2, x3, etc., then you need to include the other x's in the model. For example: suppose that Y is length of stay in the hospital and x1 is the dose (amount) of medication for a given set of patients. If the severity of illness is measured at the time of entry into the hospital/study and the treatment starts just after the severity is measured, then you would probably want to include severity in the model. But if your severity of illness measure is the most severe the illness gets during their stay, and the medicine reduces the severity and therefore shortens the length of stay, then including severity of illness will hide the effectiveness of the treatment rather than give any useful information. Before doing a regression it can be helpful to draw out a path diagram: write out the names of all the variables that could go into the regression, then draw arrows where you know, or strongly believe, that there are relationships and how the causality most likely goes. Then draw an arrow or arrows in another color or style to show what relationships you think might exist but don't know and want to test. Considering this diagram can be useful in thinking about what models make the most sense to fit and compare to each other.
null
CC BY-SA 3.0
null
2011-06-16T00:03:39.260
2011-06-16T00:03:39.260
null
null
4505
null
11962
2
null
11953
1
null
I like the Predict.Plot and TkPredict functions in the TeachingDemos package for R, but my opinion may be slightly biased. Here is an example (the TkPredict function allows you to dynamically change values to see how they compare): ![enter image description here](https://i.stack.imgur.com/XOXZ0.png)
null
CC BY-SA 3.0
null
2011-06-16T00:14:12.240
2011-06-16T00:14:12.240
null
null
4505
null
11964
2
null
11960
2
null
It seems like you are not interested in how choosing one item relates to choosing another. If this is the case, you can just treat each of the five items as separate questions with binary responses, and you can estimate the proportion of people who would select a particular item with a binomial or normal model.
null
CC BY-SA 3.0
null
2011-06-16T02:08:55.753
2011-06-16T02:08:55.753
null
null
3874
null
11965
2
null
11856
0
null
If the true mean difference is outside of this interval, then there is only a 5% chance that the mean difference from our experiment would be so far away from the true mean difference.
null
CC BY-SA 3.0
null
2011-06-16T02:24:20.677
2011-06-16T19:56:03.420
2011-06-16T19:56:03.420
3874
3874
null
11966
2
null
11164
2
null
Your approach is fine and appropriate. The power of mixed effects models is that they can be used to analyze data like yours or where treatments were at the individual level, or both.
null
CC BY-SA 3.0
null
2011-06-16T04:24:22.030
2011-06-16T04:24:22.030
null
null
4505
null
11968
1
null
null
7
777
Context An experiment in agronomy whose aim is to investigate the possible effect of a treatment, with 13 possible levels, on the height of trees. Model $ Y_{ijk} = \mu_{\cdot \cdot \cdot} + \alpha_{i} + \beta_{j} + \gamma_{k(j)} + (\alpha \beta)_{ij} + \epsilon_{ijk} $ - $Y_{ijk}$ is the response for the tree lying in the $k$th row of the $j$th block when it has received the $i$th treatment, - $\mu_{\cdot \cdot \cdot}$ is an overall constant, - $\alpha_{i}$ are the fixed treatment effects, - $\beta_{j}$ are the random block effects, - $\gamma_{k(j)}$ are the random row (nested within block) effects, - $(\alpha \beta)_{ij}$ are the random treatment-block interaction effects, - $\epsilon_{ijk}$ are random error terms. Two important features - There is a lot of heterogeneity in response within each treatment. - The interaction $(\alpha \gamma)_{ik(j)}$ cannot be estimated because there is no replicate. Partial results The residual variance is much, much higher than the variances of the different random effects. As a consequence, a much simpler model without random effects is selected based on the AIC. EDIT relative to Nick Sabbe's comment: The simpler model I am talking about is $Y_{ijk} = \mu_{\cdot \cdot \cdot} + \alpha_{i} + \epsilon_{ijk} $ Question My interpretation is that the residual variance actually contains two parts: the residual variance itself, and the interaction that cannot be estimated. Now, intuitively, I think that that interaction cannot simply be ignored. Hence, I would not compare my model with a simpler model without random effects. Do you agree with that?
How do you handle the situation where the residual variance is very high compared to the other variance parameter estimates?
CC BY-SA 3.0
null
2011-06-16T06:01:51.053
2020-10-12T23:11:51.447
2020-10-12T23:11:51.447
11887
3019
[ "mixed-model", "model-selection" ]
11969
2
null
2
10
null
A related question can be found [here](https://web.archive.org/web/20120612111559/http://metaoptimize.com/qa/questions/4939/why-do-we-assume-that-the-error-is-normally-distributed) about the normal assumption of the error (or more generally of the data if we do not have prior knowledge about the data). Basically, - It is mathematically convenient to use the normal distribution. (It's related to Least Squares fitting and easy to solve with the pseudoinverse.) - Due to the Central Limit Theorem, we may assume that there are lots of underlying factors affecting the process, and the sum of these individual effects will tend to behave like a normal distribution. In practice, it seems to be so. An important note from there is that, as Terence Tao states [here](http://terrytao.wordpress.com/2010/09/14/a-second-draft-of-a-non-technical-article-on-universality/), "Roughly speaking, this theorem asserts that if one takes a statistic that is a combination of many independent and randomly fluctuating components, with no one component having a decisive influence on the whole, then that statistic will be approximately distributed according to a law called the normal distribution". 
To make this clear, let me write a Python code snippet ``` # -*- coding: utf-8 -*- """ Illustration of the central limit theorem @author: İsmail Arı, http://ismailari.com @date: 31.03.2011 """ import scipy, scipy.stats import numpy as np import pylab #=============================================================== # Uncomment one of the distributions below and observe the result #=============================================================== x = scipy.linspace(0,10,11) #y = scipy.stats.binom.pmf(x,10,0.2) # binom #y = scipy.stats.expon.pdf(x,scale=4) # exp #y = scipy.stats.gamma.pdf(x,2) # gamma #y = np.ones(np.size(x)) # uniform y = scipy.random.random(np.size(x)) # random y = y / sum(y); N = 3 ax = pylab.subplot(N+1,1,1) pylab.plot(x,y) # Plotting details ax.set_xticks([10]) ax.axis([0, 2**N * 10, 0, np.max(y)*1.1]) ax.set_yticks([round(np.max(y),2)]) #=============================================================== # Plots #=============================================================== for i in np.arange(N)+1: y = np.convolve(y,y) y = y / sum(y); x = np.linspace(2*np.min(x), 2*np.max(x), len(y)) ax = pylab.subplot(N+1,1,i+1) pylab.plot(x,y) ax.axis([0, 2**N * 10, 0, np.max(y)*1.1]) ax.set_xticks([2**i * 10]) ax.set_yticks([round(np.max(y),3)]) pylab.show() ``` ![Random distribution](https://i.stack.imgur.com/dSXIy.png) ![Exponential distribution](https://i.stack.imgur.com/5zYWS.png) ![Uniform distribution](https://i.stack.imgur.com/ZMurD.png) As can be seen from the figures, the resulting distribution (sum) tends towards a normal distribution regardless of the individual distribution types. So, if we do not have enough information about the underlying effects in the data, normality assumption is reasonable.
null
CC BY-SA 4.0
null
2011-06-16T06:25:08.077
2022-11-23T13:00:11.563
2022-11-23T13:00:11.563
362671
5025
null
11970
1
30507
null
1
217
I have a sample of 200 independent networks and I want to test the hypothesis that the majority of vertices belong to one [giant component](http://en.wikipedia.org/wiki/Giant_component). I wonder what is the appropriate approach to do that. More formally, suppose we have two variables, $A$ and $B$, where $A$ denotes the number of vertices belonging to the giant component in network $i$ and $B$ denotes the number of vertices not belonging to the giant component in network $i$. The task is to test the hypothesis $\mu_A > \mu_B$. Because $A$ and $B$ are clearly dependent here, I doubt that classical statistical tests (e.g., Z-test) are appropriate here. Any suggestion would be greatly appreciated.
How to test whether the majority of vertices belong to one giant component?
CC BY-SA 3.0
null
2011-06-16T06:26:05.237
2012-09-14T09:01:51.983
2012-06-16T08:29:45.287
183
609
[ "hypothesis-testing", "networks", "graph-theory" ]
11971
1
null
null
7
7692
In my master's thesis I have drawn a few hypotheses. I have answered them all with linear regression. In these linear regressions, I took control variables into account. My question is: do I have to run a mediation analysis? Or is it also possible to report the regressions of all relations separately (for example: X -> Y, X -> M, M -> Y and A -> Y)? Here is what my model looks like: ![enter image description here](https://i.stack.imgur.com/gxHfX.jpg) My main hypothesis is about the relation between X and Y. I hope my question is clear. Thank you in advance!
Mediation model with linear regression
CC BY-SA 3.0
null
2011-06-16T07:10:19.083
2011-06-16T19:32:12.330
2011-06-16T07:40:49.657
930
5040
[ "regression", "mediation" ]
11972
1
null
null
2
163
I'm trying to understand confidence intervals but having some trouble. I've been doing some exercises I found online and I'm stuck on this question: I have been given a 95% confidence interval for a population proportion: (0.35, 0.40), a sample size of 200, and I need to find a 99% confidence interval. The methods I would go to first involve using standard deviation, which I don't have. How can I approach this question without knowing variance? The whole quiz is about normal distributions, to give it some context. This isn't really homework but I'm tagging it this way because I'm looking for a similar outcome. The quiz has answers at the end; what I want is to be able to solve it myself.
Finding a narrower confidence interval for a given CI, sample mean and size
CC BY-SA 3.0
null
2011-06-16T07:53:16.880
2011-06-16T11:12:01.237
null
null
5041
[ "confidence-interval", "self-study", "normal-distribution" ]
11973
2
null
11971
8
null
You can run regression models separately, if you follow the Baron-Kenny approach. As far as I know, there are two general approaches to test for mediation: (1) path models (and SEM, of course) and (2) the Baron-and-Kenny approach (see item (a)). I use Mplus to run my mediation models, which is very handy (+ bootstrapped standard errors). Unfortunately, you did not tell us what software package you are using to do your analysis. You have a couple of options: (a) You might be interested in [D Kenny's website on mediation](http://www.davidakenny.net/cm/mediate.htm). He gives a very clear description of how to proceed in order to test for a mediation effect (see "Baron and Kenny Steps"). (b) If you happen to use Stata or R for your analysis, you could check out the ATS website on [Stata Frequently Asked Questions](http://www.ats.ucla.edu/stat/stata/faq/) (search for 'mediation') or the [R package mediation](http://imai.princeton.edu/software/mediation.html). If you use SPSS, you will like this [website](http://www.ats.ucla.edu/stat/spss/faq/mediation.htm). Kenny's website also offers a couple of tips for different software packages, e.g. how to get bootstrapped standard errors in SPSS or SAS.
null
CC BY-SA 3.0
null
2011-06-16T09:24:59.490
2011-06-16T09:24:59.490
null
null
307
null
11974
1
11988
null
11
371
I've met the following randomized trace technique in M. Seeger, “Low rank updates for the Cholesky decomposition,” University of California at Berkeley, Tech. Rep, 2007. $$\operatorname{tr}(\mathbf{A}) = {E[\mathbf{x}^T \mathbf{A} \mathbf{x}]}$$ where $\mathbf{x} \sim N(\mathbf{0},\mathbf{I})$. As a person without a deep mathematics background, I wonder how this equality can be achieved. Moreover, how can we interpret $\mathbf{x}^T \mathbf{A} \mathbf{x}$, for example geometrically? Where should I look in order to understand the meaning of taking the inner product of a vector and its range value? Why is the mean equal to the sum of the eigenvalues? Besides the theoretical property, what is its practical importance? I've written a MATLAB code snippet to see whether it works ``` % tr(A) == E[x'Ax], x ~ N(0,I) N = 100000; n = 3; x = randn([n N]); % samples A = magic(n); % any n by n matrix A y = zeros(1, N); for i = 1:N y(i) = x(:,i)' * A * x(:,i); end mean(y) trace(A) ``` The trace is 15, whereas the approximation is 14.9696.
Randomized trace technique
CC BY-SA 3.0
null
2011-06-16T10:53:54.953
2012-01-13T15:41:56.677
2012-01-13T10:54:57.020
5025
5025
[ "normal-distribution", "matlab" ]
11975
1
11978
null
5
182
I'm trying to generate multivariate data shaped like Saturn (long story). More formally: - cluster 1 is a rank-p Gaussian with correlation matrix R (where every off-diagonal entry of R is the same number in $(0,1)$). - cluster 2 is a bunch of points distributed on a hyperplane of rank p-1, around the equator of cluster 1, all located beyond a certain Mahalanobis distance w.r.t. cluster 1 of, say, $\zeta$. Rejection sampling is ok, so long as the rejection rate is small: this configuration has to be generated reliably many times for large $n$ and $p$. Update: So I implemented Bob Durrant's solution: ``` x0<-matrix(rnorm(n*p),n,p) b0<-matrix(rnorm(n*p),n,p) b1<-qchisq(0.99,df=p); b2<-sqrt(c(b1,b1*1.25)) b0<-b0/sqrt(rowSums(b0*b0))*runif(n,b2[1],b2[2]) plot(rbind(x0,x1)) mahalanobis(cbind(0,b0),colMeans(x0),var(x0))/(qchisq(0.99,df=p)) ``` yielding the attached picture, which indeed looks like Saturn ![Saturn](https://i.stack.imgur.com/Rv6VE.jpg), with the rings located at least qchisq(0.99,df=p) away from the center. Now, however, I want my Saturn to be ellipse shaped, i.e. to have correlation structure R. The problem is that if I pre-multiply $\verb+x0+$ and $\verb+cbind(0,x1)+$ by $R^{1/2}$, the rings are no longer on the equator :( Update2: For example, things already go awry when I replace the diagonal variance structure above with something else. For example: ``` library(MASS) x0<-mvrnorm(n,rep(0,p),diag(rchisq(p,p),p)) b0<-mvrnorm(n,rep(0,p-1),diag(rchisq(p-1,p-1),p-1)) b1<-qchisq(0.99,df=p); b2<-sqrt(c(b1,b1*1.25)) b0<-b0/sqrt(rowSums(b0*b0))*runif(n,b2[1],b2[2]) mahalanobis(cbind(0,b0),colMeans(x0),var(x0))/(qchisq(0.99,df=p)) ``` most of the distances are way too large (compare with the output of the same call when using the diagonal covariance matrix above)! Update3 ``` delta<-0.9 p<-3 R<-matrix(runif(p^2,delta*0.99,delta),p,p) # to avoid repeating eigenvalues, I jitter a bit. diag(R)<-1 ```
How to generate a pair of ellipses shaped as Saturn in $\mathbb{R}^p$
CC BY-SA 3.0
null
2011-06-16T11:05:29.070
2011-06-16T17:32:34.030
2011-06-16T17:32:34.030
603
603
[ "clustering", "matrix" ]
11976
2
null
11972
5
null
Assuming normality, the mean of the distribution of the estimator of the proportion is 0.375 because confidence intervals from normal distributions are typically given as symmetric intervals around the mean (note: technically, this isn't required, but everybody always does it like that). Next, you know the distance from the mean to the edges of the confidence interval (0.025 here) are `q * SD` where SD is the standard deviation of your estimator, and q is the `1-alpha/2` standard normal quantile (note in this case: `95% CI => alpha=0.05 => q = 0.975 quantile ~ 1.96`). So you can easily find the SD from that. Finally, using the mean and SD, create the 99% CI. If you don't know how to do this part, I believe you need more reading.
null
CC BY-SA 3.0
null
2011-06-16T11:12:01.237
2011-06-16T11:12:01.237
null
null
4257
null
11977
1
null
null
1
102
I would be very grateful if you could help me find the exact value, or a very tight exponential upper bound, for: $$\sum_{k=1}^{N-1} \alpha^k \beta^{\frac{1}{N-k}}$$ where $0 \leq \alpha < 1$, $\beta= \exp(-N^2\zeta)$ and $\zeta$ is a small positive value. Thanks a lot in advance.
Exponential upper bound for $\sum_{k=1}^{N-1} \alpha^k \beta^{\frac{1}{N-k}}$
CC BY-SA 3.0
null
2011-06-16T11:48:09.250
2011-06-16T20:01:37.823
2011-06-16T20:01:37.823
4770
4770
[ "bounds" ]
11978
2
null
11975
4
null
For large $p$ a spherical rank $p$ Gaussian in $\mathbb{R}^{p}$ looks like the uniform distribution on the surface of the hypersphere $\mathbb{S}^{p-1}$ with radius $\sigma\sqrt{p}$, while a rank $p-1$ spherical Gaussian embedded in $\mathbb{R}^{p}$ will give Saturn's rings (the points will look like the hypersphere $\mathbb{S}^{p-2}$). So I think you can generate this data by drawing from two spherical Gaussians $\mathcal{N}(0,\sigma_{p}^{2}I_{p})$ and $\mathcal{N}(0,\sigma_{p-1}^{2}I_{p-1})$ having set $\sigma_{p}^{2}$ and $\sigma_{p-1}^{2}$ to get the separation you want. The concentration in the norms is exponentially fast w.r.t $p$, so you probably won't have to throw away any points if $p$ is large enough.
null
CC BY-SA 3.0
null
2011-06-16T11:54:59.413
2011-06-16T11:54:59.413
null
null
3248
null
11980
1
49846
null
3
1379
I calculate (with flow cytometry) the percentage of lymphocytes with a specific receptor (Lph*) as a ratio to the total number of lymphocytes (Lph). Should I consider them (Lph*) as Poisson distributed? (My data set is [here](https://stats.stackexchange.com/q/11887/5003).)
If I count cells should I consider them as Poisson distributed?
CC BY-SA 3.0
null
2011-06-16T12:33:30.747
2013-02-12T17:39:44.757
2017-04-13T12:44:20.840
-1
5003
[ "distributions", "poisson-distribution" ]
11981
2
null
11980
2
null
The short answer is probably not, since: - the Poisson distribution is discrete, while your data is continuous; - the Poisson distribution has support on 0, 1, 2, ..., whereas (I think) your data has a range from 0 to 100. Without seeing your data and knowing your problem, it's tricky to give you a suggestion. A good starting point would be to look at the statistical analysis section of publications that analyse data similar to yours.
null
CC BY-SA 3.0
null
2011-06-16T12:38:10.817
2011-06-16T12:38:10.817
null
null
8
null
11982
2
null
11974
3
null
If $A$ is symmetric positive definite, then $A = U^tDU$ with $U$ orthonormal, and $D$ diagonal with the eigenvalues on the diagonal. Since $x$ has an identity covariance matrix, and $U$ is orthonormal, $Ux$ also has an identity covariance matrix. Hence, writing $y = Ux$, we have $E[x^TAx] = E[y^tDy]$. Since the expectation operator is linear, this is just $\sum_{i=1}^n \lambda_i E[y_i^2]$. Each $y_i^2$ is chi-square with 1 degree of freedom, so has expected value 1. Hence the expectation is the sum of the eigenvalues. Geometrically, symmetric positive definite matrices $A$ are in 1-1 correspondence with ellipsoids -- given by the equation $x^TAx = 1$. The lengths of the ellipsoid's axes are given by $1/\sqrt\lambda_i$ where $\lambda_i$ are the eigenvalues. When $A = C^{-1}$ where $C$ is the covariance matrix, this is the square of the [Mahalanobis distance](http://en.wikipedia.org/wiki/Mahalanobis_distance).
null
CC BY-SA 3.0
null
2011-06-16T12:49:14.810
2011-06-16T13:24:35.603
2011-06-16T13:24:35.603
5044
5044
null
11983
1
12006
null
1
119
I've got a (probably easy) question about how to handle empirical studies when there are a lot of effects involved. I have a whole bunch of variables and I'd like to analyze just a few of them. But the problem is that the model is wrong... So the standard errors and the coefficients themselves are probably biased, and so the t-statistics are as well. So in general: everything is wrong. It's a pretty frustrating task to find out how to handle this problem when it is not possible to say which coefficients have a clear influence on $y$. What would you do in this case? It's possible to compact the coefficients, but the problem is still present. Do you have some experience with how to handle this problem? Or does anyone know a good paper where it's been discussed? Fyi: I'm going to do cross-validation afterwards to compare models... But it's still required to make an analysis of the estimation before looking at which model is good. And I'm bound to do OLS before looking for better models. The general question is: how do other studies deal with biased std. errors or coefficients? Please help :( Edit: I'm analyzing a whole bunch of effects on wage; therefore I have a lot of effects. I know that heteroskedasticity occurs, wage is skewed and the sample size is relatively small. I'm not interested in changing the model, since I have to do OLS without transforming variables or the like. Just the regular OLS. Unfortunately I don't really know how to interpret all the effects when I can't get rid of the non-significant ones, because significance isn't well defined (because of the bias). Is there a theory that says in general: "Although bias occurs, you can assume that the effects with high significance are more clearly different from 0 than other effects that are not significant"?
How to present a empirical study when using econometric models?
CC BY-SA 3.0
null
2011-06-16T12:51:19.880
2017-09-28T18:27:21.523
2017-09-28T18:27:21.523
60613
4496
[ "r", "regression", "interpretation", "interpolation" ]
11984
1
12145
null
9
5224
I'm currently working on a plot engine for my project. This engine should be robust for a wide range of inputs. In order to analyse the data, I'm plotting a series of graphs utilising python/matplotlib. Among them is the following: ![scatter plot](https://i.stack.imgur.com/6vhgg.png) I think this graph is not good because the data plotted first (high pressures, red) have a lower z-order (i.e. they are overdrawn) than the blue bullets for low pressures, thus introducing a bias when looking at the graph. The underlying reason for that is that the data is bell-shaped. First off, do you agree or disagree? I could leave it like it is because it is just one of many views on the data. It could still be useful. However, if there is a way to make this graph better with some sort of trick, I'd be much happier. I already played with point size, transparency/alpha and edgecolor. This only made it worse. A great way to remove the z-order bias in scatter plots is to bin the data and colour-code it accordingly (e.g. [hexbin](http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.hexbin)). But since I used the colour for the pressure information, I see no possibility to do something similar. Another idea would be to randomise the z-order, but I'm not sure how to do that and whether the result would be better. Any other comments for improvements are appreciated.
How can I remove the z-order bias of a coloured scatter plot?
CC BY-SA 3.0
null
2011-06-16T13:17:20.117
2020-03-29T10:18:00.950
2020-06-11T14:32:37.003
-1
4373
[ "data-visualization", "scatterplot" ]
11985
1
11997
null
21
18005
I have 5 variables and I'm trying to predict my target variable which must be within the range 0 to 70. How do I use this piece of information to model my target better?
How to model bounded target variable?
CC BY-SA 4.0
null
2011-06-16T13:28:07.507
2019-03-02T00:30:27.433
2019-03-02T00:28:01.407
11887
333
[ "regression", "bounds" ]
11986
2
null
11985
2
null
Data transformation: rescale your data to lie in $[0,1]$ and model it using a glm model with a logit link. Edit: When you re-scale a vector (i.e. divide all the elements by the largest entry), as a rule, before you do that, screen (by eye) for outliers. UPDATE Assuming you have access to R, I would carry out the modeling part with a robust glm routine; see $\verb+glmrob()+$ in package $\verb+robustbase+$.
null
CC BY-SA 4.0
null
2011-06-16T13:35:19.083
2019-03-02T00:30:27.433
2019-03-02T00:30:27.433
11887
603
null
11987
1
11993
null
8
7869
Reading the Linear Mixed Model (LMM) literature I am aware that fitting a model using REML provides better estimates of variance parameters than fitting via ML. However, we should not compare nested models fitted with REML that have different fixed effects. Recently, I have been fitting some models using GLS via the `gls()` function in the nlme package for R. The default fitting method for that function is REML. Do the same principles of REML vs ML for LMM also apply to GLS? Specifically, I am fitting a model with and without a linear trend, with a correlation structure in the residuals: ``` m1 <- gls(Response ~ Time, data = foo, correlation = corAR1(form = ~ Time)) m0 <- gls(Response ~ 1, data = foo, correlation = corAR1(form = ~ Time)) ``` In the above, I should fit the models using ML as they have different fixed effects. Is this correct? Secondly, consider two GLS models that differ in the correlation structure: ``` m1 <- gls(Response ~ Time, data = foo, correlation = corARMA(form = ~ Time, p = 1)) m2 <- gls(Response ~ Time, data = foo, correlation = corARMA(form = ~ Time, p = 2)) ``` What fitting method should ideally be used here? REML or ML? Here my intuition would say fit via REML as we are estimating (co)variance parameters. Is my intuition correct, or have I got this all mixed up?
Fitting a generalized least squares model with correlated data; use ML or REML?
CC BY-SA 3.0
null
2011-06-16T14:35:01.897
2011-06-16T15:26:56.980
null
null
1390
[ "r", "time-series", "maximum-likelihood", "generalized-least-squares" ]
11988
2
null
11974
12
null
NB The stated result does not depend on any assumption of normality or even independence of the coordinates of $\newcommand{\x}{\mathbf{x}}\newcommand{\e}{\mathbb{E}}\newcommand{\tr}{\mathbf{tr}}\newcommand{\A}{\mathbf{A}}\x$. It does not depend on $\A$ being positive definite either. Indeed, suppose only that the coordinates of $\x$ have zero mean, variance of one and are uncorrelated (but not necessarily independent); that is, $\e \x_i = 0$, $\e \x_i^2 = 1$, and $\e \x_i \x_j = 0$ for all $i \neq j$. Bare-hands approach Let $\A = (a_{ij})$ be an arbitrary $n \times n$ matrix. By definition $\tr(\A) = \sum_{i=1}^n a_{ii}$. Then, $$ \tr(\A) = \sum_{i=1}^n a_{ii} = \sum_{i=1}^n a_{ii} \e \x_i^2 = \sum_{i=1}^n a_{ii} \e \x_i^2 + \sum_{i\neq j} a_{ij} \e \x_i \x_j , $$ and so we are done. In case that's not quite obvious, note that the right-hand side, by linearity of expectation, is $$ \sum_{i=1}^n a_{ii} \e \x_i^2 + \sum_{i\neq j} a_{ij} \e \x_i \x_j = \e\Big(\sum_{i=1}^n \sum_{j=1}^n a_{ij} \x_i \x_j \Big) = \e(\x^T \A \x) $$ Proof via trace properties There is another way to write this that is suggestive, but relies, conceptually on slightly more advanced tools. We need that both expectation and the trace operator are linear and that, for any two matrices $\A$ and $\newcommand{\B}{\mathbf{B}}\B$ of appropriate dimensions, $\tr(\A\B) = \tr(\B\A)$. Then, since $\x^T \A \x = \tr(\x^T \A \x)$, we have $$ \e(\x^T \A \x) = \e( \tr(\x^T \A \x) ) = \e( \tr(\A \x \x^T) ) = \tr( \e( \A \x \x^T ) ) = \tr( \A \e \x \x^T ), $$ and so, $$ \e(\x^T \A \x) = \tr(\A \mathbf{I}) = \tr(\A) . $$ Quadratic forms, inner products and ellipsoids If $\A$ is positive definite, then an inner product on $\mathbf{R}^n$ can be defined via $\langle \x, \mathbf{y} \rangle_{\A} = \x^T \A \mathbf{y}$ and $\mathcal{E}_{\A} = \{\x: \x^T \A \x = 1\}$ defines an ellipsoid in $\mathbf{R}^n$ centered at the origin.
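As a quick numerical sanity check of the bare-hands result (a Monte Carlo sketch in Python, not part of the proof), drawing $\x$ with iid standard-normal coordinates and averaging $\x^T\A\x$ should land near $\tr(\A)$ for any $\A$, symmetric or not:

```python
import random

def mc_quadratic_form(A, n_draws=100000, seed=1):
    """Monte Carlo estimate of E[x'Ax] for x with iid standard-normal
    coordinates (zero mean, unit variance, uncorrelated), to be compared
    with tr(A).  Pure-stdlib sketch, not part of the proof."""
    n = len(A)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]
        total += sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))
    return total / n_draws

A = [[2.0, 1.0], [0.5, 3.0]]      # arbitrary; neither symmetric nor PD is required
trace_A = A[0][0] + A[1][1]       # 5.0
```

The estimate converges to the trace regardless of the off-diagonal entries, illustrating that only the diagonal of $\A$ matters in expectation.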
null
CC BY-SA 3.0
null
2011-06-16T14:35:57.443
2011-06-16T19:46:05.920
2011-06-16T19:46:05.920
2970
2970
null
11989
1
null
null
4
9326
I am now plotting some bars and pictures for use in a presentation. I am really interested in different ways of showing confidence intervals on plots.
Does anyone know "nice" ways of plotting confidence intervals for use in presentations?
CC BY-SA 3.0
null
2011-06-16T14:38:08.753
2011-06-16T23:21:54.403
null
null
5045
[ "confidence-interval" ]
11990
1
69926
null
5
1436
I'm analyzing pairwise correlations of time series between two different types of microarrays, run on several samples as biological replicates. So, I have M1 variables on the type 1 array, M2 variables on the type 2 array, N samples and T time points. For each sample, I calculate M1 x M2 correlation coefficients (Pearson or Spearman) and p-values using the T points. Due to a batch effect between samples I cannot average measurements across samples at each time point. My question is which statistics to use to find which pairs of M1 and M2 variables have a statistically significant correlation that is consistent across samples? How should I apply a multiple-testing correction? Please help.
Statistics for multi-test replicated correlation analysis
CC BY-SA 3.0
null
2011-06-16T14:55:49.937
2014-02-12T05:35:42.677
2011-06-21T01:13:42.407
1586
1586
[ "time-series", "correlation", "multivariate-analysis", "repeated-measures", "meta-analysis" ]
11991
1
null
null
7
7111
I have a large data set containing children's scores on arithmetic tasks, with a lot of missing values possibly due to the age of the children. My hypothesis is that the missingness is not missing completely at random (MCAR) but missing at random (MAR). I need to do a CFA and want to know whether I need to condition on age or not. I know SPSS 17 has a missing values analysis, but my data is in R and is quite large. I would like to do a missing values analysis in R but have not been able to find a package that can do this. Does anyone know of one?
Are misses in my data distributed completely at random?
CC BY-SA 3.0
null
2011-06-16T15:08:03.307
2011-06-16T17:15:48.763
2011-06-16T16:22:33.367
null
5046
[ "r", "missing-data" ]
11992
2
null
1980
5
null
Simon Jackman has a particularly useful example of analysing the results of a survey: "Americans and Australians 10 years after 9/11". It has multiple examples of integrating tables and figures. He has made the [Sweave document](http://jackman.stanford.edu/oz/USSC/2011/report.Rnw) and [PDF report](http://jackman.stanford.edu/oz/USSC/2011/report.pdf) via [this blog post](http://jackman.stanford.edu/blog/?p=2066). While the raw data is not supplied (as far as I can tell), so it's not possible to run the actual Sweave examples, I think a fair bit can be learned from studying the Sweave code.
null
CC BY-SA 3.0
null
2011-06-16T15:12:17.700
2011-06-16T15:12:17.700
null
null
183
null
11993
2
null
11987
7
null
Your intuition is correct, the same principles apply. I looked in Pinheiro/Bates section 5.4, where `gls` is introduced, but it doesn't say so explicitly, so you'll just have to trust me, I guess. :) In Chapter 2 they go through the theory of REML and ML and you'll notice that none of the theory depends on there being any random effects; in fact, you could write any random-effects model using just a correlation structure instead and fit it with gls, though for complex random effects the specification would be quite cumbersome. The simplest example is that a random intercept model is equivalent to a compound symmetry model.
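The random-intercept/compound-symmetry equivalence mentioned at the end can be seen directly from the marginal covariance matrix: a random intercept with variance sigma_b^2 plus residual variance sigma^2 gives Var(y) = sigma_b^2 * J + sigma^2 * I, i.e. a constant off-diagonal. A small illustration (Python sketch, synthetic variance values):

```python
def random_intercept_cov(n, var_b, var_e):
    """Marginal covariance of n repeated measures under a random-intercept
    model: var_b on every off-diagonal, var_b + var_e on the diagonal --
    exactly the compound-symmetry structure a gls correlation model fits."""
    return [[var_b + (var_e if i == j else 0.0) for j in range(n)]
            for i in range(n)]

V = random_intercept_cov(4, var_b=2.0, var_e=1.0)
```

Every off-diagonal entry equals var_b and every diagonal entry equals var_b + var_e, which is precisely the compound-symmetry pattern.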
null
CC BY-SA 3.0
null
2011-06-16T15:26:56.980
2011-06-16T15:26:56.980
null
null
3601
null
11994
2
null
11991
3
null
Your question is a little difficult to decipher. One approach for dealing with missing data is [imputation](http://en.wikipedia.org/wiki/Imputation_%28statistics%29) -- and there is a substantial literature on this and an already large and growing set of packages at [CRAN](http://cran.r-project.org) so you may want to start there.
null
CC BY-SA 3.0
null
2011-06-16T15:27:04.020
2011-06-16T15:27:04.020
null
null
334
null
11995
2
null
11672
0
null
You might consider doing a round robin tournament and then estimating the effect of color controlling for weight within a hierarchical paired comparison model. With 120 comparisons, you still will not have much power, but you'll have more than the non-parametric techniques. You can get a little bit more power by having them interact more often, but not much more since you are just improving your estimate of the difference between the same fish. See Ulf Böckenholt's "[Hierarchical Modeling of Paired Comparison Data](http://psycnet.apa.org/journals/met/6/1/49/)" also H.A. David's The Method of Paired Comparisons for discussion of different types of designs. Also, I might worry that your experiment could be changing the behavior you are trying to measure, particularly if the fish are unused to interacting. It might be sound to have more than 120 interactions to evaluate whether there is a habituation or learning effect.
null
CC BY-SA 3.0
null
2011-06-16T15:34:31.853
2011-06-16T16:38:20.713
2011-06-16T16:38:20.713
82
82
null
11997
2
null
11985
25
null
You don't necessarily have to do anything. It's possible the predictor will work fine. Even if the predictor extrapolates to values outside the range, possibly clamping the predictions to the range (that is, use $\max(0, \min(70, \hat{y}))$ instead of $\hat{y}$) will do well. Cross-validate the model to see whether this works. However, the restricted range raises the possibility of a nonlinear relationship between the dependent variable ($y$) and the independent variables ($x_i$). Some additional indicators of this include: - Greater variation in residual values when $\hat{y}$ is in the middle of its range, compared to variation in residuals at either end of the range. - Theoretical reasons for specific non-linear relationships. - Evidence of model mis-specification (obtained in the usual ways). - Significance of quadratic or high-order terms in the $x_i$. Consider a nonlinear re-expression of $y$ in case any of these conditions hold. There are many ways to re-express $y$ to create more linear relationships with the $x_i$. For instance, any increasing function $f$ defined on the interval $[0,70]$ can be "folded" to create a symmetric increasing function via $y \to f(y) - f(70-y)$. If $f$ becomes arbitrarily large and negative as its argument approaches $0$, the folded version of $f$ will map $[0,70]$ into all the real numbers. Examples of such functions include the logarithm and any negative power. Using the logarithm is equivalent to the "logit link" recommended by @user603. Another way is to let $G$ be the inverse CDF of any probability distribution and define $f(y) = G(y/70)$. Using a Normal distribution gives the "probit" transformation. One way to exploit families of transformations is to experiment: try a likely transformation, perform a quick regression of the transformed $y$ against the $x_i$, and test the residuals: they should appear to be independent of the predicted values of $y$ (homoscedastic and uncorrelated). 
These are signs of a linear relationship with the independent variables. It helps, too, if the residuals of the back-transformed predicted values tend to be small. This indicates the transformation has improved the fit. To resist the effects of outliers, use robust regression methods such as [iteratively reweighted least squares](http://en.wikipedia.org/wiki/Iteratively_reweighted_least_squares).
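As an illustration of the "folding" idea with $f = \log$ (a sketch; the bound of 70 comes from the question), $g(y) = \log(y) - \log(70 - y)$ maps $(0, 70)$ onto the whole real line, is increasing, and is antisymmetric about the midpoint 35:

```python
import math

def folded_log(y, upper=70.0):
    """Folded-log (logit-type) transform of a response bounded in (0, upper):
    f(y) - f(upper - y) with f = log, mapping the interval onto the reals."""
    return math.log(y) - math.log(upper - y)
```

By construction folded_log(upper - y) = -folded_log(y), so the transform treats the two ends of the range symmetrically, which is the point of folding.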
null
CC BY-SA 3.0
null
2011-06-16T16:24:31.900
2011-06-16T16:24:31.900
null
null
919
null
11999
2
null
11991
7
null
As far as I understand your question, you want to investigate whether missing values in your data appear due to some pattern. In that case, you don't need any special "missing value analysis" -- this is the same kind of problem as checking whether the score is bigger than 0.7 or whatever. Just convert your dataset into a two-class factor (missing, not-missing) and look for correlations with, e.g., age.
null
CC BY-SA 3.0
null
2011-06-16T16:30:04.423
2011-06-16T16:30:04.423
null
null
null
null
12001
2
null
11985
5
null
It is important to consider why your values are bounded to the 0-70 range. For example, if they are the number of correct answers on a 70-question test, then you should consider models for "number of successes" variables, such as overdispersed binomial regression. Other reasons might lead you to other solutions.
null
CC BY-SA 3.0
null
2011-06-16T16:49:26.207
2011-06-16T16:49:26.207
null
null
279
null
12002
1
72610
null
22
11355
Imagine that you repeat an experiment three times. In each experiment, you collect triplicate measurements. The triplicates tend to be fairly close together, compared to the differences among the three experimental means. Computing the grand mean is pretty easy. But how can one compute a confidence interval for the grand mean? Sample data: Experiment 1: 34, 41, 39 Experiment 2: 45, 51, 52 Experiment 3: 29, 31, 35 Assume that the replicate values within an experiment follow a Gaussian distribution, as do the mean values of each experiment. The SD of the variation within an experiment is smaller than the SD among the experimental means. Assume also that there is no ordering of the three values in each experiment. The left-to-right order of the three values in each row is entirely arbitrary. The simple approach is to first compute the mean of each experiment: 38.0, 49.3, and 31.7, and then compute the mean, and its 95% confidence interval, of those three values. Using this method, the grand mean is 39.7, with the 95% confidence interval ranging from 17.4 to 61.9. The problem with that approach is that it totally ignores the variation among the triplicates. I wonder if there isn't a good way to account for that variation.
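For reference, the "simple approach" numbers quoted above can be reproduced directly (Python sketch; the value 4.303 is the standard two-sided 95% t critical value for 2 degrees of freedom):

```python
import math
import statistics

experiments = [[34, 41, 39], [45, 51, 52], [29, 31, 35]]

means = [statistics.mean(exp) for exp in experiments]   # 38.0, 49.33, 31.67
grand = statistics.mean(means)                          # grand mean, about 39.7
se = statistics.stdev(means) / math.sqrt(len(means))
t_crit = 4.303                                          # t(0.975, df = 2)
lo, hi = grand - t_crit * se, grand + t_crit * se       # about 17.4 to 61.9
```

With only three experiment-level means, the t critical value is huge, which is why the interval is so wide even though the triplicates are tight.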
How to calculate the confidence interval of the mean of means?
CC BY-SA 3.0
null
2011-06-16T16:58:13.537
2015-01-21T16:01:32.387
2011-07-19T21:18:45.377
919
25
[ "confidence-interval", "multilevel-analysis" ]
12003
2
null
11991
14
null
As @Dirk Eddelbuettel already mentioned, your question is not very clear. In fact, I think you are asking two questions. The first question is related to your M(C)AR assumption. The second question is about (an) appropriate R package(s). (1) "Testing" for MAR To test if age has an effect on the missingness of your score variable, you could run a simple logistic regression model with age as a predictor variable. Your response variable is 0: score is not missing, 1: score is missing (see also @mbq's answer and @Macro's comment). Given the assumption that younger children are more likely to not report math scores, we expect to see a significant negative effect of age. ``` ## Make up some data set.seed(2) ## Younger children are more likely to not report math scores, ## so I use a Poisson distribution to model that behaviour missData <- rpois(10000, 10) dfr <- data.frame(score=rnorm(100), age=sample(6:15, 100, replace=TRUE)) dfr <- dfr[order(dfr$age), ] dfr$agemiss <- sort(sample(missData, 100, replace=TRUE)) dfr$miss <- ifelse(dfr$agemiss == dfr$age, 1, 0) ## Run the logistic regression with age as predictor > summary(glm(miss ~ age, data=dfr, family=binomial)) [...] Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 5.9729 1.4946 3.996 6.43e-05 *** age -0.7997 0.1760 -4.544 5.53e-06 *** --- [...] ``` (2) (Some) Missing data related R packages Some of these packages also have functions to explore patterns of missingness (e.g., `missing.pattern.plot()` in the `mi` package). - Amelia II: A Program for Missing Data - Hmisc: Harrell Miscellaneous - mi: Missing Data Imputation and Model Checking - mitools: Tools for multiple imputation of missing data
null
CC BY-SA 3.0
null
2011-06-16T17:03:55.907
2011-06-16T17:15:48.763
2011-06-16T17:15:48.763
307
307
null
12004
2
null
11956
2
null
Here's an alternative (but basically equivalent) derivation to @Andre Holzner's: For a Poisson process $N(t)$ with rate $\lambda$, $Pr(N(t+\tau) - N(t) = 1) = (\tau\lambda)\exp(-\tau\lambda) = Pr(N(\tau) = 1) $ which has Taylor expansion around $\tau=0$ $\tau\lambda - \tau^2\lambda^2 + O(\tau^3)$ and this is approximately $\tau\lambda$ for small $\tau$. You're correct that the actual limit is zero, as one typically assumes $Pr(N(0)=0)=1$ in developing the Poisson process.
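Numerically, the quality of the approximation is easy to see (a Python sketch with an arbitrary rate $\lambda = 2$): the exact one-event probability $\tau\lambda e^{-\tau\lambda}$ is within about $\tau\lambda$ of the first-order term, and the second-order Taylor term tightens it further.

```python
import math

def p_one_event(tau, lam=2.0):
    """P(N(tau) = 1) for a Poisson process with rate lam: (tau*lam)*exp(-tau*lam)."""
    return tau * lam * math.exp(-tau * lam)

tau, lam = 1e-3, 2.0
exact = p_one_event(tau, lam)
first_order = tau * lam                           # tau*lambda
second_order = tau * lam - (tau * lam) ** 2       # tau*lambda - tau^2*lambda^2
```

As $\tau \to 0$ all three quantities go to zero together, consistent with $Pr(N(0)=0)=1$.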
null
CC BY-SA 3.0
null
2011-06-16T18:25:55.220
2011-06-17T03:16:46.350
2011-06-17T03:16:46.350
26
26
null
12005
1
null
null
8
6574
I'm using offset for the first time (as per a recommendation from a colleague) and have a couple of questions about interpreting my results. Our ultimate goal is to look at the effect of some population-level treatment on disease incidence (cases/population). We've decided to use poisson models, but there are surely a variety of ways to look at our data. My data look like this: ``` cases <- c(6216128, 3341110, 855105, 359371, 417393, 640434, 528914, 377166, 401556, 252832, 128458) population <- c(54703334, 54252430, 55976643, 56630708, 57373529, 58025577, 58617708, 58921850, 59695818, 60466585, 60223458) treat.count <- c(13389482, 17746954, 27974966, 27329972, 16534356, 10591797, 12740820, 11787687, 6780603, 5503181, 4446687) treat.percent <- c(0.24476537, 0.32711814, 0.49976141, 0.48259986, 0.28818789, 0.18253669, 0.21735446, 0.20005629, 0.11358590, 0.09101194, 0.07383646) data <- cbind(cases, population, treat.count, treat.percent) mydata <- as.data.frame(data) ``` I have two overarching questions: - the interpretation of offset in these poisson models and - the interpretation of the poisson model with offset and covariates added. 1) with the inclusion of the offset and no covariates: ``` f1 <- glm(cases ~ offset(population), data=mydata, family=poisson) ``` is my interpretation that the expected value of `cases`, divided by `population`, is `exp(intercept)`...correct? 2) with the inclusion of the offset and covariates: ``` f2 <- glm(cases ~ offset(population)+log(treat.percent), data=mydata, family=poisson) ``` is my interpretation that the expected value of `cases`, divided by `population`, is `exp(intercept)`...as `treat.percent` increases? There were similar questions posted before, but not quite this situation.
Interpretation of intercept term in poisson model with offset and covariates
CC BY-SA 3.0
null
2011-06-16T18:32:17.720
2017-04-08T17:25:42.223
2017-04-08T17:25:42.223
11887
5049
[ "r", "poisson-distribution", "offset" ]
12006
2
null
11983
1
null
You can always interpret OLS as estimating the best linear approximation to the conditional expectation of the outcome given your explanatory variables. With this interpretation OLS is never biased. The downside is that the OLS estimates only describe the relationship among variables in your data. In particular, the OLS estimates don't necessarily tell you anything about causality. In your example, you could say something like, "the coefficient on education is positive and significant, suggesting that education might increase wages. However, this positive relationship could also be due to an omitted variable, such as intelligence or work ethic, which could increase both education and wages. To account for this possibility, I `<`estimate some other model`>`." Even while interpreting OLS as only approximating a conditional expectation, you do need to worry about estimating the standard errors correctly. With heteroskedastic and independent observations, the variance of the OLS coefficients is consistently estimated by: $$ (X'X)^{-1} (\sum x_i'x_i \hat{\epsilon}_i^2 ) (X'X)^{-1} $$ If you have dependent data, there are other ways to estimate the standard errors.
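For the simple-regression case, the sandwich formula above reduces to a slope variance of $\sum (x_i - \bar{x})^2 \hat{\epsilon}_i^2 / (\sum (x_i - \bar{x})^2)^2$, which is easy to compute by hand (a Python sketch with made-up data):

```python
def ols_hc0_slope_var(x, y):
    """OLS slope plus its HC0 (White) robust variance for simple regression.

    The HC0 slope variance is sum((x_i - xbar)^2 * e_i^2) / Sxx^2, the
    one-regressor special case of the sandwich estimator."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    b0 = ybar - b1 * xbar
    resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
    var_hc0 = sum((xi - xbar) ** 2 * ei ** 2
                  for xi, ei in zip(x, resid)) / sxx ** 2
    return b1, var_hc0

slope, v = ols_hc0_slope_var([1, 2, 3, 4], [1, 2, 2, 5])
```

Note how the squared residuals are weighted by the squared centered regressors, so observations with extreme x and large residuals dominate the robust variance.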
null
CC BY-SA 3.0
null
2011-06-16T18:35:41.910
2011-06-16T18:35:41.910
null
null
1229
null
12007
1
12011
null
2
4547
Before you begin reading, I want to say thank you for helping, or attempting to help me. I really appreciate any help you can give me! Also, warning: Wall of Text approaching fast. I've been informed that I am misusing the term Confidence Interval. I have been told that what I am actually looking for is the "Prediction Interval". I'm not sure though, as I know my coworker who assigned me this task said Confidence Interval. Objectives: - Quantify the variability of exchange rates between system #1 and system #2 - Determine the best-fit distribution - Find the 95% confidence interval Please note: I am doing this in Excel, so I will be using Excel functions to calculate stdev etc. But if you aren't familiar with Excel you can still be of help to me! The concepts are what I need help with! Here is an explanation of the data I was given and what I have done so far: System #1 and #2 BOTH have two lists of 48 numbers. One list being the closing market price in USD and the other list the closing market price in the native currency. So I have "Sys#1 USD", "Sys#1 Native", "Sys#2 USD" and "Sys#2 Native" columns of 48 values each. System #1 is the actual closing mkt price, while System #2 is the one we are testing to see how much it differs from the correct values. (1.) I found the exchange rate to the dollar for each system by simply dividing Native/USD for the corresponding system. ``` (#1Native/#1USD) = (#1Native per $1 USD) and (#2Native/#2USD) = (#2Native per $1 USD) ``` (2.) I then found the percent error of the foreign exchange rates of System 2 compared to System 1. ``` [ abs[(#2Native per $1 USD) - (#1Native per $1 USD)] / (#1Native per $1 USD) ] * 100 ``` (3.) I proceeded to find the standard deviation and mean using the simple functions Excel comes equipped with. Excel functions below. ``` =STDEV(values) =AVERAGE(values) ``` (4.)
I was informed that using the =CONFIDENCE function in Excel was actually NOT what I want, because it calculates the CI using the true mean of all future data, and I do not know the true mean value of all future data, only that of my sample of 48 days. I was told to use the =NORMINV(probability,mean,standard_dev) function by my coworker. To my understanding, this method "fits a normal distribution to the data and then makes a prediction assuming that this fit is correct." I'm not sure if my data follows a normal distribution, so I do not know if I can use =NORMINV? So basically, how do I calculate a 95% confidence interval for this data and determine the best-fit distribution? Should I be using =NORMINV? Thank you so much for your help!
Confidence Interval / Best-fit / Prediction Interval?
CC BY-SA 3.0
null
2011-06-16T19:12:41.583
2011-06-16T20:55:11.027
null
null
5050
[ "confidence-interval" ]
12008
2
null
12002
0
null
You can't have one confidence interval that solves both of your problems; you have to pick one. You can either derive one from a mean square error term of the within-experiment variance, which lets you say something about how accurately you can estimate the values within an experiment, or you can do it between experiments, and then it describes between-experiment variability. If I just did the former I'd tend to want to plot it around 0 rather than around the grand mean, because it doesn't tell you anything about the actual mean value, only about an effect (in this case 0). Or you could just plot both and describe what they do. You've got a handle on the between one. For the within, it's just like calculating the error term in an ANOVA to get an MSE to work with, and from there the SE for the CI is just sqrt(MSE/n) (n = 3 in this case).
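Applying that within-experiment recipe to the data in the question (a Python sketch): pool the squared within-experiment deviations, divide by the within degrees of freedom (9 - 3 = 6) to get the MSE, then take SE = sqrt(MSE/n) with n = 3.

```python
import math
import statistics

experiments = [[34, 41, 39], [45, 51, 52], [29, 31, 35]]

# Pooled within-experiment sum of squares, as in an ANOVA error term
ss_within = sum((v - statistics.mean(exp)) ** 2
                for exp in experiments for v in exp)
df_within = sum(len(exp) - 1 for exp in experiments)   # 9 - 3 = 6
mse = ss_within / df_within
se_within = math.sqrt(mse / 3)                          # n = 3 replicates
```

The within SE (about 2.0) is much smaller than the between-experiment SE, which is exactly the contrast the question is asking about.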
null
CC BY-SA 3.0
null
2011-06-16T19:20:17.353
2011-06-16T19:20:17.353
null
null
601
null
12009
2
null
11971
1
null
Whether you control for A depends on what you are trying to accomplish. Including A in the model will increase your R-squared. But if A is uncorrelated with X or M (as your diagram indicates) then inclusion/non-Inclusion of A will not affect coefficients or p-values for X or M.
null
CC BY-SA 3.0
null
2011-06-16T19:32:12.330
2011-06-16T19:32:12.330
null
null
2669
null
12010
2
null
11955
8
null
There are a number of statistical NLP projects out there, with [NLTK](http://www.nltk.org/) one of the more active Open Source ones. However, tracking word frequency over time and over a few hundred documents is probably a simple enough problem to code yourself. - You will want to start with converting your documents to a format easy to process, like plain text ala your comment. Convert to lower case, drop punctuation, and then split each document into words. Start with a regular expression like /\b/, and then filter out numbers and obvious errors. - Next you will probably want to drop stop words. Here is a decent stop word list for English language sources. - Now count each occurrence of a word in each document. You will probably want to build a hashtable index, with the (non-stop) word as the key and the value as an integer count. - If you would like to get more sophisticated, you could pull out collocations by adding each preceding or succeeding $n-1$ words to your index. Or even run each word through a stemmer like this Ruby gem. - Lastly sort your words by index count. Here are your steps for the plain text document: The quick brown fox jumps over the quick, brown lazy dog. - Convert to lower case, and drop the punctuation: the quick brown fox jumps over the quick brown lazy dog - Split into words with /\b/, drop the words that are all whitespace: the; quick; brown; fox; jumps; over; the; quick; brown; lazy; dog - Now drop the stop words: quick; brown; fox; jumps; over; quick; brown; lazy; dog - Build your count index: quick=2; brown=2; fox=1; jumps=1; over=1; lazy=1; dog=1 - Add 2-gram collocations: quick-brown=2; quick=2; brown=2; brown-fox=1; fox=1; fox-jumps=1; jumps-over=1... - Stem the words, so jumps and jumped become just jump.
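The steps above can be sketched in a few lines of Python (the stop-word list here is reduced to just "the" so the counts match the worked example; stemming is left out):

```python
import re
from collections import Counter

STOP_WORDS = {"the"}   # tiny list, just enough to reproduce the worked example

def word_counts(text):
    """Lowercase, strip punctuation, drop stop words, count words and bigrams."""
    words = [w for w in re.findall(r"[a-z']+", text.lower())
             if w not in STOP_WORDS]
    counts = Counter(words)
    # 2-gram collocations over the stop-filtered word sequence
    counts.update("-".join(pair) for pair in zip(words, words[1:]))
    return counts

c = word_counts("The quick brown fox jumps over the quick, brown lazy dog.")
```

Running this reproduces the index from the worked example: quick=2, brown=2, quick-brown=2, and single counts for the rest.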
null
CC BY-SA 3.0
null
2011-06-16T20:54:49.750
2011-06-16T20:54:49.750
null
null
4942
null
12011
2
null
12007
1
null
You should read [Spreadsheet Addiction](http://www.burns-stat.com/pages/Tutor/spreadsheet_addiction.html) and the links from that page before trusting any results from Excel. From your question it appears that you don't have a firm grasp on what confidence intervals and prediction intervals are. You should really consult a good intro stats book, and/or take a class or meet with a consultant to get these concepts down. But here is a short explanation: The confidence interval is a statement about where we believe the true population parameter (the mean above) to be, based on the sample data. So not knowing the population mean does not mean that you cannot compute a confidence interval. If your sample is large and you are willing to assume that the population is not overly skewed or prone to outliers, then the Central Limit Theorem says that a confidence interval on the mean based on the assumption of a normal population will be a good approximation even if the population is not normal. So you can use normal-based theory without knowing if the population is normal, as long as you are willing to make the above assumptions. The prediction interval is a statement about where we expect future individual data points to be. This prediction will depend much more on the shape of the distribution. The big difference in concept is whether you are talking about the mean of all future data, or individual data points (I could not tell which you are interested in from the question). The norminv function in Excel does not fit a normal distribution, but gives the x-value for a given area under the curve (probability) for a normal with the specified mean and standard deviation. That function could be used as part of the computations to get either of the intervals, but that assumes that you know the population standard deviation; if you are using the sample standard deviation then it is more appropriate to use the t distribution rather than the normal.
Also note that the prediction interval takes into account the uncertainty in your estimate of the mean and standard deviation, in addition to the randomness of the individual data points, so norminv probably is not what you want.
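The two intervals differ only by an extra sqrt(1 + 1/n) factor on the prediction side. A Python sketch (the sample summaries are made up, and the t critical value for df = 47 is hardcoded at roughly 2.012 as an assumption):

```python
import math

n, mean, s = 48, 0.85, 0.20   # hypothetical summaries of 48 daily percent errors
t_crit = 2.012                 # approx. two-sided 95% t critical value, df = 47

ci_half = t_crit * s / math.sqrt(n)          # half-width of the CI for the mean
pi_half = t_crit * s * math.sqrt(1 + 1 / n)  # half-width of the PI for one new value
```

The prediction interval is always wider; here the ratio of half-widths is sqrt(n + 1), since the PI must cover a single noisy observation, not just the mean.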
null
CC BY-SA 3.0
null
2011-06-16T20:55:11.027
2011-06-16T20:55:11.027
null
null
4505
null
12012
2
null
12005
8
null
I think that you want `offset(log(population))` in your models above. The offset is just a term included in the model without estimating a coefficient for it (fixing the coefficient at 1). Since the standard transformation in poisson regression is the log, you can think of including the offset of log(population) as a rough equivalent (though mathematically better) of using log( cases/population ) as the response variable. So it is adjusting for differences in population sizes. This means that the intercept without any offset is predicting the average when log(population) is 0, or in other words, when you have a population of 1. The slope in the second model would then be the increase for a population of size 1. You could also use an offset like `offset(log(population/1000))` and then the interpretations would be for a population of size 1,000 (change the 1,000 to whatever value is meaningful for you); this makes it easier to visualize. For most models beyond the simplest it is often easier to interpret predictions from the model rather than individual coefficients. The Predict.Plot and TkPredict functions in the TeachingDemos package may help.
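The offset algebra in one line: with log(population) as an offset, the linear predictor is eta = b0 + log(pop), so the fitted count is pop * exp(b0), i.e. exp(b0) is a rate per unit of population. A quick check (Python, with an arbitrary hypothetical intercept):

```python
import math

b0 = -4.5                  # hypothetical intercept from a Poisson fit, for illustration
population = 55_000_000    # roughly the scale of the question's data

mu = math.exp(b0 + math.log(population))   # fitted count with the log offset
rate_per_person = math.exp(b0)             # what exp(intercept) alone measures
```

Since exp(b0 + log(pop)) = pop * exp(b0) identically, dividing the fitted count by the population recovers exp(intercept), the per-person rate.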
null
CC BY-SA 3.0
null
2011-06-16T21:05:02.923
2011-06-16T21:05:02.923
null
null
4505
null
12013
2
null
11989
4
null
As Aaron has pointed out, it is not quite clear what you are actually asking for. Here is an example of bars with confidence intervals, which I think does the job ![bars with confidence intervals](https://i.stack.imgur.com/S2egT.png) It was produced in R using the package `ggplot2` and the following code based on the example in the help page for `geom_errorbar` ``` library(ggplot2) df <- data.frame( trt = factor(c(1, 1, 2, 2)), resp = c(1, 5, 3, 4), group = factor(c(1, 2, 1, 2)), se = c(0.1, 0.3, 0.3, 0.2) ) # Define the top and bottom of the errorbars ci <- aes(ymax = resp + 1.96*se, ymin=resp - 1.96*se) p <- ggplot(df, aes(fill=group, y=resp, x=trt)) # Because the bars and errorbars have different widths # we need to specify how wide the objects we are dodging are dodge <- position_dodge(width=0.9) p + geom_bar(position=dodge) + geom_errorbar(ci, position=dodge, width=0.25) ```
null
CC BY-SA 3.0
null
2011-06-16T22:14:35.363
2011-06-16T22:14:35.363
null
null
2958
null
12014
2
null
11989
2
null
If you like to do it in MATLAB, you can use [errorbar](http://www.mathworks.com/help/techdoc/ref/errorbar.html). Alternatively, you may check the confidence interval plotting utilities in MATLAB file exchange. For instance, [errorb](http://www.mathworks.com/matlabcentral/fileexchange/27387-create-healthy-looking-error-bars) is a nice function to do different types of confidence interval plots. ``` clc, clear, close y=rand(1,5)+1; e=rand(1,5)/4; hold off; bar(y,'facecolor',[.8 .8 .8]); hold on; errorb(y,e); figure x=linspace(0,2*pi,8); y=sin(x); e=rand(1,8)/4; hold off; plot(x,y,'k','linewidth',2); hold on; errorb(x,y,e) figure values=abs(rand(2,3))+1; errors=rand(2,3)/1.5+0; errorb(values,errors,'top'); ``` ![Error plot 1](https://i.stack.imgur.com/arCLh.jpg) ![Error plot 2](https://i.stack.imgur.com/kbFW5.jpg) ![Error plot 3](https://i.stack.imgur.com/o1VUx.jpg) There are other alternatives in file exchange such as [#1](http://www.mathworks.com/matlabcentral/fileexchange/27485-boundedline-line-plots-with-shaded-errorconfidence-intervals), [#2](http://www.mathworks.com/matlabcentral/fileexchange/23116-confplott) and [#3](http://www.mathworks.com/matlabcentral/fileexchange/13103-plot-confidence-intervals).
null
CC BY-SA 3.0
null
2011-06-16T23:21:54.403
2011-06-16T23:21:54.403
null
null
5025
null
12015
1
12016
null
3
1275
Does anyone know of an implementation in R (or elsewhere) of a decision tree for censored outcomes? I would like to use a decision tree to discretize/bin continuous variables before a survival analysis in some sort of principled manner. As it stands, I am left with only a traditional decision tree using a binary target (event/no event), disregarding the censored nature of the data.
Decision tree for censored data
CC BY-SA 3.0
null
2011-06-16T23:55:26.110
2011-06-17T11:38:35.677
2011-06-17T11:38:35.677
2040
2040
[ "survival", "cart" ]
12016
2
null
12015
3
null
Have you checked the package `party`? I believe the function `ctree` handles censored data.
null
CC BY-SA 3.0
null
2011-06-17T00:47:19.203
2011-06-17T02:10:34.887
2011-06-17T02:10:34.887
5055
5055
null
12017
2
null
73
2
null
I use ggplot2, vegan and reshape quite often.
null
CC BY-SA 3.0
null
2011-06-17T00:57:13.403
2011-06-17T00:57:13.403
null
null
1050
null
12018
2
null
12015
5
null
Binning continuous variables goes against principle. And note that for recursive partitioning to be able to do all the thinking for you (find correct cutpoints assuming they exist, which is highly unlikely) requires upwards of 50,000 events in order to obtain a tree whose structure will be validated in other data. The motivation for binning in order to do any kind of analysis is unclear.
null
CC BY-SA 3.0
null
2011-06-17T01:00:59.820
2011-06-17T01:00:59.820
null
null
4253
null
12019
2
null
73
2
null
I like roxygen for its Curry() function.
null
CC BY-SA 3.0
null
2011-06-17T02:29:37.163
2011-06-17T02:29:37.163
null
null
3567
null
12020
1
12033
null
1
111
I am not a statistician and hope someone can point me in the right direction. I have some time series data grouped into three classes like this: ``` Time Period 1 Time Period 2 Time Period 3 ------------------------------------------------- [1,2,3,4,5,6...] [12,13,14,15] [17,3,1,3,4...] [1,3,5,6,8,9...] [6,8,7,9,6,4] [1,2,5,7,3,2...] [9,8,9,9,8,9...] [3,1,1,2,1,2] [7,8,9,9,9,8...] ``` The dots indicate that I have significantly more values for `Time Period 1` and `Time Period 3` than for `Time Period 2`. I am trying to define events of "interest". Interest is when there is a significant change in the transitions from `Time Period 1 to Time Period 2` and `Time Period 2 to Time Period 3`. Of course, there could be a knob for determining what I mean by significant. What I am looking for is a good metric that tells me if an event is of potential interest. Obviously, the average would not do well due to outliers, so all I could think of was the median, which seemed like a good one, i.e., if there is a significant change in the median from Time Period 1 to Time Period 2, then this transition is of interest. While this metric is working out for me, I am curious if there is a more structured/formal approach to deriving a metric that is better than the median. As of now, the problem formulation is open as well, so any suggestions/constructive criticisms are greatly appreciated.
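The median rule described above can be written down directly (a Python sketch using the rows from the table; the threshold of 5 is an arbitrary "knob", exactly the kind the question mentions):

```python
from statistics import median

def interesting(p1, p2, p3, threshold=5.0):
    """Flag a series as interesting if its median shifts by at least
    `threshold` across either transition (period 1->2 or period 2->3)."""
    m1, m2, m3 = median(p1), median(p2), median(p3)
    return abs(m2 - m1) >= threshold or abs(m3 - m2) >= threshold

rows = [
    ([1, 2, 3, 4, 5, 6], [12, 13, 14, 15], [17, 3, 1, 3, 4]),
    ([1, 3, 5, 6, 8, 9], [6, 8, 7, 9, 6, 4], [1, 2, 5, 7, 3, 2]),
    ([9, 8, 9, 9, 8, 9], [3, 1, 1, 2, 1, 2], [7, 8, 9, 9, 9, 8]),
]
flags = [interesting(*row) for row in rows]
```

On the table's rows this flags the first and third series (large median jumps across a transition) but not the second, whose medians barely move.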
What metric should I use to determine a significant effect?
CC BY-SA 3.0
null
2011-06-17T03:40:15.580
2011-08-16T14:25:30.763
2011-06-17T13:29:03.027
919
2164
[ "time-series", "multivariate-analysis", "dataset", "metric" ]
12021
1
12022
null
3
612
Two random variables are defined as [subindependent](http://en.wikipedia.org/wiki/Subindependence) if their covariance is zero--in other words, if they are [uncorrelated](http://en.wikipedia.org/wiki/Uncorrelated). The latter link notes that "not all uncorrelated variables are independent. For example, if $X$ is a continuous random variable uniformly distributed on $[−1, 1]$ and $Y = X^2$, then $X$ and $Y$ are uncorrelated even though $X$ determines $Y$ and a particular value of $Y$ can be produced by only one or two values of $X$." So subindependence, as you can guess from the name, is a weak form of independence. Soon after reading this, I was looking at [Pearson's chi-square test for independence](http://en.wikipedia.org/wiki/Pearson%27s_chi-square_test#Test_of_independence). The wikipedia page says that "for the test of independence, a chi-square probability of less than or equal to 0.05 (or the chi-square statistic being at or larger than the 0.05 critical point) is commonly interpreted by applied workers as justification for rejecting the null hypothesis that the row variable is independent of the column variable. The alternative hypothesis corresponds to the variables having an association or relationship where the structure of this relationship is not specified." A previous CV answer ([here](https://stats.stackexchange.com/questions/1562/what-dependence-is-implied-by-a-chi-square-test-for-independence)) also indicates that this is what the chi-square test is looking for. Now, how can you test a null hypothesis of independence by looking for a correlation but not define the lack of a correlation as indicative of independence? Granted, my skepticism is based on my reading of the Wikipedia pages for these concepts, which could easily be flawed. But it seems to me like Pearson's chi-square method must be testing for the lack of subindependence. Is there something wrong with my conclusions? Or, is this already common knowledge?
An inconsistency between the concept of "subindependence" and the chi-square test for independence?
CC BY-SA 3.0
null
2011-06-17T03:56:40.287
2011-06-17T19:18:16.560
2017-04-13T12:44:33.310
-1
2073
[ "correlation", "chi-squared-test", "independence" ]
12022
2
null
12021
2
null
Pearson's chi-squared test looks at how closely sample observations match a theoretical distribution. It works with discrete or categorised data. Taking your example and a sample of 1000, you might get something like ``` -1<=X<0 0<=X<=1 0 <=Y< 0.5 378 329 0.5<=Y<=1 142 151 ``` which has a chi-squared statistic of 1.8803 and 1 degree of freedom: this does not show a significant relationship. But if you categorise the $X$ data more finely you might get something like ``` -1<=X<-0.5 -0.5<=X<0 0<=X<0.5 0.5<=X<=1 0 <=Y< 0.5 102 276 232 97 0.5<=Y<=1 142 0 0 151 ``` and this time the chi-squared statistic is 428.3342 with 3 degrees of freedom, and is highly significant. So in some circumstances Pearson's chi-squared test can spot non-independent data even when it is uncorrelated. In this case, plotting $Y$ against $X$ would suggest the relationship more quickly.
null
CC BY-SA 3.0
null
2011-06-17T07:26:57.333
2011-06-17T19:18:16.560
2011-06-17T19:18:16.560
2958
2958
null
12023
1
null
null
1
1291
I just read: [http://www.r-statistics.com/2010/02/post-hoc-analysis-for-friedmans-test-r-code/](http://www.r-statistics.com/2010/02/post-hoc-analysis-for-friedmans-test-r-code/) Here is the example from the blog post: > Let’s make up a little story: let’s say we have three types of wine (A, B and C), and we would like to know which one is the best one (in a scale of 1 to 7). We asked 22 friends to taste each of the three wines (in a blind fold fashion), and then to give a grade of 1 till 7 (for example sake, let’s say we asked them to rate the wines 5 times each, and then averaged their results to give a number for a persons preference for each wine. This number which is now an average of several numbers, will not necessarily be an integer). Why let them rate the wine "5 times each"? This is just an arbitrary number. More importantly, how do you know that "5" is enough? How should you define "enough"? Is "4" or "2" also enough? Are there methods to quantify how good a sample size is? For my own problem, I have data points with a mean around 100 and a standard deviation of 300. This is averaged over 500 samples, but is that enough given such a huge variance?
How many samples is enough?
CC BY-SA 3.0
null
2011-06-17T08:57:09.077
2011-06-18T14:29:18.473
2011-06-17T11:24:47.397
5058
5058
[ "anova", "sample-size" ]
12024
2
null
11833
2
null
You may use CUR decomposition as an alternative to PCA. For CUR decomposition, you may refer to [1] or [2]. In CUR decomposition, C stands for the selected columns, R stands for the selected rows and U is the linking matrix. Let me paraphrase the intuition behind CUR decomposition as given in [1]: > Although the truncated SVD is widely used, the vectors $u_i$ and $v_i$ themselves may lack any meaning in terms of the field from which the data are drawn. For example, the eigenvector [(1/2)age − (1/√2)height + (1/2)income] being one of the significant uncorrelated “factors” or “features” from a dataset of people’s features, is not particularly informative or meaningful. The nice thing about CUR is that the basis columns are actual columns (or rows) of the data matrix, which makes them easier to interpret than the components from PCA (which uses the truncated SVD). The algorithm given in [1] is easy to implement, and you can play with it by changing the error threshold to get a different number of basis vectors. [1] M.W. Mahoney and P. Drineas, “CUR matrix decompositions for improved data analysis,” Proceedings of the National Academy of Sciences of the United States of America, vol. 106, Jan. 2009, pp. 697-702. [2] J. Sun, Y. Xie, H. Zhang, and C. Faloutsos, “Less is more: Compact matrix decomposition for large sparse graphs,” Proceedings of the Seventh SIAM International Conference on Data Mining, Citeseer, 2007, p. 366.
null
CC BY-SA 3.0
null
2011-06-17T09:03:35.980
2011-06-17T09:03:35.980
null
null
5025
null
12025
2
null
12023
3
null
Having them rate each wine five times is useful, especially when the wines are tasted blind, i.e. if the tasters don't know which wine they are drinking. This may help avoid bias due to the order of the tasting (by randomizing the 15 tastings) or different circumstances at the time of each tasting (maybe the first wine is tasted on a sunny day, the other on a day when the test person just got divorced). Whether 5 is/was 'enough' can only truly be assessed when there is an estimate of the variance in the grades given to each wine, and even then it will depend upon your goal. I don't understand how your own problem relates to the wine example ('the' mean and 'the' standard deviation? You already know this? Then why/what do you need to test?). If you clarify, I'll edit my answer.
null
CC BY-SA 3.0
null
2011-06-17T09:29:40.707
2011-06-17T09:29:40.707
null
null
4257
null
12026
1
12027
null
7
450
My data takes the form of a stream of events for each customer in my sample. For a given customer, the stream takes the form of a list of events over time: > At T1, customer C1 bought 1 unit of product X At T2, customer C2 bought 1 unit of product X At T3, customer C1 contacted customer service At T9, customer C1 bought 3 units of product Y, etc. I am trying to predict whether the customer will make another purchase in the next 3 months based on their previous history. Most approaches I have read about and experimented with involve propositionalization: that is, computing some summary statistics on the stream and feeding those into traditional decision trees or neural nets. For example, > Customer 1: Avg purchase prev month = $34, Avg time between purchases = 6 days, Time since last purchase = 25 days, Slope of purchase volume last 6 months = -0.45 Customer 2: Avg purchase prev month = $64, Avg time between purchases = 20 days, Time since last purchase = 5 days, Slope of purchase volume last 6 months = +0.05 etc While this has produced some useful models, I can't help but feel I'm losing a lot of information by using only summary statistics. Are there any machine learning techniques out there that are capable of learning from the streams themselves? Are there any good starter resources for developing a home-grown AI system that would be capable of building and updating a set of rules as new data comes in from the stream for each customer?
Machine learning for activity streams
CC BY-SA 3.0
null
2011-06-17T10:08:35.493
2011-06-17T13:15:16.227
null
null
5060
[ "time-series", "machine-learning", "churn" ]