Columns: Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags
10033
2
null
10030
2
null
First of all, the Kolmogorov-Smirnov test is probably incorrect for your situation. I assume you want to test whether the distribution of the number of bird sightings is uniform (constant across time). A chi-square goodness-of-fit test would be the simplest solution for that. Second, the R-produced value of 0.6 seems correct to me: at $x=2.5$, $F_a(x)=0.6$, while $F_b(x)=0$.
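If it helps, the chi-square goodness-of-fit test suggested here can be sketched in a few lines of Python. The sighting counts are invented, and `chi2_sf_even_df` is a hand-rolled helper valid only for an even number of degrees of freedom:

```python
import math

def chi2_sf_even_df(x, df):
    """Chi-square survival function for even df:
    P(X > x) = exp(-x/2) * sum_{i < df/2} (x/2)^i / i!"""
    k = df // 2
    return math.exp(-x / 2) * sum((x / 2) ** i / math.factorial(i)
                                  for i in range(k))

def uniform_gof(counts):
    """Pearson chi-square statistic for uniformity across time bins."""
    n = sum(counts)
    expected = n / len(counts)
    stat = sum((o - expected) ** 2 / expected for o in counts)
    df = len(counts) - 1
    return stat, df

# hypothetical bird-sighting counts in 5 equal time windows
sightings = [12, 15, 9, 14, 10]
stat, df = uniform_gof(sightings)
p_value = chi2_sf_even_df(stat, df)   # large p: no evidence against uniformity
```

A large p-value here means the sightings are consistent with a constant rate over time.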
null
CC BY-SA 3.0
null
2011-04-27T14:09:42.373
2011-04-27T14:09:42.373
null
null
279
null
10034
2
null
10031
1
null
If you are talking about a missing value in the response, there are many available texts on imputation with specific tailoring to regressions. I have not read it, but [Frank E. Harrell's text](http://books.google.com/books?hl=en&lr=&id=kfHrF-bVcvQC&oi=fnd&pg=PR7&dq=imputation%20multinomial%20logistic%20regression&ots=34FWS2bhA2&sig=3Fyi5kO0NVQ5Ipx5dCHt0zvm6NI#v=onepage&q&f=false) is tailored to logistic regressions and has a chapter on missing values.
null
CC BY-SA 3.0
null
2011-04-27T14:44:33.060
2011-04-27T14:44:33.060
null
null
3542
null
10035
2
null
10010
1
null
Perhaps [dynamic time warping](http://en.wikipedia.org/wiki/Dynamic_time_warping) is appropriate? Maybe you can use it to measure the distance between your "query" signals and your training signals (dtw actually calculates alignments between two signals, from which you can derive the distance). You'd then pick the signal from your training data that's the nearest neighbor to your query. There is a [dtw](http://cran.r-project.org/web/packages/dtw) package on CRAN that lets you do this in R, which also has a very nice [vignette](http://cran.r-project.org/web/packages/dtw/vignettes/dtw.pdf). The alignments are somewhat expensive to compute with dtw; there is a [Java implementation](http://code.google.com/p/fastdtw/) that calculates a fast approximation to the alignments if you want to pursue this further.
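As a rough illustration of what the `dtw` package computes, here is a minimal Python sketch of the classic dynamic-programming recurrence (the signals are invented; this is not the CRAN package's API):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance with absolute-difference local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = cost of the best alignment of a[:i] with b[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# nearest-neighbour pick among invented training signals
query = [0.0, 1.0, 2.0, 1.0, 0.0]
training = [[0.0, 1.0, 1.0, 2.0, 1.0, 0.0], [3.0, 3.0, 3.0]]
best = min(training, key=lambda s: dtw_distance(query, s))
```

Note that the first training signal is the query with one repeated sample, so its DTW distance is zero even though the lengths differ.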
null
CC BY-SA 3.0
null
2011-04-27T14:44:49.657
2011-04-27T14:44:49.657
null
null
227
null
10036
1
null
null
1
3422
I am using R to replicate a study and obtain mostly the same results the author reported. At one point, however, I calculate marginal effects that seem unrealistically small. I would greatly appreciate it if you could have a look at my reasoning and the code below and see if I am mistaken at one point or another.

My sample contains 24535 observations, the dependent variable `x028bin` is a binary variable taking on the values 0 and 1, and there are furthermore 10 explanatory variables. Nine of those independent variables have numeric levels; the independent variable `f025grouped` is a factor consisting of different religious denominations.

I would like to run a probit regression including dummies for religious denomination and then compute marginal effects. In order to do so, I first eliminate missing values and use cross-tabs between the dependent and independent variables to verify that there are no small or 0 cells. Then I run the probit model, which works fine, and I also obtain reasonable results:

```
probit4AKIE <- glm(x028bin ~ x003 + x003squ + x025secv2 + x025terv2 +
                   x007bin + x04chief + x011rec + a009bin + x045mod +
                   c001bin + f025grouped,
                   family = binomial(link = "probit"),
                   data = wvshm5red2delna, na.action = na.pass)
summary(probit4AKIE)
```

However, when calculating marginal effects with all variables at their means from the probit coefficients and a scale factor, the marginal effects I obtain are much too small (e.g. 2.6042e-78). The code looks like this:

```
# "f025grouped" appears 9 times because this factor consists of 9 levels
ttt <- cbind(wvshm5red2delna$x003, wvshm5red2delna$x003squ,
             wvshm5red2delna$x025secv2, wvshm5red2delna$x025terv2,
             wvshm5red2delna$x007bin, wvshm5red2delna$x04chief,
             wvshm5red2delna$x011rec, wvshm5red2delna$a009bin,
             wvshm5red2delna$x045mod, wvshm5red2delna$c001bin,
             wvshm5red2delna$f025grouped, wvshm5red2delna$f025grouped,
             wvshm5red2delna$f025grouped, wvshm5red2delna$f025grouped,
             wvshm5red2delna$f025grouped, wvshm5red2delna$f025grouped,
             wvshm5red2delna$f025grouped, wvshm5red2delna$f025grouped,
             wvshm5red2delna$f025grouped)
ttt <- as.data.frame(ttt)

# 1:19 are the positions of the variables in data frame ttt
xbar <- as.matrix(mean(cbind(1, ttt[1:19])))

betaprobit4AKIE <- probit4AKIE$coefficients
zxbar <- t(xbar) %*% betaprobit4AKIE
scalefactor <- dnorm(zxbar)

# 2:20 are the positions of the variables in the output of the probit model
# 'probit4AKIE' (variables need to be in the same ordering as in data frame
# ttt); the constant occupies the first position
marginprobit4AKIE <- scalefactor * betaprobit4AKIE[2:20]
marginprobit4AKIE  # in this step I obtain values that are much too small
```
R glm probit regression marginal effects
CC BY-SA 3.0
null
2011-04-27T14:46:36.657
2012-09-02T02:55:46.353
2012-09-02T02:55:46.353
3826
4348
[ "r", "generalized-linear-model" ]
10037
1
null
null
3
2475
Let $v$ be the value to be forecast for periods 1 through $T$, and $v_{t}$ its forecast at time $t$. We express $v_{t}$ as the sum of two terms: its mean at time $t$, and its deviation from that mean, $\epsilon_{t}$. In other words, $$ v_{t}= \overline{v_{t}} + \epsilon_{t} $$ The $\overline{v_{t}}$ are chosen based on the arguments. The $\epsilon_{t}$ term is assumed to be a normally distributed random variable with mean zero and standard deviation $\sigma(\epsilon_{t})=0.234$. A moving average formulation of order $q$, MA($q$), is chosen, where $q$ is the number of lagged terms in the moving average. We use the following moving average specification: $$\epsilon_{t} = \sum^{q}_{i=0}{\alpha_{i} \mu_{t-i}} $$ where the $\mu_{t-i}$ are independently distributed standard normal random variables. To ensure that the standard deviation of $\epsilon_{t}$ equals its pre-specified value, we set $$\alpha_{i}= \frac{\sigma(\epsilon_{t})}{\sqrt{q+1}}$$ Note that $\epsilon_{t}$ depends on $q+1$ random terms. This is the R code I have used for the above model:

```
q <- 31
iter <- 10000
for (i in 1:31) { alpha_i[i] <- 0.234 / sqrt(31 + 1) }
err_pk <- array(0, c(iter, 11))
for (i in 1:iter) {
  err_pk[i, ] <- arima.sim(list(order = c(0, 0, 31), ma = alpha_i),
                           n = 11, innov = rnorm(iter, 0, 1))
}
ffe <- array(0, 11)
for (i in 1:11) { ffe[i] <- median(err_pk[, i]) }
plot.ts(ffe)
```

I am wondering: is $\alpha$ changing through time? Also, I never get the same output as in figure 2 of [The end of world population growth, Nature, v412, 543](http://www.nature.com/nature/journal/v412/n6846/extref/412543a0_S1.htm). The parameters for the figure in the paper are: MA(30) (31 terms), $\sigma(\epsilon_{t})=0.234$, 31 initial values of $\mu=0$, 10,000 simulations. Am I missing anything?
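One quick observation on the "is $\alpha$ changing through time?" worry: $\alpha_{i} = \sigma(\epsilon_{t})/\sqrt{q+1}$ is the same constant for every lag $i$ and every $t$. A Python sketch of the written specification (my own translation, not the `arima.sim` call above) makes the variance bookkeeping explicit:

```python
import math
import random

def simulate_ma_errors(sigma, q, T, rng):
    """epsilon_t = sum_{i=0}^{q} alpha_i * mu_{t-i} with alpha_i = sigma/sqrt(q+1)
    and mu iid standard normal.  alpha_i is constant over lags and over time."""
    alpha = sigma / math.sqrt(q + 1)
    mu = [rng.gauss(0.0, 1.0) for _ in range(T + q)]
    # each epsilon_t is a flat-weighted sum of q+1 consecutive innovations
    return [alpha * sum(mu[t:t + q + 1]) for t in range(T)]

# variance check: Var(eps_t) = (q+1) * alpha^2 = sigma^2, exactly
sigma, q = 0.234, 30                      # MA(30): 31 terms, as in the paper
alpha = sigma / math.sqrt(q + 1)
assert abs((q + 1) * alpha ** 2 - sigma ** 2) < 1e-12

eps = simulate_ma_errors(sigma, q, T=11, rng=random.Random(42))
```

So the pre-specified standard deviation is recovered exactly by construction; any discrepancy with the figure must come from the simulation setup, not from $\alpha$ drifting.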
Fitting the moving average model
CC BY-SA 3.0
null
2011-04-27T14:57:23.747
2011-04-27T16:33:35.423
2011-04-27T16:33:35.423
null
3084
[ "r", "time-series", "arima" ]
10038
1
10040
null
4
2284
I am trying to discern the best way to calculate a correlation and perform a one-way ANOVA on data I am taking from a PostgreSQL database. - What tools should I use? - Can I do this using the SQL language itself? - Is there an easy way to export the data?
Conducting correlation and one way ANOVA using data from a PostgreSQL database
CC BY-SA 3.0
null
2011-04-27T14:58:52.013
2011-04-28T10:29:17.357
2011-04-28T10:29:17.357
183
1514
[ "correlation", "anova", "sql" ]
10039
2
null
9533
2
null
Would the nearest-neighbor distance distribution help? For each of the 150 observations, you have distances $d_1, d_2, \dots$ to its nearest, 2nd nearest, ... neighbors, and an averaged distance distribution, call it DD. A query point gives you the distribution $d_1 .. d_{150}$: compare that to DD. The metric between points is crucial, but I have no recipe. Try the fractional or near-Hamming metric $\sum |a_j - b_j|^q$ (with no outer $\frac{1}{q}$). For small $q$, say .1, this up-weights close matches in a few features, which makes sense; otherwise a sum of 500 terms is just normally distributed with no contrast at all / distance whiteout. (Yes, near-Hamming is not a norm, but it is a metric; it satisfies the triangle inequality.) Take a look at Omercevic et al., [High-dimensional feature matching: employing the concept of meaningful nearest neighbors](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.137.5973&rep=rep1&type=pdf) (2007, 8 pp.), who do this:

- find ~100 nearest neighbors to a query point
- fit $\lambda$ to exponential background noise
- weight the 100 neighbors: don't understand this bit, looks ad hoc
- pick ~10 outliers as "signal".

(However, they're matching 128-d SIFT vectors, whose distance distribution and noise model may be very different from yours.)
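The fractional near-Hamming metric is a one-liner; a Python sketch with invented feature vectors shows how small $q$ favors a few very close matches over uniform mediocrity:

```python
def fractional_distance(a, b, q=0.1):
    """sum_j |a_j - b_j|^q with 0 < q <= 1 and no outer 1/q power.
    Small q rewards near-exact agreement in a few coordinates."""
    return sum(abs(x - y) ** q for x, y in zip(a, b))

# invented vectors: b_few nearly matches a in two features but is far in the
# rest; b_all is uniformly off by 1 everywhere
a     = [0.0, 0.0, 5.0, 5.0]
b_few = [0.001, 0.001, 9.0, 9.0]
b_all = [1.0, 1.0, 6.0, 6.0]

# fractional metric prefers b_few, Euclidean distance prefers b_all
d_few = fractional_distance(a, b_few)
d_all = fractional_distance(a, b_all)
```

Under the squared Euclidean distance the ranking flips, which is exactly the "distance whiteout" contrast the answer is after.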
null
CC BY-SA 3.0
null
2011-04-27T15:10:30.253
2011-04-27T15:10:30.253
null
null
557
null
10040
2
null
10038
3
null
Although you can substitute the values into the ANOVA and correlation formulas yourself, and maybe even calculate everything using SQL, you probably want to use the ANOVA and correlation features of a statistical package. All major statistical packages (SAS, SPSS, Stata, R, Statistica, etc.) are able to connect to databases and fetch the data table with a simple SQL query. Besides direct connections, you can use files to transfer the data: export the data into a file and import it into the statistical software. The comma-separated values (CSV) format is often used, but many other formats may work well.
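The export-to-file route can be sketched with Python's standard library alone; here an in-memory SQLite table stands in for the PostgreSQL database, and the table and column names are invented:

```python
import csv
import io
import sqlite3

# a stand-in database table with a grouping column and a measurement
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE measurements (grp TEXT, value REAL)")
con.executemany("INSERT INTO measurements VALUES (?, ?)",
                [("a", 1.0), ("a", 2.0), ("b", 3.0)])

# a simple SQL query pulls the data table ...
rows = con.execute("SELECT grp, value FROM measurements").fetchall()

# ... and a CSV export gives a file any statistical package can import
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["grp", "value"])
writer.writerows(rows)
csv_text = buf.getvalue()
```

With a real database you would swap the `sqlite3` connection for a PostgreSQL driver and write `buf` to disk instead of a string buffer.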
null
CC BY-SA 3.0
null
2011-04-27T15:13:22.780
2011-04-27T15:13:22.780
null
null
3911
null
10041
2
null
10024
7
null
[This paper](http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4767478&tag=1) appears to prove convergence in a finite number of steps.
null
CC BY-SA 3.0
null
2011-04-27T15:27:37.043
2011-04-27T15:27:37.043
null
null
495
null
10042
2
null
9797
1
null
Here is a simple approach that does not use logistic regression, but does attempt to use the suggestions above. Calculation of summary stats assumes, perhaps naively, that the date is normally distributed. Please pardon the inelegant code.

- Write a function to estimate the day of budbreak for each individual: use the day of year halfway between the last observation of 0 and the first observation of 1 for each individual.

```
budburst.day <- function(i){
  data.subset <- subset(testdata, subset = id == i, na.rm = TRUE)
  y1 <- data.subset$day[max(which(data.subset$obs == 0))]
  y2 <- data.subset$day[min(which(data.subset$obs == 1))]
  y <- mean(c(y1, y2), na.rm = TRUE)
  if (is.na(y) | y < 0 | y > 180) y <- NA
  return(y)
}
```

- Calculate summary statistics:

```
# calculate mean
mean(unlist(lapply(1:4, budburst.day)))
[1] 16.125
# calculate SE = sd/sqrt(n)
sd(unlist(lapply(1:4, budburst.day)))/2
[1] 5.06777
```
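For what it's worth, the midpoint rule used in `budburst.day` is easy to state language-independently; here is a Python equivalent with invented observations:

```python
def budburst_day(days, obs, max_day=180):
    """Midpoint between the last day with obs == 0 and the first with obs == 1.
    Returns None when either side is missing or the result is implausible."""
    zeros = [d for d, o in zip(days, obs) if o == 0]
    ones = [d for d, o in zip(days, obs) if o == 1]
    if not zeros or not ones:
        return None
    day = (max(zeros) + min(ones)) / 2
    return day if 0 <= day <= max_day else None

# one individual observed on days 5, 10, 15, 20
print(budburst_day([5, 10, 15, 20], [0, 0, 1, 1]))  # -> 12.5
```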
null
CC BY-SA 3.0
null
2011-04-27T16:02:11.710
2011-04-27T16:02:11.710
null
null
1381
null
10043
2
null
10038
3
null
For simple correlation, I would use the statistical functions which are directly available in PostgreSQL, e.g.:

```
SELECT CORR(x, y) FROM YourTable;
```

For ANOVA it is a bit tougher. You might be better off transforming your problem to one involving regression and using the regression formulas. [http://www.postgresql.org/docs/8.2/static/functions-aggregate.html](http://www.postgresql.org/docs/8.2/static/functions-aggregate.html)
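The "ANOVA via regression" route boils down to the usual one-way F statistic; a small Python sketch (group data invented) shows the computation you would be reproducing in SQL:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group over within-group mean square."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    # between-group sum of squares (group sizes times squared mean deviations)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-group sum of squares (deviations from each group's own mean)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

print(one_way_anova_f([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]]))  # -> 1.5
```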
null
CC BY-SA 3.0
null
2011-04-27T16:02:20.230
2011-04-27T16:02:20.230
null
null
3489
null
10044
1
null
null
8
984
I have 100 patients and each patient has 10 longitudinal serum creatinine measurements. The estimated glomerular filtration rates (eGFR) were calculated from the MDRD formula comprising gender, age and serum creatinine. eGFR is the dependent variable and time is the independent variable in a linear regression for each patient. - Do linear regressions violate the "independent X's" assumption, and should linear mixed models be used instead? - Can eGFR slopes (which are estimates with uncertainties rather than measured numbers) estimated for each patient (in linear regressions for each patient, or in random-effects mixed models [how to estimate slopes for each individual patient in mixed models?]) be used as independent or dependent variables in other regression models? Thank you.
Can slopes in linear regressions be used as the independent or dependent variables in other regression models?
CC BY-SA 3.0
null
2011-04-27T16:27:36.533
2011-04-29T04:59:18.027
2011-04-27T18:31:47.960
71
4349
[ "regression", "mixed-model", "repeated-measures", "panel-data" ]
10045
1
10047
null
16
1851
I have hourly and daily temperature reports for many stations at [http://data.barrycarter.info/](http://data.barrycarter.info/) I encourage people to download it, but, at 6.6G, it uses up a lot of bandwidth. Is there a service that hosts "public interest" data for free? I know about [http://aws.amazon.com/publicdatasets](http://aws.amazon.com/publicdatasets), but you need an Amazon EC2 account to access that data.
Free public interest data hosting?
CC BY-SA 3.0
null
2011-04-27T16:39:04.570
2015-10-09T10:00:14.350
null
null
null
[ "dataset" ]
10046
2
null
10024
4
null
The $k$-means objective function strictly decreases with each change of assignment, which automatically implies convergence without cycling. Moreover, the partitions produced in each step of $k$-means satisfy a "Voronoi property" in that each point is always assigned to its nearest center. This implies an upper bound on the total number of possible partitions, which yields a finite upper bound on the termination time for $k$-means.
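The monotone-objective argument can be watched numerically. Below is a minimal Lloyd's-algorithm sketch in Python on invented 1-D data, recording the within-cluster sum of squares after every assignment step:

```python
def kmeans_objective_trace(points, centers, iters=10):
    """Run Lloyd's algorithm on 1-D data, returning the within-cluster
    sum of squared distances after each assignment step."""
    trace = []
    for _ in range(iters):
        # assignment step: each point goes to its nearest center
        clusters = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)), key=lambda j: (p - centers[j]) ** 2)
            clusters[j].append(p)
        trace.append(sum((p - centers[j]) ** 2
                         for j, cl in enumerate(clusters) for p in cl))
        # update step: each center moves to its cluster mean
        centers = [sum(cl) / len(cl) if cl else c
                   for cl, c in zip(clusters, centers)]
    return trace

trace = kmeans_objective_trace([0.0, 1.0, 9.0, 10.0], centers=[0.5, 2.0])
# the objective never increases, so the algorithm cannot cycle
assert all(a >= b for a, b in zip(trace, trace[1:]))
```

Since there are only finitely many partitions and the objective cannot revisit a value, the trace must become constant, which is exactly the termination argument.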
null
CC BY-SA 3.0
null
2011-04-27T16:44:37.730
2011-04-27T16:44:37.730
null
null
139
null
10047
2
null
10045
7
null
I did a quick search for google projects that may fit your needs, and I came up with two hits, which I have not tested: [Google Fusion Tables](https://www.google.com/accounts/ServiceLogin?service=fusiontables&passive=1209600&continue=http://www.google.com/fusiontables/Home&followup=http://www.google.com/fusiontables/Home) and [Google Public Data](http://www.google.com/publicdata/admin)
null
CC BY-SA 3.0
null
2011-04-27T17:23:10.253
2011-04-27T17:23:10.253
null
null
3542
null
10049
1
null
null
7
3348
I am working on the BRFSS dataset with the goal of predicting diabetes. The dataset has 500,000 rows and 405 columns. It is a 0/1 classification problem, and the ratio of 0 to 1 is 90:10. I tried decision trees, logistic regression, and an ensemble of decision trees and logistic regression, and my misclassification rate is almost 14% with all of these methods. - What should I do to increase the accuracy? I saw an earlier [post](https://stats.stackexchange.com/questions/9398/supervised-learning-with-rare-events-when-rarity-is-due-to-the-large-number-of) which says subsampling or assigning different weights helps. But I am not sure about the ratio. - What would be the best ratio to start off with? - I am working in SAS. Is there a way to do subsampling in SAS? - I am also interested in trying out the weighted approach. Is there a way to implement this in SAS? EDIT (28 Apr 2011): I tried subsampling and my misclassification rate went up from 14% to 23%. The ratio I used was 50:50 for classes 0 and 1. The original ratio in the data was 90:10, and using the data as it is gave a 14% error. So I believe subsampling doesn't work for my data. Would you suggest any other way to improve accuracy?
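Outside SAS, the 50:50 subsampling experiment described in the edit amounts to keeping all minority cases plus a random subset of the majority class; a Python sketch with synthetic labels (the `ratio` knob is my own naming):

```python
import random

def undersample(labels, ratio=1.0, seed=0):
    """Return row indices keeping every minority (1) case and a random subset
    of the majority (0) class, at `ratio` majority cases per minority case."""
    rng = random.Random(seed)
    minority = [i for i, y in enumerate(labels) if y == 1]
    majority = [i for i, y in enumerate(labels) if y == 0]
    keep = rng.sample(majority, min(len(majority), int(ratio * len(minority))))
    return sorted(minority + keep)

labels = [1] * 100 + [0] * 900          # the 90:10 problem in miniature
idx = undersample(labels, ratio=1.0)    # a 50:50 training subset
```

Intermediate ratios (e.g. `ratio=2.0` for 2:1) are worth trying before jumping all the way to 50:50.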
Improving accuracy of a binary classification when the target is unbalanced
CC BY-SA 3.0
null
2011-04-27T18:37:21.917
2013-01-06T18:24:58.067
2017-04-13T12:44:33.977
-1
3897
[ "machine-learning", "sas" ]
10050
2
null
10045
2
null
You should also take a look at [Infochimps](http://www.infochimps.com/). I've never used the site personally, but it's designed for precisely this.
null
CC BY-SA 3.0
null
2011-04-27T18:44:38.870
2011-04-27T18:44:38.870
null
null
71
null
10051
1
10061
null
4
894
I am looking for a repeated measures version of the [Logrank test](http://en.wikipedia.org/wiki/Logrank_test). If I am correct, I am looking for an equivalent of the Friedman test for survival data. Any suggestions on where to look? (and R code will always be welcomed :) ) Thanks.
Is there a repeated measures aware version of the logrank test?
CC BY-SA 3.0
null
2011-04-27T18:45:15.907
2011-04-27T22:56:56.187
2011-04-27T22:56:56.187
null
253
[ "r", "nonparametric", "survival", "logrank-test" ]
10052
2
null
10044
5
null
In effect, you are proposing to use linear regression as a mathematical procedure to condense a 10-variate observation into a single variable (the slope). As such it's just another example of similar procedures like (say) using an average of repeated measurements as a regression variable or including principal components scores in a regression. Specific comments follow. (1) Linear regression does not require the X's (independent variables) to be "independent." Indeed, in the standard formulation the concept of independence does not even apply because the X's are fixed values, not realizations of a random variable. (2) Yes, you can use the slopes as dependent variables. It would help to establish that they might behave like the dependent variable in linear regression. For ordinary least squares this means that a. Slopes may depend on some of the patient attributes. b. The dependence is approximately linear, at least for the range of observed patient attributes. c. Any variation between an observed slope and the hypothesized slope can be considered random. d. This random variation is (i) independent from patient to patient and (ii) has approximately the same distribution from patient to patient. e. As before, the independent variables are not viewed as random but as fixed and measured without appreciable error. If all these conditions approximately hold, you should be ok. Violations of (d) or (e) can be cured by using generalizations of ordinary least squares. (2'). Because the slopes will exhibit uncertainty (as measured in the regression used to estimate the slopes), they might not be good candidates for independent variables unless you are treating them as random in a mixed model or are using an errors-in-variables model. You can also cope with this situation by means of a [hierarchical Bayes model](http://en.wikipedia.org/wiki/Hierarchical_Bayes_model).
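The two-stage procedure under discussion — estimate a slope per patient, then model the slopes — can be sketched as follows. The data are synthetic and noise-free, so the second stage recovers the built-in coefficient exactly; this illustrates only the mechanics, not the slope-uncertainty issue raised in (2'):

```python
def ols_slope(x, y):
    """Least-squares slope of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# stage 1: one slope per patient from 10 longitudinal eGFR-like measurements
times = list(range(10))
ages = [40, 50, 60, 70]                 # invented patient-level covariate
slopes = []
for age in ages:
    true_slope = -0.1 * age             # built-in relationship, no noise
    egfr = [100 + true_slope * t for t in times]
    slopes.append(ols_slope(times, egfr))

# stage 2: regress the per-patient slopes on the patient-level covariate
stage2_coef = ols_slope(ages, slopes)   # recovers -0.1
```

With real data the stage-1 slopes carry estimation error, which is precisely why a mixed model (which fits both stages jointly) is usually preferable.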
null
CC BY-SA 3.0
null
2011-04-27T18:54:41.967
2011-04-27T18:54:41.967
null
null
919
null
10053
1
null
null
11
12983
I am attempting to test the goodness of fit for a vector of count data to a binomial. To do so I am using the `goodfit()` function in the `vcd` package. When I run the function, however, it returns `NaN` for the p-value of the Chi-squared test. In my setup, I have a vector of count data with 75 elements.

```
> library(vcd)
> counts <- c(32, 35, 44, 35, 41, 33, 42, 49, 36, 41, 42, 45, 38, 43, 36,
              35, 40, 40, 43, 34, 39, 31, 40, 39, 36, 37, 37, 37, 32, 48,
              41, 32, 37, 36, 49, 37, 41, 36, 34, 37, 41, 32, 36, 36, 30,
              33, 33, 42, 39, 36, 36, 29, 31, 41, 36, 39, 40, 37, 39, 39,
              31, 39, 37, 40, 33, 41, 34, 46, 35, 41, 44, 38, 44, 34, 42)
> test.gof <- goodfit(counts, type="binomial",
+                     par=list(size=length(counts), prob=0.5))
```

Everything works fine, but when I inspect the `goodfit()` object I get the following:

```
> summary(test.gof)

	 Goodness-of-fit test for binomial distribution

                      X^2 df  P(> X^2)
Pearson               NaN 75       NaN
Likelihood Ratio 21.48322 19 0.3107244

Warning message:
In summary.goodfit(test.gof) : Chi-squared approximation may be incorrect
```

I suspected it was a small sample size issue at first, but I also have a data set with 50 observations that does not return `NaN` for the p-value. I have also tried switching the method in `goodfit()` to ML, with similar results. Why would this function produce `NaN` in this case? Is there an alternative function to calculate GOF on count data?
NaN p-value when using R's goodfit on binomial data
CC BY-SA 3.0
null
2011-04-27T19:10:24.630
2011-04-28T22:50:54.383
2011-04-28T22:50:54.383
null
302
[ "binomial-distribution", "chi-squared-test", "goodness-of-fit" ]
10054
2
null
10049
1
null
Regarding decision trees, I would suggest the following. Assume that you have 10 training examples from class $C_1$ and 90 training examples from class $C_2$. You can use an ensemble of $N$ decision trees, where each tree is trained on 10 examples from $C_1$ and 10 randomly chosen examples from $C_2$. The decision of the ensemble may be the majority vote. You can play with different $N$ to see how it works.
null
CC BY-SA 3.0
null
2011-04-27T19:25:02.690
2011-04-28T02:49:11.433
2011-04-28T02:49:11.433
4337
4337
null
10055
1
null
null
0
260
I have a table I want to convert into a graph (bar graph or line graph). The first column has fixed values. Twenty different values are simulated for these fixed values and kept in the next columns. I want to plot a graph of the fixed column against all the different simulated columns. How do I go about it? I am using R 2.12.1.

```
> bygrace
   V1  V2  V3  V4 V5
  100  16  11  -6  1
  120 -17 -12   7 -2
  140  18  13  -8  3
  150 -19 -14   9 -4
  210  20  15 -10 -5
```

Actually, my table looks like the one above. The first column, V1, represents premiums charged by an insurance company, say last year (for 5 policyholders/policies). After critical analysis of the portfolio, the company decides to increase or decrease the premium amount for some of the customers. However, the company does not know exactly by how much it should increase or decrease the premiums, for fear that it would lose customers. For this reason, four different scenarios/simulations are considered, which are represented by the next four columns (V2 through V5) respectively. Now the task is to plot the premium amounts in V1 against these four different scenarios (bar graph or line graph; I think bar will be better). And my question is: can this be done on one graph/at once? If yes, how should I go about it? Or do I have to plot the premium against each column separately? In fact, I have spent the whole of yesterday and today on this but have not been able to get the desired result. Someone gave me an answer to try, and I am going to do that because it has given me an idea and I am very thankful! Many thanks to everyone for their help.

Owusu Isaac
How to convert a table into a graph in R
CC BY-SA 3.0
0
2011-04-27T19:37:31.773
2011-04-27T19:37:31.773
null
null
4340
[ "graph-theory" ]
10056
2
null
10045
5
null
How about the [UCI Machine Learning Repository](http://archive.ics.uci.edu/ml/index.html)? Here is their [data donation policy](http://archive.ics.uci.edu/ml/donation_policy.html).
null
CC BY-SA 3.0
null
2011-04-27T20:16:08.833
2011-04-27T20:16:08.833
null
null
4325
null
10057
1
null
null
1
95
Many academic papers focusing on statistical learning in the applied setting of finance train a model such that the parameter set B defines the relationship y(t) = B*x(t) + e(t). Here y could be a 0-1 coded response for an increase (or not) of the asset price at time t. Ignoring the details of the model, my question is as follows: given that you are predicting the response at the SAME time that you record your features, how would you make use of this model? Intuitively I think: OK, my input space says we will see an increase in y, but since they occur at the same time, I can actually go and see what y has in fact done. Do I take the view of my model to be the truth and create a trading signal based on this information (for example)? Any practical insight would be greatly appreciated.
Classification in an applied financial setting
CC BY-SA 3.0
null
2011-04-27T20:45:01.373
2011-05-27T20:50:51.843
null
null
4352
[ "classification" ]
10058
1
10321
null
7
629
I need a reality check, if you will. I have a data set where I know how many individual butterflies of two species co-occur at one meadow (not always, though). I have additional variables, for instance wet/dry meadow, intensely cultivated/not cultivated, percent of area around the meadow covered by wet or dry meadows... This is the head of the dataset (50 rows in total, 25 per species). Notice that all columns are identical except count and species, indicating that they come from the same sampling location.

```
> head(dej)
   count     type1 type2 perc.for.100m perc.dry.100m perc.wet.100m species
1      1 intensive   dry        13.836        22.724         0.000   reali
2      3 extensive   wet         6.877         1.613        52.213   reali
3      4 intensive   wet        22.770         0.537        44.901   reali
4      6 intensive   dry        17.346        42.322         6.359   reali
5      1 extensive   wet        34.854         9.091        11.950   reali
6      2 extensive   dry        50.387        19.245         0.000   reali
...
26     0 intensive   dry        13.836        22.724         0.000 sinapis
27     0 extensive   wet         6.877         1.613        52.213 sinapis
28     0 intensive   wet        22.770         0.537        44.901 sinapis
29     0 intensive   dry        17.346        42.322         6.359 sinapis
30     1 extensive   wet        34.854         9.091        11.950 sinapis
31     1 extensive   dry        50.387        19.245         0.000 sinapis
...
```

I'm interested in knowing if any of these variables influence the species and their respective counts. And this is the result of the "full" model:

```
glm(formula = count ~ type1 + type2 + perc.for.100m + perc.dry.100m +
    perc.wet.100m + species, family = poisson, data = dej)

Deviance Residuals:
    Min       1Q   Median       3Q      Max
-2.8458  -1.1414  -0.4546   0.8297   2.2145

Coefficients:
                Estimate Std. Error z value Pr(>|z|)
(Intercept)     0.028129   0.523509   0.054  0.95715
type1intensive  0.196699   0.191960   1.025  0.30551
type2wet        0.071841   0.334286   0.215  0.82984
perc.for.100m   0.003741   0.008277   0.452  0.65130
perc.dry.100m   0.010952   0.010750   1.019  0.30829
perc.wet.100m   0.007467   0.011596   0.644  0.51960
speciessinapis  0.597837   0.187689   3.185  0.00145 **
```

Does this sound like the correct approach at all?

Some additional information: as a side note, based on my exploration of the data, I would expect the count to depend (at least) on the type2 variable; alas, that's not what I got.

![enter image description here](https://i.stack.imgur.com/9sYc5.jpg)

Using the "reverse logic", I tried whether species can be predicted using my data, which presumably confirms the above results.

```
Call:
glm(formula = species ~ type1 + type2 + perc.for.100m + perc.dry.100m +
    perc.wet.100m + count, family = binomial, data = dej)

Deviance Residuals:
    Min       1Q   Median       3Q      Max
-1.6322  -1.0136  -0.1568   1.0592   1.6407

Coefficients:
                Estimate Std. Error z value Pr(>|z|)
(Intercept)    -0.351192   1.658052  -0.212   0.8323
type1intensive -0.170583   0.651611  -0.262   0.7935
type2wet       -0.107377   1.078726  -0.100   0.9207
perc.for.100m  -0.002806   0.026807  -0.105   0.9166
perc.dry.100m  -0.010227   0.036982  -0.277   0.7821
perc.wet.100m  -0.006486   0.038071  -0.170   0.8647
count           0.345036   0.153811   2.243   0.0249 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 69.315  on 49  degrees of freedom
Residual deviance: 63.431  on 43  degrees of freedom
AIC: 77.431
```

EDIT 1

Aniko noticed that there may be an interaction between type2 and species. Indeed!

```
Call:
glm(formula = count ~ type1 + type2 * species + perc.for.100m +
    perc.dry.100m + perc.wet.100m, family = poisson, data = dej)

Deviance Residuals:
    Min       1Q   Median       3Q      Max
-3.0859  -1.1350  -0.1947   0.7109   2.7470

Coefficients:
                         Estimate Std. Error z value Pr(>|z|)
(Intercept)             -0.357165   0.559987  -0.638  0.52360
type1intensive           0.196699   0.191960   1.025  0.30551
type2wet                 0.704769   0.429087   1.642  0.10049
speciessinapis           1.145132   0.306847   3.732  0.00019 ***
perc.for.100m            0.003741   0.008277   0.452  0.65130
perc.dry.100m            0.010952   0.010750   1.019  0.30829
perc.wet.100m            0.007467   0.011596   0.644  0.51960
type2wet:speciessinapis -0.962811   0.394038  -2.443  0.01455 *
```

EDIT 2

After removing the non-significant terms (assuming, for the sake of data dredging, that I found the global maximum), the story gets another twist in the right direction.

```
Call:
glm(formula = count ~ type2 * species, family = poisson, data = dej)

Deviance Residuals:
    Min       1Q   Median       3Q      Max
-2.7080  -1.1617  -0.1582   0.6979   3.1599

Coefficients:
                        Estimate Std. Error z value Pr(>|z|)
(Intercept)               0.1542     0.2673   0.577  0.56408
type2wet                  0.6821     0.3237   2.107  0.03508 *
speciessinapis            1.1451     0.3068   3.732  0.00019 ***
type2wet:speciessinapis  -0.9628     0.3940  -2.443  0.01455 *
```
Reality check using GLM
CC BY-SA 3.0
null
2011-04-27T20:45:05.570
2013-07-17T17:52:18.233
2013-07-17T17:52:18.233
6029
144
[ "r", "generalized-linear-model" ]
10059
1
10170
null
8
12138
- Is it possible to do 2-stage cluster analysis in R? - Can anybody provide me with resources on it?
Two-stage clustering in R
CC BY-SA 3.0
null
2011-04-27T20:48:00.120
2019-10-28T02:26:16.910
2011-04-28T02:14:29.323
183
4278
[ "r", "clustering" ]
10060
2
null
10057
1
null
If you want to predict the future, and then use that information to profit, you need to build a model like this: `Next(y)~x` or `y(t+1)=B*x(t)+e(t)` i.e. specify your problem as "what will happen tomorrow" not "what did happen today." Look at the [quantmod](http://cran.r-project.org/web/packages/quantmod/index.html) package for R for more examples.
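The re-specification is just a one-step shift of the target series; a Python sketch with an invented price series shows the feature/response alignment:

```python
def make_lagged_pairs(x, y):
    """Pair today's features x[t] with tomorrow's response y[t+1],
    so the fitted model answers 'what will happen tomorrow'."""
    return [(xi, yi) for xi, yi in zip(x[:-1], y[1:])]

prices = [100.0, 101.0, 100.5, 102.0]
# 0/1 response: did the price rise from t-1 to t?  (undefined at t = 0)
up = [None] + [int(b > a) for a, b in zip(prices, prices[1:])]
features = prices                      # stand-in feature: today's price
pairs = make_lagged_pairs(features, up)
```

Each pair is (information available at time t, outcome realised at t+1), which is the alignment a tradable model needs.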
null
CC BY-SA 3.0
null
2011-04-27T20:49:05.533
2011-04-27T20:49:05.533
null
null
2817
null
10061
2
null
10051
2
null
You can probably use a so-called marginal survival model. It would be more like Cox-regression than a log-rank test, i.e. proportional hazards for the effect of task would be assumed. It is implemented in the `survival` package: ``` mod <- coxph(Surv(time, censor) ~ task + cluster(id)) ``` Here `task` would be a factor representing the task, and `id` would identify the subject. There is a book that goes into lots of detail on multivariate survival data: T.Therneau, P. Grambsch. Modeling Survival Data: Extending the Cox Model. Springer, 2000.
null
CC BY-SA 3.0
null
2011-04-27T21:08:56.193
2011-04-27T21:08:56.193
null
null
279
null
10062
1
10070
null
9
5342
I would like to fit a mixture model to Monte Carlo generated data with probability densities which typically look like those in the attached image. ![typical densities](https://i.stack.imgur.com/TQw1z.png) It would seem from visual inspection that a normal mixture model might be applicable but on checking the CRAN task view I really do not know which package might be appropriate for my needs. Basically what I would like to do is supply a vector of the data and then have the package function return the mean, variance and proportional weights for each component in the mixture model, and also perhaps identify how many components there are in the model.
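On CRAN, packages such as `mclust` and `mixtools` fit exactly this kind of normal mixture and return the component means, variances and weights (and `mclust` can also select the number of components by BIC). To show the mechanics, here is a bare-bones two-component EM sketch in Python on synthetic data; a real analysis would also need model selection for the number of components:

```python
import math
import random

def normal_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_two_gaussians(data, iters=200):
    """Bare-bones EM for a two-component 1-D normal mixture.
    Returns (weights, means, variances)."""
    w = [0.5, 0.5]
    mu = [min(data), max(data)]                 # crude initialisation
    var = [1.0, 1.0]
    for _ in range(iters):
        # E step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] * normal_pdf(x, mu[k], var[k]) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M step: re-estimate weights, means, variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)          # guard against collapse
    return w, mu, var

rng = random.Random(1)
data = ([rng.gauss(0.0, 1.0) for _ in range(300)]
        + [rng.gauss(5.0, 1.0) for _ in range(300)])
w, mu, var = em_two_gaussians(data)             # recovers means near 0 and 5
```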
Which R package to use to calculate component parameters for a mixture model
CC BY-SA 3.0
null
2011-04-27T21:10:13.713
2011-09-24T02:16:05.507
2011-05-11T13:29:30.453
919
226
[ "r", "mixed-model" ]
10063
1
null
null
2
583
I am attempting to test the goodness of fit for a vector of count data to a binomial. To do so I am using the `goodfit()` function in the `vcd` package. When I run the function, however, it returns `NaN` for the p-value of the Chi-squared test. In my setup, I have a vector of count data with 75 elements.

```
> library(vcd)
> counts
 [1] 32 35 44 35 41 33 42 49 36 41 42 45 38 43 36 35 40 40 43 34 39 31 40 39 36 37 37 37 32 48 41 32 37 36 49 37 41 36 34 37 41 32 36 36 30 33 33 42 39 36 36 29 31
[54] 41 36 39 40 37 39 39 31 39 37 40 33 41 34 46 35 41 44 38 44 34 42
> test.gof <- goodfit(counts, type="binomial",
+                     par=list(size=length(counts), prob=0.5))
```

Everything works fine, but when I inspect the `goodfit()` object I get the following:

```
> summary(test.gof)

	 Goodness-of-fit test for binomial distribution

                      X^2 df  P(> X^2)
Pearson               NaN 75       NaN
Likelihood Ratio 21.48322 19 0.3107244

Warning message:
In summary.goodfit(test.gof) : Chi-squared approximation may be incorrect
```

I suspected it was a small sample size issue at first, but I also have a data set with 50 observations that does not return `NaN` for the p-value. I have also tried switching the method in `goodfit()` to ML, with similar results. Why would this function produce `NaN` in this case? Is there an alternative function to calculate GOF on count data?

Data used:

```
counts <- c(32, 35, 44, 35, 41, 33, 42, 49, 36, 41, 42, 45, 38, 43, 36,
            35, 40, 40, 43, 34, 39, 31, 40, 39, 36, 37, 37, 37, 32, 48,
            41, 32, 37, 36, 49, 37, 41, 36, 34, 37, 41, 32, 36, 36, 30,
            33, 33, 42, 39, 36, 36, 29, 31, 41, 36, 39, 40, 37, 39, 39,
            31, 39, 37, 40, 33, 41, 34, 46, 35, 41, 44, 38, 44, 34, 42)
```
p-value NaN when using goodfit() on binomial data
CC BY-SA 3.0
0
2011-04-27T18:17:13.837
2011-04-27T22:04:50.433
null
null
302
[ "r" ]
10064
2
null
10053
7
null
You have zero frequencies in observed counts. That explains `NaN`s in your data. If you look at `test.gof` object, you'll see that: ``` table(test.gof$observed) 0 1 2 3 4 5 7 8 10 56 5 3 2 5 1 1 2 1 ``` you have 56 zeros. Anyway, IMHO this question is for [http://stats.stackexchange.com](http://stats.stackexchange.com).
null
CC BY-SA 3.0
null
2011-04-27T18:54:14.177
2011-04-27T18:54:14.177
null
null
1356
null
10065
1
null
null
3
578
I have a collection of $10^5$ essays, each of which on average contains $10^3$ distinct words. There are $10^6$ distinct words in the entire collection. If I index every word, what are the mean and median sizes of the inverted index lists? My guess is that the median would be 1, but I have no clue how I can calculate the harmonic mean without the distribution parameters. Can anybody help? UPDATE: I should have mentioned this before: I am talking about the English language.
Mean and median in Zipf's distribution
CC BY-SA 3.0
null
2011-04-27T22:17:42.550
2013-04-27T09:12:29.140
2011-04-28T07:10:04.853
1371
1371
[ "self-study", "mean", "median", "zipf" ]
10066
1
null
null
2
1617
I am trying to perform a canonical correlation analysis to investigate the relationship between attitudes (14 variables), perceived consumer effectiveness (6 variables) AND intention to dine (DV; 14 variables). However, when SPSS generates the MANOVA output there are no tables on the redundancy index, and as far as I understand this index is important to report, as it explains how well the IVs predict the variance in the DV. This is the script that I am using for the test: ``` manova Attitudes PCE with Intention / discrim all alpha(1) / print=sig(eigen dim). ``` Thank you! Oksana
How to get a redundancy index when performing canonical correlation analysis in SPSS?
CC BY-SA 3.0
null
2011-04-27T22:40:23.897
2011-04-28T09:12:41.800
2011-04-28T09:12:41.800
930
4356
[ "correlation", "multivariate-analysis", "spss" ]
10067
1
99399
null
5
1650
I am a software engineer by trade doing stats in my free time. I am playing around with an implementation of [Microsoft's TrueSkill](http://research.microsoft.com/en-us/projects/trueskill/) rating system for ranking players and openings from a data set of ~700,000 Dominion games. I want to measure the log loss of the predictions that the system gives for multiplayer games so that I can optimize the hyperparameters of the system, as well as explicitly model the inherent turn-order advantage in the game. In general, TrueSkill builds a Bayesian graphical model, assuming a normal distribution for player skills as well as a normal distribution for player luck per game, and produces a mean/variance for the independent performance of players. For a two-player game, the probability of one player beating the other is encoded in the difference of the performance distributions. However, predicting the probability of a player winning a 3-player game seems to require estimating the probability that a given player will exceed the maximum performance of the other two players. Is the maximum of two Gaussians also Gaussian? I've read [these notes](http://ee162.caltech.edu/notes/lect8.pdf) which seem to solve a more general version of the problem that I have on page 27 with equation (8-47). Basically it says that for independent random variables X and Y, if $Z = \max(X, Y)$, then (1) $F_z(z) = F_x(z) F_y(z)$ and hence (2) $f_z(z) = F_x(z)f_y(z) + f_x(z)F_y(z)$ Should I then plug in the formulas for the cdfs/pdfs and pray that I get something useful?
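As a quick numerical sanity check of equation (2), a stdlib-Python sketch (the means/variances below are made up, not real TrueSkill output) shows that the resulting density is a valid pdf whose mean matches a Monte Carlo simulation of $\max(X, Y)$; note that the result is in general not Gaussian, since the density is skewed.

```python
import math
import random

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def max_pdf(z, m1, s1, m2, s2):
    # equation (2): f_Z(z) = F_X(z) f_Y(z) + f_X(z) F_Y(z)
    return (norm_cdf(z, m1, s1) * norm_pdf(z, m2, s2)
            + norm_pdf(z, m1, s1) * norm_cdf(z, m2, s2))

# hypothetical performance distributions (made-up numbers)
m1, s1, m2, s2 = 25.0, 8.3, 30.0, 5.0

# the density should integrate to ~1 over a wide-enough grid ...
lo, hi, n = -40.0, 100.0, 4000
dz = (hi - lo) / n
grid = [lo + i * dz for i in range(n + 1)]
total = sum(max_pdf(z, m1, s1, m2, s2) for z in grid) * dz
mean = sum(z * max_pdf(z, m1, s1, m2, s2) for z in grid) * dz

# ... and its mean should match a Monte Carlo estimate of E[max(X, Y)]
rng = random.Random(0)
mc_mean = sum(max(rng.gauss(m1, s1), rng.gauss(m2, s2)) for _ in range(200000)) / 200000
```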
Maximum of two (or more) gaussian distributions with known and possibly different means/variances
CC BY-SA 3.0
null
2011-04-27T23:16:13.090
2014-05-20T14:56:00.673
null
null
4164
[ "normal-distribution", "predictive-models" ]
10069
2
null
10005
5
null
Here's a few options using the ggplot2 package. If you haven't started learning much about plotting with R, taking the time now to learn [ggplot2](http://had.co.nz/ggplot2/) or lattice may be worth the effort. ggplot requires data to be in "long" format, so the first thing we'll do is reformat to long: ``` require(ggplot2) dat <- data.frame(V1 = c(100, 120, 140, 150, 210), V2 = c(16, -17, 18, -19, 20) , V3 = c(11, -12, 13, -14, 15), V4 = c(-6, 7, -8, 9, -10), V5 = c(1, -2, 3, -4, 5)) dat.m <- melt(dat, "V1") ``` It sounds like V1 represents discrete levels, and should be treated as categorical data. That is why I treat V1 as a factor. If that's not the case, simply use V1 in the x-axis. ``` #Bar chart ggplot(dat.m, aes(x = factor(V1), y = value, fill = variable)) + geom_bar(position = "dodge") #Line chart ggplot(dat.m, aes(x = factor(V1), y = value, group = variable, colour = variable)) + geom_line() ``` ![enter image description here](https://i.stack.imgur.com/zLCVu.jpg)
null
CC BY-SA 3.0
null
2011-04-28T00:23:08.120
2011-04-28T00:23:08.120
null
null
696
null
10070
2
null
10062
9
null
Try [mixdist](http://cran.r-project.org/web/packages/mixdist/index.html) Here's an example: ``` library(mixdist) #Build data vector "x" as a mixture of data from 3 Normal Distributions x1 <- rnorm(1000, mean=0, sd=2.0) x2 <- rnorm(500, mean=9, sd=1.5) x3 <- rnorm(300, mean=13, sd=1.0) x <- c(x1, x2, x3) #Plot a histogram (you'll play around with the value for "breaks" as #you zero-in on the fit). Then build a data frame that has the #bucket midpoints and counts. breaks <- 30 his <- hist(x, breaks=breaks) df <- data.frame(mid=his$mids, cou=his$counts) head(df) #The above Histogram shows 3 peaks that might be represented by 3 Normal #Distributions. Guess at the 3 Means in Ascending Order, with a guess for #the associated 3 Sigmas and fit the distribution. guemea <- c(3, 11, 14) guesig <- c(1, 1, 1) guedis <- "norm" (fitpro <- mix(as.mixdata(df), mixparam(mu=guemea, sigma=guesig), dist=guedis)) #Plot the results plot(fitpro, main="Fit a Probability Distribution") grid() legend("topright", lty=1, lwd=c(1, 1, 2), c("Original Distribution to be Fit", "Individual Fitted Distributions", "Fitted Distributions Combined"), col=c("blue", "red", rgb(0.2, 0.7, 0.2)), bg="white") =========================== Parameters: pi mu sigma 1 0.5533 -0.565 1.9671 2 0.2907 8.570 1.6169 3 0.1561 12.725 0.9987 Distribution: [1] "norm" Constraints: conpi conmu consigma "NONE" "NONE" "NONE" ``` ![enter image description here](https://i.stack.imgur.com/BzSbw.jpg)
null
CC BY-SA 3.0
null
2011-04-28T00:38:55.720
2011-04-28T00:38:55.720
null
null
2775
null
10071
2
null
9850
2
null
Perhaps try something like [Fastmap](http://portal.acm.org/citation.cfm?id=223812) to plot your set of marks using their relative distances. [(still) nothing clever](http://gromgull.net/blog/2009/08/fastmap-in-python/) has written up Fastmap in python to plot strings, and it could be easily updated to handle lists of attributes if you wrote up your own distance metric. Below is a standard euclidean distance I use that takes two lists of attributes as parameters. If your lists have a class value, don't use it in the distance calculation. ``` import math def distance(vecone, vectwo): d = 0.0 for i in range(len(vecone)): if isnumeric(vecone[i]) and isnumeric(vectwo[i]): d += (float(vecone[i]) - float(vectwo[i]))**2 elif vecone[i] != vectwo[i]: d += 1.0 return math.sqrt(d) def isnumeric(s): try: float(s) return True except (TypeError, ValueError): return False ```
null
CC BY-SA 3.0
null
2011-04-28T01:51:18.807
2011-04-28T01:51:18.807
null
null
1856
null
10072
2
null
220
87
null
If the CDF of $X_i$ is denoted by $F(x)$, then the CDF of the minimum is given by $1-[1-F(x)]^n$. Reasoning: given $n$ random variables, the probability $P(Y\leq y) = P(\min(X_1\dots X_n)\leq y)$ implies that at least one $X_i$ is smaller than $y$. The probability that at least one $X_i$ is smaller than $y$ is equivalent to one minus the probability that all $X_i$ are greater than $y$, i.e. $P(Y\leq y) = 1 - P(X_1 \gt y,\dots, X_n \gt y)$. If the $X_i$'s are independent identically-distributed, then the probability that all $X_i$ are greater than $y$ is $[1-F(y)]^n$. Therefore, the original probability is $P(Y \leq y) = 1-[1-F(y)]^n$. Example: say $X_i \sim \text{Uniform} (0,1)$, then intuitively the probability $\min(X_1\dots X_n)\leq 1$ should be equal to 1 (as the minimum value would always be less than 1 since $0\leq X_i\leq 1$ for all $i$). In this case $F(1)=1$ thus the probability is always 1.
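A quick simulation check of the formula for the Uniform(0,1) case, as a stdlib-Python sketch:

```python
import random

def min_cdf(y, F, n):
    """P(min(X_1, ..., X_n) <= y) = 1 - (1 - F(y))^n for iid X_i with CDF F."""
    return 1.0 - (1.0 - F(y)) ** n

# Uniform(0,1): F(y) = y on [0, 1]
F = lambda y: min(max(y, 0.0), 1.0)
n = 5

analytic = min_cdf(0.3, F, n)     # 1 - 0.7**5

# empirical check: fraction of trials where the minimum of n draws is <= 0.3
rng = random.Random(42)
trials = 100000
hits = sum(min(rng.random() for _ in range(n)) <= 0.3 for _ in range(trials))
empirical = hits / trials

edge = min_cdf(1.0, F, n)         # the example in the text: always 1
```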
null
CC BY-SA 4.0
null
2011-04-28T02:10:35.973
2019-09-04T09:59:55.003
2019-09-04T09:59:55.003
258028
4358
null
10073
2
null
10066
1
null
It's been a while since I've run canonical correlation in SPSS. Perhaps have a look at the `cancorr` macro. It's an alternative way of running a canonical correlation, and from memory it provides slightly different output. ### Running Cancor Here's some [info on running cancorr](http://jeromyanglim.blogspot.com/2010/06/canonical-correlation-getting-started.html), extracting the important bit: "Note that the location script required to run CANCORR changes between versions and installations of SPSS. David Garson sets out the following code template:" ``` INCLUDE 'c:\Program Files\SPSS\Canonical correlation.sps'. CANCORR SET1=varlist/ SET2=varlist/. ``` ### Additional Reference - Here's some more information from a book chapter by V.K. Bhatia
null
CC BY-SA 3.0
null
2011-04-28T02:34:53.760
2011-04-28T02:34:53.760
null
null
183
null
10074
1
10097
null
2
2857
After a little searching, I was able to find data from the 1990 US census on the frequency of male and female first names, and on surnames. That data is [here](https://web.archive.org/web/20110103145240/http://www.census.gov/genealogy/names/names_files.html). I'm not able to find similar data for any other census years, however. Does anyone know where I can get data for other years, both earlier and later?
Where can I find name data for US census years other than 1990?
CC BY-SA 4.0
0
2011-04-28T03:46:16.683
2023-05-01T06:57:11.210
2023-05-01T06:57:11.210
362671
4359
[ "dataset", "census" ]
10075
1
10081
null
15
31925
I'm teaching myself probability theory, and I'm not sure I understand any use for variance, as opposed to standard deviation. In the practice situations I'm looking at, the variance is larger than the range, so it doesn't seem intuitively useful.
What is the practical application of variance?
CC BY-SA 3.0
null
2011-04-28T04:51:05.910
2018-02-02T05:46:06.277
2011-04-28T07:58:18.980
null
4361
[ "variance" ]
10076
2
null
10075
3
null
Variance is the more basic of the two measures: stddev = sqrt(variance). While the example below is exaggerated, it is good enough for a comparison: the variance grows very large when the distribution contains extreme values. ``` variance(22, 25, 29, 30, 37) = 32.3 variance(22, 25, 29, 30, 900) = 152611.0 ``` Standard deviation is used far more often because the result has the same units as the data, making standard deviation more appropriate for any sort of visual analysis.
null
CC BY-SA 3.0
null
2011-04-28T06:13:12.620
2011-04-28T06:13:12.620
null
null
1856
null
10077
2
null
10074
0
null
Try calling the census bureau using a help line number from census.gov. It usually takes a few tries and patience with bureaucracy, but they've always been helpful to me. The census doesn't release individual names in their data sets, so it will have to be a census bureau created aggregation.
null
CC BY-SA 3.0
null
2011-04-28T06:25:09.087
2011-04-28T08:06:46.293
2011-04-28T08:06:46.293
4361
4361
null
10079
1
null
null
83
248157
Within the context of a research proposal in the social sciences, I was asked the following question: > I have always gone by 100 + m (where m is the number of predictors) when determining minimum sample size for multiple regression. Is this appropriate? I get similar questions a lot, often with different rules of thumb. I've also read such rules of thumb quite a lot in various textbooks. I sometimes wonder whether popularity of a rule in terms of citations is based on how low the standard is set. However, I'm also aware of the value of good heuristics in simplifying decision making. ### Questions: - What is the utility of simple rules of thumb for minimum sample sizes within the context of applied researchers designing research studies? - Would you suggest an alternative rule of thumb for minimum sample size for multiple regression? - Alternatively, what alternative strategies would you suggest for determining minimum sample size for multiple regression? In particular, it would be good if value is assigned to the degree to which any strategy can readily be applied by a non-statistician.
Rules of thumb for minimum sample size for multiple regression
CC BY-SA 3.0
null
2011-04-28T06:40:32.977
2017-11-22T17:09:56.040
2020-06-11T14:32:37.003
-1
183
[ "regression", "sample-size", "statistical-power", "rule-of-thumb" ]
10080
2
null
9850
30
null
Usually you'd plot the original values in a scatterplot (or a matrix of scatterplots if you have many of them) and use colour to show your groups. You asked for an answer in python, and you can actually do all the clustering and plotting with scipy, numpy and matplotlib: ## Start by making some data ``` import numpy as np from scipy import cluster from matplotlib import pyplot np.random.seed(123) tests = np.reshape( np.random.uniform(0,100,60), (30,2) ) #tests[1:4] #array([[ 22.68514536, 55.13147691], # [ 71.94689698, 42.31064601], # [ 98.07641984, 68.48297386]]) ``` ## How many clusters? This is the hard thing about k-means, and there are lots of methods. Let's use [the elbow method](http://en.wikipedia.org/wiki/Determining_the_number_of_clusters_in_a_data_set#The_Elbow_Method) ``` #plot variance for each value of 'k' from 1 to 9 initial = [cluster.vq.kmeans(tests,i) for i in range(1,10)] pyplot.plot([var for (cent,var) in initial]) pyplot.show() ``` ![Elbow plot](https://i.stack.imgur.com/4Y1Ah.png) ## Assign your observations to classes, and plot them I reckon index 3 (i.e. 4 clusters) is as good as any so ``` cent, var = initial[3] #use vq() to get an assignment for each obs. assignment,cdist = cluster.vq.vq(tests,cent) pyplot.scatter(tests[:,0], tests[:,1], c=assignment) pyplot.show() ``` ![scatter plot](https://i.stack.imgur.com/3JT8o.png) Just work out where you can stick whatever you've already done into that workflow (and I hope your clusters are a bit nicer than the random ones!)
null
CC BY-SA 3.0
null
2011-04-28T06:42:26.660
2011-04-28T06:42:26.660
2020-06-11T14:32:37.003
-1
3732
null
10081
2
null
10075
9
null
In practice, you calculate the SD through calculating the variance (as abutcher indicated). I believe the variance is used more often (apart from interpretation, as you indicated yourself) because it has a lot of statistically interesting properties: it has unbiased estimators in a lot of cases, leads to known distributions for hypothesis testing etc. As to the variance being bigger: if the variance were 1/4, the SD would be 1/2. As soon as your variance/SD are smaller than 1, this order reverses.
null
CC BY-SA 3.0
null
2011-04-28T06:49:29.953
2011-04-28T06:49:29.953
null
null
4257
null
10082
2
null
10058
3
null
You indicate yourself that your measurements are not independent (you measure both species' abundance from the same locations). As such, you should correct for repeated measurements. Try lmer from the lme4 package.
null
CC BY-SA 3.0
null
2011-04-28T07:01:33.323
2011-04-28T07:01:33.323
null
null
4257
null
10083
2
null
10066
1
null
As it is not really difficult to import an SAV dataset into R nowadays, with e.g., ``` library(foreign) df <- read.spss("yourfilename", to.data.frame=TRUE) ``` you can check your SPSS results against one of the R packages that allow you to perform CCA (see the CRAN Task View on [Multivariate](http://cran.r-project.org/web/views/Multivariate.html) or [Psychometrics](http://cran.r-project.org/web/views/Psychometrics.html) analysis). In particular, the [vegan](http://cran.r-project.org/web/packages/vegan/index.html) package offers a handy way to apply CCA and has nice graphical and numerical summaries through the `CCorA()` function. Also, note that redundancy indexes apply to one block of variables, conditional on the other block (hence the distinction you'll find in the aforementioned function between Y|X and X|Y); they are intended to provide a measure of the variance of one set of variables predicted from the linear combination of the other set of variables. However, in essence CCA considers that you have two sets of measures that play a symmetrical role. They are both descriptions of the same individuals or statistical units. If your blocks really play an asymmetric role--that is, you have a block of predictors and a block of response variables--then you're better off using [PLS regression](http://en.wikipedia.org/wiki/Partial_least_squares_regression).
null
CC BY-SA 3.0
null
2011-04-28T09:10:40.600
2011-04-28T09:10:40.600
null
null
930
null
10084
1
10087
null
3
675
What's it called when I'm trying to remove the effect of one variable on another variable? Am I in the right ballpark using the terms "detrending" or "normalizing" or "controlling for Y on X"? What's the general process for doing this? The problem I'm working with currently is as follows: I work at a mine that loads trains that are 160 wagons in length. After the trains are filled, they leave to get unloaded and later return to the mine. Sometimes the wagons end up in different positions in the train. Our aim is to maximise tonnes in every wagon. We've noticed two things - some wagon IDs load consistently low in tonnes - wagons in positions near the start or end of the train load consistently low, i.e. tonnes go down as distance from the middle position goes up. I've got a linear model fitted in R like ``` > lm(df$tonnes ~ df$dist_fr_middle) Coefficients: (Intercept) df$dist_fr_middle 113.92001 -0.03915 ``` How do I go about creating a new column in my dataframe for distance-from-middle-adjusted tonnes? Thanks for your help!
Accounting for biases in the data (normalizing? detrending?)
CC BY-SA 3.0
null
2011-04-28T09:30:46.753
2011-04-28T11:02:40.190
null
null
827
[ "regression", "normalization" ]
10086
1
10090
null
3
233
I have 6 random number generators. They are "black boxes", i.e. I do not know if they are the same or different. For example, I do not know if they produce the same arithmetic averages and/or root mean square deviations. My goal is to check whether they are the same or different. The problem is that I can generate only a limited set of numbers (each random number generator gives me only 20 numbers). The procedure that I use is as follows. - I choose a pair of generators. - I use each of them to generate 20 numbers. - For every generator I use these numbers to calculate the mean. - Then I calculate the difference between the means (mean1 - mean2). I will call this difference the "reference difference". Now I want to know if this difference is "real". In other words, I want to know whether the observed difference in means is caused by the fact that I have two different random number generators, or whether I have identical generators and the means differ just by chance (small number of numbers). To answer the last question I use the following procedure. - I combine the two sets of numbers (20 numbers + 20 numbers). - Then I randomly split this set into two subsets of equal size. - For each subset I calculate the mean and, as before, I calculate the difference between the two means (I will call this difference the "test difference"). - I compare this test difference with the reference difference. - I repeat this procedure many times to see in how many cases the test difference is smaller than the reference difference. I thought that if the reference difference is larger than the test difference in most of the cases, then it is caused not by chance but by the fact that the compared random number generators are different. In contrast, if the generators are identical, then the test difference has a 50% chance of being larger (or smaller) than the reference one. However, I see that for most of the considered pairs of generators the reference difference between the averages is smaller than the test differences. How could that be? It looks as if the generators are more similar than identical generators would be.
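For reference, the procedure described above is a permutation test; a minimal stdlib-Python sketch, where `random.gauss` stands in for the black-box generators:

```python
import random

def perm_test_mean_diff(x, y, n_iter=5000, seed=123):
    """Permutation test on |mean(x) - mean(y)|: returns the fraction of
    random re-splits whose 'test difference' is at least as large as the
    observed 'reference difference'. Small values suggest the two samples
    come from different generators."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    exceed = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        a, b = pooled[:len(x)], pooled[len(x):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            exceed += 1
    return exceed / n_iter

rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(20)]   # stand-in generator 1
y = [rng.gauss(0, 1) for _ in range(20)]   # identical generator
z = [rng.gauss(3, 1) for _ in range(20)]   # clearly shifted generator

p_same = perm_test_mean_diff(x, y)   # typically not small
p_diff = perm_test_mean_diff(x, z)   # should be very small
```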
How different random number generators can be more similar than identical ones?
CC BY-SA 3.0
null
2011-04-28T10:42:16.010
2011-04-28T11:19:41.957
null
null
2407
[ "mean", "random-generation", "hypothesis-testing" ]
10087
2
null
10084
2
null
The simple solution: ``` df$locationAdjustedTonnes = resid(lm(df$tonnes ~ df$dist_fr_middle)) ``` A general way is tweaking this (The example below gives the same result as the one above): ``` df$locationAdjustedTonnes = df$tonnes - predict(lm(df$tonnes ~ df$dist_fr_middle)) ```
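The same idea in plain Python, for readers not using R: the adjusted values are just the OLS residuals, i.e. observed minus predicted. A minimal sketch with made-up wagon data:

```python
def location_adjusted(x, y):
    """Remove the fitted linear effect of x from y (returns the OLS residuals)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    return [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]

# hypothetical data: tonnes drop with distance from the middle of the train
dist = [0, 10, 20, 40, 60, 80]
tonnes = [114.1, 113.6, 113.0, 112.4, 111.5, 110.6]
adj = location_adjusted(dist, tonnes)
```

By construction the residuals sum to zero and are uncorrelated with the predictor, so the position effect has been removed.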
null
CC BY-SA 3.0
null
2011-04-28T11:02:40.190
2011-04-28T11:02:40.190
null
null
3911
null
10088
1
10091
null
2
5185
For a multilabel dataset, I would like to find the number of clusters involved in it. The example below gives more details about the problem: ``` Label_A: feature values Label_B: feature values Label_A, Label_C: feature values Label_C: feature values ... etc ``` Say we have $n$ data records. The label field may contain a single label or multiple labels (as in the case of record 3). I would like to determine the number of clusters involved in the data. Assuming the number of labels equals the number of clusters results in bad accuracy. This is because there may be cases where a single label corresponds to multiple clusters. In such cases, if we can find more clusters and assign two or more clusters to the same label, we can increase the accuracy. So, how do you find the number of clusters present in multilabel data?
How to determine optimal number of clusters?
CC BY-SA 3.0
null
2011-04-28T11:10:57.593
2019-06-06T20:39:39.673
2011-04-28T15:07:36.650
183
4290
[ "clustering" ]
10089
1
null
null
2
527
I have 3 arms in a trial. I want to compare the results of a survey before, during, and on completion of a treatment. The data are not normally distributed. What test should I use?
How to handle multiple comparisons in a three-arm clinical trial?
CC BY-SA 3.0
null
2011-04-28T11:18:50.257
2011-11-11T03:27:26.863
2011-04-28T12:52:54.167
930
4367
[ "multiple-comparisons", "survey", "clinical-trials" ]
10090
2
null
10086
2
null
What you are doing is essentially the standard nonparametric trick (a black-box random generator can be seen simply as an unknown distribution, and your main hypothesis is that the distributions are the same). As such, it should work. However, you are using the difference of the means as a test statistic. Though it is not easy to construct an example, this could be very dependent on the shape of your unknown distributions. I'm guessing that using the Wilcoxon / Mann-Whitney type of test statistic will be less influenced by weird forms of the underlying distributions (but may also be less powerful). Regardless of the above: the idea should work (but it depends on your samples being good representations of the true distributions). Maybe you should post some code? As a sidenote: you do not mention any particulars about these random number generators: are they continuous (over an interval?), discrete, ...? This may also be of influence, as your sample sizes are rather small.
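For reference, the Mann-Whitney statistic mentioned here can be dropped into the same permutation scheme in place of the difference of means; a small sketch of the U statistic itself:

```python
def mann_whitney_u(x, y):
    """U statistic: number of (x_i, y_j) pairs with x_i > y_j, ties counting 1/2.
    Under identical distributions, U is expected to be about len(x)*len(y)/2,
    so extreme values in either direction suggest different distributions."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

u = mann_whitney_u([1, 2, 3], [0, 0, 0])   # every pair favours x: U = 9
```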
null
CC BY-SA 3.0
null
2011-04-28T11:19:41.957
2011-04-28T11:19:41.957
null
null
4257
null
10091
2
null
10088
3
null
You can convert the labels into features indicating if the label is present or not. After that you can use various clustering algorithms and their corresponding methods to find out the number of clusters. EDIT: I understood that your difficulty was handling the multiple labels and I suggested a solution for that. Your question did not mention that you wanted to use the k-means algorithm. The number of k-means clusters question has been answered here: [How to define number of clusters in K-means clustering?](https://stats.stackexchange.com/questions/9016/how-to-define-number-of-clusters-in-k-means-clustering). For hierarchical clustering the answer is here: [Where to cut a dendrogram?](https://stats.stackexchange.com/questions/3685/where-to-cut-a-dendrogram). But there are many other clustering methods available: [Choosing a clustering method](https://stats.stackexchange.com/questions/3713/choosing-clustering-method).
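The label-to-feature conversion suggested here is just one-hot (indicator) encoding; a small stdlib-Python sketch with hypothetical records mirroring the question:

```python
from itertools import chain

# (labels, feature values) per record; record 3 is multilabel
records = [
    (["Label_A"],            [0.1, 2.3]),
    (["Label_B"],            [1.4, 0.2]),
    (["Label_A", "Label_C"], [0.3, 2.1]),
    (["Label_C"],            [2.2, 1.0]),
]

all_labels = sorted(set(chain.from_iterable(labs for labs, _ in records)))

def encode(labs):
    """Indicator vector: 1 if the label is present, 0 otherwise."""
    return [1 if lab in labs else 0 for lab in all_labels]

# augmented feature vectors usable by any clustering algorithm
augmented = [encode(labs) + feats for labs, feats in records]
```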
null
CC BY-SA 3.0
null
2011-04-28T11:40:10.407
2011-04-28T19:12:57.390
2017-04-13T12:44:21.613
-1
3911
null
10092
1
null
null
6
22684
I am trying to understand Logistic Regression in relation to credit scoring model. I wish to understand the significance of "20/ln(2)" in logistic regression. Why and how is it used?
What does "20/ln(2)" mean in logistic regression?
CC BY-SA 3.0
null
2011-04-28T11:54:46.047
2020-07-02T13:45:10.190
2011-07-29T06:22:27.997
183
4368
[ "logistic" ]
10093
2
null
10079
12
null
(+1) for indeed a crucial, in my opinion, question. In macro-econometrics you usually have much smaller sample sizes than in micro, financial or sociological experiments. A researcher feels quite well off when one can provide at least feasible estimations. My personal least-possible rule of thumb is $4\cdot m$ ($4$ degrees of freedom per estimated parameter). In other applied fields of study you are usually luckier with data (if it is not too expensive, just collect more data points) and you may ask what the optimal size of a sample is (not just the minimum value). The latter issue comes from the fact that more low-quality (noisy) data is not better than a smaller sample of high-quality data. Most sample sizes are linked to the power of the tests for the hypotheses you are going to test after you fit the multiple regression model. There is a nice [calculator](http://www.danielsoper.com/statcalc/calc01.aspx) that could be useful for multiple regression models and some [formula](http://www.danielsoper.com/statkb/topic01.aspx) behind the scenes. I think such an a priori calculator could be easily applied by a non-statistician. Probably the K. Kelley and S. E. Maxwell [article](http://nd.edu/~kkelley/publications/articles/Kelley_Maxwell_2003.pdf) may be useful to answer the other questions, but I need more time first to study the problem.
null
CC BY-SA 3.0
null
2011-04-28T12:11:19.007
2011-04-28T19:01:45.053
2011-04-28T19:01:45.053
2645
2645
null
10094
1
null
null
10
704
I performed both an SVD decomposition and a multidimensional scaling of a 6-dimensional data matrix, in order to get a better understanding of the structure of the data. Unfortunately, all the singular values are of the same order, implying that the dimensionality of the data is indeed 6. However, I would like to be able to interpret the values of the singular vectors. For instance, the first one seems to be more or less equal in each dimension (i.e. `(1,1,1,1,1,1)`), and the second also has an interesting structure (something like `(1,-1,1,-1,-1,1)`). How could I interpret these vectors? Could you point me to some literature on the subject?
How to interpret results of dimensionality reduction/multidimensional scaling?
CC BY-SA 3.0
null
2011-04-28T13:26:53.983
2011-07-22T08:25:45.747
null
null
3699
[ "pca", "interpretation", "dimensionality-reduction", "svd" ]
10095
1
10108
null
8
9904
### Assumptions: In an ANOVA where the normality assumptions are violated, the Box-Cox transformation can be applied to the response variable. The `lambda` can be estimated by using maximum likelihood to optimize the normality of the model residuals. ### Question: When the estimates for `lambda` in the null model and the full model differ, how should `lambda` be estimated? ### My Data: In my data the lambda estimate for the null model is `-2.3` and the lambda estimate for the full model is `-2.8`. Transforming the response using these different parameters and performing the ANOVA leads to different F-statistics. I have produced below a simplified version of the analysis using beta distributions with different parameters to simulate non-normality. Unfortunately, in this example the results of the ANOVA are insensitive to the different estimates of `lambda`. So, it doesn't fully illustrate the problem. ``` library(ggplot2) library(MASS) library(car) #Generating random beta-distributed data n=200 df <- rbind( data.frame(x=factor(rep("a1",n)), y=rbeta(n,2,5)), # more left skewed data.frame(x=factor(rep("a2",n)), y=rbeta(n,2,2))) # less left skewed print(qplot(data=df, color=x, x=y, geom="density")) print("Untransformed Analysis of Variance:") m.null <- lm(y ~ 1, df) m.full <- lm(y ~ x, df) print(anova(m.null, m.full)) # Estimate Maximum Likelihood Box-Cox transform parameters for both models bc.null <- boxcox(m.null); bc.null.opt <- bc.null$x[which.max(bc.null$y)] bc.full <- boxcox(m.full); bc.full.opt <- bc.full$x[which.max(bc.full$y)] print(paste("ML Box-Cox estimate for null model:",bc.null.opt)) print(paste("ML Box-Cox estimate for full model:",bc.full.opt)) df$y.bc.null <- bcPower(df$y, bc.null.opt) df$y.bc.full <- bcPower(df$y, bc.full.opt) print(qplot(data=df, x=x, y=y.bc.null, geom="boxplot")) print(qplot(data=df, x=x, y=y.bc.full, geom="boxplot")) print("Analysis of Variance with optimal Box-Cox transform for null model") m.bc_null.null <- lm(y.bc.null ~ 1, data=df) m.bc_null.full <- lm(y.bc.null ~ x, data=df) print(anova(m.bc_null.null, m.bc_null.full)) print("Analysis of Variance with optimal Box-Cox transform for full model") m.bc_full.null <- lm(y.bc.full ~ 1, data=df) m.bc_full.full <- lm(y.bc.full ~ x, data=df) print(anova(m.bc_full.null, m.bc_full.full)) ```
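For intuition, the lambda estimation that `boxcox()` performs is a profile log-likelihood maximization; below is a stdlib-Python sketch for the intercept-only (null-model) case. For the full model one would replace the plain variance of the transformed response with the variance of the regression residuals. The toy data are made up:

```python
import math

def boxcox(y, lam):
    """Box-Cox transform; the log transform is the lam -> 0 limit."""
    if abs(lam) < 1e-12:
        return [math.log(v) for v in y]
    return [(v ** lam - 1.0) / lam for v in y]

def profile_loglik(y, lam):
    """Profile log-likelihood of lambda for an intercept-only model:
    -n/2 * log(sigma^2 of transformed y) + (lambda - 1) * sum(log y)."""
    z = boxcox(y, lam)
    n = len(z)
    mz = sum(z) / n
    s2 = sum((v - mz) ** 2 for v in z) / n
    return -0.5 * n * math.log(s2) + (lam - 1.0) * sum(math.log(v) for v in y)

def best_lambda(y, grid):
    return max(grid, key=lambda lam: profile_loglik(y, lam))

# lognormal-style toy data: the log transform (lambda = 0) should win
y = [math.exp(v) for v in (-2.0, -1.0, 0.0, 1.0, 2.0)]
lam_hat = best_lambda(y, [-1.0, -0.5, 0.0, 0.5, 1.0])
```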
Estimating Lambda for Box Cox transformation for ANOVA
CC BY-SA 3.0
null
2011-04-28T13:52:13.837
2017-02-25T14:15:46.357
2017-02-25T14:15:46.357
11887
501
[ "r", "anova", "maximum-likelihood", "modeling", "data-transformation" ]
10096
1
null
null
4
263
I am currently monitoring the daily energy consumption of a house since January of this year. That daily consumption depends on several variables, including outdoor temperature, solar irradiance etc. At regular intervals (once every two weeks) I am changing the algorithm of the heating controller, to see if I can improve the energy savings. What is the methodology one should follow for testing whether such a change has any effect on a time series? I've found a short example in BHH2 (where they analyze the effect of a law on pollution levels) but I need a more thorough discussion. Update Here is a plot of the data I'm working with: ![Energy vs time](https://farm4.static.flickr.com/3301/5694800429_70e184727c_d.jpg) This shows the daily energy consumption. The green line indicates when we introduced the change to the algorithm. As you can see, there was already a trend towards lower energy consumption even before we introduced the change, probably due to outdoor temperatures getting warmer. What I need to know is whether the introduction of the change has significantly accentuated that trend or not. Note also that since that time, we have collected much more data than what is shown on this plot.
How to test the effect of various manipulations of an independent variable on a dependent variable in a time series?
CC BY-SA 3.0
null
2011-04-28T14:06:44.270
2012-03-02T03:44:21.547
2017-03-09T17:30:36.720
-1
4370
[ "time-series", "intervention-analysis" ]
10097
2
null
10074
2
null
In [this post](http://www.mapleprimes.com/posts/38183-Baby-Names), my friend John May links to [http://www.ssa.gov/OACT/babynames/](http://www.ssa.gov/OACT/babynames/); I believe it only has the top 1000 most popular first names for both sexes, but for all years since 1879. (John's post [and the followup](http://www.mapleprimes.com/posts/38177-Baby-Names--Continued) are interesting in their own right - he looks at popular unisex baby names.)
null
CC BY-SA 3.0
null
2011-04-28T14:16:49.660
2011-04-28T14:16:49.660
null
null
2898
null
10098
2
null
10017
10
null
After a lot of reading, I found the solution for doing clustering within the `lm` framework. There's an excellent [white paper](http://people.su.se/~ma/clustering.pdf) by Mahmood Arai that provides a tutorial on clustering in the `lm` framework, which he does with degrees-of-freedom corrections instead of my messy attempts above. He provides his functions for both one- and two-way clustering covariance matrices [here](http://people.su.se/~ma/clmclx.R). Finally, although the content isn't available free, [Angrist and Pischke's Mostly Harmless Econometrics](http://www.mostlyharmlesseconometrics.com/book-contents/) has a section on clustering that was very helpful. --- Update on 4/27/2015 to add code from [blog post](https://rinfinance.wordpress.com/2011/09/22/cluster-robust-standard-errors-in-r/). ``` api=read.csv("api.csv") #create the variable api from the corresponding csv attach(api) # attach of data.frame objects api1=api[c(1:6,8:310),] # one missing entry in row nr. 7 modell.api=lm(API00 ~ GROWTH + EMER + YR_RND, data=api1) # creation of a simple linear model for API00 using the regressors Growth, Emer and Yr_rnd. ##creation of the function according to Arai: clx <- function(fm, dfcw, cluster) { library(sandwich) library(lmtest) library(zoo) M <- length(unique(cluster)) N <- length(cluster) dfc <- (M/(M-1))*((N-1)/(N-fm$rank)) # anpassung der freiheitsgrade u <- apply(estfun(fm),2, function(x) tapply(x, cluster, sum)) vcovCL <-dfc * sandwich (fm, meat = crossprod(u)/N) * dfcw coeftest(fm, vcovCL) } clx(modell.api, 1, api1$DNUM) #creation of results. ```
null
CC BY-SA 3.0
null
2011-04-28T15:08:21.680
2015-04-27T16:12:06.400
2015-04-27T16:12:06.400
1445
1445
null
10099
2
null
10074
1
null
A search for a name on [Wolfram|Alpha](http://www.wolframalpha.com/input/?i=Eric) gives some stats and a distribution graph of ages. At the bottom of the output is a 'Source information' link, which results in a popup with a bunch of references. One is the ssa.gov babynames (link above). Most of the references are for other countries (mostly English speaking), but the [AGNAMES](https://web.archive.org/web/20110921022119/http://users.erols.com/dgalbi/names/agnames.htm) link looks really good, with "Popular Given Names US, 1801-1999" data and some normalization code. At the bottom of the Wolfram|Alpha sources popup is this statement: > "Requests by researchers for detailed information on the sources for individual Wolfram|Alpha results can be directed here." where the popup has the actual link. Seeing that Wolfram|Alpha output is all images of text and not text, I am not sure how open they are to sharing, but you could inquire.
null
CC BY-SA 4.0
null
2011-04-28T15:08:58.207
2023-05-01T06:56:00.703
2023-05-01T06:56:00.703
362671
4372
null
10100
1
10118
null
2
169
Online marketers often need to choose a "winning" variation among, say, 2 possible variations.

### Example:

- PageVariation1 had 1000 sessions with 20 conversions (2.0% response rate) and
- PageVariation2 had 800 sessions with 8 conversions (1.0% response rate).

I'm using the "unpooled" [Two-Proportion z-Test](http://apcentral.collegeboard.com/apc/members/courses/teachers_corner/49013.html) for estimating the P-value. In the example above it is 0.9615 - so I might conclude that there's a 96% chance that PageVariation1 has a higher response rate. Let's say it's good enough to declare it the "winner".

The problem is that some sessions and conversions weren't tracked at the PageVariation level - i.e. I know there were, say, an additional 200 sessions and 3 conversions, but those can't be allocated to either of the above. Intuitively this means there is higher uncertainty due to tracking/measurement issues. Consequently that 96% figure should be lower.

### Question:

- Which test should be used (and how) in order to calculate the new P-value?
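For reference, the 0.9615 figure from the example can be reproduced with the unpooled two-proportion z-test in a few lines of Python (a sketch; the variable names are mine, and only the numbers from the example above are used):

```python
from math import sqrt
from statistics import NormalDist

# PageVariation1: 20/1000 conversions; PageVariation2: 8/800.
x1, n1 = 20, 1000
x2, n2 = 8, 800
p1, p2 = x1 / n1, x2 / n2

# Unpooled standard error of the difference in proportions.
se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z = (p1 - p2) / se

# One-sided "confidence" that variation 1 beats variation 2,
# matching the 0.9615 figure quoted above.
confidence = NormalDist().cdf(z)
print(round(confidence, 4))  # -> 0.9615
```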
How to embed measurement uncertainty in a two-proportion z-test?
CC BY-SA 3.0
null
2011-04-28T15:11:57.650
2011-04-29T05:13:18.330
2011-04-28T22:51:49.780
null
4374
[ "missing-data", "proportion" ]
10101
2
null
10053
3
null
Would you be happier with a surgically altered goodfit object?

```
> idx <- which(test.gof$observed != 0)
> idx
 [1] 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 49 50
> test.gof$par$size <- length(idx - 1)
> test.gof$fitted <- test.gof$fitted[idx]
> test.gof$count <- test.gof$count[idx]
> test.gof$observed <- test.gof$observed[idx]
> summary(test.gof)

	 Goodness-of-fit test for binomial distribution

                       X^2 df  P(> X^2)
Pearson                Inf 75 0.0000000
Likelihood Ratio  21.48322 19 0.3107244
Warning message:
In summary.goodfit(test.gof) : Chi-squared approximation may be incorrect
```
null
CC BY-SA 3.0
null
2011-04-28T15:14:00.030
2011-04-28T15:14:00.030
null
null
2129
null
10102
2
null
10100
3
null
A few quick points: - You could draw on the general literature on missing data. - If the 200 sessions with missing data are known to be completely random, then you could choose to ignore these observations and just analyse the data where you have complete data. - In general it is important to think through what mechanisms might explain the missing data. - You may find it useful to read up on strategies for dealing with missing data.
null
CC BY-SA 3.0
null
2011-04-28T15:33:56.817
2011-04-29T05:13:18.330
2011-04-29T05:13:18.330
183
183
null
10103
2
null
10031
1
null
If a potential outcome has never been observed, then you have no information about the effect of covariates on it. So any outcome-specific covariate effects are unidentifiable. If you assume constant covariate effects, then this outcome has no effect on their estimation, so you might as well omit it from the data. More generally, you do have some information about the frequency of that outcome, but logistic regression (multinomial or otherwise) cannot handle this at all, because if you insist on having this outcome, the corresponding intercept has to be $-\infty$.
null
CC BY-SA 3.0
null
2011-04-28T15:52:05.907
2011-04-28T15:52:05.907
null
null
279
null
10104
1
null
null
7
108
Because all clinical and laboratory data are continuous random variables larger than zero (i.e. they are bounded below by zero), is it impossible for them to be normally distributed? Can they only be log-normally distributed? Thank you.
Is it impossible for clinical and laboratory data to be normally distributed?
CC BY-SA 3.0
null
2011-04-28T15:57:52.087
2011-04-28T18:23:22.037
null
null
4349
[ "distributions" ]
10105
2
null
10079
42
null
I'm not a fan of simple formulas for generating minimum sample sizes. At the very least, any formula should consider effect size and the questions of interest. And the difference between either side of a cut-off is minimal.

### Sample size as optimisation problem

- Bigger samples are better.
- Sample size is often determined by pragmatic considerations.
- Sample size should be seen as one consideration in an optimisation problem where the cost in time, money, effort, and so on of obtaining additional participants is weighed against the benefits of having additional participants.

### A Rough Rule of Thumb

In terms of very rough rules of thumb within the typical context of observational psychological studies involving things like ability tests, attitude scales, personality measures, and so forth, I sometimes think of:

- n=100 as adequate
- n=200 as good
- n=400+ as great

These rules of thumb are grounded in the 95% confidence intervals associated with correlations at these respective levels and the degree of precision that I'd like to theoretically understand the relations of interest. However, it is only a heuristic.

### G Power 3

- I typically use G Power 3 to calculate power based on various assumptions; see my post.
- See this tutorial from the G Power 3 site specific to multiple regression.
- The Power Primer is also a useful tool for applied researchers.

### Multiple Regression tests multiple hypotheses

- Any power analysis question requires consideration of effect sizes.
- Power analysis for multiple regression is made more complicated by the fact that there are multiple effects, including the overall r-squared and one for each individual coefficient. Furthermore, most studies include more than one multiple regression. For me, this is further reason to rely more on general heuristics, and to think about the minimal effect size that you want to detect.
- In relation to multiple regression, I'll often think more in terms of the degree of precision in estimating the underlying correlation matrix.

### Accuracy in Parameter Estimation

I also like Ken Kelley and colleagues' discussion of Accuracy in Parameter Estimation.

- See Ken Kelley's website for publications.
- As mentioned by @Dmitrij, Kelley and Maxwell (2003) FREE PDF have a useful article.
- Ken Kelley developed the MBESS package in R to perform analyses relating sample size to precision in parameter estimation.
null
CC BY-SA 3.0
null
2011-04-28T16:03:22.203
2011-04-28T16:03:22.203
null
null
183
null
10106
2
null
10095
5
null
The Box-Cox transformation tries to improve the normality of the residuals. Since that is the assumption of ANOVA as well, you should run it on the model that you are actually going to use, i.e. the full model. For example, if you have two well separated groups, the distribution of the response variable will be strongly bimodal and nowhere near normal even if within each group the distribution is normal. Additionally, you certainly want to take whuber's comment to heart, and check for outliers, missing predictors, etc to make sure that some artifact is not driving your transformation. Also consider the confidence interval around the optimal lambda, and whether a particular transformation within that interval does make applied sense. For example, if you have linear measurements, but the outcome would reasonably be related to a volume, then a lambda=3 or lambda=-3 might be meaningful. If, on the other hand, areas are involved, then 2 or -2 might be better choices.
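To illustrate inspecting the confidence interval around the optimal lambda, here is a hedged sketch in Python using `scipy.stats.boxcox` (simulated data, so the "true" lambda is 0; passing `alpha` asks SciPy for a confidence interval around the estimate):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated positive response: log-normal, so lambda = 0 (a log
# transform) is the "correct" answer here.
y = rng.lognormal(mean=0.0, sigma=0.5, size=500)

# boxcox returns the transformed data, the MLE of lambda, and
# (because alpha is given) a confidence interval for lambda.
y_t, lmbda, ci = stats.boxcox(y, alpha=0.05)

print(lmbda, ci)
```

If a simple, interpretable value such as 0 (log), 1/2, 2, or 3 falls inside the interval, that is usually the transformation worth reporting rather than the raw MLE.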
null
CC BY-SA 3.0
null
2011-04-28T16:04:21.660
2011-04-28T16:04:21.660
null
null
279
null
10107
2
null
10104
9
null
You are correct, a positive-only value cannot be truly normally distributed. However, depending on the particular circumstances, its distribution could be sufficiently close to normal for statistical methodology that assumes normality to apply. I want to note that very few methods actually require normality of the data itself; often it is only the model residuals or the sampling distributions that have to be (approximately) normally distributed.
null
CC BY-SA 3.0
null
2011-04-28T16:07:58.037
2011-04-28T16:07:58.037
null
null
279
null
10108
2
null
10095
4
null
It is not appropriate to do ordinary ANOVA after using the same dataset to fit lambda. The analysis should be unified, penalizing for uncertainty in lambda (a parameter to be estimated, and included in the covariance matrix).
null
CC BY-SA 3.0
null
2011-04-28T16:16:40.647
2011-04-28T16:16:40.647
null
null
4253
null
10109
1
null
null
1
455
I am trying to use R to develop a corporate financial model. The model includes various line items, X, of the following form with actual values for time period 1, 2.. n and projected values for periods n+1, n+2,.. n+k. g is the average growth rate for the forecast period. I need to construct a vector of the following form: ``` X=c(X1,X2,...,Xn,Xn+1=(1+g)Xn,Xn+2=(1+g)Xn+1,...,Xn+k=(1+g)Xn+k-1)) ``` How would I do this in R? I have tried looking up the R literature on lagged variables but could not find a simple example which does what is required. I look forward to any guidance that can be given.
Lagged Variables in R
CC BY-SA 3.0
null
2011-04-28T16:51:00.163
2011-04-28T18:47:02.267
2011-04-28T18:12:11.967
3911
4375
[ "r", "time-series", "forecasting" ]
10110
1
null
null
4
1658
100 periods have been collected from a 3 dimensional periodic signal. The wavelength slightly varies. The noise of the wavelength follows Gaussian distribution with zero mean. A good estimate of the wavelength is known, that is not an issue here. The noise of the amplitude may not be Gaussian and may be contaminated with outliers. How can I compute a single period that approximates 'best' all of the collected 100 periods? I have no idea how time-series models work. Are they prepared for varying wavelengths? Can they handle non-smooth true signals? If a time-series model is fitted, can I compute a 'best estimate' for a single period? How? A related question is [this](https://stackoverflow.com/q/2572444/341970). Speed is not an issue in my case. Processing is done off-line, after all periods have been collected. Origin of the problem: I am measuring acceleration during human steps at 200 Hz. After that I am trying to double integrate the data to get the vertical displacement of the center of gravity. Of course the noise introduces a HUGE error when you integrate twice. I would like to exploit periodicity to reduce this noise. Here is a crude graph of the actual data (y: acceleration in g, x: time in second) of 6 steps corresponding to 3 periods (1 left and 1 right step is a period): ![Human steps](https://i.stack.imgur.com/44q8d.png) My interest is now purely theoretical, as [http://jap.physiology.org/content/39/1/174.abstract](http://jap.physiology.org/content/39/1/174.abstract) gives a pretty good recipe what to do. It does not address periodicity. Note: I have asked [this question on stackoverflow](https://stackoverflow.com/q/5702974/341970) but it seems to be off-topic there.
How to exploit periodicity to reduce noise of a signal?
CC BY-SA 3.0
null
2011-04-28T16:59:06.690
2011-05-01T23:30:01.290
2017-05-23T12:39:26.143
-1
4240
[ "r", "time-series", "autocorrelation", "arima", "kalman-filter" ]
10111
1
11195
null
5
10490
I would like to compute the p-value of my two-way ANOVA. The score that I am using to detect significant samples is the eta score which is computed as SS(between)/SS(total). In many sites I saw that F=Var(between)/Var(within). I want to know if I can consider the eta score as the F-statistic value, and then compute the p-value? What if I want to compute the p-value for other scores such as eta-partial, or omega? Is it meaningful if I calculate the p-value for them? Or is it only meaningful for the ratio Var(between)/Var(within)? What difference does it make if I use SS(between)/SS(within) instead of Var? The formula that I am using to calculate p-value is: ``` pvalue=-log10(betai(0.5 * df2, 0.5* df1, df2 / (df2 + df1 * eta))); ``` based on the book Numerical Recipes in C, where df1 and df2 are degrees of freedom. Thanks for your help.
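One thing worth checking about the incomplete-beta formula quoted above: it gives the upper-tail F probability only when its argument is the F statistic itself, not eta. The identity behind it can be verified against SciPy (a sketch with arbitrary degrees of freedom and statistic value, not the OP's data):

```python
from scipy import stats, special

# P(F > f) for an F(df1, df2) distribution equals the regularized
# incomplete beta function I_x(df2/2, df1/2) at x = df2/(df2 + df1*f),
# which is the same form as the Numerical Recipes betai expression.
df1, df2, f = 4.0, 20.0, 3.0

tail_from_beta = special.betainc(df2 / 2, df1 / 2, df2 / (df2 + df1 * f))
tail_from_f = stats.f.sf(f, df1, df2)
print(tail_from_beta, tail_from_f)
```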
Calculating p-value for a two-way ANOVA
CC BY-SA 3.0
null
2011-04-28T17:14:11.820
2011-05-24T14:34:40.337
2011-04-28T18:34:07.647
930
2885
[ "anova", "p-value" ]
10112
2
null
10110
3
null
I'm not sure I totally understand what would be best, but I believe that looking at Dynamic Linear Models, State Space modeling, and the Kalman Filter would be helpful. In R, the package `dlm` is fairly accessible. Edit: Describe DLM's simply, eh? One explanation I've seen is that DLMs are regression where your coefficients are allowed to change with time. Doesn't really give me much intuition. So, here is my I-realize-I-don't-understand-DLMs-well-enough-to-do-this-well-but-you-asked answer, which I hope others will correct as necessary. Let me use a situation that's similar to how they were invented... Say you were controlling a remotely-piloted vehicle. You could summarize the vehicle's actual state (speed, direction, altitude, fuel, etc, etc) as a vector in a state space. You can't directly observe the actual state, but you do have sensors that observe linear combinations of the actual state, with some (gaussian) noise added in. That is, your sensors are not perfect. Further, the vehicle has a program that determines how it changes states, as a linear combination of the current state, also with some (gaussian) noise. That is, your controls are not perfect. Say you want to know the most likely and most accurate path possible for the vehicle -- that is, the states it actually went through: how it actually moved. What you observe is noisy, but you can use a Kalman filter on this system you model and the filter models the variances in the system and outputs the most likely actual states. At each time step, it forecasts what it believes the state will be, then observes what the sensors say, calculates the variances of the different parts, and comes up with a final estimate of the state as a weighted sum of the forecast and the sensor readings. Then the process is repeated for the next time step. The whole concept is basically a continuous Hidden Markov Model, if you're familiar with those. 
The model is several equations which involve several matrices that are multiplied together, and you can add columns to the matrix to reflect trend, seasonal, and other types of components of the time series. The R package `dlm` makes it particularly easy to define and combine components.
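To make the forecast-then-update recursion concrete, here is a minimal one-dimensional sketch (a "local level" special case: the hidden state is a constant level, the sensor adds Gaussian noise; all numbers are made up for illustration):

```python
import random

random.seed(42)

# True hidden level and noisy sensor readings.
true_level = 5.0
obs = [true_level + random.gauss(0.0, 1.0) for _ in range(200)]

# One-dimensional Kalman filter for a constant level:
# the forecast is the previous estimate, and the update blends the
# forecast with the new observation, weighted by the Kalman gain.
est, var = 0.0, 1e6   # diffuse initial state (we know almost nothing)
obs_var = 1.0         # sensor noise variance, assumed known
for y in obs:
    gain = var / (var + obs_var)
    est = est + gain * (y - est)
    var = (1.0 - gain) * var

print(est, var)
```

With a static state like this, the filter collapses to a running mean with a shrinking variance; the interesting behavior of DLMs comes from letting the state evolve between observations, which adds a variance-inflation step to the forecast.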
null
CC BY-SA 3.0
null
2011-04-28T17:59:25.397
2011-04-30T22:52:56.120
2011-04-30T22:52:56.120
1764
1764
null
10113
2
null
9850
0
null
I'm not a python expert, but it is extremely helpful to plot the 1st 2 principal components against each other on the x,y axes. Not sure which packages you are using, but here is a sample link: [http://pyrorobotics.org/?page=PyroModuleAnalysis](http://pyrorobotics.org/?page=PyroModuleAnalysis)
null
CC BY-SA 3.0
null
2011-04-28T18:06:58.480
2011-04-28T18:06:58.480
null
null
3489
null
10114
2
null
10104
3
null
Many biological measurements take positive values only, and for such variables a useful rule of thumb applies: if $\text{mean} < 2\cdot\text{SD}$ then the distribution is not normal. Explanation: for normally distributed values the majority of values lie in the interval $[\text{mean}\pm2\cdot\text{SD}]$, so a normal distribution with $\text{mean} < 2\cdot\text{SD}$ would have too large a proportion of negative values; the normal approximation is poor. Of course, be cautious with this as with any other rule of thumb.
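The reasoning is easy to check numerically: a normal fit with $\text{mean} < 2\cdot\text{SD}$ implies a non-negligible proportion of negative values, which a positive-only variable cannot have (a sketch; the two parameter choices are illustrative, not from any data set):

```python
from statistics import NormalDist

# Proportion of negative values implied by a normal fit.
neg_if_violated = NormalDist(mu=1.5, sigma=1.0).cdf(0.0)  # mean < 2*SD
neg_if_ok = NormalDist(mu=4.0, sigma=1.0).cdf(0.0)        # mean = 4*SD

print(round(neg_if_violated, 4), neg_if_ok)
```

Here the violating fit implies roughly 6.7% negative values, while the mean = 4·SD fit implies essentially none.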
null
CC BY-SA 3.0
null
2011-04-28T18:23:22.037
2011-04-28T18:23:22.037
null
null
3911
null
10116
2
null
10109
2
null
Does this do what you want?

```
x1 <- 1:10
g <- .1
x2 <- tail(x1, 1) * cumprod(rep(1 + g, 5))
c(x1, x2)
```

(The projected values are seeded from `tail(x1, 1)` but do not repeat it, so the combined vector is `X1, ..., Xn, (1+g)Xn, (1+g)^2 Xn, ...` as required.)
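For comparison, the same projection written out in Python (a sketch using the example's numbers — actuals 1..10, growth rate 0.1, five projected periods):

```python
# Extend a series with k projected values growing at rate g.
x1 = list(range(1, 11))   # actuals X1..Xn
g, k = 0.1, 5             # growth rate and forecast horizon

proj = []
last = x1[-1]
for _ in range(k):
    last *= 1 + g         # X_{n+1} = (1+g) * X_n, and so on
    proj.append(last)

x = x1 + proj
print(x)
```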
null
CC BY-SA 3.0
null
2011-04-28T18:47:02.267
2011-04-28T18:47:02.267
null
null
3542
null
10117
1
null
null
4
821
I sincerely apologize if there is another thread already that will answer this question. I'm so incredibly out of my league here that I don't even know what keywords to search for :-). I'm a computer programmer by trade, and while I have a basic background in math, statistics was never really my cup of tea. I currently work at a school and just finished developing a basic set of tools to help automatically collect and analyze data on our students' behaviors (this is a school for children with autism and other disabilities). So, we have a couple of years' worth of data for things like: given Billy, how frequently did he have Aggressions, Self-Injurious Behaviors, Drop, etc. Probably 6 - 10 "inputs" (I think that's the correct term) per student. We'll be adding more as well in the future. What I'm curious about is this: Are there any beginning tutorials out there that might show me some interesting things to do with this data (besides just graphing it)? For example, it would be interesting to be able to predict when Billy is likely to have a long string of aggressions given that these `x` other factors have been increasing lately. Or, there is an increasing trend of this behavior which is way out of whack with its previous values, and that should raise a big red flag. I've been doing some basic Googling and this seems to be in the realm of "Statistical Data Mining" - some brief tutorials were found on [Andrew Moore's site](http://www.autonlab.org/tutorials/), but these just aren't detailed enough for me to really learn anything. I realize that this is akin to someone walking into Stack Overflow and saying "Hey, tell me how to write the next Facebook." So, if these are the sorts of things that I can only do with years and years of statistical experience, just let me know and I'll be on my way. 
However, I also know that while someone couldn't walk into SO and write the next Facebook in a few weeks, we could probably point them in the right direction to create a basic site for their dad's business, even if it would be a pretty basic site. Likewise, I'm not looking to create a genius AI capable of predicting student behavior down to the millisecond; rather, I'm just curious if there's any low-hanging fruit that a guy like me could pick up in a few weeks or months of diligent reading that might make for some interesting uses of this new data we've unlocked. I'm open to online tutorials, books, textbooks, videos, open source programs and libraries, etc.
Beginner to prediction/statistics: Where do I start?
CC BY-SA 3.0
null
2011-04-28T19:57:29.240
2011-04-29T00:08:49.870
2011-04-28T22:55:58.053
2970
4379
[ "forecasting", "panel-data" ]
10118
2
null
10100
3
null
It can help to survey the scene: that is, to understand the consequences for your decision of the possible states of affairs. The missing information can be described by two integers: $m$, the number of additional sessions for "PageVariation2", and $k$, the number of additional conversions for PageVariation2. For each possible combination of $m$ and $k$ ($k = 0, 1, 2, 3$ and $0 \le m \le 200-k$) we can compute what the p-value would be. ![Plot of p-values vs. m by k](https://i.stack.imgur.com/TxsNA.png) In this plot the graph of the p-value versus $m$ for $k=3$ is shown in blue (the highest one), for $k=2$ in red, for $k=1$ in orange, and for $k=0$ in green. For reference, a (two-sided) p-value of 0.05 is shown with a black line. It is evident that the p-values can range tremendously from indicating strong significance (for $m=197$ and $k=0$) to indicating no significance at all (for small $m$ and $k = 1, 2, 3$). This tells us that your assumptions about the missingness completely determine the significance or lack of significance of the results. If you think it highly likely that $m$ is large--so that the majority of missing results are for PageVariation2--and that $k$ is small--so that the majority of missing conversions are for PageVariation1--then you can conclude the differences in conversion rates are not likely due to chance and proceed accordingly. If you do not have sufficient information to justify these assumptions, then the data do not provide strong evidence for a significant difference. That is, what you have observed could be the result of random variation rather than due to some intrinsic difference between the two variations.
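The p-value surface described above can be reproduced along these lines (a sketch; I assume a two-sided unpooled two-proportion z-test, which may differ slightly in the tails from the exact test behind the figure):

```python
from math import sqrt
from statistics import NormalDist

def p_value(m, k):
    """Two-sided unpooled z-test after assigning the m missing
    sessions and k missing conversions to PageVariation2."""
    x1, n1 = 20, 1000
    x2, n2 = 8 + k, 800 + m
    p1, p2 = x1 / n1, x2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = (p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# The two extremes discussed above:
print(p_value(197, 0))  # all missing sessions, no missing conversions
print(p_value(0, 3))    # all missing conversions, no missing sessions
```

The first extreme is comfortably below 0.05 while the second is far above it, confirming that the missingness assumptions alone decide the verdict.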
null
CC BY-SA 3.0
null
2011-04-28T20:53:13.500
2011-04-28T20:53:13.500
null
null
919
null
10120
2
null
10117
1
null
"anomaly detection" -- This is usually called detection of outliers. You can find many references via googling. "can more aggressions predict more self injurious behaviors" -- You can try one of the basic things: correlation between different variables or features (you called them "inputs"). The data will be easier to analyze if they will be in the format, say, "number of xxx incidents per week", i.e., if your variables will be measured on the same timescale.
null
CC BY-SA 3.0
null
2011-04-28T21:23:38.963
2011-04-28T21:31:47.557
2011-04-28T21:31:47.557
4337
4337
null
10121
1
null
null
11
2634
I would like to generate a random correlation matrix such that the distribution of its off-diagonal elements looks approximately like normal. How can I do it? The motivation is this. For a set of $n$ time series data, the correlation distribution often looks quite close to normal. I would like to generate many "normal" correlation matrices to represent the general situation and use them to calculate risk number. --- I know one method, but the resulting standard deviation (of the distribution of the off-diagonal elements) is too small for my purpose: generate $n$ uniform or normal random rows of a matrix $\mathbf X$, standardize the rows (subtract the mean, divide by standard deviation), then the sample correlation matrix $\frac{1}{n-1}\mathbf X \mathbf X^\top$ has normally distributed off-diagonal entries [Update after comments: standard deviation will be $\sim n^{-1/2}$]. Can anyone suggest a better method with which I can control the standard deviation?
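The scaling noted in the update — that the off-diagonal standard deviation of a sample correlation matrix of independent series behaves like the inverse square root of the series length — is easy to check numerically (a sketch; dimensions and names are mine):

```python
import numpy as np

rng = np.random.default_rng(1)

# 50 independent series of length T: the sample correlations are
# noise around zero with standard deviation roughly 1/sqrt(T).
p, T = 50, 400
x = rng.standard_normal((p, T))
c = np.corrcoef(x)

off_diag = c[np.triu_indices(p, k=1)]
print(off_diag.std(), 1 / np.sqrt(T))
```

This is why the method in the question cannot give a large spread: to widen the off-diagonal distribution one has to shorten the series (losing positive-definiteness guarantees for p > T) or inject genuine common factors.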
How to generate random correlation matrix that has approximately normally distributed off-diagonal entries with given standard deviation?
CC BY-SA 3.0
null
2011-04-28T22:18:23.307
2015-11-25T10:50:43.863
2015-11-25T10:50:43.863
28666
4383
[ "normal-distribution", "random-generation", "correlation-matrix" ]
10122
1
null
null
4
2254
I'm looking for a clustering implementation with the following features:

- Support for high-dimensional data. I currently have approximately 160,000 dimensions/features.
- Ability to handle sparse matrices. That is, not only to read sparse matrices, but also to operate on them in that format.
- Proper display of the centroid for each cluster.

I've tested some packages:

- Rapidminer, which seems to be a memory eater, I suppose because although it can read a sparse matrix, it cannot work with it in sparse form.
- Cluto, which is very fast with low memory consumption, but it is not able to show the centroid elements properly (source code not available). It shows descriptive features together with a percentage of how much each feature contributes to the average similarity, but there is no clear info (here is a question about that, with no clear answer) about how that is calculated; I also have clusters showing 0.0%, and it is not clear to me whether this means the program cannot show higher precision or that the feature contributes nothing to the average similarity.

I appreciate any comment or answer about it.
Looking for sparse and high-dimensional clustering implementation
CC BY-SA 3.0
null
2011-04-28T22:22:09.967
2012-10-02T15:31:28.983
2020-06-11T14:32:37.003
-1
4382
[ "clustering", "algorithms", "large-data" ]
10123
2
null
1737
1
null
I personally like `cast()`, from the reshape package, because of its simplicity:

```
library(reshape)
cast(melt(tips), sex ~ smoker | variable, c(sd, mean, length))
```
null
CC BY-SA 3.0
null
2011-04-28T22:36:25.187
2011-04-28T22:36:25.187
null
null
776
null
10124
2
null
10117
1
null
It sounds like you have some really wonderful data to work with! One person suggested trying analyses in R, and that's definitely a powerful option. With your background in programming, it may be well-suited for you. I personally prefer a program like SPSS, which is built specifically for user-friendly(ish) analysis of social science data. If you're new to the program, I'd suggest Julie Pallant's "SPSS Survival Manual," which has basic how-to instructions for most common analyses. Regardless of the software, it sounds like using correlations, regressions, and some time-series work could help you investigate your variables. If it seems overwhelming to learn all the stats in a short period of time, I might suggest advertising the fact that you have data to work with. I'm certain that psychology undergrads or grad students at a nearby university would jump at the chance to help you do analysis and possibly publish any useful results. Best of luck!
null
CC BY-SA 3.0
null
2011-04-28T22:38:17.013
2011-04-28T22:52:08.073
2011-04-28T22:52:08.073
2970
4384
null
10125
2
null
10053
0
null
Try plotting it. You'll get a better idea of what's going on. As mentioned before, you're getting NaN because you're passing 0 frequencies to chisq.test()

```
test.gof <- goodfit(counts, type="binomial",
                    par=list(size=length(counts), prob=0.5))
plot(test.gof)  ## doesn't look so good

test.gof <- goodfit(counts, type="binomial",
                    par=list(size=length(counts)))
plot(test.gof)  ## looks a little more clear
```
null
CC BY-SA 3.0
null
2011-04-28T22:44:44.650
2011-04-28T22:44:44.650
null
null
776
null
10126
1
10132
null
4
1291
I ran a GEE model, with a dependent variable of "percent of total students with an unexcused absence," using a binomial family. My dependent variable is basically a proportion, with range from 0 to 1 to obey the rules of the binomial family. I understand that the difference between the non-event/event 0-1 is "zero percent of students with an unexcused absence" and "100% of students with an unexcused absence." Questions: - How can I interpret the Odds Ratio for this in a way that makes policy-level, real-world sense? An interpretation of 1-unit change in independent variable X would be e.g. "a school with a 1-unit increase in X has 0.9 times the odds of having 100% of students unexcused from class at least once, compared to a school without the increase in X." Seems clunky! You may ask why I used a binomial family rather than a Gaussian family, but according to "Comparison of Logistic Regression and Linear Regression in Modeling Percentage Data" by Zhao, Chen and Schaffner (2001), logistic regression can and should be used for any models with a dependent variable that's modeled as a percentage for various reasons. If there's no other accurate way to interpret this coefficient through logistic regression and ORs, does anyone have any suggestions about how to model the data to be better interpretable?
Interpretation of odds ratio when outcome is a percentage
CC BY-SA 3.0
null
2011-04-28T22:57:24.707
2011-04-29T10:09:11.960
2011-04-28T23:18:41.013
null
3309
[ "logistic" ]
10127
2
null
10121
1
null
You might be interested in some of the code at the following link: [Correlation and Co-integration](https://quant.stackexchange.com/questions/1027/correlation-and-cointegration-similarities-differences-relationships)
null
CC BY-SA 3.0
null
2011-04-28T23:12:04.403
2011-04-28T23:12:04.403
2017-04-13T12:46:23.127
-1
2775
null
10128
2
null
6698
1
null
Although this question has already been answered, a useful thing to remember for more general situations is the law of iterated expectations. Note that independence does not hold for the predictions even if the "true process" is independent. This is because the estimates are not independent, unless $Z^{T}Z$ and $Z_{new}Z_{new}^{T}$ are both diagonal ("new" denoting the predictions).

So if you let $\hat{Y}_{ti}$ denote the estimated monthly values in year $t$ for month $i$, and $\hat{X}_{t}$ denote the estimated annual value, you have:

$$\hat{X}_{t}=\sum_{i=1}^{12}\hat{Y}_{ti}$$
$$Var(\hat{X}_{t})=E[Var(\hat{X}_{t}|\hat{Y}_{t,1},\dots,\hat{Y}_{t,12})]+Var[E(\hat{X}_{t}|\hat{Y}_{t,1},\dots,\hat{Y}_{t,12})]$$

(Not sure if it should be an average or a total; if an average, then divide my final result for the standard error by $12$ and the variance by $144$.)

Plugging one into the other we get:

$$Var(\hat{X}_{t})=E[Var(\sum_{i=1}^{12}\hat{Y}_{ti}|\hat{Y}_{t,1},\dots,\hat{Y}_{t,12})]+Var[E(\sum_{i=1}^{12}\hat{Y}_{ti}|\hat{Y}_{t,1},\dots,\hat{Y}_{t,12})]$$
$$=Var[\sum_{i=1}^{12}\hat{Y}_{ti}]=\sum_{i=1}^{12}\sum_{j=1}^{12}Cov(\hat{Y}_{tj},\hat{Y}_{ti})$$

Now when you condition on something, it is a constant, which is why the "inner" variance term disappears. Now you have a regression model for $Y_{ti}$, so we know that

$$\begin{array}{l l} \hat{Y}_{ti}=Z_{ti,new}^{T}\hat{\beta} & Cov(\hat{Y}_{ti},\hat{Y}_{sj})=s^{2}Z_{ti,new}^{T}(Z^{T}Z)^{-1}Z_{sj,new} \\ \hat{\beta}=(Z^{T}Z)^{-1}Z^{T}Y & s^{2}=\frac{1}{n-dim(\hat{\beta})}(Y-Z\hat{\beta})^{T}(Y-Z\hat{\beta}) \end{array}$$

Where $Z$ and $Y$ are the matrix and vector that you used to actually fit the regression (I am assuming OLS regression here), and $dim(\hat{\beta})$ is the number of betas that you have fitted (including the intercept). $Z_{ti,new}$ is a new set of regressor values to be used in the prediction. Note that for prediction, your estimates of $Y$ are not independent, even if the "true values" are. 
So the square root of $N$ rule doesn't apply, unless your $Z$ variables are orthogonal so that $(Z^{T}Z)^{-1}=I$ and $Z_{ti}^{T}Z_{sj}=0$ when $s\neq t$ or $i\neq j$. Plugging this into the variance formula for $\hat{X}_{t}$ we get:

$$Var(\hat{X}_{t})=\sum_{i=1}^{12}\sum_{j=1}^{12}s^{2}Z_{ti,new}^{T}(Z^{T}Z)^{-1}Z_{tj,new}=s^{2}J^{T}Z_{t,new}(Z^{T}Z)^{-1}Z_{t,new}^{T}J$$

Where $J$ is a column of 12 ones, and $Z_{t,new}$ is the twelve $Z_{ti}^{T}$ rows for prediction stacked on top of each other, of dimension $12\times dim(\hat{\beta})$. But note that we also have the "true" process $X_{t}$, assumed to be governed by the regression model, so we apply the law of iterated expectations again, but conditioning on $\hat{X}_{t}$ this time:

$$Var(X_{t})=E[Var(X_{t}|\hat{X}_{t})]+Var[E(X_{t}|\hat{X}_{t})]=E[Var(\sum_{i=1}^{12}Y_{ti}|\hat{Y}_{t,1},\dots,\hat{Y}_{t,12})]+Var[\hat{X}_{t}]$$
$$=E[\sum_{i=1}^{12}Var(Y_{ti})]+Var[\hat{X}_{t}]=12s^{2}+s^{2}J^{T}Z_{t,new}(Z^{T}Z)^{-1}Z_{t,new}^{T}J$$

I probably should write "approximately" because this is a "plug-in" of $s^{2}$ for the "true variance" $\sigma^{2}$ - however I don't know of many people who don't just do this. It is also justified on Bayesian grounds as the proper way to account for uncertainty in estimating $\sigma^{2}$ for the normal model, plus it is an unbiased estimator on frequentist grounds. So the annual standard error should really be

$$s\sqrt{12+J^{T}Z_{t,new}(Z^{T}Z)^{-1}Z_{t,new}^{T}J}$$

So what the $\sqrt{12}$ rule is essentially doing here is ignoring the uncertainty in estimating the betas. If you already estimate the betas pretty well, then this will make little difference to the $\sqrt{12}$ rule - probably something like $\sqrt{13}$. If the betas are not estimated well, or you are close to multi-collinearity, then the extra term may be important.
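The final expression is easy to sanity-check numerically. Below is a sketch in Python/NumPy on a simulated design (not the OP's data); it verifies that the corrected annual standard error exceeds the plain $s\sqrt{12}$, as it must, since the extra term is a nonnegative quadratic form:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated monthly regression: n observations, p predictors (incl. intercept).
n, p = 120, 4
Z = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
beta = np.array([1.0, 0.5, -0.3, 0.2])
y = Z @ beta + rng.standard_normal(n)

# OLS fit and residual variance estimate s^2.
beta_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
resid = y - Z @ beta_hat
s2 = resid @ resid / (n - p)

# 12 new months to predict, stacked as Z_new (12 x p), and J a column of ones.
Z_new = np.column_stack([np.ones(12), rng.standard_normal((12, p - 1))])
J = np.ones(12)

# se of the annual total: s * sqrt(12 + J' Z_new (Z'Z)^{-1} Z_new' J)
quad = J @ Z_new @ np.linalg.solve(Z.T @ Z, Z_new.T @ J)
se_annual = np.sqrt(s2 * (12 + quad))
se_naive = np.sqrt(s2 * 12)

print(se_annual, se_naive)
```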
null
CC BY-SA 3.0
null
2011-04-28T23:29:00.073
2011-04-28T23:46:12.220
2011-04-28T23:46:12.220
2392
2392
null
10130
3
null
null
0
null
null
CC BY-SA 3.0
null
2011-04-29T00:18:02.670
2011-04-29T00:18:02.670
2011-04-29T00:18:02.670
-1
-1
null
10131
3
null
null
0
null
null
CC BY-SA 3.0
null
2011-04-29T00:18:02.670
2011-04-29T00:18:02.670
2011-04-29T00:18:02.670
-1
-1
null
10132
2
null
10126
3
null
In a school whose X value is 1 unit higher, a randomly picked student has 0.9 times the odds of having been unexcused from class at least once, compared to a student in another school (assuming that all other student and school characteristics in the model are the same).
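As a quick numeric illustration (the baseline probability here is hypothetical, not from the question's model): an odds ratio of 0.9 multiplies the odds, not the probability.

```python
odds_ratio = 0.9                   # exp(beta) for a 1-unit increase in school-level X
p_base = 0.5                       # hypothetical baseline probability of an unexcused absence
odds_base = p_base / (1 - p_base)  # 1.0
odds_new = odds_base * odds_ratio  # 0.9
p_new = odds_new / (1 + odds_new)  # 0.9 / 1.9 ~ 0.474, not 0.9 * 0.5 = 0.45
print(round(p_new, 3))
```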
null
CC BY-SA 3.0
null
2011-04-29T01:26:50.757
2011-04-29T10:09:11.960
2011-04-29T10:09:11.960
3911
3911
null
10133
1
null
null
9
513
Question for the experienced data miners out there: ### Given this scenario: - There are N shopping carts - Each shopping cart is filled with an arbitrary number of M items from an infinitely large set (with the current amount of data I have, that arbitrary number can hit numbers around 1500) - The order in which each cart is filled is significant - There are other attributes such as geolocation of shopper, but these can be (and currently are) thrown out in favor of making the algorithm simpler ### I need to: - At a particular point in time, given only the ordered sets of items in each cart, identify 'similar' carts without prior knowledge of class labels - After a certain amount of data has been collected and a drudge works through the data and assigns labels, create a classifier that can work quickly with future unseen data ### Initial approach: - So far, my approach has been focused on the first point. My method uses k-means clustering and handles the sequential nature of the data by using a distance matrix generated by calculating the Hamming distance between carts. In this way, [apple, banana, pear] is different from [pear, apple, banana], but [apple, banana, pear] is less different from [apple, banana, antelope]. The appropriate value of k is determined through investigation of the silhouette coefficient. The clusters generated from this seem to make sense, but the runtime of my method will definitely be prohibitive as my dataset scales. ### Question: - Would anyone happen to have any suggestions for a novice data miner for this problem? ### Edits with more info: - I've found suggestions that consider using n-gram features and comparing them pair-wise. A concern I have about this is order: will the order of the sequences be maintained if n-gram models are used? Also, I see performance issues being a larger possibility with this method.
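A minimal sketch (Python; the helper name is made up) of the positional Hamming-style distance described in the initial approach, reproducing the apple/banana/pear example:

```python
from itertools import zip_longest

def cart_distance(a, b):
    """Positional (Hamming-style) distance between two ordered carts;
    positions beyond the shorter cart count as mismatches."""
    return sum(x != y for x, y in zip_longest(a, b))

carts = [
    ["apple", "banana", "pear"],
    ["pear", "apple", "banana"],
    ["apple", "banana", "antelope"],
]

# Pairwise distance matrix, as it would be fed to k-medoids / hierarchical clustering
D = [[cart_distance(a, b) for b in carts] for a in carts]
print(D)
```

As in the question: the reordered cart is maximally distant (3 mismatched positions), while the one-item substitution is close (distance 1). The pairwise matrix is the $O(N^2)$ bottleneck mentioned, which is where n-gram or other feature-based representations would help at scale.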
Data mining approaches for analysis of sequential data with nominal attributes
CC BY-SA 3.0
null
2011-04-29T01:57:42.573
2011-05-01T20:20:58.663
2011-04-29T17:04:04.173
4386
4386
[ "clustering", "classification", "data-mining", "ordinal-data" ]
10135
2
null
10110
2
null
You may want to check out [Functional Data Analysis](http://www.psych.mcgill.ca/misc/fda/), which permits characterization of functional (temporal) phenomena with noise in the wavelength, amplitude, etc.
null
CC BY-SA 3.0
null
2011-04-29T05:35:19.293
2011-04-29T05:35:19.293
null
null
364
null
10136
2
null
10020
8
null
### Suggestions - You could perform individual multiple regressions for each type of predictor, and compare, across the regressions, adjusted r-square, generalised r-square, or some other parsimony-adjusted measure of variance explained. - You could alternatively explore the general literature on variable importance (see here for a discussion with links). This would encourage a focus on the importance of individual predictors. - In some situations hierarchical regression may provide a useful framework. You would enter one type of variable in one block (e.g., cognitive variables), and in the second block another type (e.g., social variables). This would help answer the question of whether one type of variable predicts over and above another type. - As a side examination, you could run a factor analysis on the predictor variables to examine whether the correlations between predictor variables map on to the assignment of variables to types. ### Caveats - Types of variables such as cognitive, social, and behavioural are broad classes of variables. A given study will always include only a subset of the possible variables, and typically such a subset is small relative to the possible variables. Furthermore, the measured variables may not be the most reliable or valid means of measuring the intended construct. Thus, you need to be careful when drawing the broader inference about the relative importance of a given type of variable over and beyond what was actually measured. - You also need to consider any bias in the way that the dependent variable was measured. Particularly in psychological studies, there is a tendency for self-report measures to correlate well with self-report, ability with ability, other-report with other-report, and so on. The issue is that the mode of measurement has a large effect over and beyond the actual construct of interest. 
Thus, if the dependent variable is measured in a particular way (e.g., self-report), then don't over-interpret larger correlations with one type of predictor if that type also uses self-report.
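The hierarchical-regression suggestion can be sketched as follows (Python/NumPy on simulated data; the block labels, effect sizes, and sample size are all placeholder assumptions): fit one block of predictors, then both blocks, and compare adjusted r-square.

```python
import numpy as np

def adjusted_r2(y, X):
    """OLS fit and adjusted r-square; X must include an intercept column."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, p = X.shape
    r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p)

rng = np.random.default_rng(1)
n = 200
cog = rng.normal(size=(n, 2))                 # block 1: "cognitive" predictors
soc = rng.normal(size=(n, 2))                 # block 2: "social" predictors
y = cog @ [0.6, 0.4] + soc @ [0.3, 0.2] + rng.normal(size=n)

ones = np.ones((n, 1))
block1 = adjusted_r2(y, np.hstack([ones, cog]))
block2 = adjusted_r2(y, np.hstack([ones, cog, soc]))
print(block1, block2 - block1)   # increment attributable to the social block
```

The increment in adjusted r-square for block 2 is one answer to "does this type of variable predict over and above the other?"; note that in a real study the increment depends on entry order when the blocks are correlated.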
null
CC BY-SA 3.0
null
2011-04-29T06:08:36.737
2011-04-30T03:11:51.177
2011-04-30T03:11:51.177
183
183
null
10137
1
null
null
6
15859
Related to an [earlier question on power analysis for multiple regression](https://stats.stackexchange.com/questions/10079/rules-of-thumb-for-minimum-sample-size-for-multiple-regression), a social science researcher asked me about power analysis for [moderator regression](http://www.davidakenny.net/cm/moderation.htm) (i.e., an interaction effect). The researcher asked me: > I seem to recall that power of tests for moderation with two continuous predictor variables is low - do you know the minimum sample size requirement in this context? From the context, it can further be assumed that this is an observational study (not an experimental study) and that the dependent variable is continuous. ### Question - What advice would you give regarding calculating the minimum sample size required? - Are there any caveats that you would present?
Power analysis for moderator effect in regression with two continuous predictors
CC BY-SA 3.0
null
2011-04-29T06:24:13.510
2021-07-06T21:01:03.487
2017-04-13T12:44:25.243
-1
183
[ "regression", "sample-size", "statistical-power", "interaction" ]
10138
2
null
9930
3
null
I think that variable z in the question is a suppressor variable. I suggest having a look at: Tzelgov, J., & Henik, A. (1991). Suppression situations in psychological research: Definitions, implications, and applications. Psychological Bulletin, 109(3), 524-536. [http://doi.apa.org/psycinfo/1991-20289-001](http://doi.apa.org/psycinfo/1991-20289-001) See also: [http://dionysus.psych.wisc.edu/lit/articles/TzelgovJ1991a.pdf](http://dionysus.psych.wisc.edu/lit/articles/TzelgovJ1991a.pdf) HTH, dror
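A small simulation makes suppression concrete (a generic sketch, not taken from the cited papers; all coefficients are made up): z is essentially uncorrelated with y, yet adding it alongside x raises R², because it soaks up the part of x that is irrelevant to y.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# z is (nearly) uncorrelated with y, but adding it improves prediction,
# because it removes y-irrelevant variance from x.
z = rng.normal(size=n)
x = 0.7 * z + rng.normal(size=n)        # x shares variance with z
y = x - 0.7 * z + rng.normal(size=n)    # z's contribution to x is pure noise for y

def r_squared(y, preds):
    X = np.column_stack([np.ones(len(y))] + preds)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_x = r_squared(y, [x])       # x alone:   ~0.34
r2_xz = r_squared(y, [x, z])   # x plus z:  ~0.50
print(r2_x, r2_xz, np.corrcoef(y, z)[0, 1])
```

Despite its near-zero correlation with y, z gets a substantial (negative) regression weight and the model improves, which is the defining signature of a suppressor.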
null
CC BY-SA 3.0
null
2011-04-29T07:20:04.600
2011-04-29T07:20:04.600
null
null
4390
null
10140
2
null
10137
6
null
If I had to do this, I would use a simulation approach. This would involve making assumptions about the regression coefficients, predictor distributions, correlation between predictors, and error variance (with help from the researcher), generating data sets using the assumed model, and seeing what proportion of these give a significant p-value for the interaction. Then use trial and error to find the minimum sample size giving the required power.
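The trial-and-error simulation described above might look like this (a Python/NumPy sketch; the main-effect coefficients, predictor correlation, and unit error variance are placeholder assumptions the researcher would supply):

```python
import numpy as np

def interaction_power(n, b_inter, n_sims=500, rho=0.3, seed=0):
    """Monte-Carlo power for the x1*x2 term in an OLS moderator regression.
    Assumed model: y = 0.3*x1 + 0.3*x2 + b_inter*x1*x2 + N(0, 1),
    with x1, x2 standard normal, correlation rho."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        x = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
        x1, x2 = x[:, 0], x[:, 1]
        y = 0.3 * x1 + 0.3 * x2 + b_inter * x1 * x2 + rng.normal(size=n)
        X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
        XtX_inv = np.linalg.inv(X.T @ X)
        beta = XtX_inv @ X.T @ y
        resid = y - X @ beta
        s2 = resid @ resid / (n - 4)
        se = np.sqrt(s2 * XtX_inv[3, 3])
        if abs(beta[3] / se) > 1.96:     # normal approximation to the t test
            hits += 1
    return hits / n_sims

# Trial and error over n until the estimated power is acceptable:
p100 = interaction_power(n=100, b_inter=0.2)
p400 = interaction_power(n=400, b_inter=0.2)
print(p100, p400)
```

Wrapping this in a search over n (e.g. bisection on the smallest n with power at or above 0.8) gives the minimum sample size under the assumed effect size; rerunning with different b_inter values shows how sensitive that minimum is to the assumptions.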
null
CC BY-SA 3.0
null
2011-04-29T08:34:31.240
2011-04-29T08:34:31.240
null
null
3835
null
10141
1
null
null
9
1980
I know this question has come up before, for example in this SO [thread](https://stackoverflow.com/questions/2804001/panel-data-with-binary-dependent-variable-in-r), but maybe (hopefully) its answer has changed over time. - Is there any package in R, or an outline of how to do panel regressions with a discrete dependent variable? - Is there any other open-source package that does this and that would help with coding something in R?
Discrete choice panel models in R
CC BY-SA 3.0
null
2011-04-29T09:44:30.163
2021-06-20T16:43:19.210
2021-06-20T16:43:19.210
11887
704
[ "r", "panel-data", "choice-modeling" ]