Dataset schema:

- Id: string, length 1 to 6
- PostTypeId: string, 7 classes
- AcceptedAnswerId: string, length 1 to 6
- ParentId: string, length 1 to 6
- Score: string, length 1 to 4
- ViewCount: string, length 1 to 7
- Body: string, length 0 to 38.7k
- Title: string, length 15 to 150
- ContentLicense: string, 3 classes
- FavoriteCount: string, 3 classes
- CreationDate: string, length 23
- LastActivityDate: string, length 23
- LastEditDate: string, length 23
- LastEditorUserId: string, length 1 to 6
- OwnerUserId: string, length 1 to 6
- Tags: list
[Answer 10690 · reply to question 10676 · score 1]
The [Weka](http://www.cs.waikato.ac.nz/ml/weka/) [SVMAttributeEval](http://weka.sourceforge.net/doc/weka/attributeSelection/SVMAttributeEval.html) package allows you to do feature selection using SVM. It should be pretty easy to dump your R data frame to a csv file, import that into Weka, do the feature selection, and then pull it back into R.
[CC BY-SA 3.0 · created 2011-05-12T00:50:24.680 · last activity 2011-05-12T00:50:24.680 · owner 1876]
[Answer 10691 · reply to question 10680 · score 2]
When it comes to logic and common sense, be careful: those two are rare. In certain "discussions" you might recognize something... the point of the argument is the argument. [http://www.wired.com/wiredscience/2011/05/the-sad-reason-we-reason/](http://www.wired.com/wiredscience/2011/05/the-sad-reason-we-reason/)
[CC BY-SA 3.0 · created 2011-05-12T01:03:26.873 · last activity 2011-05-12T01:09:31.147 · last edited 2011-05-12T01:09:31.147 by user 2775 · owner 2775]
[Answer 10692 · reply to question 10664 · score 3]
This overlaps largely with what @rolando2 has already said.

### General ideas

Regardless of the software you use, here are some things that you could do:

- Compare means on each item across age groups.
- Cross-tabulate age group by item response (for each item); you are probably most interested in the proportion of each item response within an age group.

### Further considerations

- Do the four items form a scale? If so, you may want to create a composite based on the items and compare means for each age group.
- Do you wish to run statistical tests of significant group differences? If so, you'll most likely be interested in ANOVAs and follow-up tests looking at the effect of group on item means (but be aware of debates about the appropriateness of using means on ordinal items).
[CC BY-SA 3.0 · created 2011-05-12T01:55:31.253 · last activity 2011-05-12T10:58:56.187 · last edited 2011-05-12T10:58:56.187 by user 183 · owner 183]
[Question 10693 · score 1 · 203 views]
### Scenario

An industrial/organizational psychologist is interested in determining whether adding 15-minute breaks increases worker productivity. She selects a sample of $n$ workers and measures productivity (on a continuous scale) before and after introducing the intervention. The researcher runs a repeated-measures t-test.

### Question

- How can I work out whether the intervention is effective?
Title: Determining statistical significance of a repeated measures t-test
Tags: self-study, t-test
[CC BY-SA 3.0 · created 2011-05-12T02:22:37.467 · last activity 2011-05-13T04:43:39.707 · last edited 2020-06-11T14:32:37.003 by user -1 · owner 4573]
[Answer 10696 · reply to question 10687 · score 12]
There is nothing explicit in the mathematics of regression that states a causal relationship, and hence one need not interpret the slope (strength and direction) or the p-value (i.e., the probability that a relation as strong as or stronger than the observed one would occur if the relationship were zero in the population) in a causal manner.

That being said, I would say regression has a much stronger connotation that one is estimating an explicit directional relationship than does estimating the correlation between two variables. Assuming by correlation you mean [Pearson's r](http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient), it typically has no explicit causal interpretation, since the metric is symmetrical (i.e., you can switch which variable is X and which is Y and still obtain the same measure). Also, the colloquialism "correlation does not imply causation" is, I suspect, so well known that when one states two variables are correlated, the assumption is that one is not making a causal statement.

Estimated effects in [regression](http://en.wikipedia.org/wiki/Regression_analysis) analysis are not symmetrical, though, so by choosing which variable goes on the right-hand side versus the left-hand side one is making an implicit statement, unlike with correlation. I suspect one intends to make some causal statement in the vast majority of circumstances in which regression is used (inference vs. prediction aside). Even in cases of simply stating correlations, I suspect people frequently have some implied goal of causal inference in mind. Given that certain constraints are met, [correlation can imply causation](https://stats.stackexchange.com/questions/534/under-what-conditions-does-correlation-imply-causation)!
[CC BY-SA 3.0 · created 2011-05-12T05:15:58.993 · last activity 2011-05-12T05:15:58.993 · last edited 2017-04-13T12:44:36.923 by user -1 · owner 1036]
[Question 10697 · score 7 · 3842 views]
I'm studying the R package dlm. So far it seems a very powerful and flexible package, with nice programming interfaces and good documentation. I've been able to successfully use dlmMLE and dlmModARMA to estimate the parameters of an AR(1) process:

```
u <- arima.sim(list(ar = 0.3), 100)
fit <- dlmMLE(u, parm = c(0.5, sd(u)),
              build = function(x) dlmModARMA(ar = x[1], sigma2 = x[2]^2))
fit$par
```

Now I'm trying to use similar code to estimate the parameters of a simple linear regression model:

```
r <- rnorm(100)
u <- -1*r + 0.5*rnorm(100)
fit <- dlmMLE(u, parm = c(0, 1),
              build = function(x) dlmModReg(x[1]*r, FALSE, dV = x[2]^2))
fit$par
```

I expect fit$par to be close to c(-1, 0.5), but I keep getting something like

```
[1] -0.0002118851  0.4884367070
```

The coefficient -1 is not estimated correctly. Strangely, however, the variance of the noise is returned correctly. I understand that maximum-likelihood estimation might fail given bad initial values, but I observed that the likelihood function returned by dlmLL is very flat in the first coordinate. So I wonder: can such a model be estimated at all using dlm? I believe the model is "non-singular", but I'm not sure how the likelihood function is calculated inside dlm. Any hint greatly appreciated.
Title: Maximum likelihood estimation of dlmModReg
Tags: r, regression, maximum-likelihood, dlm
[CC BY-SA 3.0 · created 2011-05-12T05:31:19.257 · last activity 2011-05-13T15:26:56.350 · last edited 2011-05-12T09:22:09.627 by user 930 · owner 4575]
[Answer 10699 · reply to question 1164 · score 6]
I'm not a statistician; my experience in statistics is fairly limited. I just use robust statistics in computer vision / 3D reconstruction / pose estimation. Here is my take on the problem from the user's point of view:

First, robust statistics is used a lot in engineering and science without being called "robust statistics". A lot of people use it intuitively, arriving at it while adjusting a specific method to a real-world problem. For example, iteratively reweighted least squares and trimmed means / trimmed least squares are commonly used; the users just don't know they are using robust statistics - they are simply making a method workable for real, non-synthetic data.

Second, both "intuitive" and conscious robust statistics are practically always used where results are verifiable, or where there is a clearly visible error metric. If results obtained under a normality assumption are obviously invalid or wrong, people start tinkering with weights, trimming, or sampling, read some papers, and end up using robust estimators, whether they know the term or not. On the other hand, if the end result of the research is just some graphics and diagrams, and there is no incentive to verify the results, or if normal statistics produce results that are good enough, people just don't bother.

And last, about the usefulness of robust statistics as a theory: while the theory itself is very interesting, it does not often give any practical advantage. Most robust estimators are fairly trivial and intuitive; people often reinvent them without any statistical knowledge. Theory such as breakdown-point estimation, asymptotics, data depth, heteroskedasticity, etc., allows a deeper understanding of the data, but in most cases it's just unnecessary. One big exception is the intersection of robust statistics and compressive sensing, which has produced some new practical methods such as "cross-and-bouquet".
[CC BY-SA 3.0 · created 2011-05-12T08:11:59.857 · last activity 2011-05-12T08:11:59.857 · owner 4578]
[Question 10700 · score 1 · 4373 views · accepted answer 10761]
Does anyone know of an existing implementation of ordinal logistic regression in Excel?
Title: Excel spreadsheet for ordinal logistic regression
Tags: logistic, excel
[CC BY-SA 3.0 · created 2011-05-12T08:35:17.713 · last activity 2011-05-13T10:55:57.083 · last edited 2011-05-12T09:21:36.930 by user 930 · owner 333]
[Answer 10701 · reply to question 10676 · score 2]
For Recursive Feature Elimination (SVM-RFE), I think neither the e1071 nor the kernlab package implements it. The Weka SVMAttributeEval package is for Java, but the question, as I read it, was about R. The best way is to implement SVM-RFE yourself using e1071 and the LIBSVM library. I found a good paper on that [here](http://www.uccor.edu.ar/paginas/seminarios/Software/SVM_RFE_R_implementation.pdf).
[CC BY-SA 3.0 · created 2011-05-12T09:22:47.270 · last activity 2011-05-12T09:22:47.270 · owner 4531]
[Question 10702 · score 4 · 7699 views · accepted answer 10717]
I have a question regarding the interpretation of the resulting p-values of a two-sample Kolmogorov-Smirnov test. The aim of my analysis is to identify groups whose distribution differs from that of the totality. I used a two-sample Kolmogorov-Smirnov test in R to do so.

Sample sizes:

```
Full    = 2409
Group_1 =   25
Group_2 =   26
Group_3 =   33
Group_4 =   43
```

Dataset plots:

![Boxplots of dataset distributions](https://i.stack.imgur.com/rQqU0.png)

"Other" contains a collection of groups with fewer than 20 data points each. The resulting p-values I get when I compare each "Group" against "Full - Group" are the following:

```
Group 1: 2.6155e-002
Group 2: 2.1126e-003
Group 3: 7.2113e-002
Group 4: 7.3466e-003
```

How can I interpret these results - especially with regard to the low number of data points per group, as well as the difference in sample size between Full (N=2409) and the Groups (N=25-43)? Is the choice of a KS test good, or might another test be more appropriate in this case?
Title: Two sample Kolmogorov-Smirnov test and p-value interpretation
Tags: sample-size, p-value, kolmogorov-smirnov-test
[CC BY-SA 3.0 · created 2011-05-12T09:40:56.987 · last activity 2011-05-13T14:09:30.737 · last edited 2011-05-13T14:09:30.737 by user 4579 · owner 4579]
[Question 10703 · score 5 · 343 views]
I've implemented QR factorization based on Householder reflections (for the purposes of computing the OLS fit). Mathematically, the $R$ matrix is upper triangular. However, due to floating-point issues I typically end up with small non-zero entries below the diagonal. What should I do with them - leave alone or set to zero, and why?

For example, the following input:

```
[[ 1  1 10]
 [ 3  3  5]
 [ 2  4 -6]
 [ 1 10  8]]
```

produces the following $R$ matrix:

```
[[ -3.87298335e+00 -7.22956891e+00 -5.42217668e+00]
 [ -8.88178420e-16  8.58681159e+00  4.86793026e+00]
 [ -4.44089210e-16 -3.33066907e-16  1.31111882e+01]]
```

Note the non-zeros below the main diagonal. For comparison, both `numpy` and `R` return exact zeros, so I am wondering if there are reasons for me to do the same.
Title: QR factorization: floating-point issues
Tags: matrix-decomposition
[CC BY-SA 3.0 · created 2011-05-12T10:08:34.017 · last activity 2011-05-12T11:08:02.840 · owner 439]
[Answer 10704 · reply to question 9739 · score 2]
There are different approaches to scalable clustering: divide-and-conquer, parallel clustering, and incremental clustering. These are general strategies; within them you can use the usual clustering methods. One clustering method I really appreciate is DBSCAN (Density-Based Spatial Clustering of Applications with Noise); it is one of the most widely used clustering algorithms.
[CC BY-SA 3.0 · created 2011-05-12T10:29:20.660 · last activity 2011-05-12T10:29:20.660 · owner 4531]
[Answer 10705 · reply to question 10703 · score 4]
It's safe to ignore those tiny entries, as long as they are less than some quantity like "norm of the matrix times machine epsilon". FWIW, if you'll be doing backsubstitution with the triangular matrix you now have, the routine is not supposed to access those subdiagonal entries anyway.
[CC BY-SA 3.0 · created 2011-05-12T11:08:02.840 · last activity 2011-05-12T11:08:02.840 · owner 830]
[Answer 10706 · reply to question 10655 · score 4]
The dataset would have to be enormous for the empirical approaches to have sufficient precision, and it doesn't help very much to look at percentiles of the marginal distribution of height. I suggest quantile regression, allowing age to be flexibly modeled (e.g., using restricted cubic splines). Here is an example using R.

```
require(rms)  # loads quantreg, Hmisc, SparseM packages too
dd <- datadist(mydata); options(datadist='dd')
f <- Rq(ht ~ rcs(age,5), tau=.25, data=mydata)  # model 25th percentile
f
plot(Predict(f))  # shows confidence bands
nomogram(f)       # make a nomogram to predict the quantile manually
```
[CC BY-SA 3.0 · created 2011-05-12T11:11:08.707 · last activity 2011-05-12T21:10:18.593 · last edited 2011-05-12T21:10:18.593 by user 4253 · owner 4253]
[Answer 10707 · reply to question 10697 · score 5]
I think your setup is not correct. Try this:

```
set.seed(1234)
r <- rnorm(100)
X <- r
u <- -1*X + 0.5*rnorm(100)
MyModel <- function(x) dlmModReg(X, FALSE, dV = x[1]^2)
fit <- dlmMLE(u, parm = c(0.3), build = MyModel)
mod <- MyModel(fit$par)
dlmFilter(u,mod)$a
```

You recover the estimate of the observation variance from the only element of fit$par:

```
> fit
$par
[1] 0.4431803

$value
[1] -20.69313

$counts
function gradient
      17       17

$convergence
[1] 0

$message
[1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"
```

while your estimate of the coefficient (should be around -1 in your case) can be obtained as the last element of `dlmFilter(u,mod)$a`, which gives the values of the state as new observations are processed:

```
> dlmFilter(u,mod)$m
  [1]  0.0000000 -1.1486921 -1.2123431 -1.1172783 -1.1231454 -1.1170222
  [7] -1.0974931 -1.1377114 -1.0378758 -1.0927136 -1.0955372 -1.0120210
 [13] -0.9874791 -1.0036429 -1.0765513 -1.0678725 -1.0795124 -1.1568597
 [19] -1.2044821 -1.2056687 -1.2102896 -1.2938958 -1.2922945 -1.2670604
 [25] -1.1789594 -1.1570172 -1.1601590 -1.1417200 -1.1585501 -1.1608675
 [31] -1.1616278 -1.1744861 -1.1717561 -1.1715025 -1.1568086 -1.1451311
 [37] -1.1520867 -1.1379211 -1.1270897 -1.1048035 -1.1015793 -1.1054597
 [43] -1.0621750 -1.0621218 -1.0696813 -1.0807651 -1.0816893 -1.0647963
 [49] -1.0643440 -1.0667282 -1.0626404 -1.0623697 -1.0586265 -1.0571205
 [55] -1.0569135 -1.0579224 -1.0607623 -1.0582257 -1.0495232 -1.0494288
 [61] -1.0539632 -1.0555427 -1.0553468 -1.0491239 -1.0488604 -1.0491036
 [67] -1.0510551 -1.0576294 -1.0611296 -1.0628612 -1.0626451 -1.0573650
 [73] -1.0629577 -1.0647724 -1.0658052 -1.0823839 -1.0753808 -1.0747229
 [79] -1.0747762 -1.0615243 -1.0630352 -1.0697431 -1.0666448 -1.0617227
 [85] -1.0585460 -1.0583981 -1.0563544 -1.0567715 -1.0544349 -1.0573228
 [91] -1.0588404 -1.0639155 -1.0625845 -1.0578004 -1.0571034 -1.0602645
 [97] -1.0604838 -1.0586019 -1.0580891 -1.0587096 -1.0577559
```

Hope this helps.
[CC BY-SA 3.0 · created 2011-05-12T11:12:51.557 · last activity 2011-05-12T11:38:03.097 · last edited 2011-05-12T11:38:03.097 by user 892 · owner 892]
[Answer 10708 · reply to question 10700 · score 4]
It's difficult to recommend Excel (which has shown itself to be unreliable for simpler problems than the one posed) when R has well worked-out packages for this.
[CC BY-SA 3.0 · created 2011-05-12T11:13:03.210 · last activity 2011-05-12T11:13:03.210 · owner 4253]
[Answer 10709 · reply to question 10640 · score 4]
Neither one of these designs contradicts the other; they are generated in different ways. A $2^{5-2}_{III}$ design is not unique. Using a design with so many factors and so few runs necessitates that main effects and two-factor interactions will be confounded; the question is how.

Since you want to do an experiment with $8 = 2^{3}$ observations, the full factorial that will generate this design will be based on 3 factors; call them A, B, and C. In the 3-factor design, you will have the interactions

- AB, AC, BC
- ABC

To expand this design to include two more factors, each of the new factors (call them D and E) must be confounded with one of the above listed interactions. Two ways to do that without confounding two main effects are to have either

- D = AB and E = AC, which means DE = BC, or
- D = BC and E = ABC, which means DE = A.

You can try other combinations and see the results of your confounding structure.
[CC BY-SA 3.0 · created 2011-05-12T11:45:28.677 · last activity 2011-05-12T14:51:54.927 · last edited 2011-05-12T14:51:54.927 by user 930 · owner 3805]
[Answer 10710 · reply to question 10687 · score 7]
Neither correlation nor regression can indicate causation (as is illustrated by @bill_080's answer), but as @Andy W indicates, regression is often based on an explicitly fixed (i.e., independent) variable and an explicit (i.e., random) dependent variable. These designations are not appropriate in correlation analysis. To quote Sokal and Rohlf, 1969, p. 496:

> "In regression we intend to describe the dependence of a variable Y on an independent variable X... to lend support to hypotheses regarding the possible causation of changes in Y by changes in X..."

> "In correlation, by contrast, we are concerned largely whether two variables are interdependent or covary - that is, vary together. We do not express one as a function of the other."

Sokal, R. R. and F. J. Rohlf, 1969. Biometry. Freeman and Co.
[CC BY-SA 3.0 · created 2011-05-12T11:56:45.823 · last activity 2011-05-12T11:56:45.823 · last edited 2020-06-11T14:32:37.003 by user -1 · owner 4048]
[Question 10711 · score 13 · 394 views]
Stack Exchange, as we all know, is a collection of Q&A sites on diversified topics. Assuming that the sites are independent of each other, given the stats a user has, how do we compute his "well-roundedness" compared to the next guy? What is the statistical tool I should employ?

To be honest, I don't quite know how to mathematically define "well-roundedness", but it must have the following characteristics:

- All things being equal, the more rep a user has, the more well-rounded he is.
- All things being equal, the more sites a user participates in, the more well-rounded he is.
- Whether the rep comes from answers or questions doesn't affect the well-roundedness.
Title: How to measure the "well-roundedness" of SE contributors?
Tags: ranking, entropy, information-theory, diversity
[CC BY-SA 3.0 · created 2011-05-12T13:23:45.673 · last activity 2022-07-11T18:03:39.023 · last edited 2020-08-29T16:19:12.463 by user 11887 · owner 175]
[Question 10712 · score 39 · 81298 views · accepted answer 10715]
I'm just reading the book "R in a Nutshell", and it seems as if I skipped the part where the "." as in "sample.formula" was explained.

```
> sample.formula <- as.formula(y~x1+x2)
```

Is sample an object with a field formula, as in other languages? And if so, how can I find out what other fields/functions this object has? (Type declaration)

EDIT: I just found another confusing use of the ".":

```
> svm(formula = is_spam~., data = spambase.training)
```

(the dot between ~ and ,)
Title: What is the meaning of the "." (dot) in R?
Tags: r
[CC BY-SA 3.0 · created 2011-05-12T14:11:20.500 · last activity 2016-06-07T10:35:08.830 · last edited 2011-05-12T19:41:19.033 by user 3541 · owner 3541]
[Answer 10713 · reply to question 10711 · score 4]
If you define 'well-roundedness' as 'contributing to many different Stack Exchange Sites,' I would compute some metric of contribution per site. You could use total posts, or average posts per day, or perhaps reputation. Then look at the distribution of this metric across all sites, and compute its skewness in some way that makes sense. In other words, a 'well-rounded' person would be one who contributes to many different sites, while a 'not well-rounded' person would be one who primarily contributes to one site. You could further improve this by scaling your metric with a user's total across all sites. i.e. someone who's contributed a lot to many different sites should be considered more well-rounded than someone who's contributed nothing to any of the sites. A person who's never used SE isn't very well rounded!
[CC BY-SA 3.0 · created 2011-05-12T14:16:43.510 · last activity 2011-05-12T14:16:43.510 · owner 2817]
[Answer 10714 · reply to question 10712 · score 5]
There are some exceptions (S3 method dispatch), but generally the dot is simply used as a legibility aid and, as such, has no special meaning.
[CC BY-SA 3.0 · created 2011-05-12T14:19:47.597 · last activity 2011-05-12T14:19:47.597 · owner 4257]
[Answer 10715 · reply to question 10712 · score 30]
The dot can be used as part of a normal name. It does, however, have an additional special interpretation. Suppose we have an object with a specific class:

```
a <- list(b=1)
class(a) <- "myclass"
```

Now declare `myfunction` as a standard generic in the following way:

```
myfunction <- function(x,...) UseMethod("myfunction")
```

Now declare the function

```
myfunction.myclass <- function(x,...) x$b+1
```

Then the dot has a special meaning: for all objects with class `myclass`, calling

```
myfunction(a)
```

will actually call the function `myfunction.myclass`:

```
> myfunction(a)
[1] 2
```

This is used widely in R; the most appropriate example is the function `summary`. Each class has its own `summary` method, so when you fit some model, for example (which usually returns an object with a specific class), you need only invoke `summary` and it will call the appropriate summary function for that specific model.
[CC BY-SA 3.0 · created 2011-05-12T14:27:47.707 · last activity 2011-05-12T14:35:03.243 · last edited 2011-05-12T14:35:03.243 by user 2116 · owner 2116]
[Answer 10716 · reply to question 10711 · score 6]
EXAMPLE: say there are three sites, and we want to compare the well-roundedness of users A, B, and C. We write the reputations of the users across the three sites in vector form:

> User A: [23, 23, 0]
> User B: [15, 15, 0]
> User C: [10, 10, 10]

We would consider A more well-rounded than B (their reputations are both spread out evenly across two sites, but A has more total reputation). Also, we would consider C more well-rounded than B (they have the same total reputation, but C has an even spread across more sites). It is undecided whether A should be considered more well-rounded than C, or vice versa.

Let $x_A$, $x_B$, $x_C$ be the above reputation vectors, respectively. We want to measure the "well-roundedness" of a user by a function $f(x)$ of their reputation vector. By the above, we would want our function $f$ to satisfy $f(x_A) > f(x_B)$ and $f(x_C) > f(x_B)$. Any $f(x)$ that is concave and increasing will do the trick.

One common example of such a function is the 'fractional norm' $$ f([x_1,...,x_m]) = \sum_i x_i^p $$ for $0 < p < 1$. Taking $p = 1/2$, we calculate $$f(x_A) = 2\sqrt{23} \approx 9.6$$ $$f(x_B) = 2\sqrt{15} \approx 7.7$$ $$f(x_C) = 3\sqrt{10} \approx 9.5$$ According to the $1/2$-norm, user A would be considered the most well-rounded of the three, by a narrow margin over user C.

Another choice for $f$ is the (scaled) Shannon entropy $$ f([x_1,...,x_m]) = -\sum_i x_i \log(x_i/c), $$ where $c = \sum_i x_i$. If we take $f$ to be the scaled Shannon entropy, then we calculate $$f(x_A) = 46 \log(2) \approx 31.9$$ $$f(x_B) = 30 \log(2) \approx 20.8$$ $$f(x_C) = 30 \log(3) \approx 33.0$$ Measured according to the scaled Shannon entropy, then, we would say C is the most well-rounded of the three, and A the second most well-rounded.

EDIT: I originally said the function $f(x)$ had to be convex; the opposite is true.

EDIT2: Added an example in light of whuber's comment.
[CC BY-SA 3.0 · created 2011-05-12T14:27:59.733 · last activity 2011-05-15T19:20:38.847 · last edited 2011-05-15T19:20:38.847 by user 3567 · owner 3567]
[Answer 10717 · reply to question 10702 · score 3]
If you are using the traditional 0.05 alpha-level cutoff, then all but group 3 are significantly different from your full group. It is a little easier to see this if the p-values are not in scientific notation (you can use options(scipen=5) in R to make this less likely). Also, group 1 becomes non-significant under some adjustments for multiple tests; you should consider whether such an adjustment applies in your case or not. Also note that the groups that are not significant could still be different; you may just have low power.

But significance just means that any differences, however small, are not easily explained by chance. It could be that your groups are close enough for practical purposes. It is usually more meaningful to plot the data to see how different the distributions are. You could use the qqplot function as one approach; the vis.test function in the TeachingDemos package for R gives another.

One possible hitch is that if your groups are part of the "Full" data set as well, then you don't have the independence assumed (though given the sample sizes, I am not sure how much this would affect things). You could address this by taking random samples from the full data set and computing the KS distance for each (ignore the p-value), then comparing where your actual data falls relative to the random samples.

Most of this comes down to what question you really want answered; many of the exact distributional tests answer a different question than the researcher is really interested in.
[CC BY-SA 3.0 · created 2011-05-12T15:21:02.490 · last activity 2011-05-12T15:21:02.490 · owner 4505]
[Question 10718 · score 1 · 1298 views]
Where can I find details of Steel's method for nonparametric multiple comparison with a control online ... ?
Title: Steel's method for nonparametric multiple comparison with control
Tags: nonparametric
[CC BY-SA 3.0 · created 2011-05-12T15:22:27.747 · last activity 2012-05-30T22:17:16.617 · owner 3539]
[Question 10719 · score 8 · 644 views · accepted answer 10741]
Is there a standard name for a multinomial choice model where the observations are in the form of binary questions such as "do you prefer A to B" and "do you prefer B to D"? This seems like a common occurrence, and the likelihood is easy enough to write out by hand, but I'm having trouble searching for references.
Title: Multinomial choice with binary observations
Tags: maximum-likelihood, discrete-data, paired-data, bradley-terry-model
[CC BY-SA 3.0 · created 2011-05-12T15:54:06.523 · last activity 2020-11-21T19:12:36.267 · last edited 2018-06-09T11:14:44.360 by user 11887 · owner 493]
[Answer 10720 · reply to question 10539 · score 18]
You don't state this explicitly, but from your description of the problem it seems likely that you're after a high-biased set of quantiles (e.g., 50th, 90th, 95th and 99th percentiles). If that's the case, I've had a lot of success with the method described in ["Effective Computation of Biased Quantiles over Data Streams"](http://www.cs.rutgers.edu/~muthu/bquant.pdf) by Cormode et al. It's a fast algorithm that requires little memory and that's easy to implement. The method is based on an earlier algorithm by Greenwald and Khanna that maintains a small sample of the input stream along with upper and lower bounds on the rank of the values in the sample. It requires more space than a collection of few moments, but will be much better at describing the interesting tail region of the distribution accurately.
[CC BY-SA 3.0 · created 2011-05-12T16:34:10.220 · last activity 2016-03-28T21:38:59.803 · last edited 2016-03-28T21:38:59.803 by user 110168 · owner 439]
[Answer 10721 · reply to question 10711 · score 8]
You need to account for similarity between the sites, as well. Someone who participates on StackOverflow and [Seasoned Advice](https://cooking.stackexchange.com/) is more well-rounded than someone who participates on SO and CrossValidated, who is in turn (I would argue) more well-rounded than someone who participates in SO and [Programmers](http://programmers.stackexchange.com). There are undoubtedly many ways to do that, but you could check overlapping registration to just get a feel for it.
[CC BY-SA 3.0 · created 2011-05-12T16:37:16.473 · last activity 2011-05-12T16:37:16.473 · last edited 2017-04-13T12:33:37.403 by user -1 · owner 71]
[Answer 10722 · reply to question 10639 · score 2]
My question was about the distributions, not the test, and I think I've figured out the answer: a t-distribution squared has an F(1,n) distribution, which is a Hotelling distribution (up to rescaling by a constant determined by the parameters). I believe one can say that an F(m,n) distribution is the same as a Wilks' $\Lambda(1,m,n)$ distribution, which is the kind of 'generalization' I was looking for. (I guess I'm now enough of an 'expert' to edit the wikipedia page ;)
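The first claim has a one-line justification from the standard construction of the t-distribution; sketching it here in standard notation (not from the original post):

```latex
% If Z ~ N(0,1) and V ~ chi^2_n are independent, then T = Z / sqrt(V/n) ~ t_n.
% Squaring, and using Z^2 ~ chi^2_1:
T^2 \;=\; \frac{Z^2}{V/n} \;=\; \frac{\chi^2_1 / 1}{\chi^2_n / n} \;\sim\; F_{1,n}
% since a ratio of independent chi-square variables, each divided by its
% degrees of freedom, is F-distributed by definition.
```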
[CC BY-SA 3.0 · created 2011-05-12T16:49:57.787 · last activity 2011-05-12T16:49:57.787 · owner 795]
[Question 10723 · score 4 · 4227 views]
I am running a dlog-dlog (difference of logarithms*) regression and I want to convert the coefficients into marginal effects. I know that it's different from a log-log regression, in which the coefficients directly give us the elasticities. How can we interpret the coefficients from a dlog-dlog regression?

*For example, dlog(Y) = a + b dlog(X) + error term.
Title: Interpreting coefficients of a dlog-dlog regression
Tags: regression
[CC BY-SA 3.0 · created 2011-05-12T16:50:08.270 · last activity 2014-01-07T13:13:13.270 · last edited 2011-05-12T21:16:41.350 by user 930 · owner 4586]
[Answer 10724 · reply to question 10680 · score 2]
To draw conclusions about a group based on the population, the group must be representative of the population and independent. Others have discussed this, so I will not dwell on this piece.

One other thing to consider is the non-intuitiveness of probabilities. Let's assume that we have a group of 10 people who are independent of and representative of the population (a random sample), and that we know that in the population 10% have a particular characteristic. Therefore each of the 10 people has a 10% chance of having the characteristic. The common assumption is that it is fairly certain that at least 1 will have the characteristic. But this is a simple binomial problem: we can calculate the probability that none of the 10 have the characteristic, and it is about 35% (it converges to 1/e for a bigger group / smaller probability), which is much higher than most people would guess. There is also a 26% chance that 2 or more people have the characteristic.
[CC BY-SA 3.0 · created 2011-05-12T16:58:13.247 · last activity 2011-05-12T16:58:13.247 · owner 4505]
[Question 10726 · score 28 · 16205 views]
I thought heavy tail = fat tail, but some articles I read gave me the sense that they aren't the same. One of them says: heavy tail means that the distribution has an infinite j-th moment for some integer j; additionally, all the dfs in the POT domain of attraction of a Pareto df are heavy-tailed. If the density has a high central peak and long tails, then the kurtosis is typically large; a df with kurtosis larger than 3 is fat-tailed, or leptokurtic.

I still don't have a concrete distinction between these two (heavy tail vs. fat tail). Any thoughts or pointers to relevant articles would be appreciated.
Title: Differences between heavy tail and fat tail distributions
Tags: distributions, fat-tails, heavy-tailed
[CC BY-SA 3.0 · created 2011-05-12T17:23:54.350 · last activity 2021-10-21T02:03:18.183 · last edited 2020-04-25T21:45:04.013 by user 11887 · owner 4497]
[Question 10727 · score 4 · 1542 views]
I want to get a confidence interval for a function of some parameters. For example, from the data I estimate the parameters of a Pareto distribution. Now I want to get a 95% CI for the 90th quantile (it's a function of the parameters of the Pareto), so I would need a standard error. I know the delta method is one option. For the simulation method, I am wondering if it is legitimate to simulate 1000 samples of size 50 from the Pareto, calculate each of the 90th quantiles, and take the standard deviation of the 1000 values. Is the standard deviation I get equivalent to a standard error? Thanks for your help!
Title: How to get standard error of a function (delta method vs. simulation)?
Tags: simulation
[CC BY-SA 3.0 · created 2011-05-12T17:35:16.817 · last activity 2011-05-12T21:08:58.433 · last edited 2011-05-12T21:08:58.433 by user 930 · owner 4497]
[Question 10728 · score 11 · 492 views · accepted answer 10732]
Given N sampled values, what does the "p-th quantile of the sampled values" mean?
Title: Definition of quantile
Tags: sampling
[CC BY-SA 3.0 · created 2011-05-12T17:56:52.957 · last activity 2020-08-07T08:30:22.120 · last edited 2011-05-12T23:31:27.247 · owner 3026]
[Answer 10729 · reply to question 10727 · score 6]
Generating data from a given distribution, then calculating the quantity of interest, then redoing this a bunch of times to get the interval is sometimes called a parametric bootstrap. You might learn more by reading up on this topic.

Why a sample of 50 each time? Is the 50 meaningful? If not, then bigger samples are probably better.

One thing that your method above does not take into account is any uncertainty that you have in the parameters of the Pareto distribution itself. You may be able to take this into account by doing a 2-stage bootstrap: fit the parameters on a bootstrap sample, then generate your new data from that set of parameters and find the percentile. Then repeat the entire process many times (starting with the bootstrap sample again).
[CC BY-SA 3.0 · created 2011-05-12T18:04:07.217 · last activity 2011-05-12T18:04:07.217 · owner 4505]
10730
2
null
10687
4
null
From a semantic perspective, an alternative goal is to build evidence for a good predictive model instead of proving causation. A simple procedure for building evidence of a regression model's predictive value is to split your data into two parts, fit the regression on one part, and test how well it predicts on the other part. The notion of [Granger causality](http://en.wikipedia.org/wiki/Granger_causality) is also interesting.
null
CC BY-SA 3.0
null
2011-05-12T18:16:46.903
2011-05-12T18:16:46.903
null
null
4329
null
10731
2
null
9220
27
null
The answer to this question can be found in the book Quadratic forms in random variables by Mathai and Provost (1992, Marcel Dekker, Inc.). As the comments clarify, you need to find the distribution of $Q = z_1^2 + z_2^2$ where $z = a - b$ follows a bivariate normal distribution with mean $\mu$ and covariance matrix $\Sigma$. This is a quadratic form in the bivariate random variable $z$. Briefly, one nice general result for the $p$-dimensional case where $z \sim N_p(\mu, \Sigma)$ and $$Q = \sum_{j=1}^p z_j^2$$ is that the moment generating function is $$E(e^{tQ}) = e^{t \sum_{j=1}^p \frac{b_j^2 \lambda_j}{1-2t\lambda_j}}\prod_{j=1}^p (1-2t\lambda_j)^{-1/2}$$ where $\lambda_1, \ldots, \lambda_p$ are the eigenvalues of $\Sigma$ and $b$ is a linear function of $\mu$. See Theorem 3.2a.2 (page 42) in the book cited above (we assume here that $\Sigma$ is non-singular). Another useful representation is 3.1a.1 (page 29) $$Q = \sum_{j=1}^p \lambda_j(u_j + b_j)^2$$ where $u_1, \ldots, u_p$ are i.i.d. $N(0, 1)$. The entire Chapter 4 in the book is devoted to the representation and computation of densities and distribution functions, which is not at all trivial. I am only superficially familiar with the book, but my impression is that all the general representations are in terms of infinite series expansions. So in a certain way the answer to the question is, yes, the distribution of the squared euclidean distance between two bivariate normal vectors belongs to a known (and well studied) class of distributions parametrized by the four parameters $\lambda_1, \lambda_2 > 0$ and $b_1, b_2 \in \mathbb{R}$. However, I am pretty sure you won't find this distribution in your standard textbooks. Note, moreover, that $a$ and $b$ do not need to be independent. Joint normality is enough (which is automatic if they are independent and each normal), then the difference $a-b$ follows a normal distribution.
null
CC BY-SA 3.0
null
2011-05-12T18:48:43.983
2011-05-12T23:43:36.363
2011-05-12T23:43:36.363
4376
4376
null
10732
2
null
10728
11
null
In theory (with $0 \lt p \lt 1$) it means the point a fraction $p$ up the cumulative distribution. In practice there are various definitions used, particularly in statistical computing. For example in R there are [nine different definitions](http://stat.ethz.ch/R-manual/R-devel/library/stats/html/quantile.html), the first three for a discrete interpretation and the rest for a variety of continuous interpolations. Here is an example: if your sample is $\{400, 1, 1000, 40\}$, and you are looking for the $0.6$ quantile ($60$th centile) then the different calculation methods give

```
> x <- numeric()
> for (t in 1:9) { x[t] <- quantile(c(400, 1, 1000, 40), probs = 0.6, type = t) }
> x
60%
400 400  40 184 364 400 328 376 373
```

My personal view is that the correct figure is $400$ since $$Pr(X<400) = 0.5 < 0.6 \text{ and } Pr(X>400) = 0.25 < 1-0.6.$$ This comes from treating the sample as the population, and if the empirical CDF is drawn it will be a sequence of steps. There are opposing arguments for interpolating so the empirical CDF is continuous, as being likely to be a better or more useful approximation to the population, and the method of interpolation will affect the result.
null
CC BY-SA 4.0
null
2011-05-12T18:51:03.867
2020-08-07T08:30:22.120
2020-08-07T08:30:22.120
2958
2958
null
10734
2
null
5207
3
null
I think the order is correct, but the labels assigned to p(x) and p(y|x) were wrong. The original problem states p(y|x) is log-normal and p(x) is Singh-Maddala. So, it's:

- Generate an X from a Singh-Maddala, and
- generate a Y from a log-normal having a mean which is a fraction of the generated X.
null
CC BY-SA 3.0
null
2011-05-12T19:03:23.007
2012-03-19T09:35:51.173
2012-03-19T09:35:51.173
2116
3437
null
10735
2
null
10697
0
null
After reading the help for dlmFilter, I could come up with the following code:

```
r <- rnorm(100)
u <- -1*r + 0.5*rnorm(100)
fit <- dlmMLE(u, parm = c(1, sd(u)),
              build = function(x) dlmModReg(r, FALSE, dV = x[2]^2,
                                            m0 = x[1], C0 = matrix(0)))
fit$par
[1] -1.1330088  0.4788357
```
null
CC BY-SA 3.0
null
2011-05-12T19:23:50.573
2011-05-12T19:23:50.573
null
null
4575
null
10736
2
null
10726
22
null
I would say that the usual definition in applied probability theory is that a right heavy tailed distribution is one with infinite moment generating function on $(0, \infty)$, that is, $X$ has right heavy tail if $$E(e^{tX}) = \infty, \quad t > 0.$$ This is in agreement with [Wikipedia](http://en.wikipedia.org/wiki/Heavy-tailed_distribution), which does mention other used definitions such as the one you have (some moment is infinite). There are also important subclasses such as the long-tailed distributions and the subexponential distributions. The standard example of a heavy-tailed distribution, according to the definition above, with all moments finite is the log-normal distribution. It may very well be that some authors use fat tailed and heavy tailed interchangeably, and others distinguish between fat tailed and heavy tailed. I would say that fat tailed can be used more vaguely to indicate fatter than normal tails and is sometimes used in the sense of leptokurtic (positive kurtosis) as you indicate. One example of such a distribution, which is not heavy tailed according to the definition above, is the logistic distribution. However, this is not in agreement with e.g. [Wikipedia](http://en.wikipedia.org/wiki/Fat_tail), which is much more restrictive and requires that the (right) tail has a power law decay. The Wikipedia article also suggests that fat tail and heavy tail are equivalent concepts, even though power law decay is much stronger than the definition of heavy tails given above. To avoid confusions, I would recommend to use the definition of a (right) heavy tail above and forget about fat tails whatever that is. The primary reason behind the definition above is that in the analysis of rare events there is a qualitative difference between distributions with finite moment generating function on a positive interval and those with infinite moment generating function on $(0, \infty)$.
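A sketch of why the log-normal qualifies under this definition (a standard computation, not from the post itself): with $X = e^Z$, $Z \sim N(0,1)$, for any $t > 0$ $$E(e^{tX}) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{t e^{z} - z^2/2}\, dz = \infty,$$ since $t e^{z}$ grows faster than $z^2/2$ as $z \to \infty$, so the integrand tends to infinity in the right tail instead of decaying, even though all moments $E(X^k) = e^{k^2/2}$ are finite.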
null
CC BY-SA 3.0
null
2011-05-12T19:57:41.420
2011-05-14T18:14:19.110
2011-05-14T18:14:19.110
4376
4376
null
10737
2
null
5054
0
null
A simple start using historical tweet data: create a weekly variable called popularity change, based on week-to-week changes in tweets for a tag over the past 25 weeks. Calculate these two measures:

- Trend: mean of popularity change
- Volatility: standard deviation (square root of variance) of popularity change.

Meaning: a change from the past week's tweet count to the current week's that is greater than 2 standard deviations can be considered a big change, and a change of 3 standard deviations would be a very big change (a very rare occurrence, assuming the distribution of popularity changes over the time span looks like a [normal distribution](http://en.wikipedia.org/wiki/Normal_distribution)).
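A minimal sketch of the two measures in Python (the weekly counts below are made-up illustrative data):

```python
import statistics

# hypothetical weekly tweet counts for one tag over 26 weeks
counts = [100, 104, 98, 110, 120, 115, 130, 128, 140, 135, 150, 149, 160,
          158, 170, 165, 180, 178, 190, 188, 200, 198, 210, 208, 220, 218]

# week-to-week popularity changes (25 values for 26 weeks)
changes = [b - a for a, b in zip(counts, counts[1:])]

trend = statistics.mean(changes)        # average weekly change
volatility = statistics.stdev(changes)  # spread of weekly changes

# flag the latest change if it is more than 2 standard deviations from the trend
latest = changes[-1]
big_change = abs(latest - trend) > 2 * volatility
print(trend, round(volatility, 2), big_change)
```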
null
CC BY-SA 3.0
null
2011-05-12T20:00:40.270
2011-05-12T20:00:40.270
null
null
4329
null
10738
2
null
10680
2
null
Statistical analysis or statistical data? I think this example in your question relates to statistical data: "I read that 10% of the world population has this disease". In other words, in this example some one is using numbers to help communicate quantity more effectively than just saying 'many people'. My guess is that the answer to your question is hidden in the motivation of the speaker on why she is using numbers. It could be to communicate some notion better or it could be to show authority or it could be to dazzle the listener. The good thing about stating numbers rather than saying 'very big' is that people can refute the number. See [Popper's idea](http://en.wikipedia.org/wiki/Karl_Popper) on refutation.
null
CC BY-SA 3.0
null
2011-05-12T20:17:49.997
2011-05-12T20:17:49.997
null
null
4329
null
10739
2
null
10712
12
null
Look at the help page for `?formula` with regard to `.` Here's the relevant bits: > There are two special interpretations of . in a formula. The usual one is in the context of a data argument of model fitting functions and means ‘all columns not otherwise in the formula’: see terms.formula. In the context of update.formula, only, it means ‘what was previously in this part of the formula’. Alternatively, the `reshape` and `reshape2` packages use `.` and `...` a bit differently (from `?cast`): > There are a couple of special variables: "..." represents all other variables not used in the formula and "." represents no variable
null
CC BY-SA 3.0
null
2011-05-12T21:04:50.773
2013-08-04T15:47:50.433
2013-08-04T15:47:50.433
7290
696
null
10740
1
19808
null
4
2421
I am not experienced in designing customized covariance matrices / kernel functions. I would like to develop the kind of understanding where, after looking at the data, I can figure out an appropriate covariance matrix. For example, in my case I have a data set $X$ that contains many zeros and a couple of points far from them, close to a hundred. $Y$ looks like a normal distribution, $\mathcal{N}(50,10)$. Both $X$ and $Y$ are limited to the range $0$ to $100$. I am trying to regress $X$ onto $Y$ using the Gaussian process method. The difficulty arises from those many zeros, which make my covariance matrix messy, so I get a large standard deviation for the whole estimate.
Designing covariance matrix and kernel function for a gaussian process
CC BY-SA 3.0
null
2011-05-12T21:05:20.087
2013-08-31T20:38:14.213
2013-08-31T20:38:14.213
27581
4581
[ "regression", "machine-learning", "gaussian-process" ]
10741
2
null
10719
9
null
Unless I misunderstood the question, this refers to paired preference (1) or [pair comparison](http://en.wikipedia.org/wiki/Pairwise_comparison) data. A well-known example of such a model is the Bradley-Terry model (2), which shares some connections with item scaling in psychometrics (3). There is an R package, [BradleyTerry2](http://cran.r-project.org/web/packages/BradleyTerry2/index.html), described in the JSS (2005) 12(1), [Bradley-Terry Models in R](http://www.jstatsoft.org/v12/i01/paper), and a detailed overview in Agresti's CDA, pp. 436-439, with R code available in Laura Thompson's textbook, [R (and S-PLUS) Manual to Accompany Agresti’s Categorical Data Analysis (2002) 2nd edition](https://web.archive.org/web/20110805121758/https://home.comcast.net/%7Elthompson221/Splusdiscrete2.pdf). References - Thurstone, L.L. (1927). A law of comparative judgment. Psychological Review, 3, 273-286. - Bradley, R.A. and Terry, M.E. (1952). Rank analysis of incomplete block designs I: The methods of paired comparisons. Biometrika, 39, 324-345. - Andrich, D. (1978). Relationships between the Thurstone and Rasch approaches to item scaling. Applied Psychological Measurement, 2(3), 451-462.
null
CC BY-SA 4.0
null
2011-05-12T21:35:07.073
2020-11-21T19:12:36.267
2020-11-21T19:12:36.267
930
930
null
10742
2
null
10723
1
null
If this is indeed linear then I think your underlying model may be something like $$Y_j \approx k \, \exp(aj) X_j^b $$ where your regression does not tell you about the value of the constant $k$, but you might perhaps be able to use it to pin the first and last points of your observed data.
null
CC BY-SA 3.0
null
2011-05-12T22:19:02.890
2011-05-12T22:19:02.890
null
null
2958
null
10743
2
null
10450
1
null
Here's a heuristic that I coded up quickly that seems to do quite well:

- Initialize a matrix with 1 on the diagonals.
- Fill out the upper triangular sub-matrix according to your distribution (90% are uniform on (-.3,.3) and 10% outside that).
- Make the matrix symmetric.
- Now iterate between:
  - projecting the matrix onto the PSD cone;
  - projecting the matrix onto the set of matrices with diagonal 1.
- Alternating projections converges, so we just hope that the matrix we get out has values according to your distribution (see the simulation for the check).

```
pickone <- function(x){
  if(runif(1) < .9){
    return(runif(1, -.3, .3))
  } else {
    return(sample(c(-1, 1), 1) * runif(1, .3, 1))
  }
}

generateMat <- function(x){
  X <- matrix(0, nrow = 10, ncol = 10)
  diag(X) <- rep(1, 10)
  X[upper.tri(X)] <- sapply(1:45, pickone)
  X <- X + t(X) - diag(rep(1, 10))
  Xnew <- X
  for(i in 1:50){
    eig <- eigen(Xnew)
    ## project onto the PSD cone
    Xnew <- eig$vectors %*% diag(sapply(eig$values, max, 0)) %*% t(eig$vectors)
    ## project onto the set of matrices with diagonal 1
    diag(Xnew) <- rep(1, 10)
  }
  vals <- Xnew[upper.tri(Xnew)]
  return(mean(vals < .3 & vals > -.3))
}

summary(sapply(1:100, generateMat))
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
 0.7556  0.8667  0.8889  0.8960  0.9333  0.9778
```

It seems like most of the values after simulating 100 times are close to 90% within (-.3,.3).
null
CC BY-SA 3.0
null
2011-05-12T22:23:22.400
2011-05-12T22:23:22.400
null
null
1815
null
10744
1
10749
null
17
12735
In gene expression studies using microarrays, intensity data have to be normalized so that intensities can be compared between individuals and between genes. Conceptually and algorithmically, how does "quantile normalization" work, and how would you explain it to a non-statistician?
How does quantile normalization work?
CC BY-SA 3.0
null
2011-05-13T01:06:27.260
2016-04-05T12:56:06.917
2012-01-24T20:57:37.107
930
36
[ "genetics", "normalization", "microarray" ]
10745
1
null
null
5
1101
All, I'm working on a project looking at cross-national public opinion across two different observational "waves". Many countries were surveyed in both waves, though some were surveyed in the first wave and not the other (and vice-versa). With predictors at both the level of the individual and the level of the country, a mixed effects model is appropriate. My question involves specifying the random effects, or the groupings. I think there are two: the country and the wave. That is, individuals in certain countries are going to be more like each other than individuals in other countries. Further, observations in the first wave are going to be more similar than with observations in the second wave, but it is not entirely a repeated measures design. Thus, I'm inclined to believe I should model the random effect component in lmer as `(1 | country/wave)` [alternatively: `(1|country:wave) + (1|country)`], where the country is nested within the wave. The results are a little sensitive to changes in the random effects structure. Nesting waves within countries and excluding the wave grouping altogether produce different results on key independent variables. I was wanting to know if you think that's right. I could also be inviting trouble by treating wave as a random effect when there are only two waves. Thanks for any input.
Properly specifying mixed effects model in lmer
CC BY-SA 3.0
null
2011-05-13T01:20:32.237
2011-05-13T13:28:05.307
2011-05-13T13:28:05.307
279
4594
[ "r", "mixed-model", "lme4-nlme" ]
10746
2
null
10700
2
null
Since you just need it for demonstration, how about using [Minitab](http://www.minitab.com/en-US/default.aspx)? It is similarly transparent. [RExcel](http://en.wikipedia.org/wiki/RExcel) looks promising too. Of course, both of these options are somewhat opaque because all of the software is proprietary, closed-source software. You could also try [R and Calc](http://wiki.services.openoffice.org/wiki/R_and_Calc), which uses open-source software. (I know that's not what you meant by transparent.)
null
CC BY-SA 3.0
null
2011-05-13T04:09:32.393
2011-05-13T04:09:32.393
null
null
3874
null
10747
2
null
10615
2
null
I will demonstrate how to do this without reshaping the data, as all it entails is simple arithmetic. If I am reading your question correctly, the data look something like this:

```
Data List Free / Group (A1) Grade1 Grade2 Grade3.
Begin Data
A 1 2 3
A 6 5 10
B 2 7 18
C 23 5 1
D 7 7 13
End Data.
```

To calculate any type of variance, you need to assign some type of numeric values to your grade variables. Here I will assume that a value for Grade1 is the equivalent of a 95, Grade2 is equal to an 85, and Grade3 is equal to a 75. All that one needs to do after this is essentially plug in the values for the calculation of the variance. The Wikipedia page for [standard deviation](http://en.wikipedia.org/wiki/Standard_deviation) has an explicit example taking you through the steps. The only difference in the code below is that I need to multiply the squared differences by the number of observations within that specific grade variable.

```
compute row_sum = (Grade1*95)+(Grade2*85)+(Grade3*75).
compute row_n = Grade1+Grade2+Grade3.
compute row_mean = row_sum/row_n.
execute.

compute square_diff1 = (95 - row_mean)*(95 - row_mean).
compute square_diff2 = (85 - row_mean)*(85 - row_mean).
compute square_diff3 = (75 - row_mean)*(75 - row_mean).
execute.

compute row_variance = ( (Grade1*square_diff1) + (Grade2*square_diff2) + (Grade3*square_diff3) ) / (row_n).
execute.
```

The resulting variable, `row_variance`, is a calculation of the variance for the observations in the row. Your example suggested you would have groupings of variables, and hence would need to calculate not only the row variance but also the group variance. You can simply use the AGGREGATE command to sum the Grade variables within each group, and then follow the same steps above. An example would be:

```
DATASET DECLARE Grouped.
AGGREGATE
  /OUTFILE='Grouped'
  /BREAK=Group
  /Grade1 =SUM(Grade1)
  /Grade2 =SUM(Grade2)
  /Grade3 =SUM(Grade3).
```

This will produce a new dataset named Grouped, in which you can calculate the row variance using the exact same code above. As a note, you asked for the variance in your question. I suspect the sample standard deviation is another statistic you might be interested in. One cannot simply take the square root of the variance I gave above, though, as one divides by the number of observations minus one for the sample standard deviation. So if you were interested in the sample standard deviation in the row, you could use the code below:

```
compute row_sd = SQRT( ( (Grade1*square_diff1) + (Grade2*square_diff2) + (Grade3*square_diff3) ) / (row_n - 1) ).
execute.
```

I doubt this is the simplest solution in terms of length of code, but I hope it is straightforward mathematically.
null
CC BY-SA 3.0
null
2011-05-13T04:22:02.233
2011-05-13T04:22:02.233
null
null
1036
null
10748
2
null
10693
1
null
You just use a generic t-test with matched pairs. For each worker, measure before and after. Use a one-sample t-test to test whether the difference between these two measurements is zero. I've actually never before heard anyone use "repeated measures" to refer to fewer than three measurement times. Repeated measures gets more complicated when you have three measurement times, which may make the question's phrasing sort of scary.
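A minimal sketch of the matched-pairs calculation in Python (the before/after measurements are made up; only the t statistic is computed, with the p-value left to a table or library):

```python
import math
import statistics

# hypothetical before/after measurements for the same eight workers
before = [12.1, 11.4, 13.0, 10.8, 12.5, 11.9, 13.3, 12.0]
after = [12.9, 11.8, 13.5, 11.5, 12.4, 12.6, 13.9, 12.3]

diffs = [a - b for b, a in zip(before, after)]
n = len(diffs)
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)

# one-sample t statistic testing whether the mean difference is zero
t = mean_d / (sd_d / math.sqrt(n))
print(round(t, 3), "df =", n - 1)
```

Compare `t` against the t distribution with `n - 1` degrees of freedom to get the p-value.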
null
CC BY-SA 3.0
null
2011-05-13T04:43:39.707
2011-05-13T04:43:39.707
null
null
3874
null
10749
2
null
10744
9
null
[A comparison of normalization methods for high density oligonucleotide array data based on variance and bias](http://bioinformatics.oxfordjournals.org/content/19/2/185.abstract) by Bolstad et al. introduces quantile normalization for array data and compares it to other methods. It has a pretty clear description of the algorithm. The conceptual understanding is that it is a transformation of array $j$ using a function $\hat{F}^{-1} \circ \hat{G}_j$ where $\hat{G}_j$ is an estimated distribution function and $\hat{F}^{-1}$ is the inverse of an estimated distribution function. It has the consequence that the normalized distributions become identical for all the arrays. For quantile normalization $\hat{G}_j$ is the empirical distribution of array $j$ and $\hat{F}$ is the empirical distribution for the averaged quantiles across arrays. At the end of the day it is a method for transforming all the arrays to have a common distribution of intensities.
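A minimal sketch of the algorithm in Python (toy "arrays" with four genes each; ties are handled naively, unlike the averaging used in practice):

```python
import statistics

# toy intensity data: three "arrays", four "genes" each
arrays = [
    [5.0, 2.0, 3.0, 4.0],  # array 1
    [4.0, 1.0, 4.0, 2.0],  # array 2
    [3.0, 4.0, 6.0, 8.0],  # array 3
]

n_genes = len(arrays[0])

# 1. sort each array's values
sorted_cols = [sorted(col) for col in arrays]

# 2. average across arrays at each rank -> the reference quantiles
ref = [statistics.mean(col[r] for col in sorted_cols) for r in range(n_genes)]

# 3. replace each value by the reference value at its rank
normalized = []
for col in arrays:
    ranks = sorted(range(n_genes), key=lambda i: col[i])
    out = [0.0] * n_genes
    for r, i in enumerate(ranks):
        out[i] = ref[r]
    normalized.append(out)

# after normalization every array has the same distribution of values
print(normalized)
```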
null
CC BY-SA 3.0
null
2011-05-13T05:05:47.527
2011-05-13T05:05:47.527
null
null
4376
null
10750
1
null
null
5
9193
I have a time series. I want to model it using ARMA, which will be used for forecasting. In R I am using the `arima()` function to get the coefficients. But `arima()` requires the order (p,d,q) as input. What is the simplest way in R to arrive at good values for p and q (with d = 0) so that I don't overfit?
ARMA modeling in R
CC BY-SA 3.0
null
2011-05-13T05:55:17.087
2011-05-13T07:17:44.377
2011-05-13T07:06:54.123
1390
4596
[ "r", "time-series" ]
10751
2
null
10750
6
null
The simplest way to arrive at values for $p$ and $q$ is to use the `auto.arima` function from the forecast package. No statistical package has a simple way to arrive at good values, mainly because there is no universal definition of good. Since you mention overfitting, one possible approach is to fit ARMA models for different values of $p$ and $q$ and then pick the one that is best according to your overfitting criterion (out-of-sample forecasting performance, for example). `auto.arima` does basically the same; you can choose between AIC, AICc and BIC to let `auto.arima` pick the best model.
null
CC BY-SA 3.0
null
2011-05-13T07:16:27.647
2011-05-13T07:16:27.647
null
null
2116
null
10752
2
null
10750
6
null
One option is to fit a series of ARMA models with combinations of $p$ and $q$ and work with the model that has the best "fit". Here I evaluate "fit" using BIC to penalise overly complex fits. An example is shown below for the built-in Mauna Loa $\mathrm{CO}_2$ concentration data set:

```
## load the data
data(co2)

## take only data up to end of 1990 - predict for remaining data later
CO2 <- window(co2, end = c(1990, 12))

## Set up the parameter sets over which we want to operate
CO2.pars <- expand.grid(ar = 0:2, diff = 1, ma = 0:2,
                        sar = 0:1, sdiff = 1, sma = 0:1)
## As you are only wanting ARMA, then you would need something like
## pars <- expand.grid(ar = 0:4, diff = 0, ma = 0:4)
## and where you choose the upper and lower limits - here 0 and 4

## A vector to hold the BIC values for each combination of model
CO2.bic <- rep(0, nrow(CO2.pars))

## loop over the combinations, fitting an ARIMA model and recording the BIC
## for that model. Note we use AIC() with extra penalty given by `k`
for (i in seq(along = CO2.bic)) {
    CO2.bic[i] <- AIC(arima(CO2, unlist(CO2.pars[i, 1:3]),
                            unlist(CO2.pars[i, 4:6])),
                      k = log(length(CO2)))
}

## identify the model with lowest BIC
CO2.pars[which.min(CO2.bic), ]

## Refit the model with lowest BIC
CO2.mod <- arima(CO2, order = c(0, 1, 1), seasonal = c(0, 1, 1))
CO2.mod

## Diagnostics plots
tsdiag(CO2.mod, gof.lag = 36)

## predict for the most recent data
pred <- predict(CO2.mod, n.ahead = 7 * 12)
upr <- pred$pred + (2 * pred$se) ## upper and lower confidence intervals
lwr <- pred$pred - (2 * pred$se) ## approximate 95% pointwise

## plot what we have done
ylab <- "CO2 concentration (ppm)"
ylim <- range(co2, upr, lwr)
plot(co2, ylab = ylab, main = expression(bold(Mauna ~ Loa ~ CO[2])),
     xlab = "Year", ylim = ylim)
lines(pred$pred, col = "red")
lines(upr, col = "red", lty = 2)
lines(lwr, col = "red", lty = 2)
legend("topleft", legend = c("Observed", "Predicted", "95% CI"),
       col = c("black", "red", "red"), lty = c(1, 1, 2), bty = "n")
```
null
CC BY-SA 3.0
null
2011-05-13T07:17:44.377
2011-05-13T07:17:44.377
null
null
1390
null
10753
2
null
10740
2
null
I've not worked through the details of GPs, so I cannot help you there. However, it seems like you have two groups of data in X: X=0 and X>0. You may get better results by first classifying X into these two groups based on Y, and then performing GP regression in the X>0 class.
null
CC BY-SA 3.0
null
2011-05-13T07:26:15.260
2011-05-13T07:26:15.260
null
null
3595
null
10754
2
null
10723
4
null
Since you have differences, this means the data are a time series and we can write $$Y_t=Y_0+\sum_{s=1}^t\Delta Y_s.$$ So if the true model is $$\Delta Y_t=\alpha+\beta \Delta X_t,$$ we have $$Y_t=Y_0+\sum_{s=1}^t(\alpha+\beta \Delta X_s)=Y_0-\beta X_0+\alpha t+\beta X_t.$$ So you can say that the interpretation remains the same as in the model in levels.
null
CC BY-SA 3.0
null
2011-05-13T07:32:40.563
2011-05-13T07:32:40.563
null
null
2116
null
10755
5
null
null
0
null
For the quotation, see [http://www.stata.com/support/faqs/statistics/delta-method/](http://www.stata.com/support/faqs/statistics/delta-method/). For the second sense of the definition, refer to [http://en.wikipedia.org/wiki/Delta_method](http://en.wikipedia.org/wiki/Delta_method).
null
CC BY-SA 3.0
null
2011-05-13T07:38:51.577
2012-08-10T17:29:54.877
2012-08-10T17:29:54.877
919
919
null
10756
4
null
null
0
null
"The delta method, in its essence, expands a function of a random variable about its mean, usually with a one-step Taylor approximation, and then takes the variance." The term also refers to a method for showing that a function of an asymptotically normal statistical estimator is asymptotically normal.
null
CC BY-SA 3.0
null
2011-05-13T07:38:51.577
2012-08-10T17:29:54.877
2012-08-10T17:29:54.877
919
2116
null
10757
2
null
10672
9
null
Here are two survey papers I have found recently. I have not read them yet, but the abstracts sound promising.

[Joannès Vermorel and Mehryar Mohri: Multi-Armed Bandit Algorithms and Empirical Evaluation](http://www.cs.nyu.edu/~mohri/pub/bandit.pdf) (2005). From the abstract:

> The multi-armed bandit problem for a gambler is to decide which arm of a K-slot machine to pull to maximize his total reward in a series of trials. Many real-world learning and optimization problems can be modeled in this way. Several strategies or algorithms have been proposed as a solution to this problem in the last two decades, but, to our knowledge, there has been no common evaluation of these algorithms.

[Volodymyr Kuleshov and Doina Precup: Algorithms for the multi-armed bandit problem](http://www.cs.mcgill.ca/~vkules/bandits.pdf) (2000). From the abstract:

> Secondly, the performance of most algorithms varies dramatically with the parameters of the bandit problem. Our study identifies for each algorithm the settings where it performs well, and the settings where it performs poorly.
null
CC BY-SA 3.0
null
2011-05-13T08:40:46.693
2011-05-13T08:51:01.957
2011-05-13T08:51:01.957
264
264
null
10759
2
null
10309
7
null
As @caracal said, this script implements a permutation-based approach to Friedman's test with the [coin](http://cran.r-project.org/web/packages/coin/index.html) package. The maxT procedure is rather complex and there is no relation with the traditional $\chi^2$ statistic you're probably used to getting after a Friedman ANOVA. The general idea is to control the [FWER](http://en.wikipedia.org/wiki/Familywise_error_rate). Let's say you perform 1000 permutations for every variable of interest; then you can derive not only pointwise empirical p-values for each variable (as you would do with a single permutation test) but also a value that accounts for the fact that you tested all those variables at the same time. The latter is achieved by comparing each observed test statistic against the maximum of the permuted statistics over all variables. In other words, this p-value reflects the chance of seeing a test statistic as large as the one you observed, given you've performed as many tests. More information (in a genomic context, and with algorithmic considerations) can be found in

> Dudoit, S., Shaffer, J.P., and Boldrick, J.C. (2003). Multiple Hypothesis Testing in Microarray Experiments. Statistical Science, 18(1), 71–103.

(Here are some [slides](http://www.epibiostat.ucsf.edu/biostat/cbmb/courses/multtest.pdf) from the same author with applications in R with the [multtest](http://www.bioconductor.org/packages/release/bioc/html/multtest.html) package.) Another good reference is Multiple Testing Procedures with Applications to Genomics, by Dudoit and van der Laan (Springer, 2008).

Now, if you want a more "traditional" statistic, you can use the [agricolae](http://cran.r-project.org/web/packages/agricolae/index.html) package, which has a `friedman()` function that performs the overall Friedman test followed by post-hoc comparisons. The permutation method yields maxT=3.24, p=0.003394, suggesting an overall effect of the target when accounting for the blocking factor. The post-hoc tests basically indicate that only the results for Wine A vs. Wine C (p=0.003400) are statistically different at the 5% level. Using the non-parametric test, we have

```
> library(agricolae)
> with(WineTasting, friedman(Taster, Wine, Taste, group=FALSE))

Friedman's Test
===============
Adjusted for ties
Value: 11.14286
Pvalue chisq : 0.003805041
F value : 7.121739
Pvalue F: 0.002171298
Alpha : 0.05
t-Student : 2.018082

Comparison between treatments
Sum of the ranks

                Difference   pvalue sig    LCL   UCL
Wine A - Wine B          6 0.301210      -5.57 17.57
Wine A - Wine C         21 0.000692 ***   9.43 32.57
Wine B - Wine C         15 0.012282 *     3.43 26.57
```

The two global tests agree and basically say there is a significant effect of Wine type. We would, however, reach different conclusions about the pairwise differences. It should be noted that the above pairwise tests (Fisher's LSD) are not really corrected for multiple comparisons, although the difference B-C would remain significant even after Holm's correction (which also provides strong control of the FWER).
null
CC BY-SA 3.0
null
2011-05-13T09:55:18.950
2011-05-13T09:55:18.950
null
null
930
null
10760
2
null
6298
11
null
Google uses different machine learning techniques and algorithms for training and prediction. Strategies for large-scale supervised learning include:

1. Sub-sampling
2. Embarrassingly parallelizing some algorithms
3. Distributed gradient descent
4. Majority vote
5. Parameter mixture
6. Iterative parameter mixture

They presumably train models with the different machine learning techniques and use an algorithm to decide which model and prediction to return. In general:

- Sub-sampling gives inferior performance
- Parameter mixture improves on it, but not as much as using all the data
- Distributed algorithms return better classifiers more quickly
- Iterative parameter mixture achieves results as good as using all the data

But of course none of this is made really clear in the API documentation.
null
CC BY-SA 3.0
null
2011-05-13T10:24:17.390
2011-05-13T10:24:17.390
null
null
4531
null
10761
2
null
10700
5
null
It sounds like your goal is didactic; that you are trying to explain ordinal logistic to some group of people. I have used Excel for this sort of thing when the topic is much simpler - e.g., crosstabs and chi-square - so that there is some intuition about the formulas. I don't think that will be the case here. Even if you could find (or create) an Excel spreadsheet that does this, I think the intermediate steps are so numerous that it will not be clear to any audience that could not follow a more usual explanation of ordinal logistic. I would use some standard software and give different examples. I've written a talk doing this, using SAS, but it could be done using R or whatever.
null
CC BY-SA 3.0
null
2011-05-13T10:55:57.083
2011-05-13T10:55:57.083
null
null
686
null
10762
2
null
10745
3
null
By nesting country within wave, you are cutting the connection between the repeated measurements within the same country. I would just use crossed random effects: ``` (1|country) + (1|wave) + (1|country:wave) ```
null
CC BY-SA 3.0
null
2011-05-13T13:27:04.293
2011-05-13T13:27:04.293
null
null
279
null
10763
2
null
577
8
null
From what I can tell, there isn't much difference between AIC and BIC. They are both mathematically convenient approximations one can make in order to efficiently compare models. If they give you different "best" models, it probably means you have high model uncertainty, which is more important to worry about than whether you should use AIC or BIC. I personally like BIC better because it asks more (less) of a model if it has more (less) data to fit its parameters - kind of like a teacher asking for a higher (lower) standard of performance if their student has more (less) time to learn about the subject. To me this just seems like the intuitive thing to do. But then I am certain there also exists equally intuitive and compelling arguments for AIC as well, given its simple form. Now any time you make an approximation, there will surely be some conditions when those approximations are rubbish. This can be seen certainly for AIC, where there exist many "adjustments" (AICc) to account for certain conditions which make the original approximation bad. This is also present for BIC, because various other more exact (but still efficient) methods exist, such as Fully Laplace Approximations to mixtures of Zellner's g-priors (BIC is an approximation to the Laplace approximation method for integrals). One place where they are both crap is when you have substantial prior information about the parameters within any given model. AIC and BIC unnecessarily penalise models where parameters are partially known compared to models which require parameters to be estimated from the data. one thing I think is important to note is that BIC does not assume a "true" model a) exists, or b) is contained in the model set. BIC is simply an approximation to an integrated likelihood $P(D|M,A)$ (D=Data, M=model, A=assumptions). Only by multiplying by a prior probability and then normalising can you get $P(M|D,A)$. 
BIC simply represents how likely the data was if the proposition implied by the symbol $M$ is true. So from a logical viewpoint, any propositions which would lead one to BIC as an approximation are equally supported by the data. So if I state $M$ and $A$ to be the propositions $$\begin{array}{l|l} M_{i}:\text{the ith model is the best description of the data} \\ A:\text{out of the set of K models being considered, one of them is the best} \end{array} $$ And then continue to assign the same probability models (same parameters, same data, same approximations, etc.), I will get the same set of BIC values. It is only by attaching some sort of unique meaning to the logical letter "M" that one gets drawn into irrelevant questions about "the true model" (echoes of "the true religion"). The only thing that "defines" M is the mathematical equations which use it in their calculations - and this hardly ever singles out one and only one definition. I could equally put in a prediction proposition about M ("the ith model will give the best predictions"). I personally can't see how this would change any of the likelihoods, and hence how good or bad BIC will be (AIC for that matter as well - although AIC is based on a different derivation). And besides, what is wrong with the statement "If the true model is in the set I am considering, then there is a 57% probability that it is model B"? Seems reasonable enough to me, or you could go with the more "soft" version: "there is a 57% probability that model B is the best out of the set being considered". One last comment: I think you will find about as many opinions about AIC/BIC as there are people who know about them.
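To make the two penalties concrete, here is a minimal sketch (in Python for illustration) of the standard definitions $\text{AIC} = 2k - 2\ln L$ and $\text{BIC} = k\ln n - 2\ln L$. The data and the assumed 0.5 log-likelihood gain for the extra parameter are made up; the point is only that at $n = 50$ BIC asks more of the extra parameter than AIC does, as argued above:

```python
import numpy as np

def aic_bic(log_likelihood, n_params, n_obs):
    """Return (AIC, BIC) from a maximised log-likelihood."""
    aic = 2 * n_params - 2 * log_likelihood
    bic = n_params * np.log(n_obs) - 2 * log_likelihood
    return aic, bic

# Toy comparison: two Gaussian fits to the same data, one with an
# extra (hypothetical) parameter that buys a small log-likelihood gain.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=50)

def gauss_loglik(x, mu, sigma):
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - mu) ** 2 / (2 * sigma**2))

ll1 = gauss_loglik(x, x.mean(), x.std())   # 2 fitted parameters
ll2 = ll1 + 0.5                            # assume a 3rd parameter gains 0.5 in log-lik
a1, b1 = aic_bic(ll1, 2, len(x))
a2, b2 = aic_bic(ll2, 3, len(x))
# At n = 50, BIC penalises the extra parameter harder than AIC does.
print(a2 - a1, b2 - b1)
```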
null
CC BY-SA 3.0
null
2011-05-13T14:06:44.390
2011-05-13T14:06:44.390
null
null
2392
null
10764
5
null
null
0
null
> ...The standard SVM takes a set of input data and predicts, for each given input, which of two possible classes the input is a member of, which makes the SVM a non-probabilistic binary linear classifier. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples into one category or the other. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on. --[Wikipedia](http://en.wikipedia.org/wiki/Support_vector_machine) Visually: [](https://i.stack.imgur.com/0e3ce.png)
null
CC BY-SA 3.0
null
2011-05-13T14:19:52.103
2017-01-07T20:20:00.037
2017-01-07T20:20:00.037
7290
919
null
10765
4
null
null
0
null
Support Vector Machine refers to "a set of related supervised learning methods that analyze data and recognize patterns, used for classification and regression analysis."
null
CC BY-SA 3.0
null
2011-05-13T14:19:52.103
2011-08-10T14:59:45.267
2011-08-10T14:59:45.267
919
2513
null
10766
1
null
null
6
3499
I have two cases: - Two random Poisson variables $X_1 \sim \text{Pois}(\lambda_1)$, $X_2 \sim \text{Pois}(\lambda_2)$, and testing: Null Hypothesis: $\lambda_1 = \lambda_2$ Alternate hyp: $\lambda_1 \neq \lambda_2$ - Two random binomial variables $X_1 \sim \text{Binom}(n_1, p_1)$, $X_2 \sim \text{Binom}(n_2, p_2)$, where $n_1=n_2$: Null Hypothesis: $p_1 = p_2$ Alternate hyp: $p_1 \neq p_2$ I started with the likelihood ratio test for both cases, where I calculated maximum likelihood estimates for both cases, and then estimated likelihood ratio test statistics. But while reading some literature and discussion, I learned the likelihood ratio test may not be a valid test in all cases, such as at low lambda values, or when one lambda is low and the other high. So for the first case (Poisson case), I could also explore the option of conditioning on the sum $X_1+X_2$, which gives $X_1 \mid X_1+X_2 \sim \text{Binom}(X_1+X_2, 0.5)$ under the null and can be tested exactly. Similarly for the second case with two binomial random variables, a possible choice could be McNemar's test. And there are also some Bayesian tests which can be used (I do not know of any). So I am wondering how I could do the exact test for two Poisson r.v.s conditioned on their sum, and McNemar's test for two binomials, and whether there is any Bayesian test which I could use. Preferably, I will try these tests in R as I am trying to learn R.
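The conditional test described in the question can be run directly: under the null with equal exposure, $x_1$ given the total is $\text{Binom}(x_1+x_2, 0.5)$, so an exact binomial test applies. A minimal illustration (Python with scipy assumed available; the counts are made up; in R the equivalent is `binom.test(x1, x1 + x2, 0.5)`):

```python
from scipy.stats import binomtest

# Exact conditional test of H0: lambda1 = lambda2 (equal exposure):
# given the total, x1 | (x1 + x2) ~ Binomial(x1 + x2, 0.5) under H0.
x1, x2 = 17, 8                      # hypothetical observed Poisson counts
result = binomtest(x1, n=x1 + x2, p=0.5, alternative='two-sided')
print(result.pvalue)
```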
Two poisson random variables and likelihood ratio test
CC BY-SA 4.0
null
2011-05-13T14:45:17.463
2019-03-22T08:18:48.027
2019-03-22T08:18:48.027
128677
4098
[ "r", "maximum-likelihood", "binomial-distribution", "poisson-distribution" ]
10767
1
null
null
3
185
I am overthinking this for sure, but I am stumped. I have a historical data set of projects with hours of contribution by various positions. There are six types of projects. How can I model the average contribution of each position for future forecasting purposes? Linear regression doesn't work because that models the independent contribution of each position. The end goal is to establish a formula that will allow me to say "when working on project type 1, if position A contributes 40 hours, then positions B and C usually need X hours apiece." The data look like this: ``` Total Hours Project Type Position A Position B Position C 200 1 100 40 60 140 2 40 60 40 ``` And so on for about 700 projects.
Modeling relative contribution of a variable
CC BY-SA 3.0
null
2011-05-13T14:56:25.473
2011-05-13T16:29:02.787
2011-05-13T16:29:02.787
null
4600
[ "modeling", "forecasting" ]
10768
1
10814
null
15
1768
If $F_Z$ is a CDF, it looks like $F_Z(z)^\alpha$ ($\alpha \gt 0$) is a CDF as well. Q: Is this a standard result? Q: Is there a good way to find a function $g$ with $X \equiv g(Z)$ s.t. $F_X(x) = F_Z(z)^\alpha$, where $ x \equiv g(z)$ Basically, I have another CDF in hand, $F_Z(z)^\alpha$. In some reduced form sense I'd like to characterize the random variable that produces that CDF. EDIT: I'd be happy if I could get an analytical result for the special case $Z \sim N(0,1)$. Or at least know that such a result is intractable.
CDF raised to a power?
CC BY-SA 3.0
null
2011-05-13T15:02:08.907
2017-05-11T07:48:44.873
2017-05-11T07:48:44.873
35989
3577
[ "data-transformation", "cumulative-distribution-function", "quantiles" ]
10769
2
null
10697
2
null
Below is code which implements my solution and Paramonov's solution (a slight edit: I have changed `dlmFilter(u,mod)$a` in the originally posted answer to `dlmFilter(u,mod)$m`). ``` library(dlm) set.seed(1234) reps <- 100 MyEstimates <- YourEstimates <- matrix(0,reps,2) for (i in (1:reps) ) { X <- r <- rnorm(100) u <- -1*r + 0.5*rnorm(100) # fit <- dlmMLE(u, parm = c(1, sd(u)), build = function(x) dlmModReg(r, FALSE, dV = x[2]^2, m0 = x[1], C0 = matrix(0))) YourEstimates[i,] <- fit$par # MyModel <- function(x) dlmModReg(X, FALSE, dV = x[1]^2) fit <- dlmMLE(u, parm = c(0.3), build = MyModel) mod <- MyModel(fit$par) MyEstimates[i,] <- c(dlmFilter(u,mod)$m[101],fit$par[1]) } ``` When I run the above code, this is what I get: ``` > summary(YourEstimates) V1 V2 Min. :-9.5284 Min. :-0.5747 1st Qu.:-1.4280 1st Qu.: 0.4710 Median :-0.9795 Median : 0.4937 Mean :-0.9737 Mean : 0.4369 3rd Qu.:-0.5636 3rd Qu.: 0.5215 Max. : 4.5222 Max. : 0.5980 > summary(MyEstimates) V1 V2 Min. :-1.1099 Min. :-0.6010 1st Qu.:-1.0266 1st Qu.: 0.4736 Median :-0.9974 Median : 0.4961 Mean :-0.9938 Mean : 0.4469 3rd Qu.:-0.9635 3rd Qu.: 0.5158 Max. :-0.8390 Max. : 0.5776 ``` While the first set of estimates gives similar estimates for the second parameter, it occasionally gives values well off the mark for the first. I think the reason is that "tying" the state to its initial value with ``` C0=matrix(0) ``` leads to numerical instability, but I am not sure. In any case, you may want to look at the issue.
null
CC BY-SA 3.0
null
2011-05-13T15:26:56.350
2011-05-13T15:26:56.350
null
null
892
null
10770
2
null
10766
2
null
Did your reading suggest that the likelihood ratio test statistic had problems? Or that the chi-squared approximation was not very good? I expect that most of the problems are the latter: the test statistic is fine, but we don't know its distribution under the null hypothesis. With modern computers we can estimate the distribution fairly easily and compare to that (R works great for this). Just generate data that looks like yours but assuming the null hypothesis to be true, and compute the LR statistic (or other statistic) for each simulated dataset. Now compare the same statistic for your real data to the set of simulated statistics. Another option, depending on the nature of your data and question, is to use permutation tests, where the null hypothesis is that there is no difference between your groups, i.e. they are just samples from the same population. So you mix them together and recreate the samples over and over again, and compare the test statistic of interest from the original samples to the permuted samples. Fisher's exact test is actually a permutation test.
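The simulate-the-null recipe above can be sketched in a few lines (Python for illustration; the two observed counts are made up, and the pooled-rate plug-in for the null is one simple choice, a parametric bootstrap rather than the only option):

```python
import numpy as np

rng = np.random.default_rng(42)

def lr_stat(x1, x2):
    """Likelihood-ratio statistic for H0: lambda1 = lambda2 (single counts)."""
    lam0 = (x1 + x2) / 2.0                        # pooled MLE under H0
    def loglik(x, lam):
        # x*log(lam) - lam; the lam == 0 case only occurs with x == 0,
        # where the limit is 0.
        return x * np.log(lam) - lam if lam > 0 else 0.0
    return 2 * (loglik(x1, x1) + loglik(x2, x2)
                - loglik(x1, lam0) - loglik(x2, lam0))

x1_obs, x2_obs = 12, 5                            # hypothetical data
obs = lr_stat(x1_obs, x2_obs)

# Simulate the null: both counts drawn from Poisson with the pooled rate.
lam_hat = (x1_obs + x2_obs) / 2.0
sims = np.array([lr_stat(rng.poisson(lam_hat), rng.poisson(lam_hat))
                 for _ in range(5000)])
p_mc = np.mean(sims >= obs)                       # Monte Carlo p-value
print(obs, p_mc)
```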
null
CC BY-SA 3.0
null
2011-05-13T16:36:38.337
2015-09-22T15:54:21.487
2015-09-22T15:54:21.487
17230
4505
null
10771
2
null
10171
11
null
(Don't have much time now so I'll answer briefly and then expand later) Say that we are considering a binary classification problem and have a training set of $m$ class 1 samples and $n$ class 2 samples. A permutation test for feature selection looks at each feature individually. A test statistic $\theta$, such as information gain or the normalized difference between the means, is calculated for the feature. The data for the feature is then randomly permuted and partitioned into two sets, one of size $m$ and one of size $n$. The test statistic $\theta_p$ is then calculated based on this new partition $p$. Depending on the computational complexity of the problem, this is then repeated over all possible partitions of the feature into two sets of order $m$ and $n$, or a random subset of these. Now that we have established a distribution over $\theta_p$, we calculate the p-value that the observed test statistic $\theta$ arose from a random partition of the feature. The null hypothesis is that samples from each class come from the same underlying distribution (the feature is irrelevant). This process is repeated over all features, and then the subset of features used for classification can be selected in two ways: - The $N$ features with the lowest p-values - All features with a p-value$<\epsilon$
null
CC BY-SA 3.0
null
2011-05-13T17:42:00.410
2011-05-13T17:42:00.410
null
null
3595
null
10772
2
null
10567
1
null
With respect to the question in the header With logistic regression predicting posterior probabilities, the dependent variable (outcome) is both bounded and continuous. One train of thoughts to arrive at logistic regression in fact is thinking how to construct a regression with limits for the continuous outcome. - You want e.g. to do a regression directly on the probability - "Common" regression methods (e.g. linear regression) give you continuous output in the set of real numbers, $\mathbb R$. - But probabilities are in [0, 1] - So put a sigmoid transformation into your model to transform $\mathbb R \mapsto [0, 1]$ - If you choose the logistic function $\frac{1}{1 + e^{-x}}$ (a standard choice of a sigmoid), you end up with logistic regression. With respect to modeling angles in general I'd like to follow up with another question: how to model cyclic behaviour, how would I tell a model that 359° is almost the same as 0° (regardless of whether the variable is dependent or independent)?
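A small numeric sketch of both points (Python for illustration; the (sin, cos) encoding is one standard answer to the cyclic follow-up question, not the only one):

```python
import numpy as np

# The logistic function squashes the whole real line into (0, 1).
def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

# One common trick for cyclic variables: encode an angle as (sin, cos),
# so 359 degrees and 0 degrees become nearly identical feature vectors.
def encode_angle(deg):
    rad = np.deg2rad(deg)
    return np.array([np.sin(rad), np.cos(rad)])

d_near = np.linalg.norm(encode_angle(359) - encode_angle(0))  # tiny
d_far = np.linalg.norm(encode_angle(180) - encode_angle(0))   # maximal
print(logistic(0.0), d_near, d_far)
```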
null
CC BY-SA 3.0
null
2011-05-13T18:15:07.350
2011-05-14T07:40:35.677
2011-05-14T07:40:35.677
2669
4598
null
10773
1
null
null
4
3272
Although reading quite a bunch of books, I'm still not sure, which method to use and how to implement it, therefore I appreciate any help! I have 4 different groups (treatments) with 50 participants each. Each participant's action is observed 5 times under the same condition. The 5 different values are collected within one "round", where each participant has to select 5 items at an arbitrary time. So I know that the different values for each participant are not independent of each other. However, time effects are not of interest for me and I don't assume that learning effects are apparent. I just collect 5 values for each participant in order to get more data. All the data within one group (for all users and their choices) is then taken together and a complex aggregated measure (a number) is calculated for all groups. Now the question is, whether there are significant differences for this measure between the groups. As there is no unbiased estimator for the measure and as my data is not normal, I know that I need some form of sampling. So the Welch or Levene tests, for instance, would not work here. I'm just not sure whether bootstrapping or permutation tests seem to be more appropriate. Second, could I "ignore" the repeated measures and permute all values together without regard to the participants who provided it? If not, am I supposed to create vectors for each participant and permute these? What happens if I have an unbalanced design (in case some observations are missing)? Do you know any software where such a method is already implemented?
Permutation tests with repeated measures
CC BY-SA 3.0
null
2011-05-13T18:36:09.233
2013-11-19T19:18:19.297
2013-11-19T19:18:19.297
686
4602
[ "repeated-measures", "permutation-test" ]
10774
1
10865
null
10
1973
## Background I have data from a field study in which there are four treatment levels and six replicates in each of two blocks. (4x6x2=48 observations) The blocks are about 1 mile apart, and within the blocks, there is a grid of 42, 2m x 4m plots and a 1m wide walkway; my study only used 24 plots in each block. I would like to evaluate spatial covariance. Here is an example analysis using the data from a single block, without accounting for spatial covariance. In the dataset, `plot` is the plot id, `x` is the x location and `y` the y location of each plot with plot 1 centered on 0, 0. `level` is the treatment level and `response` is the response variable. ``` layout <- structure(list(plot = c(1L, 3L, 5L, 7L, 8L, 11L, 12L, 15L, 16L, 17L, 18L, 22L, 23L, 26L, 28L, 30L, 31L, 32L, 35L, 36L, 37L, 39L, 40L, 42L), level = c(0L, 10L, 1L, 4L, 10L, 0L, 4L, 10L, 0L, 4L, 0L, 1L, 0L, 10L, 1L, 10L, 4L, 4L, 1L, 1L, 1L, 0L, 10L, 4L), response = c(5.93, 5.16, 5.42, 5.11, 5.46, 5.44, 5.78, 5.44, 5.15, 5.16, 5.17, 5.82, 5.75, 4.48, 5.25, 5.49, 4.74, 4.09, 5.93, 5.91, 5.15, 4.5, 4.82, 5.84), x = c(0, 0, 0, 3, 3, 3, 3, 6, 6, 6, 6, 9, 9, 12, 12, 12, 15, 15, 15, 15, 18, 18, 18, 18), y = c(0, 10, 20, 0, 5, 20, 25, 10, 15, 20, 25, 15, 20, 0, 15, 25, 0, 5, 20, 25, 0, 10, 20, 25)), .Names = c("plot", "level", "response", "x", "y"), row.names = c(NA, -24L), class = "data.frame") model <- lm(response ~ level, data = layout) summary(model) ``` ## Questions - How can I calculate a covariance matrix and include it in my regression? - The blocks are very different, and there are strong treatment * block interactions. Is it appropriate to analyze them separately?
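For question 1, one possible route (sketched here in Python with made-up data, an assumed exponential covariance form, and an assumed range parameter; in R the analogous route is `nlme::gls` with a `corExp` correlation structure) is to build a distance-based covariance matrix and plug it into generalised least squares:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical plot coordinates and data (stand-ins for the x, y,
# level, response columns in the post).
coords = rng.uniform(0, 20, size=(24, 2))
level = rng.choice([0, 1, 4, 10], size=24).astype(float)
y = 5.0 - 0.05 * level + rng.normal(0, 0.3, size=24)

# Exponential spatial covariance: Cov(i, j) = sigma2 * exp(-d_ij / range).
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
sigma2, range_par = 0.09, 5.0        # assumed; would normally be estimated
V = sigma2 * np.exp(-d / range_par)

# Generalised least squares: beta = (X' V^-1 X)^-1 X' V^-1 y
X = np.column_stack([np.ones(24), level])
Vi = np.linalg.inv(V)
beta = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
print(beta)    # [intercept, slope on treatment level]
```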
How can I account for spatial covariance in a linear model?
CC BY-SA 3.0
null
2011-05-13T18:44:04.633
2011-05-16T19:56:57.893
2011-05-16T15:59:16.513
1381
1381
[ "r", "spatial", "linear-model", "covariance" ]
10775
2
null
10773
3
null
If you permute the individual values then you are testing the combined null hypothesis that there is no difference between groups and that there is no structure within values from the same individual. So if you reject the null you don't know if it is because the groups differ or because there is structure within an individual. I would suggest permuting individuals and keeping all values with the individual. I don't know of a program that does this out of the box, but writing a permutation test in R is often easier/quicker than figuring out a precanned program. Here is a simple example: ``` library(nlme) Odist <- split(Orthodont$distance, Orthodont$Subject) out <- replicate( 1999, { tmp <- sample(Odist); mean( unlist( tmp[1:16] ) ) - mean( unlist( tmp[17:27] ) ) } ) out <- c(out, mean(unlist(Odist[1:16])) - mean(unlist(Odist[17:27])) ) hist(out) abline( v=out[2000] ) mean(out >= out[2000]) ```
null
CC BY-SA 3.0
null
2011-05-13T19:04:56.143
2011-05-13T19:29:16.703
2011-05-13T19:29:16.703
4505
4505
null
10776
2
null
10766
4
null
The Bayesian test for your question is based on the integrated (rather than maximised) likelihood. So for Poisson we have: $$\begin{array}{c|c} H_{1}:\lambda_{1}=\lambda_{2} & H_{2}:\lambda_{1}\neq\lambda_{2} \end{array} $$ Now neither hypothesis says what the parameters are, so the actual values are nuisance parameters to be integrated out with respect to their prior probabilities. $$P(H_{1}|D,I)=P(H_{1}|I)\frac{P(D|H_{1},I)}{P(D|I)}$$ The model likelihood is given by: $$P(D|H_{1},I)=\int_{0}^{\infty} P(D,\lambda|H_{1},I)d\lambda=\int_{0}^{\infty} P(\lambda|H_{1},I)P(D|\lambda,H_{1},I)\,d\lambda$$ $$=\int_{0}^{\infty} P(\lambda|H_{1},I)\frac{\lambda^{x_1+x_2}\exp(-2\lambda)}{\Gamma(x_1+1)\Gamma(x_2+1)}\,d\lambda$$ where $P(\lambda|H_{1},I)$ is the prior for lambda. A convenient mathematical choice is the gamma prior, which gives: $$P(D|H_{1},I)=\int_{0}^{\infty} \frac{\beta^{\alpha}}{\Gamma(\alpha)}\lambda^{\alpha-1}\exp(-\beta \lambda)\frac{\lambda^{x_1+x_2}\exp(-2\lambda)}{\Gamma(x_1+1)\Gamma(x_2+1)}\,d\lambda$$ $$=\frac{\beta^{\alpha}\Gamma(x_1+x_2+\alpha)}{(2+\beta)^{x_1+x_2+\alpha}\Gamma(\alpha)\Gamma(x_1+1)\Gamma(x_2+1)}$$ And for the alternative hypothesis we have: $$P(D|H_{2},I)=\frac{\beta_{1}^{\alpha_{1}}\beta_{2}^{\alpha_{2}}\Gamma(x_1+\alpha_{1})\Gamma(x_2+\alpha_{2})}{(1+\beta_{1})^{x_1+\alpha_{1}}(1+\beta_{2})^{x_2+\alpha_{2}}\Gamma(\alpha_{1})\Gamma(\alpha_{2})\Gamma(x_1+1)\Gamma(x_2+1)}$$ Now if we assume that all hyper-parameters are equal (not an unreasonable assumption, given that you are testing for equality), then we have an integrated likelihood ratio of: $$\frac{P(D|H_{1},I)}{P(D|H_{2},I)}= \frac{(1+\beta)^{x_1+x_2+2\alpha}\Gamma(x_1+x_2+\alpha)\Gamma(\alpha)} {(2+\beta)^{x_1+x_2+\alpha}\beta^{\alpha}\Gamma(x_1+\alpha)\Gamma(x_2+\alpha)} $$ From which you can see that the prior information is still very important. We can't set $\alpha$ or $\beta$ equal to zero (Jeffrey's prior), or else $H_{1}$ will always be favored, regardless of the data. 
One way to get values for them is to specify prior estimates for $E[\lambda]$ and $E[\log(\lambda)]$ and solve for the parameters - this cannot be based on $x_1$ or $x_2$ but can be based on any other relevant information. You can also put in a few different (reasonable) values for the parameters and see what difference it makes to the conclusion. The numerical value of this statistic tells you how much the data and your prior information about the rates in each hypothesis support the hypothesis of equal rates. This explains why the likelihood ratio test is not always reliable - because it essentially ignores prior information, which is usually equivalent to specifying Jeffrey's prior. Note that you could also specify upper and lower limits for the rate parameters (this is usually not too hard to do given some common sense thinking about the real world problem). Then you would use a prior of the form: $$p(\lambda|I)=\frac{I(L<\lambda<U)}{\log\left(\frac{U}{L}\right)\lambda}$$ And you would be left with a similar equation to that above but in terms of incomplete, instead of complete gamma functions. For the binomial case things are much simpler, because the non-informative prior (uniform) is proper. 
The procedure is similar to that above, and the integrated likelihood for $H_{1}:p_{1}=p_{2}$ is given by: $$P(D|H_{1},I)={n_1 \choose x_1}{n_2 \choose x_2}\int_{0}^{1}p^{x_1+x_2}(1-p)^{n_1+n_2-x_1-x_2}\,dp$$ $$={n_1 \choose x_1}{n_2 \choose x_2}B(x_1+x_2+1,n_1+n_2-x_1-x_2+1)$$ And similarly for $H_{2}:p_{1}\neq p_{2}$ $$P(D|H_{2},I)={n_1 \choose x_1}{n_2 \choose x_2}\int_{0}^{1}p_{1}^{x_1}p_{2}^{x_2}(1-p_{1})^{n_1-x_1}(1-p_{2})^{n_{2}-x_{2}}\,dp_{1}\,dp_{2}$$ $$={n_1 \choose x_1}{n_2 \choose x_2}B(x_1+1,n_1-x_1+1)B(x_2+1,n_2-x_2+1)$$ And so taking ratios gives: $$\frac{P(D|H_{1},I)}{P(D|H_{2},I)}= \frac{B(x_1+x_2+1,n_1+n_2-x_1-x_2+1)} {B(x_1+1,n_1-x_1+1)B(x_2+1,n_2-x_2+1)} $$ $$=\frac{{x_1+x_2 \choose x_1}{n_1+n_2-x_1-x_2 \choose n_1-x_1}(n_1+1)(n_2+1)}{{n_1+n_2 \choose n_1}(n_1+n_2+1)}$$ And the choose functions can be calculated using the hypergeometric($r$,$n$,$R$,$N$) distribution where $N=n_1+n_2$, $R=x_1+x_2$, $n=n_1$, $r=x_1$ And this tells you how much the data support the hypothesis of equal probabilities, given that you don't have much information about which particular value this may be.
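The final binomial expression can be evaluated directly with integer binomial coefficients. A quick sketch (Python, made-up counts) showing it behaves as one would hope: similar proportions give a ratio above 1 (favouring $H_1$), very different proportions give a ratio near 0:

```python
from math import comb

def bayes_factor_equal_p(x1, n1, x2, n2):
    """Integrated likelihood ratio for H1: p1 = p2 vs H2: p1 != p2,
    uniform priors on the probabilities (the closed form derived above)."""
    num = (comb(x1 + x2, x1) * comb(n1 + n2 - x1 - x2, n1 - x1)
           * (n1 + 1) * (n2 + 1))
    den = comb(n1 + n2, n1) * (n1 + n2 + 1)
    return num / den

# Hypothetical counts: similar proportions support H1, very
# different proportions support H2.
bf_same = bayes_factor_equal_p(10, 20, 11, 20)
bf_diff = bayes_factor_equal_p(2, 20, 18, 20)
print(bf_same, bf_diff)
```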
null
CC BY-SA 3.0
null
2011-05-13T19:06:40.390
2012-12-07T05:31:46.147
2012-12-07T05:31:46.147
17230
2392
null
10777
2
null
10773
1
null
You can do permutations tests of both your within and between tests. Just make sure that you permute your values nested within your participants. So, if I'm a participant you can permute all the within conditions I ran in, but it's still all my data. Then you can take each participant and permute them through the between conditions.
null
CC BY-SA 3.0
null
2011-05-13T19:23:07.920
2011-05-13T19:23:07.920
null
null
601
null
10778
1
10796
null
0
1021
I have a general understanding of the difference between a population (set of entities under study) and a sample (a subsection selected from the population). However, I've been doing some work in PPC (Pay-Per-Click) and AdWords recently, and can't seem to grasp the population/sample difference in regards to that. For example, let's say there are two Google AdWords ads. Users will click the ad and it takes them to a form which they can fill out if they choose to. Therefore, I have data on the number of clicks and the number of forms filled out. The question I'm trying to answer is which ad was more effective at getting more clicks and forms filled out. ``` Clicks Forms Ad1 - 300 105 Ad2 - 320 100 ``` Initially, I thought that my sample was the two ads (Ad1 and Ad2), but that wouldn't be right as I'm really examining the number of clicks and forms filled out. So it would seem that the population that I'm examining is the clicks and forms associated with the two ads (Ad1 and Ad2) and my sample would be the number of clicks and number of forms filled out. Is that right/wrong? Thus, would clicks and forms be considered two separate samples taken from the same population? Or is my population the same as my sample in this case?
Difference between population and sample
CC BY-SA 3.0
null
2011-05-13T19:27:26.613
2011-05-14T05:15:57.400
2011-05-13T19:40:08.083
3310
3310
[ "sample-size", "population" ]
10779
1
null
null
3
463
Suppose I am given $n$ samples of sizes $N_1, \dots, N_n$ from a Dirichlet–multinomial distribution: Fixed and given is a $k$-vector $\mathbf{\alpha}$ of positive real numbers. For each $i, \, 1 \le i \le n$, a random probability vector $\mathbf{p}_i$ is drawn from a Dirichlet distribution $\mathrm{Dir}(\mathbf{\alpha})$, and then a sample of size $N_i$ is drawn from a multinomial distribution on $\{1, \dots, k\}$ with probabilities given by $\mathbf{p}_i$. The observed frequencies are recorded in an $n \times k$ table. Assume that there are no problems with low cell counts. What can one say about the value $X$ of the $\chi^2$-statistic that can be computed for this random table? We can always write $\mathbf{\alpha} = M \mathbf{p_o}$ for some probability vector $\mathbf{p_o}$, and if $M$ is large, then all rows in this random table come from approximately the same multinomial distribution, so approximately $X \sim \chi^2_{r}$ with $r = (n-1)(k-1)$. How large must $M$ be for this to happen? And what happens for small $M$?
$\chi^2$ test for data from Dirichlet-multinomial distribution
CC BY-SA 3.0
null
2011-05-13T19:29:20.173
2011-05-14T14:38:51.870
2011-05-14T14:38:51.870
2970
4062
[ "chi-squared-test", "multinomial-distribution", "dirichlet-distribution" ]
10780
2
null
10768
14
null
## Proof without words ![enter image description here](https://i.stack.imgur.com/NHSjO.png) The lower blue curve is $F$, the upper red curve is $F^\alpha$ (typifying the case $\alpha \lt 1$), and the arrows show how to go from $z$ to $x = g(z)$.
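In symbols, the arrows trace $x = g(z) = F^{-1}\!\left(F(z)^{1/\alpha}\right)$; for integer $\alpha$, $F^\alpha$ is the CDF of the maximum of $\alpha$ iid copies. A quick Monte Carlo check for the standard normal (Python, scipy assumed available):

```python
import numpy as np
from scipy.stats import norm, kstest

alpha = 3.0

def g(z):
    # g(z) = F^{-1}(F(z)^(1/alpha)) maps Z ~ F to a variable with CDF F^alpha:
    # P(g(Z) <= x) = P(F(Z) <= F(x)^alpha) = F(x)^alpha since F(Z) ~ Uniform.
    return norm.ppf(norm.cdf(z) ** (1.0 / alpha))

rng = np.random.default_rng(0)
x = g(rng.standard_normal(20000))

# The empirical distribution of x should match the target CDF F(t)^alpha.
stat, pval = kstest(x, lambda t: norm.cdf(t) ** alpha)
print(stat, pval)
```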
null
CC BY-SA 3.0
null
2011-05-13T19:42:43.590
2011-05-13T19:42:43.590
2020-06-11T14:32:37.003
-1
919
null
10782
1
null
null
7
1470
I am analyzing data from an experiment in which treatment levels increase quadratically, e.g. the treatment levels are $0, 1, 4, 9$. When analyzing the response using regression, would it make sense to use the square root of the treatment level as a predictor? If so, how would this affect interpretation?
When to transform predictors in regression when response may be quadratic?
CC BY-SA 3.0
null
2011-05-13T20:21:44.867
2015-07-23T16:47:20.347
2015-07-23T16:47:20.347
1381
1381
[ "regression", "data-transformation", "predictor" ]
10783
2
null
8514
2
null
Your stated objective: > Compare the population of several states in a small country. Your stated problem: > Since some states have a population of 3000,000 and some a population of 2,000. Is there an easy way to "normalise" or make the data comparable? ## Aim of normalising your data before mapping This answer will be lacking since I am not sure of the context of why you are making the map. Nevertheless, here are some thoughts to explore: Normalise your data so that the map provides interesting meaning to the map's potential readers, so they can link what they see on your map to some concept they normally think about. Basically, I think your new normalised numbers should be linked to some qualitative concept that the map readers find interesting to understand (random tidbit: Measure = Quantity x Quality, Hegel). ## Two proposed ways to normalise your data 1. In order to give a sense of how much open space is in each state. Create a new state variable for population density by calculating the population divided by total state area. 2. In order to make the coloring of the states contrast with one another. Create a new state variable by calculating the deviation from the mean of each state. For example, say you have 3 states with populations as follows: - State A is 100. - State B is 50. - State C is 1. The mean will be about 50. The new variable's values for each state will be as follows: - State A is +50 (color intense green). - State B is 0 (color grey). - State C is -49 (color intense red). You can use any color scheme where positive numbers contrast with negative numbers (google 'colorbrewer' for lots of examples of color schemes for maps).
null
CC BY-SA 3.0
null
2011-05-13T21:01:47.547
2011-05-13T21:01:47.547
null
null
4329
null
10784
1
10785
null
2
3636
(Apologies if the notations are "unusual", I'm not sure what the correct notations should be. I'm putting an example at the end of the question.) Let's assume there was an initial dataset of an n by m matrix $M=(x_{ij})$ with $1<=i<=n$ and $1<=j<=m$, from which the following two vectors have been calculated: - the vector of the column-wise means, that is, a vector of length $m$ where each element of index $j$ is the mean of the $n$ elements in column $j$ of the matrix: $(\bar{x}_{*j})$ with $1<=j<=m$, where $\bar{x}_{*j} = \frac{1}{n}\sum_{i=1}^n{x_{ij}}$. - the corresponding vector of the column-wise standard deviations ($s_{*j}$). I'd like to get the mean and standard deviation of the row-wise vector of means, that is, the column vector made of the mean of each row: $(\bar{x}_{i*})$ with $1<=i<=n$, where $\bar{x}_{i*} = \frac{1}{m}\sum_{j=1}^m{x_{ij}}$, assuming that the initial matrix has been lost. The mean of the elements in the row-wise vector is also the mean of the elements in the column-wise vector (which is also the mean of all the elements in the matrix, with equal weight): $\frac{1}{n}\sum_{i=1}^n{\bar{x}_{i*}} = \frac{1}{n}\sum_{i=1}^n({\frac{1}{m}\sum_{j=1}^m{x_{ij}}}) = \frac{1}{n\times m}\sum_{i=1}^n\sum_{j=1}^m{x_{ij}}$ Is there any way to get the standard deviation within the row-wise vector of means without having the original matrix? For example: $$ M = \left(\begin{matrix} 1 & 2 & 3\\ 7 & 5 & 4\\ 8 & 2 & 3\\ 5 & 2 & 4 \end{matrix}\right) $$ $$ (\bar{x}_{*j}) = \left(\begin{matrix}5.25 & 2.75 & 3.5\end{matrix}\right) $$ $$ (s_{*j}) = \left(\begin{matrix} 2.68 & 1.29 & 0.5\end{matrix}\right) $$ $$ (\bar{x}_{i*}) = \left(\begin{matrix} 2\\ 5.33 \\ 4.33 \\ 3.66\end{matrix}\right) $$ The mean of $\{2, 5.33, 4.33, 3.66\}$ is $3.83$ (also the mean of $\{5.25, 2.75, 3.5\}$). The standard deviation of $\{2, 5.33, 4.33, 3.66\}$ is $1.21$. 
Would there be any way of calculating this standard deviation, knowing $(\bar{x}_{*j})$, $(s_{*j})$, but without knowing $M$ nor $(\bar{x}_{i*})$? Is there anything else that could have been calculated "column-wise" (that is, independently for each column, thus excluding the "row-wise" means themselves) that would help find out the (preferably sample) standard deviation of the "row-wise" means? Thank you.
Standard deviation of means over two dimensions
CC BY-SA 3.0
null
2011-05-13T21:14:57.233
2011-05-13T22:28:20.920
null
null
4607
[ "standard-deviation", "mean" ]
10785
2
null
10784
1
null
No, there isn't. In essence, having the column-wise means is equivalent to having the sums along the columns. With that, you cannot get the sums along the rows. In general, to recover the sums along the rows you'll need to recover the full matrix. Knowing the other sum (in your example, $(5.25\ \ 2.75\ \ 3.5)$) is not enough. This can be seen in matrix form. Essentially, you have $A U = M$, where $A$ is the data matrix, $U$ is a column vector of ones (or $1/N$), and $M$ is the vector of sums or means along the columns or rows. If you could augment $U$ by adding other operations, so that $U$ is square, you could invert $U$ and recover $A$. For example, if you had not only the sum of the four rows, but also the sum of the first three rows, and the first two rows... etc., you'd have 4 vectors of 3 components, and then you could recover your matrix.
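A two-by-two counterexample makes this concrete (Python for illustration): the two matrices below share column means and column standard deviations, yet the standard deviations of their row means differ, so no formula based on column-wise summaries alone can recover that spread.

```python
import numpy as np

# Two matrices with identical column means AND column standard deviations,
# but different standard deviations of their row means.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[1.0, 4.0],
              [3.0, 2.0]])   # second column's entries swapped

col_means_equal = np.allclose(A.mean(axis=0), B.mean(axis=0))
col_sds_equal = np.allclose(A.std(axis=0), B.std(axis=0))
sd_rows_A = A.mean(axis=1).std()   # row means 1.5, 3.5 -> sd 1.0
sd_rows_B = B.mean(axis=1).std()   # row means 2.5, 2.5 -> sd 0.0
print(col_means_equal, col_sds_equal, sd_rows_A, sd_rows_B)
```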
null
CC BY-SA 3.0
null
2011-05-13T21:30:54.460
2011-05-13T21:30:54.460
null
null
2546
null
10786
2
null
10782
8
null
When you don't know the functional form ahead of time (which is a common setting) and you have no reason to assume it's linear, it's best to be flexible. If there were more levels of treatment you could fit a quadratic or restricted cubic spline shape, for example. For only 4 levels it may be best to assign 3 degrees of freedom to treatment using 3 dummy variables.
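A sketch of that dummy coding (Python with a hypothetical treatment column; in R, putting `factor(level)` in the model formula does this automatically): level 0 serves as the reference, and the three indicator columns absorb the 3 degrees of freedom.

```python
import numpy as np

levels = np.array([0, 1, 4, 9, 0, 4, 9, 1])    # hypothetical treatment column

# Dummy-code 4 treatment levels with 3 indicator columns (level 0 = reference),
# spending 3 degrees of freedom instead of forcing a linear dose-response.
categories = np.unique(levels)                 # [0, 1, 4, 9]
dummies = (levels[:, None] == categories[None, 1:]).astype(float)
X = np.column_stack([np.ones(len(levels)), dummies])
print(X)
```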
null
CC BY-SA 3.0
null
2011-05-13T21:43:50.037
2011-05-13T21:43:50.037
null
null
4253
null
10787
1
11621
null
18
10215
I am exploring different classification methods for a project I am working on, and am interested in trying Random Forests. I am trying to educate myself as I go along, and would appreciate any help provided by the CV community. I have split my data into training/test sets. From experimentation with random forests in R (using the randomForest package), I have been having trouble with a high misclassification rate for my smaller class. I have read [this paper](http://www.stat.berkeley.edu/tech-reports/666.pdf) concerning the performance of random forests on imbalanced data, and the authors presented two methods for dealing with class imbalance when using random forests. 1. Weighted Random Forests 2. Balanced Random Forests The R package does not allow weighting of the classes (from the R help forums, I have read the classwt parameter is not performing properly and is scheduled as a future bug fix), so I am left with option 2. I am able to specify the number of objects sampled from each class for each iteration of the random forest. I feel uneasy about setting equal sample sizes for random forests, as I feel like I would be losing too much information about the larger class, leading to poor performance with future data. The misclassification rates have been shown to improve when downsampling the larger class, but I was wondering if there were other ways to deal with imbalanced class sizes in random forests?
For classification with Random Forests in R, how should one adjust for imbalanced class sizes?
CC BY-SA 3.0
null
2011-05-13T21:49:07.063
2019-06-16T10:13:40.737
null
null
2252
[ "r", "machine-learning", "random-forest" ]
10788
1
null
null
5
1603
I was wondering if you could share your experiences on what you feel is the best method to test lead/lag relationships between I(1) time series variables (e.g. stock prices), along with the advantages and disadvantages of your proposed method(s). If you have links to academic papers that further describe these methods, I would greatly appreciate them. I have read several papers that speak of VECMs, IRSUR, simple OLS, threshold regression, etc., but I'm not sure which to use for my study.

I am trying to establish lead/lag relationships in intraday stock return data (using 1-minute price time series) between the stock prices of ~50 companies in one particular sector. Appreciate the help.
Methods to best test lead/lag relationships
CC BY-SA 3.0
null
2011-05-13T22:20:55.140
2011-05-13T22:20:55.140
null
null
4338
[ "regression", "least-squares" ]
10789
1
10791
null
25
3431
I'm having problems understanding the concept of a random variable as a function. I understand the mechanics (I think), but I do not understand the motivation...

Say $(\Omega, B, P) $ is a probability triple, where $\Omega = [0,1]$, $B$ is the Borel $\sigma$-algebra on that interval, and $P$ is the regular Lebesgue measure. Let $X$ be a random variable on $\Omega$ taking values in $\{1,2,3,4,5,6\}$ such that $X([0,1/6)) = 1$, $X([1/6,2/6)) = 2$, ..., $X([5/6,1]) = 6$, so $X$ has a discrete uniform distribution on the values 1 through 6.

That's all good, but I do not understand the necessity of the original probability triple... we could have directly constructed something equivalent as $(\{1,2,3,4,5,6\}, S, P_x)$, where $S$ is the appropriate $\sigma$-algebra of the space and $P_x$ is a measure that assigns to each subset the measure (number of elements)/6. Also, the choice of $\Omega=[0,1]$ was arbitrary; it could have been $[0,2]$, or any other set.

So my question is: why bother constructing an arbitrary $\Omega$ with a $\sigma$-algebra and a measure, and define a random variable as a map from the sample space to the real line?
Why are random variables defined as functions?
CC BY-SA 3.0
null
2011-05-13T22:24:15.883
2018-08-10T09:12:13.803
2016-10-11T07:01:00.633
7224
4608
[ "probability", "random-variable", "measure-theory" ]
10790
2
null
10784
1
null
You can alter the standard deviation of the row means by suitably reordering the elements of each column, while leaving the statistics for each column unchanged. For example, with $$ M_2 = \left(\begin{matrix} 1 & 2 & 3\\ 5 & 2 & 3\\ 7 & 2 & 4\\ 8 & 5 & 4 \end{matrix}\right) $$ you will get the same column statistics, but the row means will be 2.000, 3.333, 4.333, and 5.667, with a standard deviation of about 1.55, rather than the original 2.000, 5.333, 4.333, and 3.667, with a standard deviation of about 1.40. So you cannot derive statistics for the rows (as opposed to the whole matrix) from the statistics for the columns.
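A quick numeric check (not part of the original answer) of the row means and their standard deviation for the rearranged matrix, with the rows sums 6, 10, 13, and 17:

```python
from statistics import mean, stdev

# Verify the row statistics quoted above for the column-rearranged matrix.
M2 = [[1, 2, 3],
      [5, 2, 3],
      [7, 2, 4],
      [8, 5, 4]]

row_means = [mean(row) for row in M2]   # 2.000, 3.333, 4.333, 5.667
sd_of_row_means = stdev(row_means)      # about 1.55 (sample standard deviation)
```

The column multisets are unchanged by any within-column permutation, so every column statistic is preserved while the row statistics move.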
null
CC BY-SA 3.0
null
2011-05-13T22:28:20.920
2011-05-13T22:28:20.920
null
null
2958
null
10791
2
null
10789
24
null
If you are wondering why all this machinery is used when something much simpler could suffice--you are right, for most common situations. However, the measure-theoretic version of probability was developed by Kolmogorov for the purpose of establishing a theory of such generality that it could handle, in some cases, very abstract and complicated probability spaces. In fact, Kolmogorov's measure-theoretic foundations for probability ultimately allowed probabilistic tools to be applied far beyond their original intended domain of application, into areas such as harmonic analysis.

At first it does seem more straightforward to skip any "underlying" space $\Omega$ and its $\sigma$-algebra, and to simply assign probability masses to the events comprising the sample space directly, as you have proposed. Indeed, probabilists effectively do the same thing whenever they choose to work with the "induced measure" on the sample space defined by $P \circ X^{-1}$. However, things start getting tricky when you get into infinite-dimensional spaces.

Suppose you want to prove the Strong Law of Large Numbers for the specific case of flipping fair coins (that is, that the proportion of heads tends arbitrarily close to 1/2 as the number of coin flips goes to infinity). You could attempt to construct a $\sigma$-algebra on the set of infinite sequences of the form $(H,T,H,\ldots)$. But here you may find that it is much more convenient to take the underlying space to be $\Omega = [0,1)$, and then use the binary representations of real numbers (e.g. $0.10100\ldots$) to represent sequences of coin flips (1 being heads, 0 being tails). An illustration of this very example can be found in the first few chapters of Billingsley's *Probability and Measure*.
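The coin-flip construction in the last paragraph can be illustrated with a short, admittedly informal simulation (the number of flips is arbitrary):

```python
import random

# Illustrative sketch: represent a run of fair coin flips by the binary
# digits of one uniform draw u = k / 2**n from [0, 1).
random.seed(42)
n = 10_000
k = random.getrandbits(n)                        # uniform n-bit integer
digits = [int(b) for b in format(k, f"0{n}b")]   # binary digits of u; 1 = heads

# By the strong law, this proportion is close to 1/2 for almost every u.
proportion_heads = sum(digits) / n
```

Here a single point of $\Omega = [0,1)$ encodes an entire flip sequence, which is exactly why that choice of underlying space is so convenient.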
null
CC BY-SA 3.0
null
2011-05-13T22:47:25.717
2011-05-13T22:47:25.717
null
null
3567
null
10792
2
null
10768
6
null
Q1) Yes. It's also useful for generating variables which are stochastically ordered; you can see this from @whuber's pretty picture :). $\alpha>1$ swaps the stochastic order. That it's a valid cdf is just a matter of verifying the requisite conditions: $F_z(z)^\alpha$ has to be [cadlag](http://en.wikipedia.org/wiki/C%C3%A0dl%C3%A0g), nondecreasing and limit to $1$ at infinity and $0$ at negative infinity. $F_z$ has these properties so these are all easy to show. Q2) Seems like it would be pretty difficult analytically, unless $F_Z$ is special
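As a hedged sketch of Q1, take $F_Z$ to be the uniform CDF on $[0,1]$, so $F_Z(z)^\alpha = z^\alpha$; the code below (invented for illustration) checks the stochastic ordering and shows that sampling is easy via the inverse transform:

```python
# Hedged sketch: with F(z) = z (uniform CDF on [0, 1]), F(z)**alpha is again
# a valid CDF. For alpha > 1 it sits below F, so the new variable is
# stochastically larger; alpha = 2 is an arbitrary choice.
alpha = 2.0

def F(z):                      # uniform CDF, clamped outside [0, 1]
    return min(max(z, 0.0), 1.0)

def F_alpha(z):                # the transformed CDF F(z)**alpha
    return F(z) ** alpha

def quantile(u):               # inverse of F_alpha, for inverse-transform sampling
    return u ** (1.0 / alpha)

grid = [i / 10 for i in range(11)]
stochastically_larger = all(F_alpha(z) <= F(z) for z in grid)
```

For integer $\alpha$, $F_Z^\alpha$ is also the CDF of the maximum of $\alpha$ i.i.d. copies of $Z$, since $P(\max_i Z_i \le z) = F_Z(z)^\alpha$, which gives another route to sampling.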
null
CC BY-SA 3.0
null
2011-05-13T22:55:51.613
2011-05-13T22:55:51.613
null
null
26
null
10793
2
null
4909
4
null
The empirical CDF is just one estimator for the CDF. It's consistent, converges pretty quickly in general, and is dead simple to understand. If you want something fancier you could certainly get a kernel density estimate for the PDF and integrate it to get another estimate for the CDF, which would do some kind of interpolation as you suggest. But if it ain't broke....
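For concreteness, here is a minimal empirical-CDF sketch in Python (the sample values are invented):

```python
from bisect import bisect_right

# Minimal empirical CDF: the fraction of sample points <= x, as a step function.
def ecdf(sample):
    xs = sorted(sample)
    n = len(xs)
    return lambda x: bisect_right(xs, x) / n

F_hat = ecdf([3, 1, 4, 1, 5])
# e.g. F_hat(1) = 2/5 and F_hat(4.5) = 4/5 for this sample.
```

The jumps sit exactly at the observed values, which is the "staircase" look the estimator is known for.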
null
CC BY-SA 3.0
null
2011-05-13T23:07:29.717
2011-05-13T23:07:29.717
null
null
26
null
10794
2
null
10782
8
null
Why not look at a bivariate X-Y scatterplot in advance of running a regression? That'll show you the shape of the line or curve, especially if you have software that can give a lowess/loess fit (locally weighted smoothed fit). As to interpretation, it'll no doubt be easier for you than for your audience, but if you do have a quadratic fit, then for each increment of one in the square root of X, Y will change by b, your coefficient. If you really only have 4 levels of X, I agree with @Frank's point, and would add that you might make your job easier by running an ANOVA instead of a regression. Or, some software makes it easy to combine continuous and categorical predictors, fusing regression and ANOVA into a general linear model without the need for dummy variables (if you use SPSS, search 'Unianova').
null
CC BY-SA 3.0
null
2011-05-13T23:12:11.193
2011-05-13T23:12:11.193
null
null
2669
null
10795
1
null
null
93
77020
I was wondering whether anyone could point me to some references that discuss the interpretation of the elements of the inverse covariance matrix, also known as the concentration matrix or the precision matrix.

I have access to Cox and Wermuth's *Multivariate Dependencies*, but what I'm looking for is an interpretation of each element in the inverse matrix. Wikipedia [states](http://en.wikipedia.org/wiki/Covariance_matrix): "The elements of the precision matrix have an interpretation in terms of partial correlations and partial variances," which leads me to [this](http://en.wikipedia.org/wiki/Partial_correlation) page. Is there an interpretation without using linear regression, i.e., in terms of covariances or geometry?
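To make the Wikipedia relation concrete, here is a hedged numeric check (the covariance matrix is invented): the partial correlation between variables $i$ and $j$ given the rest should equal $-p_{ij}/\sqrt{p_{ii}\,p_{jj}}$, where $p_{ij}$ are entries of the precision matrix.

```python
# Numeric check of the cited relation between the precision matrix and
# partial correlations, on a made-up 3x3 covariance matrix.

def inverse(A):
    """Gauss-Jordan inverse of a small square matrix (lists of lists)."""
    n = len(A)
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [row[n:] for row in M]

cov = [[2.0, 1.0, 1.0],
       [1.0, 2.0, 1.0],
       [1.0, 1.0, 2.0]]
P = inverse(cov)                                         # precision matrix
partial_r_12 = -P[0][1] / (P[0][0] * P[1][1]) ** 0.5     # equals 1/3 here
```

For this exchangeable covariance, every pairwise correlation is 1/2, and the regression-based partial-correlation formula $(\rho_{12}-\rho_{13}\rho_{23})/\sqrt{(1-\rho_{13}^2)(1-\rho_{23}^2)}$ gives the same 1/3, matching the precision-matrix reading.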
How to interpret an inverse covariance or precision matrix?
CC BY-SA 3.0
null
2011-05-14T01:13:14.647
2023-02-16T21:32:29.900
2023-02-16T21:32:29.900
11887
4610
[ "interpretation", "covariance-matrix", "precision-matrix" ]
10796
2
null
10778
2
null
The phrase "population" is an abstract concept you use to define what type of question you want to answer. You could consider the "population of ads" or the "population of clicks"; they are just two different forms of inference (one about ads, one about clicks). I would suggest that in either case the notion of "population" is not helpful to you, given the way you have asked the question. Suppose I write two hypotheses: $$\begin{array}{c c} H_{1}:\text{Ad1 is more effective at getting clicks and forms filled out} \\ H_{2}:\text{Ad2 is more effective at getting clicks and forms filled out} \end{array}$$ Now I also write out the data, denoted $D$ in your table, and your prior information, denoted $I$. Then we calculate: $$P(H_{1}|D,I)=P(H_{1}|I)\frac{P(D|H_{1},I)}{P(D|I)}$$ $P(H_{1}|I)$ is the prior probability: how plausible is $H_{1}$ without considering the data? Note that because you only have two ads, their probabilities must sum to 1 under this prior information. The main quantity to calculate is the likelihood $P(D|H_{1},I)$: how plausible is the data you actually observed, given that $H_{1}$ is true? $P(D|I)$ is sometimes called the evidence, and is usually not explicitly calculated, but divided out by considering odds ratios $\frac{P(H_{1}|D,I)}{P(H_{2}|D,I)}$. It can be thought of as asking: how plausible is the data you actually observed, regardless of the hypothesis in question? ("Does any hypothesis being considered explain these data well?") You don't need a notion of the population to answer the question you have asked. This can be seen quite clearly from its absence in your actual question: which ad was more effective at getting more clicks and forms filled out? I've personally decided to call this "the golden rule" for inference using probability theory as extended logic. You write a proposition which, if true or false, would answer your question (such as $H_{1}$ and $H_{2}$).
Then you simply calculate the probability of that proposition, conditional on whatever evidence you have (i.e. what you know), marginalising (or averaging) out what is unknown.
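A hedged numerical sketch of the odds-ratio calculation (the click counts and the fixed-rate likelihood model are invented, since the question's table is not reproduced here):

```python
from math import comb

# Invented data: (conversions, impressions) per ad. Each hypothesis is
# modelled as a pair of fixed conversion rates -- a deliberately crude
# stand-in for a real likelihood model.
clicks = {"Ad1": (18, 1000),
          "Ad2": (11, 1000)}

def likelihood(conversions, n, rate):
    """Binomial likelihood of the observed counts under a given rate."""
    return comb(n, conversions) * rate**conversions * (1 - rate)**(n - conversions)

def joint(rate1, rate2):
    """P(D | H, I) when H fixes Ad1's rate at rate1 and Ad2's at rate2."""
    return likelihood(*clicks["Ad1"], rate1) * likelihood(*clicks["Ad2"], rate2)

# H1: Ad1 converts at 2%, Ad2 at 1%. H2: the reverse. Equal priors, so the
# posterior odds equal the likelihood ratio and P(D|I) cancels.
odds_h1_vs_h2 = joint(0.02, 0.01) / joint(0.01, 0.02)
# odds > 1 favours H1 ("Ad1 is more effective") for these invented counts.
```

With equal priors the evidence term drops out of the ratio, which is exactly why $P(D|I)$ rarely needs to be computed explicitly.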
null
CC BY-SA 3.0
null
2011-05-14T05:15:57.400
2011-05-14T05:15:57.400
null
null
2392
null