| Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
6125 | 2 | null | 6119 | 4 | null | PCA is probably fine to use on your data, since it does not appear to make any assumptions about the structure of the data. See [link](https://web.archive.org/web/20130525093225/http://en.m.wikipedia.org/wiki/Principal_component_analysis#cite_note-4) for a good introduction.
The note leads to a short PDF tutorial on PCA.
Hope this helps.
| null | CC BY-SA 4.0 | null | 2011-01-10T11:31:41.053 | 2023-01-23T16:08:31.280 | 2023-01-23T16:08:31.280 | 362671 | 656 | null |
6126 | 2 | null | 6123 | 1 | null | Just asking for clarification: why would you want to do this? As I understand it, to do this properly you need some concrete benchmark against which you can quantify the bias.
The most important thing is how appropriate is the benchmark chosen? (Are we standardizing to the wrong base?)
Even if we have the correct benchmark, is the benchmark itself exactly true? (The issue of matching each variable to the benchmark.)
If you ask me, just proceed with the sample, and quantify separately for each variable the apparent bias from your sample relative to the "benchmark" population. I'd expect this to give a more informative and accurate picture.
| null | CC BY-SA 2.5 | null | 2011-01-10T11:36:02.833 | 2011-01-10T11:36:02.833 | null | null | 2472 | null |
6127 | 1 | 6128 | null | 63 | 52586 | I have data from an experiment that I analyzed using t-tests. The dependent variable is interval scaled and the data are either unpaired (i.e., 2 groups) or paired (i.e., within-subjects).
E.g. (within subjects):
```
x1 <- c(99, 99.5, 65, 100, 99, 99.5, 99, 99.5, 99.5, 57, 100, 99.5,
99.5, 99, 99, 99.5, 89.5, 99.5, 100, 99.5)
y1 <- c(99, 99.5, 99.5, 0, 50, 100, 99.5, 99.5, 0, 99.5, 99.5, 90,
80, 0, 99, 0, 74.5, 0, 100, 49.5)
```
However, the data are not normal, so one reviewer asked us to use something other than the t-test. Moreover, as one can easily see, the data are not only non-normally distributed; the distributions are also unequal between conditions:

Therefore, the usual nonparametric tests, the Mann-Whitney-U-Test (unpaired) and the Wilcoxon Test (paired), cannot be used as they require equal distributions between conditions. Hence, I decided that some resampling or permutation test would be best.
Now, I am looking for an R implementation of a permutation-based equivalent of the t-test, or any other advice on what to do with the data.
I know that there are some R-packages that can do this for me (e.g., coin, perm, exactRankTest, etc.), but I don't know which one to pick. So, if somebody with some experience using these tests could give me a kick-start, that would be ubercool.
UPDATE: It would be ideal if you could provide an example of how to report the results from this test.
| Which permutation test implementation in R to use instead of t-tests (paired and non-paired)? | CC BY-SA 3.0 | null | 2011-01-10T12:10:29.000 | 2023-03-22T14:18:41.363 | 2016-08-24T17:12:32.300 | 28666 | 442 | [
"r",
"t-test",
"nonparametric",
"permutation-test"
] |
6128 | 2 | null | 6127 | 52 | null | It shouldn't matter that much since the test statistic will always be the difference in means (or something equivalent). Small differences can come from the implementation of Monte-Carlo methods. Trying the three packages with your data with a one-sided test for two independent variables:
```
DV <- c(x1, y1)
IV <- factor(rep(c("A", "B"), c(length(x1), length(y1))))
library(coin) # for oneway_test(), pvalue()
pvalue(oneway_test(DV ~ IV, alternative="greater",
distribution=approximate(B=9999)))
[1] 0.00330033
library(perm) # for permTS()
permTS(DV ~ IV, alternative="greater", method="exact.mc",
control=permControl(nmc=10^4-1))$p.value
[1] 0.003
library(exactRankTests) # for perm.test()
perm.test(DV ~ IV, paired=FALSE, alternative="greater", exact=TRUE)$p.value
[1] 0.003171822
```
To check the exact p-value with a manual calculation of all permutations, I'll restrict the data to the first 9 values.
```
x1 <- x1[1:9]
y1 <- y1[1:9]
DV <- c(x1, y1)
IV <- factor(rep(c("A", "B"), c(length(x1), length(y1))))
pvalue(oneway_test(DV ~ IV, alternative="greater", distribution="exact"))
[1] 0.0945907
permTS(DV ~ IV, alternative="greater", exact=TRUE)$p.value
[1] 0.0945907
# perm.test() gives different result due to rounding of input values
perm.test(DV ~ IV, paired=FALSE, alternative="greater", exact=TRUE)$p.value
[1] 0.1029412
# manual exact permutation test
idx <- seq(along=DV) # indices to permute
idxA <- combn(idx, length(x1)) # all possibilities for different groups
# function to calculate difference in group means given index vector for group A
getDiffM <- function(x) { mean(DV[x]) - mean(DV[!(idx %in% x)]) }
resDM <- apply(idxA, 2, getDiffM) # difference in means for all permutations
diffM <- mean(x1) - mean(y1) # empirical difference in group means
# p-value: proportion of group means at least as extreme as observed one
(pVal <- sum(resDM >= diffM) / length(resDM))
[1] 0.0945907
```
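For readers outside R, the same exhaustive 9-vs-9 enumeration can be cross-checked with a short Python sketch (doubling the scores is just a device to keep all arithmetic exact integer arithmetic):

```python
from itertools import combinations

# First nine values of x1 and y1 from the question, doubled to integers.
x1 = [198, 199, 130, 200, 198, 199, 198, 199, 199]  # 2 * x1[1:9]
y1 = [198, 199, 199, 0, 100, 200, 199, 199, 0]      # 2 * y1[1:9]
DV = x1 + y1

# With equal group sizes and a fixed grand total, the difference in group
# means is a monotone function of group A's sum, so comparing sums suffices.
obs = sum(x1)
sums = [sum(c) for c in combinations(DV, len(x1))]  # all C(18, 9) splits
p = sum(s >= obs for s in sums) / len(sums)
print(p)  # ~0.0946, in line with the exact tests above
```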
`coin` and `exactRankTests` are both from the same author, but `coin` seems to be more general and extensive - also in terms of documentation. `exactRankTests` is no longer actively developed. I'd therefore choose `coin` (also because of informative functions like `support()`), unless you don't want to deal with S4 objects.
EDIT: for two dependent variables, the syntax is
```
id <- factor(rep(1:length(x1), 2)) # factor for participant
pvalue(oneway_test(DV ~ IV | id, alternative="greater",
distribution=approximate(B=9999)))
[1] 0.00810081
```
| null | CC BY-SA 4.0 | null | 2011-01-10T13:22:10.797 | 2018-05-31T03:42:08.390 | 2018-05-31T03:42:08.390 | -1 | 1909 | null |
6129 | 2 | null | 6127 | 2 | null | Are these scores proportions? If so, you certainly shouldn't be using a gaussian parametric test, and while you could go ahead with a non-parametric approach like a permutation test or bootstrap of the means, I'd suggest that you'll get more statistical power by employing a suitable non-gaussian parametric approach. Specifically, any time you can compute a proportion measure within a unit of interest (ex. participant in an experiment), you can and probably should use a mixed effects model that specifies observations with binomially distributed error. See [Dixon 2004](http://linkinghub.elsevier.com/retrieve/pii/S0749596X07001283).
| null | CC BY-SA 2.5 | null | 2011-01-10T13:49:44.313 | 2011-01-10T13:49:44.313 | null | null | 364 | null |
6130 | 2 | null | 6122 | 1 | null | SPSS has some scripting facilities (syntax, Sax Basic, Python, ...). I myself have so far only used syntax. Maybe the link below can help you construct a loop that does the job; it points to the excellent UCLA website. The problem solved there is only slightly different from yours, so chances are high that you can modify it to your needs.
[http://www.ats.ucla.edu/stat/spss/faq/looping_parallel_lists.html](http://www.ats.ucla.edu/stat/spss/faq/looping_parallel_lists.html)
psj
| null | CC BY-SA 2.5 | null | 2011-01-10T14:15:34.053 | 2011-01-10T14:15:34.053 | null | null | 1573 | null |
6131 | 2 | null | 6122 | 4 | null | Here is some SPSS code to loop through each of your models.
```
*creating a simulated dataset.
input program.
loop #i = 1 to 100.
compute yvar = RV.BERNOULLI(.5).
compute xvar1 = RV.NORMAL(100,10).
compute xvar2 = RV.NORMAL(100,10).
compute xvar3 = RV.NORMAL(100,10).
compute listwise = RV.BERNOULLI(.1).
end case.
end loop.
end file.
end input program.
execute.
*using these variables you can run the regressions one equation at a time.
LOGISTIC REGRESSION VARIABLES YVAR
/SELECT=LISTWISE EQ 0
/METHOD=ENTER XVAR1.
*or you can create a macro and loop through your independent variable list.
DEFINE !logit_loop (dep = !TOKENS(1)
/ind = !CMDEND )
!DO !i !IN !ind
LOGISTIC REGRESSION VARIABLES !dep
/SELECT=listwise = 0
/METHOD=ENTER !i.
!DOEND.
!ENDDEFINE.
!logit_loop dep = yvar ind = xvar1 xvar2 xvar3.
```
Note the loop selects on a variable named listwise equal to 0; otherwise missing values for any of the variables would just be dropped (and potentially produce equations estimated on different sets of cases). While looping through your list is pretty simple in SPSS, where I think R is more convenient is that the elements of your different objects are readily available (and so it is easier to access and manipulate them). I'll update with an example later, but basically all the SPSS code does is produce a lot of output which you have to read through to find the values you're interested in. In R you produce objects that have attributes you can extract. You can do the same thing in SPSS, but it is IMO more difficult and requires going into the output XML and parsing the info you want (which I don't know how to do).
So, long story short: if you are alright with just viewing the output, this is easily accomplished in SPSS (or PASW now). If you want to produce a program that summarizes specific elements of those models, I think it is easier to learn a solution in R.
| null | CC BY-SA 2.5 | null | 2011-01-10T14:45:42.287 | 2011-01-10T15:04:25.203 | 2011-01-10T15:04:25.203 | 1036 | 1036 | null |
6132 | 2 | null | 5045 | 5 | null | Somewhere there's a saying "Model what you're interested in." I think you need to determine at what level you're most interested in the parameters. I'd also like to see you define what you mean by "better result." If it were me I'd probably want to obtain best estimates at the more specific level so that I could isolate the contributions of different, specific indices as much as possible. But whatever you do, watch for collinearity. I'd take a close look at a correlation matrix and/or at a printout of zero-order, partial, and part correlations, and be prepared to run several iterations of regression before drawing any conclusions about the relative contributions of different variables. I'd also want to obtain partial regression plots to see more about the nature of each X-Y relationship than a coefficient alone can give.
| null | CC BY-SA 2.5 | null | 2011-01-10T16:46:43.897 | 2011-01-10T16:46:43.897 | null | null | 2669 | null |
6133 | 1 | 6165 | null | 4 | 956 | I think people here could guide me in solving a problem related to anomaly detection in computer science. The term anomaly here refers to some undesired event occurring in the system, like a virus infection.
I can learn about it from more than one source. For example, after extracting a value from two different data structures, if there is a difference it is certain that a virus infection is present.
In order to remove false positives, information is gathered from different data structures or mechanisms. Hence, certain information is less trusted and certain information is more trusted.
I am looking for a mathematical method that can easily handle this type of situation. Would fuzzy logic, genetic algorithms, or neural networks fit here? In some places I found a normality-based approach (using z-scores) being used. Please help.
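As a sketch of the normality-based idea mentioned above (all numbers are invented for illustration), flagging observations by z-score looks like this in Python:

```python
import statistics

def zscore_flags(values, threshold=2.5):
    """Flag observations whose absolute z-score exceeds the threshold
    (the simple normality-based approach; assumes roughly normal data)."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    return [abs(v - mu) / sigma > threshold for v in values]

# Hypothetical system readings with one anomalous spike at the end:
readings = [10, 11, 9, 10, 10, 12, 9, 11, 10, 95]
print(zscore_flags(readings))  # only the last value is flagged
```

Note that a single large outlier inflates the mean and standard deviation, which is why robust variants (median and MAD) are often preferred in practice.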
| Which method to use for anomaly detection? | CC BY-SA 2.5 | null | 2011-01-10T16:58:37.693 | 2011-01-11T14:50:31.043 | 2011-01-10T18:57:52.517 | 930 | 2721 | [
"data-mining",
"computational-statistics"
] |
6134 | 2 | null | 6127 | 33 | null | A few comments are, I believe, in order.
1) I would encourage you to try multiple visual displays of your data, because they can capture things that are lost by (graphs like) histograms, and I also strongly recommend that you plot on side-by-side axes. In this case, I do not believe the histograms do a very good job of communicating the salient features of your data. For example, take a look at side-by-side boxplots:
```
boxplot(x1, y1, names = c("x1", "y1"))
```

Or even side-by-side stripcharts:
```
stripchart(c(x1,y1) ~ rep(1:2, each = 20), method = "jitter", group.names = c("x1","y1"), xlab = "")
```

Look at the centers, spreads, and shapes of these! About three-quarters of the $x1$ data fall well above the median of the $y1$ data. The spread of $x1$ is tiny, while the spread of $y1$ is huge. Both $x1$ and $y1$ are highly left-skewed, but in different ways. For example, $y1$ has five (!) repeated values of zero.
2) You didn't explain in much detail where your data come from, nor how they were measured, but this information is very important when it comes time to select a statistical procedure. Are your two samples above independent? Are there any reasons to believe that the marginal distributions of the two samples should be the same (except for a difference in location, for example)? What were the considerations prior to the study that led you to look for evidence of a difference between the two groups?
3) The t-test is not appropriate for these data because the marginal distributions are markedly non-normal, with extreme values in both samples. If you like, you could appeal to the CLT (due to your moderately sized samples) to use a $t$-test (which would be similar to a $z$-test for large samples), but given the skewness (in both variables) of your data I would not judge such an appeal very convincing. Sure, you can use it anyway to calculate a $p$-value, but what does that do for you? If the assumptions aren't satisfied then a $p$-value is just a statistic; it doesn't tell you what you (presumably) want to know: whether there is evidence that the two samples come from different distributions.
4) A permutation test would also be inappropriate for these data. The single and often-overlooked assumption for permutation tests is that the two samples are exchangeable under the null hypothesis. That would mean that they have identical marginal distributions (under the null). But you are in trouble, because the graphs suggest that the distributions differ both in location and scale (and shape, too). So, you can't (validly) test for a difference in location because the scales are different, and you can't (validly) test for a difference in scale because the locations are different. Oops. Again, you can do the test anyway and get a $p$-value, but so what? What have you really accomplished?
5) In my opinion, these data are a perfect (?) example that a well chosen picture is worth 1000 hypothesis tests. We don't need statistics to tell the difference between a pencil and a barn. The appropriate statement in my view for these data would be "These data exhibit marked differences with respect to location, scale, and shape." You could follow up with (robust) descriptive statistics for each of those to quantify the differences, and explain what the differences mean in the context of your original study.
6) Your reviewer is probably (and sadly) going to insist on some sort of $p$-value as a precondition to publication. Sigh! If it were me, given the differences with respect to everything I would probably use a nonparametric Kolmogorov-Smirnov test to spit out a $p$-value that demonstrates that the distributions are different, and then proceed with descriptive statistics as above. You would need to add some noise to the two samples to get rid of ties. (And of course, this all assumes that your samples are independent which you didn't state explicitly.)
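For what it's worth, the two-sample KS statistic for these data is easy to compute by hand; here is a sketch in Python using the values from the question (in R, ks.test(x1, y1) reports the same $D$ statistic):

```python
# Data from the question.
x1 = [99, 99.5, 65, 100, 99, 99.5, 99, 99.5, 99.5, 57, 100, 99.5,
      99.5, 99, 99, 99.5, 89.5, 99.5, 100, 99.5]
y1 = [99, 99.5, 99.5, 0, 50, 100, 99.5, 99.5, 0, 99.5, 99.5, 90,
      80, 0, 99, 0, 74.5, 0, 100, 49.5]

def ecdf(sample, t):
    """Empirical CDF of `sample` evaluated at t."""
    return sum(v <= t for v in sample) / len(sample)

# D is the largest vertical gap between the two ECDFs; it suffices to
# evaluate the gap at each observed value.
D = max(abs(ecdf(x1, t) - ecdf(y1, t)) for t in sorted(set(x1 + y1)))
print(round(D, 2))  # 0.35; a large gap, consistent with the visual impression
```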
This answer is a lot longer than I originally intended it to be. Sorry about that.
| null | CC BY-SA 2.5 | null | 2011-01-10T17:06:35.393 | 2011-01-10T17:06:35.393 | null | null | null | null |
6135 | 2 | null | 3779 | 15 | null | This is a (long!) comment on the nice work @vqv has posted in this thread. It aims to obtain a definitive answer. He has done the hard work of simplifying the dictionary. All that remains is to exploit it to the fullest. His results suggest that a brute-force solution is feasible. After all, including a wildcard, there are at most $27^7 = 10,460,353,203$ words one can make with 7 characters, and it looks like less than 1/10000 of them--say, around a million--will fail to include some valid word.
The first step is to augment the minimal dictionary with a wildcard character, "?". 22 of the letters appear in two-letter words (all but c, q, v, z). Adjoin a wildcard to those 22 letters and add these to the dictionary: {a?, b?, d?, ..., y?} are now in. Similarly we can inspect the minimal three-letter words, causing some additional words to appear in the dictionary. Finally, we add "??" to the dictionary. After removing repetitions that result, it contains 342 minimal words.
An elegant way to proceed--one that uses a very small amount of encoding indeed--is to view this problem as an algebraic one. A word, considered as an unordered set of letters, is just a monomial. For example, "spats" is the monomial $a p s^2 t$. The dictionary therefore is a collection of monomials. It looks like
$$\{a^2, a b, a d, ..., o z \psi, w x \psi, \psi^2\}$$
(where, to avoid confusion, I have written $\psi$ for the wildcard character).
A rack contains a valid word if and only if that word divides the rack.
A more abstract, but extremely powerful, way to say this is that the dictionary generates an ideal $I$ in the polynomial ring $R = \mathbb{Z}[a, b, \ldots, z, \psi]$ and that the racks with valid words become zero in the quotient ring $R/I$, whereas racks without valid words remain nonzero in the quotient. If we form the sum of all racks in $R$ and compute it in this quotient ring, then the number of racks without words equals the number of distinct monomials in the quotient.
Furthermore, the sum of all racks in $R$ is straightforward to express. Let $\alpha = a + b + \cdots + z + \psi$ be the sum of all letters in the alphabet. $\alpha^7$ contains one monomial for each rack. (As an added bonus, its coefficients count the number of ways each rack can be formed, allowing us to compute its probability if we like.)
As a simple example (to see how this works), suppose (a) we don't use wildcards and (b) all letters from "a" through "x" are considered words. Then the only possible racks from which words cannot be formed must consist entirely of y's and z's. We compute $\alpha^7=(a+b+c+\cdots+x+y+z)^7$ modulo the ideal generated by $\{a,b,c, \ldots, x\}$ one step at a time, thus:
$$\eqalign{
\alpha^0 &= 1 \cr
\alpha^1 &= a+b+c+\cdots+x+y+z \equiv y+z \mod I \cr
\alpha^2 &\equiv (y+z)(a+b+\cdots+y+z) \equiv (y+z)^2 \mod I \cr
\cdots &\cr
\alpha^7 &\equiv (y+z)^6(a+b+\cdots+y+z) \equiv (y+z)^7 \mod I \text{.}
}$$
We can read off the chance of getting a non-word rack from the final answer, $y^7 + 7 y^6 z + 21 y^5 z^2 + 35 y^4 z^3 + 35 y^3 z^4 + 21 y^2 z^5 + 7 y z^6 + z^7$: each coefficient counts the ways in which the corresponding rack can be drawn. For example, there are 21 (out of 26^7 possible) ways to draw 2 y's and 5 z's because the coefficient of $y^2 z^5$ equals 21.
From elementary calculations, it is obvious this is the correct answer. The whole point is that this procedure works regardless of the contents of the dictionary.
Notice how reducing the power modulo the ideal at each stage reduces the computation: that's the shortcut revealed by this approach. (End of example.)
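As a quick numerical sanity check on this toy example (a Python sketch, independent of the algebra): the coefficients of $(y+z)^7$ are just binomial coefficients, and they sum to the $2^7$ ordered draws that use only y and z.

```python
from math import comb

# The coefficient of y^k z^(7-k) in (y+z)^7 counts the draws with exactly
# k y's and 7-k z's; the coefficients sum to the 2^7 draws using only y
# and z (out of 26^7 equally likely ordered draws overall).
coeffs = [comb(7, k) for k in range(8)]
print(coeffs)       # [1, 7, 21, 35, 35, 21, 7, 1]
print(sum(coeffs))  # 128 == 2**7
```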
Polynomial algebra systems implement these calculations. For instance, here is Mathematica code:
```
alphabet = a + b + c + d + e + f + g + h + i + j + k + l + m + n + o +
p + q + r + s + t + u + v + w + x + y + z + \[Psi];
dictionary = {a^2, a b, a d, a e, ..., w z \[Psi], \[Psi]^2};
next[pp_] := PolynomialMod[pp alphabet, dictionary];
nonwords = Nest[next, 1, 7];
Length[nonwords]
```
(The dictionary can be constructed in a straightforward manner from @vqv's min.dict; I put a line here showing that it is short enough to be specified directly if you like.)
The output--which takes ten minutes of computation--is 577958. (NB In an earlier version of this message I had made a tiny mistake in preparing the dictionary and obtained 577940. I have edited the text to reflect what I hope are now the correct results!) A little less than the million or so I expected, but of the same order of magnitude.
To compute the chance of obtaining such a rack, we need to account for the number of ways in which the rack can be drawn. As we saw in the example, this equals its coefficient in $\alpha^7$. The chance of drawing some such rack is the sum of all these coefficients, easily found by setting all the letters equal to 1:
```
nonwords /. (# -> 1) & /@ (List @@ alphabet)
```
The answer equals 1066056120, giving a chance of 10.1914% of drawing a rack from which no valid word can be formed (if all letters are equally likely).
When the probabilities of the letters vary, just replace each letter with its chance of being drawn:
```
tiles = {9, 2, 2, 4, 12, 2, 3, 2, 9, 1, 1, 4, 2, 6, 8, 2, 1, 6, 4, 6,
4, 2, 2, 1, 2, 1, 2};
chances = tiles / (Plus @@ tiles);
nonwords /. (Transpose[{List @@ alphabet, chances}] /. {a_, b_} -> a -> b)
```
The output is 1.079877553303%, the exact answer (albeit using an approximate model, drawing with replacement). Looking back, it took four lines to enter the data (alphabet, dictionary, and alphabet frequencies) and only three lines to do the work: describe how to take the next power of $\alpha$ modulo $I$, take the 7th power recursively, and substitute the probabilities for the letters.
| null | CC BY-SA 2.5 | null | 2011-01-10T17:41:58.967 | 2011-01-10T23:05:05.543 | 2011-01-10T23:05:05.543 | 919 | 919 | null |
6136 | 5 | null | null | 0 | null | SPSS (Statistical Package for the Social Sciences) is a proprietary, cross-platform, general-purpose statistical software package ([SPSS homepage](http://www.spss.com/)). The official name at present is IBM SPSS Statistics.
SPSS has both a well-developed GUI and command syntax. One unique aspect of SPSS Statistics compared to other popular proprietary software packages (such as Stata or SAS) is the built-in functionality to call Python or R commands within syntax (in addition to SPSS's own commands). Otherwise it is largely comparable to other general proprietary and freeware packages (such as R), although it differs in some advanced statistical capabilities and aspects of data manipulation.
Suggested readings on using SPSS and learning the command syntax are two online PDFs:
- Programming and Data Management for IBM SPSS Statistics
- Programming with SPSS Syntax and Macros
Other useful print references are:
- An Intermediate Guide to SPSS Programming: Using Syntax for Data Management by Sarah Boslaugh (for data management)
- Discovering Statistics Using SPSS: and sex and drugs and rock 'n' roll by Andy Field (for statistical analysis)
Forums and groups devoted entirely to the software (and good places to search when encountering a problem with SPSS) are:
- SPSSX-L mailing list provided by University of Georgia
- SPSS forum at Nabble
- SPSS group on IBM site
- SPSS Reddit group
- SPSS Google group
- SPSS developer central
Other suggested webpages are
- Raynald's SPSS Tools
- Bruce Weaver's SPSS page
- Jeromy Anglim's blog
- Kirill Orlov's macros page on SPSS Tools
- Marta Garcia-Granero's page
[PSPP](http://www.gnu.org/software/pspp/) is a free, open-source alternative largely mimicking the look and functionality of SPSS.
| null | CC BY-SA 4.0 | null | 2011-01-10T17:44:45.473 | 2022-08-20T18:29:33.737 | 2022-08-20T18:29:33.737 | 3277 | null | null |
6137 | 4 | null | null | 0 | null | IBM SPSS Statistics is a statistical software package. Use this tag for any on-topic question that (a) involves SPSS either as a critical part of the question or expected answer and (b) is not just about how to use SPSS. | null | CC BY-SA 4.0 | null | 2011-01-10T17:44:45.473 | 2020-08-06T21:52:09.480 | 2020-08-06T21:52:09.480 | 3277 | null | null |
6138 | 2 | null | 6074 | 7 | null | I don’t think it is really a classification problem. 20 questions is often characterized as a compression problem. This actually matches better with the last part of your question where you talk about entropy.
See Chapter 5.7 ([Google books](http://books.google.com/books?id=EuhBluW31hsC&lpg=PP1&dq=cover%20and%20thomas%20information%20theory&pg=PA120)) of
Cover, T.M. and Joy, A.T. (2006) Elements of Information Theory
and also [Huffman coding](http://en.wikipedia.org/wiki/Huffman_coding). This paper I found on arXiv may be of interest as well.
Gill, J.T. and Wu, W. (2010) "Twenty Questions Games Always End With Yes"
[http://arxiv.org/abs/1002.4907](http://arxiv.org/abs/1002.4907)
For simplicity assume yes/no questions (whereas akinator.com also allows "maybe" and "don't know").
Assume that every possible subject (what akinator.com knows) can be uniquely identified by a sequence of yes/no questions and answers — essentially a binary vector.
The questions that are asked (and their answers) define a recursive partitioning of the space of subjects. This partitioning also corresponds to a tree structure. The interior vertices of the tree correspond to questions, and the leaves correspond to subjects. The depth of a leaf is exactly the number of questions required to uniquely identify the subject. You can (trivially) identify every known subject by asking every possible question. That's not interesting because there are potentially hundreds of questions.
The connection with Huffman coding is that it gives an optimal way (under a certain probabilistic model) to construct the tree so that the average depth is (nearly) minimal. In other words, it tells you how to arrange the sequence of questions (construct a tree) so that the number of questions you need to ask is on average small. It uses a greedy approach.
There is, of course, more to akinator.com than this, but the basic idea is that you can think of the problem in terms of a tree and trying to minimize the average depth of its leaves.
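To make the greedy construction concrete, here is a minimal Huffman sketch in Python (the four subjects and their probabilities are invented for illustration):

```python
import heapq

def huffman_depths(freqs):
    """Greedy Huffman construction: repeatedly merge the two least likely
    nodes. Returns {subject: depth}, the number of yes/no questions
    needed to reach each subject."""
    heap = [(p, i, {subject: 0}) for i, (subject, p) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)  # tie-breaker so the dicts are never compared
    while len(heap) > 1:
        p1, _, d1 = heapq.heappop(heap)
        p2, _, d2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}  # one level deeper
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

# Hypothetical popularity of four subjects:
freqs = {"Einstein": 0.5, "Gandhi": 0.25, "Curie": 0.125, "Turing": 0.125}
depths = huffman_depths(freqs)
avg = sum(freqs[s] * d for s, d in depths.items())
print(depths, avg)
```

Rarer subjects end up deeper in the tree, i.e. they take more questions to reach, which is exactly the average-depth minimization described above (here the average depth is 1.75 questions, matching the entropy of the distribution).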
| null | CC BY-SA 2.5 | null | 2011-01-10T17:46:20.557 | 2011-01-14T20:29:06.633 | 2011-01-14T20:29:06.633 | 1670 | 1670 | null |
6139 | 1 | null | null | 7 | 1370 | I'm studying biomedical computer science and I have to research a paper about genotype-phenotype association.
In this paper the authors use a correlation analysis by first calculating the Pearson correlation and then calculating the hypergeometric distribution to filter out insignificant associations.
[http://www.biomedcentral.com/1471-2164/7/257](http://www.biomedcentral.com/1471-2164/7/257)
Under Methods/Associating genes to phenotypes
> While the correlation measures the strength of association between an organism's genomic content and its phenotype, we also applied another method, exploiting the hypergeometric distribution function, to determine the significance of these associations [...] where a result smaller or equal to 20% response is considered negative. So for a given gene found in M species, the hypergeometric function provides the probability by random chance that the gene is found in m species which contain the COG and are also positive in the laboratory test.
> The following criteria were applied to the correlated data set. The intersection between a specific COG and a phenotype had to contain at least 3 organisms, and for any intersection, 30% of the microbes had to share the COG. The scores were adjusted using the standard Bonferroni error correction for multiple testing.
> Since the Bonferroni correction is one of the most conservative, it is likely that some biologically relevant associations were unnecessarily discarded. In this case $\alpha$ was set as less than or equal to 0.01; therefore, any hypergeometric distribution score less than or equal to 0.0001 was deemed significant. Using these criteria, we set a 0.8 and a 0.9 correlation threshold to assess the significance of the COG-phenotype associations.
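For intuition, the hypergeometric probability described in the quote can be written out directly. The sketch below uses made-up counts (30 species, 12 test-positive, a COG present in 8 species, 7 of them positive), not numbers from the paper:

```python
from math import comb

def hypergeom_pvalue(N, K, M, m):
    """P(X >= m): draw M species (those carrying the gene) from N total,
    of which K are positive in the laboratory test; X counts how many of
    the M carriers are also test-positive."""
    return sum(comb(K, k) * comb(N - K, M - k)
               for k in range(m, min(K, M) + 1)) / comb(N, M)

# Invented counts for illustration:
p = hypergeom_pvalue(N=30, K=12, M=8, m=7)
print(round(p, 5))  # 0.00252: such an overlap is unlikely by chance
```

A value like this would then be compared against the Bonferroni-adjusted threshold before the COG-phenotype association is called significant.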
My question is: Is this a valid scientific correlation analysis or not? Are there any reservations?
Also, can you give me an idea for a good statistics book for science?
| Is a correlation analysis with Pearson's correlation and Bonferroni Method a valid approach to find correlations between two sets of data | CC BY-SA 2.5 | null | 2011-01-10T09:54:12.360 | 2012-06-01T20:17:24.320 | null | null | null | [
"correlation"
] |
6140 | 2 | null | 6123 | 3 | null | The answer to the question, bracketing for the moment whether it is a good idea, is yes, there are ways to do this. If you have information on the joint distribution of the population over all the variables of interest (e.g. from a census), you can "poststratify" to that joint distribution, in the case of all discrete variables, or use "calibration" for more general combinations of discrete and continuous variables. If you have only information on the marginal distribution and you are willing to believe that an assumption of independence over the dimensions won't introduce too much bias, then you can "rake" to the margins. You can combine post-stratification, calibration, and raking to attend to arbitrary combinations of population information. Use a software package to do it though (e.g. the "survey" package in R or the svy suite of commands in Stata), because the computations can be tricky. For accessible introductions to these concepts with worked examples in R and Stata, respectively, see the following two references:
Lee and Forthofer. Analyzing Complex Survey Data. Sage.
Lumley. Complex Surveys: A Guide to Analysis Using R. Wiley.
Now, @Mohit asked why you would want to do this. Indeed, you should think about it. Intuitively, it would seem that you are reducing the bias in estimating population parameters from your sample. But there are cases in which such weighting can exacerbate biases; search for "non-ignorable" missingness or sample selection problems.
| null | CC BY-SA 2.5 | null | 2011-01-10T20:17:54.023 | 2011-01-10T20:17:54.023 | null | null | 96 | null |
6141 | 1 | null | null | 5 | 5758 | I am writing my bachelor's thesis and have run into some difficulties. I am about to run some panel regressions with time and entity fixed effects, and I would therefore like to use the plm package. But when I add fixed effects and want heteroscedasticity-robust standard errors, they seem to be incorrect.
Does anyone know why the HC standard errors differ?
Here is my code
```
# Load data
load(file="panel")
attach(panel)
# Load packages
library(lmtest)
library(plm)
# Create two models. The lm.model is a linear model and as the
# LAND variable is a factor variable representing countries
# (Land = Country in Swedish) this model will have entity fixed
# effects. In the plm.model the plm package is used and
# individual effects and within model is turned on (which is
# the same as entity fixed effects)
lm.model<-lm(NETTOSPARANDE ~ EURO + LAND, data=panel)
plm.model<-plm(NETTOSPARANDE ~ EURO, index=c("LAND","ÅR"), effect="individual", model="within", data=panel)
# When looking at the coefficients without heteroscedasticity-robust
# standard errors they are identical. They also have the same
# value in Stata.
coeftest(lm.model)[1:2,]
coeftest(plm.model)
# But when looking at the coefficients using heteroscedasticity-
# robust standard errors, the lm.model and the plm.model produce
# different standard errors.
coeftest(lm.model, vcov.=vcovHC(lm.model, method="white2", type="HC1"))[1:2,]
coeftest(plm.model, vcov.=vcovHC(plm.model, method="white2", type="HC1"))
```
If you want to test it, the data (the panel file, in R format) can be found here:
[https://sourceforge.net/projects/emumoralhazard/files/](https://sourceforge.net/projects/emumoralhazard/files/)
And here is my output
```
1> # Load data
1> load(file="panel")
1> attach(panel)
1> # Load packages
1> library(lmtest)
Loading required package: zoo
1> library(plm)
Loading required package: kinship
Loading required package: survival
Loading required package: splines
Loading required package: nlme
Loading required package: lattice
[1] "kinship is loaded"
Loading required package: Formula
Loading required package: MASS
Loading required package: sandwich
1> # Create two models. The lm.model is a linear model and as the
1> # LAND variable is a factor variable representing countries
1> # (Land = Country in Swedish) this model will have entity fixed
1> # effects. In the plm.model the plm package is used and
1> # individual effects and within model is turned on (which is
1> # the same as entity fixed effects)
1> lm.model<-lm(NETTOSPARANDE ~ EURO + LAND, data=panel)
1> plm.model<-plm(NETTOSPARANDE ~ EURO, index=c("LAND","ÅR"), effect="individual", model="within", data=panel)
1> # When looking at the coefficients without heteroscedasticity robust
1> # standard errors they are identical. They do also have the same
1> # value in Stata.
1> coeftest(lm.model)[1:2,]
Estimate Std. Error t value Pr(>|t|)
(Intercept) -3.731024 0.7731778 -4.825570 1.726921e-06
EURO1 2.187170 0.4076720 5.365024 1.112984e-07
1> coeftest(plm.model)
t test of coefficients:
Estimate Std. Error t value Pr(>|t|)
EURO1 2.18717 0.40767 5.365 1.113e-07 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
1> # But when looking at the coefficients using heteroscedasticity
1> # robust standard errors the lm.model and the plm.model produces
1> # different standard errors.
1> coeftest(lm.model, vcov.=vcovHC(lm.model, method="white2", type="HC1"))[1:2,]
Estimate Std. Error t value Pr(>|t|)
(Intercept) -3.731024 0.3551280 -10.506138 5.102122e-24
EURO1 2.187170 0.3386029 6.459395 2.009894e-10
1> coeftest(plm.model, vcov.=vcovHC(plm.model, method="white2", type="HC1"))
t test of coefficients:
Estimate Std. Error t value Pr(>|t|)
EURO1 2.18717 0.33849 6.4615 1.983e-10 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
| Why do I get different heteroscedasticity robust standard errors in R when using the plm package? | CC BY-SA 2.5 | null | 2011-01-10T21:29:47.717 | 2011-01-10T23:31:21.637 | 2011-01-10T23:31:21.637 | 919 | 2724 | [
"r",
"robust",
"panel-data",
"heteroscedasticity",
"fixed-effects-model"
] |
6142 | 1 | 6144 | null | 1 | 8338 | I tried to group a set of columns and to calculate the row means for each group.
If my list has more than one element, it works fine:
```
myList <- tapply(colnames(myMA), c(1,1,1,2,2,2,3,3,4,4), list)
myMAmean <- sapply(myList, function(x) rowMeans(myMA[,x]))
```
However, in my data some of the rows are unique:
```
myList <- tapply(colnames(myMA), c(1,1,1,2,2,2,3,3,4,5), list)
myMAmean <- sapply(myList, function(x) rowMeans(myMA[,x]))
```
Notice the 4 & 5: each of these groups contains only a single column, so if I run this it says
```
"Error in rowMeans(myMA[, x]) :
'x' must be an array of at least two dimensions"
```
I don't want to run a loop. Is there any solution for this?
| How to calculate the rowMeans with some single rows in data? | CC BY-SA 2.5 | null | 2011-01-10T21:51:11.223 | 2011-01-10T22:03:29.930 | 2011-01-10T22:03:29.930 | 930 | 2725 | [
"r"
] |
6143 | 2 | null | 6141 | -1 | null | @Skolnick -
After an admittedly quick look at your problem, it seems that plm() truncates and/or rounds results to fewer digits than lm(). I think this is the most likely cause of the differences in your heteroscedasticity-robust standard error estimates. Explore summary(plm.model) and summary(lm.model) for differences in residual quantiles.
| null | CC BY-SA 2.5 | null | 2011-01-10T22:01:37.537 | 2011-01-10T22:01:37.537 | null | null | null | null |
6144 | 2 | null | 6142 | 6 | null | You can prevent the loss of the dimension attribute when using "[" with drop=FALSE:
```
myList <- tapply(colnames(myMA), c(1,1,1,2,2,2,3,3,4,5), list)
myMAmean <- sapply(myList, function(x) rowMeans(myMA[,x, drop=FALSE]))
```
| null | CC BY-SA 2.5 | null | 2011-01-10T22:01:55.583 | 2011-01-10T22:01:55.583 | null | null | 2129 | null |
6145 | 2 | null | 6141 | 7 | null | Though numerically both of your model coefficient estimates coincide, you are actually fitting two different models: least squares dummy variables (LSDV) and fixed effects.
They differ in their assumptions, so the robust standard errors are calculated differently.
Function `vcovHC` is a generic wrapper: different methods are dispatched for an `lm` model and a `plm` model. In the first case the method comes from the `sandwich` package, in the second from the `plm` package.
The difference in results arises because `vcovHC` for an `lm` object does not exploit the panel data structure. Comparing
```
plm:::vcovHC.plm
```
and
```
vcovHC.default
```
reveals that for panel data there is an additional argument `cluster = c("group", "time")`.
Look into [Wooldridge's book](http://rads.stackoverflow.com/amzn/click/0262232588) for more explanation of why least squares dummy variables and fixed effects regressions differ. It has a whole chapter dedicated to that.
| null | CC BY-SA 2.5 | null | 2011-01-10T22:06:42.507 | 2011-01-10T22:29:15.153 | 2011-01-10T22:29:15.153 | 2116 | 2116 | null |
6146 | 1 | null | null | 6 | 1350 | I want to find the conjugate prior distribution for [Fisher's noncentral hypergeometric distribution](https://secure.wikimedia.org/wikipedia/en/wiki/Fisher%27s_noncentral_hypergeometric_distribution). Basically, I want to integrate the parameters out of the distribution to arrive at the Bayesian likelihood of observing some data under the noncentral hypergeometric distribution.
| What is the conjugate of the noncentral hypergeometric distribution? | CC BY-SA 2.5 | null | 2011-01-10T22:38:31.137 | 2019-10-12T09:50:43.957 | 2019-10-12T09:50:43.957 | 11887 | 2728 | [
"bayesian",
"conjugate-prior"
] |
6149 | 2 | null | 6146 | 3 | null | Perhaps a conjugate prior does not exist for the noncentral hypergeometric distribution.
If someone wants to confirm this, it is worth noting that the conjugate to a univariate hypergeometric distribution is the beta-binomial; the conjugate to a multivariate hypergeometric distribution is the Dirichlet-Multinomial (from [Fink, 1997](http://www.johndcook.com/CompendiumOfConjugatePriors.pdf)). For more details on the derivation, see [Dyer and Pierce, 1993](http://www.informaworld.com/smpp/content~content=a780123192~db=all).
---
Dyer, Danny and Pierce, Rebecca L. "On the Choice of the Prior Distribution in Hypergeometric Sampling". Communications in Statistics-Theory and Methods, 1993, 22(8), 2125-214
Fink, D. 1997. A Compendium of Conjugate Priors.
| null | CC BY-SA 2.5 | null | 2011-01-11T00:20:06.003 | 2011-01-11T19:20:21.523 | 2011-01-11T19:20:21.523 | 1381 | 1381 | null |
6150 | 1 | 6177 | null | 13 | 501 | Problem
I would like to plot the variance explained by each of 30 parameters, for example as a barplot with a different bar for each parameter, and variance on the y axis:

However, the variances are strongly skewed toward small values, including 0, as can be seen in the histogram below:

If I transform them by $\log(x+1)$, it will be easier to see differences among the small values (histogram and barplot below):

Question
Plotting on a log-scale is common, but is plotting $\log(x+1)$ similarly reasonable?
| Is visualization sufficient rationale for transforming data? | CC BY-SA 3.0 | null | 2011-01-11T01:06:58.773 | 2017-12-23T19:27:48.577 | 2013-07-12T14:27:46.170 | 7290 | 1381 | [
"data-visualization",
"data-transformation",
"histogram"
] |
6151 | 1 | null | null | 1 | 3116 | I want to test if there is rivalry between two siblings in a family. I have 15 questions in my study, and I had my 100 respondents (distributed equally between the two siblings) rank them 1 to 15.
How should I analyse this data?
| Analysing questionnaire data | CC BY-SA 2.5 | null | 2011-01-11T01:09:34.687 | 2011-01-13T22:33:59.487 | 2011-01-13T20:58:11.047 | 8 | null | [
"hypothesis-testing",
"correlation",
"multiple-comparisons",
"statistical-significance",
"survey"
] |
6152 | 1 | 6175 | null | 6 | 3516 | I have a (I suspect) simple question. I have time series cross section data on voting behaviour in the Council of the European Union (the monthly number of yes votes, no votes and abstentions for each member state from 1999 to 2007). So basically the variables are counts, thus a Poisson/negative binomial regression would be appropriate, possibly with lagged dependent variables on the right-hand side to control for time dependencies. I have seen papers where people use such negative binomial models to forecast, for instance, the number of monthly legislative acts adopted in the future, and I have three questions in this regard:
- How can I run a negative binomial regression on panel data without making any inferential mistakes?
- How can I use a negative binomial model with lags to forecast future values of the dependent variable?
- Can this be done in R?
Thomas
| Time series cross section forecasting with R | CC BY-SA 3.0 | null | 2011-01-11T01:45:42.400 | 2013-05-10T14:09:44.877 | 2013-05-10T14:09:44.877 | null | 2704 | [
"r",
"time-series",
"forecasting",
"panel-data",
"negative-binomial-distribution"
] |
6153 | 2 | null | 6150 | 3 | null | It can be reasonable. The better question to ask is whether 1 is the proper number to add. What was your minimum? If it was 1 to begin with, then you are imposing a particular interval between items with value of zero and those with value 1. Depending on the domain of study it may make more sense to choose 0.5 or 1/e as the offset. The implication of transforming to a log scale is that you now have a ratio scale.
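To illustrate the choice of offset with made-up values (the candidate start values are the ones just mentioned; none of this is from the original data), a started log simply shifts before taking logs:

```python
import math

offsets = [1.0, 0.5, 1 / math.e]          # candidate start values c
xs = [0.0, 0.01, 0.1, 1.0, 10.0]          # made-up skewed variances
started_logs = {c: [math.log(x + c) for x in xs] for c in offsets}
# with c = 1 a zero maps to exactly 0; a smaller c spreads the small
# values out more (and maps zero below 0)
```

The transform is monotone for every positive offset, so the ordering of the values is preserved; only the spacing among the small values changes.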
But I am bothered by the plots. I would ask whether a model that has most of the explained variance in the tail of a skewed distribution can be considered to have desirable statistical properties. I think not.
| null | CC BY-SA 2.5 | null | 2011-01-11T02:57:05.127 | 2011-01-11T02:57:05.127 | null | null | 2129 | null |
6154 | 1 | 6162 | null | 1 | 941 | Does anyone know why, when you run an SEM model in SAS using proc calis (or tcalis), you do not get p-values for the parameter estimates? It does supply a t-value, however.
Two popular SEM packages in R, 'sem' and 'lavaan', both give p-values for the estimates but they use a Z test statistic.
| Proc Calis (or TCalis) and p-values | CC BY-SA 2.5 | null | 2011-01-11T03:27:10.977 | 2014-05-08T10:47:44.173 | 2011-01-11T04:22:03.180 | 2116 | 2310 | [
"r",
"modeling",
"sas",
"p-value"
] |
6155 | 1 | 6156 | null | 22 | 21139 | I am not sure whether this subject falls within CrossValidated's scope. You'll tell me.
I have to study a graph (in the sense of [graph theory](http://en.wikipedia.org/wiki/Graph_theory)),
i.e.
I have a certain number of dots that are connected.
I have a table with all the dots and the dots each one depends on.
(I also have another table with the implications.)
My questions are:
Is there good software (or an R package) to study this easily?
Is there an easy way to display the graph?
| Graph theory -- analysis and visualization | CC BY-SA 3.0 | null | 2011-01-11T04:57:29.320 | 2022-12-24T06:24:25.417 | 2017-04-15T02:16:25.323 | 11887 | 1709 | [
"r",
"data-visualization",
"graph-theory"
] |
6156 | 2 | null | 6155 | 15 | null | [iGraph](http://igraph.sourceforge.net/index.html) is a very interesting cross-language (R, Python, Ruby, C) library.
It allows you to work with undirected and directed graphs and has quite a few analysis algorithms already implemented.
| null | CC BY-SA 2.5 | null | 2011-01-11T07:47:02.697 | 2011-01-11T07:47:02.697 | null | null | 582 | null |
6157 | 2 | null | 6155 | 14 | null | There are various packages for representing directed and undirected graphs, incidence/adjacency matrix, etc. in addition to [graph](https://cran.r-project.org/src/contrib/Archive/graph/)$^\dagger$; look for example at the [gR](https://web.archive.org/web/20110525235405/http://cran.r-project.org/web/views/gR.html) Task view.
For visualization and basic computation, I think the [igraph](http://cran.r-project.org/web/packages/igraph/index.html) package is the reliable one, in addition to [Rgraphviz](http://www.bioconductor.org/packages/release/bioc/html/Rgraphviz.html) (on BioC, as pointed out by @Rob). Be aware that for the latter to work properly, [graphviz](http://www.graphviz.org/) must be installed too. The [igraph](http://cran.r-project.org/web/packages/igraph/index.html) package has nice algorithms for creating good layouts, much like [graphviz](http://www.graphviz.org/).
Here is an example of use, starting from a fake adjacency matrix:
```
adj.mat <- matrix(sample(c(0,1), 9, replace=TRUE), nr=3)
g <- graph.adjacency(adj.mat)
plot(g)
```

---
$^\dagger$ Package ‘graph’ was removed from the CRAN repository.
| null | CC BY-SA 4.0 | null | 2011-01-11T07:51:00.740 | 2022-12-24T06:16:18.213 | 2022-12-24T06:16:18.213 | 362671 | 930 | null |
6158 | 1 | 6161 | null | 1 | 967 | I collected some data with an instruments with 1Hz sampling clock, now I want to low-pass filter the data to separate the mean and fluctuation part (Reynolds decomposition).
How can I design a low-pass filter with a cutoff period of 20 minutes?
| Low-pass Filter | CC BY-SA 2.5 | null | 2011-01-11T07:56:27.927 | 2011-01-11T10:21:42.920 | null | null | 1637 | [
"time-series"
] |
6160 | 2 | null | 6155 | 8 | null | Another option is the statnet package. Statnet has functions for all the commonly used measures in SNA, and can also estimate ERG models. If you have your data in an edge list, read in the data as follows (assuming your data frame is labelled "edgelist"):
```
net <- as.network(edgelist, matrix.type = "edgelist", directed = TRUE) #if the network is directed, otherwise: directed = FALSE
```
If your data is in an adjacency matrix (say, `adjmat`), you replace the matrix.type argument with "adjacency":
```
net <- as.network(adjmat, matrix.type = "adjacency", directed = TRUE)
```
The statnet package has some very nice plotting capabilities. To do a simple plot simply type:
```
gplot(net)
```
To scale the nodes according to their betweenness centrality, simply do:
```
bet <- betweenness(net)
gplot(net, vertex.cex = bet)
```
By default the gplot function uses the Fruchterman-Reingold algorithm for placing the nodes; however, this can be controlled via the mode option. For instance, to use MDS for node placement, type:
```
gplot(net, vertex.cex, mode = "mds")
```
or to use a circle layout:
```
gplot(net, vertex.cex, mode = "circle")
```
There are many more possibilities, and this [guide](https://web.archive.org/web/20131101162736/http://students.washington.edu/mclarkso/documents/gplot%20Ver2.pdf) covers most of the basic options.
For a self contained example:
```
net <- rgraph(20) #generate a random network with 20 nodes
bet <- betweenness(net) #calculate betweenness scores
gplot(net) #a simple plot
gplot(net, vertex.cex = bet/3) #nodes scaled according to their betweenness centrality; the measure is divided by 3 so the nodes don't become too big.
gplot(net, vertex.cex = bet/3, mode = "circle") #with a circle layout
gplot(net, vertex.cex = bet/3, mode = "circle", label = 1:20) #with node labels
```
| null | CC BY-SA 4.0 | null | 2011-01-11T09:38:04.657 | 2022-12-24T06:17:53.647 | 2022-12-24T06:17:53.647 | 362671 | 2704 | null |
6161 | 2 | null | 6158 | 3 | null | Well, you have to use Fourier transform, either in a dirty way by zeroing elements with frequencies you want to filter out or (in a more accurate way, with moving window) by convolution with filter, like in [this MATLAB code example](https://ccrma.stanford.edu/~jos/sasp/Example_1_Low_Pass_Filtering.html).
| null | CC BY-SA 2.5 | null | 2011-01-11T10:21:42.920 | 2011-01-11T10:21:42.920 | null | null | null | null |
6162 | 2 | null | 6154 | 2 | null | @mpiktas is right and knowing the value of the test statistic ($t$ or $z$) allows you to know which parameter estimate is significant at the desired $\alpha$ level. In practice, the $t$-statistic is equivalent to a $z$-score for large samples (which is often the case in SEM), and the significance thresholds are 1.96 and 2.58 for the .05 and .01 $\alpha$ levels. Most of the time, $p$-values are interesting when comparing models; as shown in this nice tutorial on [Structural equation modeling](http://www.sma.org.sg/smj/4612/4612bs1.pdf) using SAS, by Y H Chan, giving $t$- or $z$-statistic with associated critical values at 5% should be enough, IMO.
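As a side note, converting a large-sample $t$ statistic (treated as $z$) into a two-sided $p$-value is straightforward; a minimal Python sketch using the error function, plugging in the threshold values mentioned above:

```python
import math

def z_to_p(z):
    """Two-sided p-value for a z (large-sample t) statistic."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

z_to_p(1.96)   # approximately 0.05
z_to_p(2.58)   # approximately 0.01
```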
| null | CC BY-SA 2.5 | null | 2011-01-11T11:37:56.767 | 2011-01-11T11:37:56.767 | null | null | 930 | null |
6163 | 1 | 6166 | null | 5 | 2299 | [Deming regression](http://en.wikipedia.org/wiki/Deming_regression) is a regression technique that takes uncertainty in both the explanatory and the dependent variable into account.
Although I have found some interesting references on the calculation of this property in [matlab](http://iopscience.iop.org/0957-0233/18/11/025) and in [R](http://www.mail-archive.com/r-help@r-project.org/msg85070.html), I'm stuck when I try to calculate the standard prediction error. The error on the model estimate is given in both methods, but I wonder whether I can use it for prediction via the variance of the predicted value.
Eg: `var(y_pred) = var(a*x+b) = E[a]^2*var(x) + E[x]^2*var(a) + var(b)`
| What is the prediction error while using deming regression (weighted total least squares) | CC BY-SA 2.5 | null | 2011-01-11T13:53:36.093 | 2017-03-26T12:19:57.747 | 2012-11-09T15:35:58.780 | 8402 | 2732 | [
"regression",
"variance",
"deming-regression"
] |
6165 | 2 | null | 6133 | 3 | null | Read this: [Anomaly Detection : A Survey (2009)](http://www-users.cs.umn.edu/~kumar/papers/anomaly-survey.php)
| null | CC BY-SA 2.5 | null | 2011-01-11T14:50:31.043 | 2011-01-11T14:50:31.043 | null | null | 635 | null |
6166 | 2 | null | 6163 | 7 | null | Update: I've updated the answer to reflect the discussion in the comments.
The model is given as
\begin{align*}
y&=y^{*}+\varepsilon\\\\
x&=x^{*}+\eta\\\\
y^{*}&=\alpha+x^{*}\beta
\end{align*}
So when forecasting with a new value $x$ we can forecast either $y$ or $y^{*}$. Their forecasts coincide $\hat{y}=\hat{y}^{*}=\hat{\alpha}+\hat{\beta}x$ but their error variances will be different:
$$Var(\hat{y})=Var(\hat{y}^{*})+Var(\varepsilon)$$
To get $Var(\hat{y}^{*})$ write
\begin{align*}
\hat{y}^{*}-y^{*}&=\hat{\alpha}-\alpha+\hat{\beta} (x^{*}+\eta)-\beta x^{*}\\\\
&=(\hat{\alpha}-\alpha)+(\hat{\beta}-\beta) x^{*}+ \hat{\beta}\eta
\end{align*}
So
\begin{align*}
Var(\hat{y}^{*})=E(\hat{y}^{*}-y^{*})^2&=D(\hat{\alpha}-\alpha)+D(\hat{\beta}-\beta) (x^{*})^2+ E\hat{\beta}^2D\eta\\\\
& + 2\textrm{cov}(\hat{\alpha}-\alpha,\hat{\beta}-\beta)x^{*}
\end{align*}
| null | CC BY-SA 3.0 | null | 2011-01-11T15:11:25.573 | 2017-03-26T12:19:57.747 | 2017-03-26T12:19:57.747 | 8402 | 2116 | null |
6167 | 1 | null | null | 8 | 295 | I have large comparison data of the following form.
In pairwise comparison data, each data point compares two alternatives.
For instance:
A > B (A is preferred to B, A and B are classes, not numbers)
A > B
B > A
B > C
A > C
etc ...
In short, we can write the numbers of preferences in the data set:
A vs B 999:1
X vs A 500:500
X vs B 500:500
The Bradley-Terry model models pairwise preference by assigning one parameter to each class:
$ P(A > B\; |\; \vec{w} ) = \frac{w_A}{w_A + w_B} $
Parameters can be estimated from the data by maximum likelihood.
I'm looking for an extension of the Bradley-Terry model (or a completely new model) that would be able to model situations like the one above, i.e. A is always strongly preferred to B: $P(A>B) = 0.999$, but $ P(X<A) = P(X<B) = 0.5 $.
The B-T model cannot represent that. Do you have any ideas on how to create a better model?
PS
The model will be applied to a data set of size $10^8$, so it would be good to have a simple maximum-likelihood algorithm.
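For reference, a minimal Python sketch of fitting the standard B-T model by the usual minorization-maximization (Zermelo-type) updates; the win counts mirror the toy data above. The fit illustrates the stated problem: because X splits evenly with both A and B, the estimated $P(A>B)$ is pulled far below the empirical 0.999:

```python
def fit_bt(wins, n_iter=500):
    """Bradley-Terry MLE via MM (Zermelo) updates.
    wins[i][j] = number of times alternative i beat j."""
    n = len(wins)
    w = [1.0] * n
    for _ in range(n_iter):
        for i in range(n):
            num = sum(wins[i][j] for j in range(n) if j != i)
            den = sum((wins[i][j] + wins[j][i]) / (w[i] + w[j])
                      for j in range(n) if j != i)
            if den > 0:
                w[i] = num / den
        s = sum(w)
        w = [wi / s for wi in w]   # scale is unidentified; normalize
    return w

wins = [[0, 999, 500],   # A: beats B 999 times, beats X 500 times
        [1,   0, 500],   # B
        [500, 500, 0]]   # X
w_a, w_b, w_x = fit_bt(wins)
p_a_over_b = w_a / (w_a + w_b)   # far below the empirical 0.999
```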
| How to model pairwise preference with both strong and weak preferences? | CC BY-SA 2.5 | null | 2011-01-11T15:59:24.120 | 2012-08-24T06:00:28.380 | null | null | 217 | [
"modeling",
"bradley-terry-model",
"multiple-comparisons"
] |
6169 | 1 | 6173 | null | 8 | 1489 | What are the appropriate uses for a grouped vs a stacked bar plot?
| Group vs Stacked Bar Plots | CC BY-SA 2.5 | null | 2011-01-11T19:10:46.507 | 2020-06-09T21:13:25.447 | 2011-01-12T00:58:18.377 | 559 | 559 | [
"data-visualization",
"barplot"
] |
6170 | 1 | 6174 | null | 3 | 235 | I am trying to model future sales data for products which have very low sales volume. I am a programmer with a smattering of statistics, so I apologise in advance if this question is naive!
My question is what distribution is most suitable for my sales profile, and whether it is possible to verify the distribution. The exact framing of the problem is that we might have a product that has, say, 6 units of inventory and we may sell 8 units a year with a "standard deviation" of 5 units (i.e., the sales are lumpy, so we calculate a standard deviation, but it's not really a normal distribution). We want to say, with a certain probability, how many days of inventory we have left. For high-volume products we can assume the normal distribution and it's pretty easy to back out the days of inventory left. However, for low-volume products we can't assume a normal distribution (as obviously it is bounded by the fact that we can't have fewer than 0 sales). I have looked at Poisson, but I am not sure if that is the most suitable.
Could someone point me to some resources that would help me identify the right model/technique to use?
Thanks,
Mike.
| Suitable Distribution/Methodology for Estimating Data Points | CC BY-SA 2.5 | null | 2011-01-11T20:03:11.703 | 2011-01-11T20:37:46.450 | null | null | 2734 | [
"distributions",
"model-selection"
] |
6172 | 2 | null | 6152 | 3 | null | Maybe you can take a look at the pglm package (from the same author as plm) and use the negbin family. You can also try, from a Bayesian point of view, the MCMCglmm package.
| null | CC BY-SA 2.5 | null | 2011-01-11T20:18:36.863 | 2011-01-11T20:18:36.863 | null | null | 2028 | null |
6173 | 2 | null | 6169 | 11 | null | I think grouped bars are preferable to stacked bars in most situations because they retain information about the sizes of the groups and stay readable even when you have multiple nominal categories. For me, the segments of stacked bars get difficult to compare beyond two categories - and even with just two categories, they can be quite deceptive if your groups are of very different sizes. I'd prefer a frequency table over a stacked bar plot any day.
You should also consider a series of bar plots, with each group in a separate plot:
[](https://i.stack.imgur.com/s6njo.png)
This is probably what I use most often. You can do this in R with `facet_wrap` and `facet_grid` in `ggplot2`, as well as with the `lattice` package.
---
Historical note: [histograms != bar plots](http://www.worsleyschool.net/science/files/bargraphs/page.html)
| null | CC BY-SA 4.0 | null | 2011-01-11T20:30:15.043 | 2020-06-09T21:13:25.447 | 2020-06-09T21:13:25.447 | 71 | 71 | null |
6174 | 2 | null | 6170 | 3 | null | You are looking for a class of models known as 'purchase incidence'.
A Poisson distribution is a good place to start: with sales rate $\lambda$, the number of units sold during a period of length $t$, $Y_{t}$, has mean $\lambda t$ and distribution
$$P(Y_{t}=y_{t})=\frac{e^{-\lambda t}(\lambda t)^{y_{t}}}{y_{t}!}$$
So, if you have 3 items left and 10 days before restocking, the chance that you will run out (i.e., that some customer finds no item to purchase) is $$1-\sum_{y_t=0}^{3}P(Y_t=y_t)$$
Additional parameters can be added as needed to model known processes that cause the distribution of sales to vary with time. For example, heterogeneity in $\lambda$ can be modeled as $\lambda\sim\text{Gamma}(a,b)$. You can find a description of this and related models in chapter 12 of Leeflang 2000 "Building models for marketing decisions"
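A stdlib-only Python sketch of this kind of calculation; the concrete numbers (a rate of 8 units/year, 3 units on hand, 10 days to restocking) are hypothetical:

```python
import math

def poisson_cdf(k, lam):
    """P(Y <= k) for a Poisson(lam) count."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i)
               for i in range(k + 1))

lam = 8 / 365 * 10                 # expected demand over the 10 days
p_covered = poisson_cdf(3, lam)    # demand stays within the 3 on hand
p_stockout = 1 - p_covered
```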
| null | CC BY-SA 2.5 | null | 2011-01-11T20:37:46.450 | 2011-01-11T20:37:46.450 | null | null | 1381 | null |
6175 | 2 | null | 6152 | 5 | null | After a bit of research, I can give a partial answer. In his [book](http://rads.stackoverflow.com/amzn/click/0262232197) Wooldridge discusses Poisson and negative binomial regressions for cross-section and panel data. But for regression with lagged variables he only discusses Poisson regression; maybe negative binomial is discussed in the new edition. The main conclusion is that a random-effects Poisson regression with a lagged dependent variable can be estimated by a mixed-effects Poisson regression model. The detailed description can be found [here](http://www.cemmap.ac.uk/wps/cwp0218.pdf). Mixed-effects Poisson regression in R can be estimated with glmer from package lme4. To adapt it to work with panel data, you will need to create the lagged variable explicitly. Then your estimation command should look something like this:
```
glmer(y~lagy+exo+(1|Country),data,family=quasipoisson)
```
You should also look into the pglm package suggested by @dickoa. But be sure to check whether it supports lagged variables. Yves Croissant, the author of the pglm and plm packages, writes wonderful code, but unfortunately, in my personal experience, the code is not tested enough, so bugs crop up more frequently than in standard R packages.
| null | CC BY-SA 2.5 | null | 2011-01-11T20:57:05.410 | 2011-01-11T20:57:05.410 | null | null | 2116 | null |
6176 | 1 | 6568 | null | 23 | 3022 | I'm interested in learning more about nonparametric Bayesian (and related) techniques. My background is in computer science and though I have never taken a course on measure theory or probability theory, I have had a limited amount of formal training in probability and statistics. Can anyone recommend a readable introduction to these concepts to get me started?
| Introduction to measure theory | CC BY-SA 3.0 | null | 2011-01-11T21:01:20.807 | 2017-02-27T10:55:00.140 | 2015-11-05T11:33:14.447 | 22468 | 1913 | [
"probability",
"bayesian",
"references",
"mathematical-statistics"
] |
6177 | 2 | null | 6150 | 13 | null | This has been called a "started logarithm" by some (e.g., John Tukey). (For some examples, Google john tukey "started log".)
It's perfectly fine to use. In fact, you could expect to have to use a nonzero starting value to account for rounding of the dependent variable. For example, rounding the dependent variable to the nearest integer effectively lops off 1/12 from its true variance, suggesting a reasonable start value should be at least 1/12. (That value doesn't do a bad job with these data. Using other values above 1 doesn't really change the picture much; it just raises all the values in the bottom right plot almost uniformly.)
There are deeper reasons to use the logarithm (or started log) to assess variance: for example, the slope of a plot of variance against estimated value on a log-log scale estimates a [Box-Cox parameter for stabilizing the variance](http://staff.ustc.edu.cn/~zwp/teach/Reg/Boxcox.pdf). Such power-law fits of variance to some related variable are often observed. (This is an empirical statement, not a theoretical one.)
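To make the log-log slope idea concrete, here is a small Python sketch; the data are fabricated so that variance grows as the square of the level, a power for which the estimated slope points to a log transform:

```python
import math

def slope_loglog(xs, ys):
    """OLS slope of log(y) on log(x); estimates the power relating
    variance to level (all values assumed positive)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((a - mx) * (c - my) for a, c in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# fabricated: variance proportional to mean^2 -> slope 2,
# which corresponds to a log (Box-Cox power 0) transform
means = [1.0, 2.0, 4.0, 8.0]
variances = [m ** 2 for m in means]
b = slope_loglog(means, variances)
```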
If your purpose is to present the variances, proceed with care. Many audiences (apart from scientific ones) cannot understand a logarithm, much less a started one. Using a start value of 1 at least has the merit of being a little simpler to explain and interpret than some other start value. Something to consider is to plot their roots, which are the standard deviations, of course. It would look something like this:

Regardless, if your purpose is to explore the data, to learn from them, to fit a model, or to evaluate a model, then don't let anything get in the way of finding reasonable graphical representations of your data and data-derived values such as these variances.
| null | CC BY-SA 3.0 | null | 2011-01-11T21:19:35.873 | 2017-12-23T19:27:48.577 | 2017-12-23T19:27:48.577 | 919 | 919 | null |
6178 | 2 | null | 6151 | 0 | null | I agree with David that your question needs some better explanation. Perhaps you should start by giving us a snippet of your data setup so that we can see how it's structured. Also indicate what variable links the individuals into sibling groups.
It sounds like you might want to take a look at rank-ordered logit (type "help rologit") if your data & analysis are something like this:
```
*****************************! Begin Example
*-----run this code in your Stata do-file editor
clear
inp id ranking option x1 x2 male
1 4 1 1 0 0
1 2 2 0 1 0
1 3 3 0 0 0
1 1 4 1 1 0
2 1 1 3 0 0
2 3 2 0 1 0
2 3 3 2 1 0
2 4 4 1 2 0
3 1 1 3 1 1
3 3 2 1 1 1
3 4 4 0 1 1
4 2 1 1 1 1
4 1 2 1 1 1
4 0 3 0 1 1
4 0 4 1 0 1
end
rologit ranking x1 x2 male, group(id)
****************************! End Example
```
- Eric
eric.a.booth@gmail.com
| null | CC BY-SA 2.5 | null | 2011-01-11T22:34:30.497 | 2011-01-11T22:34:30.497 | null | null | 1033 | null |
6179 | 2 | null | 4220 | -1 | null | The point value at a particular parameter value of a probability density plot would be a likelihood, right? If so, then the statement might be corrected by simply changing P(height|male) to L(height|male).
| null | CC BY-SA 2.5 | null | 2011-01-11T22:56:03.347 | 2011-01-11T22:56:03.347 | null | null | 1679 | null |
6180 | 1 | null | null | 5 | 554 | Confidence intervals for binomial proportions have irregular coverage over the range of possible population parameters (e.g. see Brown et al. 2001 <[Link](https://projecteuclid.org/journals/statistical-science/volume-16/issue-2/Interval-Estimation-for-a-Binomial-Proportion/10.1214/ss/1009213286.full)>). How can I formally and usefully describe the properties of the confidence intervals?
Say I toss a coin ten times and obtain seven heads. Are the following statements accurate?
For the Clopper-Pearson method:
The interval 0.3475–0.9333 has been generated by a method that will on at least 95% of occasions, for any true population proportion, contain the true population proportion. The long-run frequency with which this method would yield confidence intervals containing the true population proportion pertaining to this particular experiment is at least 95%.
For the Wilson's scores method:
The interval 0.3968–0.8922 has been generated by a method that will on 95% of occasions, averaged over all population proportions, contain the true population proportion. The long-run frequency with which this method would yield confidence intervals containing the true population proportion pertaining to this particular experiment may be more or less than 95%.
| Statement of result for binomial confidence intervals | CC BY-SA 4.0 | null | 2011-01-11T23:16:47.687 | 2022-06-27T21:15:16.767 | 2022-06-27T21:15:16.767 | 79696 | 1679 | [
"confidence-interval",
"binomial-distribution"
] |
6181 | 1 | null | null | 6 | 27847 | I am using the IDL regress function to compute the multiple linear correlation coefficient...
```
x = [transpose(anom1),transpose(anom2),transpose(anom3)]
coef = regress(x,y, const=a0, correlation=corr, mcorrelation=mcorr, sigma=stderr)
```
`mcorr` is returned with values between 0.0 and 1.0. Clearly, the result of the following IDL code, which I dug out of the library source, can't be negative...
```
mcorrelation = SQRT(mult_covariance)
```
But, could you help me verify this? I would like a citation or to see 'the math' behind calculating `mcorrelation`.
| Can the multiple linear correlation coefficient be negative? | CC BY-SA 2.5 | null | 2011-01-12T01:10:42.207 | 2020-03-14T16:52:29.163 | 2011-01-13T17:49:23.117 | 8 | null | [
"regression",
"r-squared"
] |
6182 | 1 | 6185 | null | 4 | 878 | I've never taken a stats course so I really don't know where to begin on this.
I am using microcontroller that according to the datasheet, has a Flash memory that can be reprogrammed a minimum of 10,000 times, with 100,000 times being a more typical number.
These numbers seem low to me (based on other devices using similar technology, which claim lifetimes an order of magnitude higher), so I decided to take one of our products and do destructive testing on it. I started testing last Friday night and it is already up to 600,000 events without an error (as determined by calculating a 16-bit CRC over the 4K bytes of test data being flashed). Based on this device far surpassing the "typical" value in the datasheet, I have started tests on three more devices.
So at some point I will have a failure value for all four devices. How can I use that data to predict an expected lifetime x for the device with a 99.9% certainty? (Not sure if I am saying that right, what I mean is that only 0.1% of devices will fail before x events).
| Predicting number of events with 99.9% probability based on tests of four devices | CC BY-SA 2.5 | null | 2011-01-12T01:15:14.267 | 2011-01-12T16:33:25.283 | 2011-01-12T08:04:13.837 | null | 2698 | [
"probability"
] |
6184 | 2 | null | 4756 | 65 | null |
- If the average, $\hat{p}$, is not near $1$ or $0$, and the sample size $n$ is sufficiently large (i.e. $n\hat{p}>5$ and $n(1-\hat{p})>5$), the confidence interval can be estimated by a normal distribution and constructed thus:
$$\hat{p}\pm z_{1-\alpha/2}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}$$
- If $\hat{p} = 0$ and $n>30$, the $95\%$ confidence interval is approximately $[0,\frac{3}{n}]$ (Jovanovic and Levy, 1997); the opposite holds for $\hat{p}=1$. The reference also discusses using $n+1$ and $n+b$ (the latter to incorporate prior information).
- Otherwise, Wikipedia provides a good overview and points to Agresti and Coull (1998) and Ross (2003) for details about estimates other than the normal approximation, such as the Wilson score, Clopper-Pearson, and Agresti-Coull intervals. These can be more accurate when the above assumptions about $n$ and $\hat{p}$ are not met.
R provides functions [binconf {Hmisc}](http://www.stat.ucl.ac.be/ISdidactique/Rhelp/library/Hmisc/html/binconf.html) and [binom.confint {binom}](http://rss.acs.unt.edu/Rdoc/library/binom/html/binom.confint.html) which can be used in the following manner:
```
set.seed(0)
p <- runif(1,0,1)
X <- sample(c(0,1), size = 100, replace = TRUE, prob = c(1-p, p))
library(Hmisc)
binconf(sum(X), length(X), alpha = 0.05, method = 'all')
library(binom)
binom.confint(sum(X), length(X), conf.level = 0.95, method = 'all')
```
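As a worked check of the normal-approximation interval from the first bullet above, the arithmetic is simple enough to do directly. Here is a sketch in Python; the values $\hat p = 0.65$ and $n = 100$ are made up for illustration:

```python
import math

def wald_interval(p_hat, n, z=1.959964):
    """Normal-approximation (Wald) interval: p_hat +/- z * sqrt(p_hat*(1-p_hat)/n)."""
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

# made-up example: 65 successes out of 100 trials, 95% confidence
lower, upper = wald_interval(0.65, 100)
print(round(lower, 3), round(upper, 3))  # 0.557 0.743
```

For real analyses the `binconf`/`binom.confint` functions above are preferable, since they also implement the more accurate intervals.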
---
[Agresti, Alan; Coull, Brent A. (1998). "Approximate is better than 'exact' for interval estimation of binomial proportions". The American Statistician 52: 119–126.](http://www.jstor.org/stable/2685469)
[Jovanovic, B. D. and P. S. Levy, 1997. A Look at the Rule of Three. The American Statistician Vol. 51, No. 2, pp. 137-139](http://www.jstor.org/pss/2685405)
[Ross, T. D. (2003). "Accurate confidence intervals for binomial proportion and Poisson rate estimation". Computers in Biology and Medicine 33: 509–531.](http://dx.doi.org/10.1016%2FS0010-4825%2803%2900019-2)
| null | CC BY-SA 3.0 | null | 2011-01-12T02:05:53.253 | 2013-12-03T18:05:24.317 | 2013-12-03T18:05:24.317 | 1381 | 1381 | null |
6185 | 2 | null | 6182 | 5 | null | @whuber makes an important point about assumptions:
It is common to assume that failure occurs either because a part is defective or because it has worn out. This means that there are two processes causing failure, and defective units fail shortly after deployment whereas wearing out happens over time. It is difficult (or impossible) to justify parameterizing both of these processes if $n=4$.
Case 1
For a simple case, assume that defective units can be eliminated (e.g. by an initial screening or 'burn in'), and all units fail due to a constant wearing process. This assumption may be valid since the data sheet says that the minimum expected lifetime is $10,000$ events. One might model this process as an exponential distribution:
Example: assume that your four test units fail after $[5,10,20,40]x10^{5}$ reprogrammings. You can find the maximum likelihood estimate for the exponential distribution in R:
```
library(MASS)
fitdistr(c(5, 10, 20, 40)*1e+5, 'exponential')
lambda <- fitdistr(c(5, 10, 20, 40)*1e+5, 'exponential')$estimate
```
To find the time by which only 1 in 1000 units will fail, estimate the 0.001 quantile of this distribution.
```
qexp(0.001, lambda)
```
In this example, the result is about 1,876 events, the point before which only 1 in 1000 units is expected to fail.
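The same figure can be checked by hand: the exponential quantile function has the closed form $q(p) = -\ln(1-p)/\lambda$, and the maximum-likelihood estimate of $\lambda$ is the reciprocal of the sample mean. A sketch in Python, using the same four hypothetical failure counts:

```python
import math

failures = [5e5, 10e5, 20e5, 40e5]    # hypothetical failure counts from the example
rate = len(failures) / sum(failures)  # exponential MLE: 1 / sample mean
q = -math.log(1 - 0.001) / rate       # 0.001 quantile, as qexp(0.001, lambda) in R
print(round(q))  # about 1876 events
```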
Edit: As whuber points out, $n=4$ is still too small for this problem, because it is very difficult to estimate small probabilities, and distribution tails are very sensitive to assumptions about the probability distributions used to model the data, and to the data itself.
Since you have good prior knowledge from the product datasheet, you could incorporate this information into your analysis as a prior on lambda. The gamma distribution is a conjugate prior, $\lambda\sim \text{Gamma}(\alpha, \beta)$, but I cannot find a parameterization appropriate to the prior information, with $\text{median}\simeq 100,000$, $P(Y<10,000)\simeq 0$ and nonzero density above $10,000$. Once you determine the model to use, it would be worth asking a separate question.
Case 2
If you assume that failure is caused either by defect or by wear, you could consider the Weibull and Generalized Exponential distributions (Gupta and Kundu 1999, 2001, 2007). The field of survival analysis provides many options, but as whuber points out, $n=4$ is insufficient to justify a model other than, perhaps, the single parameter exponential.
---
[Gupta, R. D. and Kundu, D. (1999). Generalized exponential distributions", Australian and New Zealand Journal of Statistics, vol. 41, 173 - 188.](http://home.iitk.ac.in/~kundu/paper47.pdf)
[Gupta, R. D. and Kundu, D. (2001), Generalized exponential distributions: different methods of estimation", Journal of Statistical Computation and Simulation. vol. 69, 315-338](http://home.iitk.ac.in/~kundu/paper62.pdf).
[Gupta, R. D. and Kundu, D. 2007. Generalized Exponential Distribution: existing results and some recent developments. Journal of Statistical Planning and Inference](http://home.iitk.ac.in/~kundu/paper113.pdf)
| null | CC BY-SA 2.5 | null | 2011-01-12T05:18:54.760 | 2011-01-12T16:33:25.283 | 2011-01-12T16:33:25.283 | 1381 | 1381 | null |
6187 | 1 | 6188 | null | 14 | 131081 | Still learning basic functions in R,
The subset function seems to filter only on conditions involving a single column, with or without multiple conditions?
How can I easily filter data from a dataframe?
- when you are provided with multiple conditions
- When the condition needs to be applied across the available columns.
Example:
Given a data frame containing
```
name D1 D2 D3 D4
julius "A" "A" "B" "B"
cate "D" "E" "A" "C"
karo "A" "D" "C" "E"
```
Say I want to filter this data frame so that only rows where any of D1 to D4 is an 'E' are kept;
then I should have,
```
name D1 D2 D3 D4
cate "D" "E" "A" "C"
karo "A" "D" "C" "E"
```
Say that D1 to D4 can be a big list of columns; what is the recommended approach to perform this filter?
Thank you
| Filtering a dataframe | CC BY-SA 2.5 | null | 2011-01-12T06:05:54.813 | 2020-08-11T11:12:49.060 | 2011-01-12T08:02:30.840 | null | 18462 | [
"r"
] |
6188 | 2 | null | 6187 | 27 | null | If you want to combine several filters in the subset function, use logical operators:
```
subset(data, D1 == "E" | D2 == "E")
```
will select those rows for which either column D1 or column D2 has value "E". Look at the help pages for available logical operators:
```
> ?"|"
```
For your second question what you need is to filter the rows. This can be achieved in the following way
```
collist <- c("D1","D2","D3","D4")
sel <- apply(data[,collist],1,function(row) "E" %in% row)
data[sel,]
```
The first argument to apply supplies the data, restricted to the columns on which we need to filter. The second argument is 1, meaning that we are looping through the rows of the data. The third argument is an unnamed one-line function which returns TRUE if "E" is present in the row and FALSE otherwise.
The result of the apply function is a logical vector sel whose length equals the number of rows in data. We then use this vector to select the necessary rows.
Update
The same can be achieved with grep:
```
sel <- apply(data[,collist],1,function(row) length(grep("E",row))>0)
```
in R, grep with default arguments returns the indices of the elements in the supplied vector that match the pattern.
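The underlying idea (keep a row if any of the listed columns matches) is language-independent. For comparison, here is a sketch of the same filter in plain Python, using the example data from the question:

```python
rows = [
    {"name": "julius", "D1": "A", "D2": "A", "D3": "B", "D4": "B"},
    {"name": "cate",   "D1": "D", "D2": "E", "D3": "A", "D4": "C"},
    {"name": "karo",   "D1": "A", "D2": "D", "D3": "C", "D4": "E"},
]
collist = ["D1", "D2", "D3", "D4"]

# keep a row if any of the listed columns equals "E"
kept = [row for row in rows if any(row[c] == "E" for c in collist)]
print([row["name"] for row in kept])  # ['cate', 'karo']
```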
| null | CC BY-SA 2.5 | null | 2011-01-12T07:35:13.187 | 2011-01-12T12:36:43.463 | 2011-01-12T12:36:43.463 | 2116 | 2116 | null |
6189 | 1 | null | null | 10 | 69252 | I have daily measurements of nitrogen dioxide for one year (365 days) and the interquartile range (IQR) is 24 micrograms per cubic meter. What does "24" mean in this context, apart from the definition of the IQR, which is the difference between the 25th and 75th percentiles? How would you explain this figure to a journalist, for example?
Thanks
| What is the interpretation of interquartile range? | CC BY-SA 2.5 | null | 2011-01-12T07:59:44.047 | 2015-07-14T07:24:09.270 | null | null | 2742 | [
"descriptive-statistics"
] |
6190 | 2 | null | 6189 | 18 | null | From the definition, this is the width of a range which holds 75-25=50 per cent of all measured values:
(median-24/2, median+24/2). The median should be written somewhere near this IQR.
The above was false of course, it seems I was still sleeping when writing this; sorry for confusion. It is true that IQR is width of a range which holds 50% of data, but it is not centered in median -- one needs to know both Q1 and Q3 to localize this range.
In general the IQR can be seen as a nonparametric (= when we don't assume that the distribution is Gaussian) equivalent of the standard deviation -- both measure the spread of the data. (Equivalent, not equal: for the SD, (mean-$\sigma$, mean+$\sigma$) holds 68.2% of perfectly normally distributed data.)
EDIT: As for example, this is how it looks on normal data; red lines show $\pm 1\sigma$, the range showed by the box on box plot shows IQR, the histogram shows the data itself:

you can see both show the spread pretty well; the $\pm 1\sigma$ range holds 68.3% of the data (as expected). Now for non-normal data

the SD spread is widened due to long, asymmetric tail and $\pm 1\sigma$ holds 90.5% of data! (IQR holds 50% in both cases by definition)
| null | CC BY-SA 3.0 | null | 2011-01-12T08:17:42.690 | 2015-07-14T07:24:09.270 | 2015-07-14T07:24:09.270 | 69710 | null | null |
6191 | 2 | null | 6176 | 15 | null | After some research, I ended up buying this when I thought I needed to know something about measure-theoretic probability:
[Jeffrey Rosenthal. A First Look at Rigorous Probability Theory. World Scientific 2007. ISBN 9789812703712.](http://books.google.co.uk/books?id=PYNAgzDffDMC)
I haven't read much of it, however, as my personal experience is in accord with [Stephen Senn's quip](http://www.stat.columbia.edu/~cook/movabletype/archives/2009/05/the_benefits_of.html).
| null | CC BY-SA 2.5 | null | 2011-01-12T08:42:00.717 | 2011-01-12T08:42:00.717 | null | null | 449 | null |
6192 | 2 | null | 6189 | 7 | null | The interquartile range is an interval, not a scalar. You should always report both numbers, not just the difference between them. You can then explain it by saying that half the sample readings were between these two values, a quarter were smaller than the lower quartile, and a quarter higher than the upper quartile.
| null | CC BY-SA 2.5 | null | 2011-01-12T08:54:45.957 | 2011-01-12T08:54:45.957 | null | null | 449 | null |
6193 | 2 | null | 6176 | 5 | null | Personally, I've found Kolmogorov's original [Foundations of the Theory of Probability](http://clrc.rhul.ac.uk/resources/fop/Theory%20of%20Probability%20%28small%29.pdf) to be fairly readable, at least compared to most measure theory texts. Although it obviously doesn't contain any later work, it does give you an idea of most of the important concepts (sets of measure zero, conditional expectation, etc.). It is also mercifully brief, at only 84 pages.
| null | CC BY-SA 2.5 | null | 2011-01-12T12:01:00.673 | 2011-01-12T12:01:00.673 | null | null | 495 | null |
6195 | 1 | 6198 | null | 15 | 12272 | I asked [this question](https://math.stackexchange.com/questions/17200/how-to-model-prices) on the matemathics stackexchange site and was recommended to ask here.
I'm working on a hobby project and would need some help with the following problem.
### A bit of context
Let's say there is a collection of items with a description of features and a price. Imagine a list of cars and prices. All cars have a list of features, e.g. engine size, color, horse power, model, year etc. For each make, something like this:
```
Ford:
V8, green, manual, 200hp, 2007, $200
V6, red, automatic, 140hp, 2010, $300
V6, blue, manual, 140hp, 2005, $100
...
```
Going even further, the list of cars with prices is published with some time-interval which means we have access to historical price data. Might not always include exactly the same cars.
### Problem
I would like to understand how to model prices for any car based on this base information, most importantly cars not in the initial list.
```
Ford, v6, red, automatic, 130hp, 2009
```
For the above car, it's almost the same as one in the list, just slightly different in horse power and year. To price this, what is needed?
What I'm looking for is something practical and simple, but I would also like to hear about more complex approaches how to model something like this.
### What I've tried
Here is what I've been experimenting with so far:
1) using historical data to look up car X. If not found, no price. This is of course very limited and one can only use this in combination with some time decay to alter prices for known cars over time.
2) using a car feature weighting scheme together with a priced sample car. Basically that there is a base price and features just alter that with some factor. Based on this any car's price is derived.
The first proved to be not enough and the second proved to not be always correct and I might not have had the best approach to using the weights. This also seems to be a bit heavy on maintaining weights, so that's why I thought maybe there is some way to use the historical data as statistics in some way to get weights or to get something else. I just don't know where to start.
### Other important aspects
- integrate into some software project I have. Either by using existing libraries or writing algorithm myself.
- fast recalculation when new historical data comes in.
Any suggestions how a problem like this could be approached? All ideas are more than welcome.
Thanks a lot in advance and looking forward to reading your suggestions!
| How to model prices? | CC BY-SA 2.5 | null | 2011-01-12T12:44:59.007 | 2011-01-13T18:40:03.760 | 2017-04-13T12:19:38.853 | -1 | 2746 | [
"regression",
"forecasting",
"econometrics"
] |
6196 | 2 | null | 6195 | 4 | null | >
What I'm looking for is something practical and simple, but I would also like to hear about more complex approaches how to model something like this.
After some discussion, here is my complete view of things.
The problem
Aim: to understand how to price the cars in a better way
Context: in their decision process people solve several questions: do I need a car; if I do, what attributes do I prefer most (including the price, because, being rational, I would like a car with the best quality/price ratio); and how do different cars compare, valuing their attributes jointly.
From the seller's position, I would like to set the price as high as possible and sell the car as quickly as possible. So if I set the price too high and am waiting for months, that attribute set could be considered not demanded on the market (marked 0, in contrast to very demanded attribute sets).
Observations: real deals that relate the attributes of a particular car to the price set within the bargaining process (in view of the previous remark, it is important to know how long it takes to close the deal).
Pros: you observe the things that were actually bought on the market, so you are not guessing whether there exists a person with a high enough reservation price who wants to buy a particular car
Cons:
- your assumption is that market is efficient, meaning the prices you observe are close to equilibrium
- you ignore the variants of car attributes that were not purchased or took too long to set the deal, meaning your insights are biased, so you actually do work with latent variable models
- Observing the data for a long time you need to deflate them, though the inclusion of the car age partly compensates this.
Solution methods
The first one, as suggested by whuber, is the classical least squares regression model
Pros:
- indeed the simplest solution as it is the work-horse of econometrics
Cons:
- ignores that you do observe the things incompletely (latent variables)
- acts as if the regressors were independent of one another, so the basic model ignores the fact that you may like a blue Ford differently from a blue Mercedes; the effect is not simply the sum of the marginal influences of "blue" and "Ford"
In the case of classical regression, since you are not limited in degrees of freedom, you can also try different interaction terms.
Therefore a more complicated solution would be either a [tobit](http://en.wikipedia.org/wiki/Tobit_model) or a Heckman model; you may want to consult A.C. Cameron and P.K. Trivedi [Microeconometrics: methods and applications](http://rads.stackoverflow.com/amzn/click/0521848059) for more details on core methods.
Pros:
- you do separate the fact that people may not like some sets of attributes at all, or some set of attributes has a small probability to be bought from the actual price setting
- your results are not biased (or at least less than in the first case)
- in case of Heckman you separate the reasons that motivates to buy the particular car from the pricing decision of how much I would like to pay for this car: the first one is influenced by individual preferences, the second one by budget constraint
Cons:
- Both models are more data greedy, i.e. we need to observe either the time length between the ask and bid to equalize (if it is fairly short put 1, else 0), or to observe the sets that were ignored by the market
And, finally, if you are simply interested in how the price influences the probability of a purchase, you may work with some kind of logit model.
We agreed, that conjoint analysis is not suitable here, because you do have different context and observations.
Good luck.
| null | CC BY-SA 2.5 | null | 2011-01-12T15:00:00.930 | 2011-01-13T09:31:44.543 | 2011-01-13T09:31:44.543 | 2645 | 2645 | null |
6197 | 2 | null | 6189 | 15 | null | This is a simple question asking for a simple answer. Here is a list of statements, starting with the most basic, and proceeding with more precise qualifications.
>
The IQR is the spread of the middle half of the data.
Without making assumptions about how the data are distributed, the IQR quantifies the amount by which individual values typically vary.
The IQR is related to the well-known "standard deviation" (SD): when the data follow a "bell curve," the IQR is about 35% greater than the SD. (Equivalently, the SD is about three-quarters of the IQR.)
As a rule of thumb, data values that deviate from the middle value by more than twice the IQR deserve individual attention. They are called "outliers." Data values that deviate from the middle value by more than 3.5 times the IQR are usually scrutinized closely. They are sometimes called "far outliers."
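The "35% greater" figure in the third statement is not a convention but a consequence of the normal quantile function: for a normal distribution, IQR $= (\Phi^{-1}(0.75)-\Phi^{-1}(0.25))\,\sigma \approx 1.349\,\sigma$. A quick check in Python (standard library only):

```python
from statistics import NormalDist

std_normal = NormalDist()  # standard normal: mean 0, sd 1
iqr = std_normal.inv_cdf(0.75) - std_normal.inv_cdf(0.25)
print(round(iqr, 3))  # 1.349, i.e. the IQR is about 35% larger than the SD
```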
| null | CC BY-SA 2.5 | null | 2011-01-12T15:32:09.447 | 2011-01-12T15:32:09.447 | 2020-06-11T14:32:37.003 | -1 | 919 | null |
6198 | 2 | null | 6195 | 11 | null | "Practical" and "simple" suggest [least squares regression](http://en.wikipedia.org/wiki/Least_squares). It's easy to set up, easy to do with lots of software (R, Excel, Mathematica, any statistics package), easy to interpret, and can be extended in many ways depending on how accurate you want to be and how hard you're willing to work.
This approach is essentially your "weighting scheme" (2), but it finds the weights easily, guarantees as much accuracy as possible, and is easy and fast to update. There are loads of libraries to perform least squares calculations.
It will help to include not only the variables you listed--engine type, power, etc--but also age of car. Furthermore, make sure to adjust prices for inflation.
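As a minimal illustration of how least squares "finds the weights", here is the closed-form fit for a single numeric feature. This is a sketch in Python; the horsepower/price pairs are made up (noise-free, so the fit recovers them exactly), and a real model would include the other attributes as additional regressors:

```python
# made-up data generated as price = 50 + 2 * horsepower, with no noise
hp    = [100, 140, 180, 220]
price = [250, 330, 410, 490]

n = len(hp)
mean_x = sum(hp) / n
mean_y = sum(price) / n

# ordinary least squares, closed form for one predictor
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(hp, price))
         / sum((x - mean_x) ** 2 for x in hp))
intercept = mean_y - slope * mean_x
print(intercept, slope)  # 50.0 2.0 -- the recovered "weights"

# price a car that is not in the list (130 hp)
predicted = intercept + slope * 130
print(predicted)  # 310.0
```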
| null | CC BY-SA 2.5 | null | 2011-01-12T15:58:50.810 | 2011-01-12T15:58:50.810 | null | null | 919 | null |
6201 | 2 | null | 6195 | 5 | null | I agree with @whuber, that linear regression is a way to go, but care must be taken when interpreting results. The problem is that in economics the price is always related to demand. If demand goes up, prices go up, if demand goes down, prices go down. So the price is determined by demand and in return demand is determined by price. So if we model price as a regression from some attributes without the demand there is a real danger that the regression estimates will be wrong due to [omitted-variable bias](http://en.wikipedia.org/wiki/Omitted-variable_bias).
| null | CC BY-SA 2.5 | null | 2011-01-12T20:01:09.850 | 2011-01-12T20:01:09.850 | null | null | 2116 | null |
6202 | 2 | null | 6181 | 0 | null | $R$ can indeed be negative - if two variables are negatively related. $R^2$ can only be between 0 and 1, for the simple reason that it is the square of a real number.
For example, if we correlated income and time spent in jail throughout life, I would guess we would get a negative correlation (I haven't done this, I'm just guessing).
| null | CC BY-SA 4.0 | null | 2011-01-12T20:18:55.417 | 2020-03-14T16:52:29.163 | 2020-03-14T16:52:29.163 | 266968 | 686 | null |
6203 | 2 | null | 6195 | 3 | null | Besides what have been said, and not really quite different from some of the suggestions already made, you might want to have a look at the vast literature on [hedonic pricing models](http://en.wikipedia.org/wiki/Hedonic_regression). What it boils down to is a regression model trying to explain the price of a composite good as a function of its attributes.
This would allow you to price a car knowing its attributes (horse power, size, brand, etc.), even if an exactly similar mix of attributes is not present in your sample. It is a very popular approach for the valuation of essentially non-replicable assets -- like real estate properties. If you Google for "hedonic models" you will find many references and examples.
| null | CC BY-SA 2.5 | null | 2011-01-12T21:05:47.813 | 2011-01-12T21:05:47.813 | null | null | 892 | null |
6206 | 1 | 6207 | null | 16 | 22466 | I made a logistic regression model using glm in R. I have two independent variables. How can I plot the decision boundary of my model in the scatter plot of the two variables. For example, how can I plot a figure like [here](http://www.personal.psu.edu/jol2/course/stat597e/notes2/logit.pdf).
| How to plot decision boundary in R for logistic regression model? | CC BY-SA 4.0 | null | 2011-01-13T01:43:15.577 | 2023-01-06T06:56:16.303 | 2023-01-06T06:56:16.303 | 362671 | 2755 | [
"r",
"logistic"
] |
6207 | 2 | null | 6206 | 28 | null | ```
set.seed(1234)
x1 <- rnorm(20, 1, 2)
x2 <- rnorm(20)
y <- sign(-1 - 2 * x1 + 4 * x2 )
y[ y == -1] <- 0
df <- cbind.data.frame( y, x1, x2)
mdl <- glm( y ~ . , data = df , family=binomial)
slope <- coef(mdl)[2]/(-coef(mdl)[3])
intercept <- coef(mdl)[1]/(-coef(mdl)[3])
library(lattice)
xyplot( x2 ~ x1 , data = df, groups = y,
panel=function(...){
panel.xyplot(...)
panel.abline(intercept , slope)
panel.grid(...)
})
```

I must remark that perfect separation occurs here, therefore the `glm` function gives you a warning. But that is not important here as the purpose is to illustrate how to draw the linear boundary and the observations colored according to their covariates.
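For reference, the plotted line is the $p = 0.5$ contour of the fitted model, obtained by setting the linear predictor to zero:

$$\hat\beta_0 + \hat\beta_1 x_1 + \hat\beta_2 x_2 = 0 \quad\Longrightarrow\quad x_2 = -\frac{\hat\beta_0}{\hat\beta_2} - \frac{\hat\beta_1}{\hat\beta_2}\,x_1,$$

which is exactly how `intercept` and `slope` are computed from `coef(mdl)` in the code above.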
| null | CC BY-SA 2.5 | null | 2011-01-13T02:44:44.073 | 2011-01-13T02:51:15.190 | 2020-06-11T14:32:37.003 | -1 | 1307 | null |
6208 | 1 | 6211 | null | 17 | 4378 | I developed the ez package for R as a means to help folks transition from stats packages like SPSS to R. This is (hopefully) achieved by simplifying the specification of various flavours of ANOVA, and providing SPSS-like output (including effect sizes and assumption tests), among other features. The `ezANOVA()` function mostly serves as a wrapper to `car::Anova()`, but the current version of `ezANOVA()` implements only type-II sums of squares, whereas `car::Anova()` permits specification of either type-II or -III sums of squares. As I possibly should have expected, several users have requested that I provide an argument in `ezANOVA()` that lets the user request type-II or type-III. I have been reticent to do so and outline my reasoning below, but I would appreciate the community's input on my or any other reasoning that bears on the issue.
Reasons for not including a "SS_type" argument in `ezANOVA()`:
- The difference between type I, II, and III sum squares only crops up when data are unbalanced, in which case I'd say that more benefit is derived from ameliorating imbalance by further data collection than fiddling with the ANOVA computation.
- The difference between type II and III applies to lower-order effects that are qualified by higher-order effects, in which case I consider the lower-order effects scientifically uninteresting. (But see below for possible complication of the argument)
- For those rare circumstances when (1) and (2) don't apply (when further data collection is impossible and the researcher has a valid scientific interest in a qualified main effect that I can't currently imagine), one can relatively easily modify the ezANOVA() source or employ car::Anova() itself to achieve type III tests. In this way, I see the extra effort/understanding required to obtain type III tests as a means by which I can ensure that only those that really know what they're doing go that route.
Now, the most recent type-III requestor pointed out that argument (2) is undermined by consideration of circumstances where extant but "non-significant" higher-order effects can bias computation of sums of squares for lower-order effects. In such cases it's imaginable that a researcher would look to the higher-order effect, and seeing that it is "non-significant", turn to attempting interpretation of the lower-order effects that, unbeknownst to the researcher, have been compromised. My initial reaction is that this is not a problem with sums of squares, but with p-values and the tradition of null hypothesis testing. I suspect that a more explicit measure of evidence, such as the likelihood ratio, might be more likely to yield a less ambiguous picture of the models supported consistent with the data. However, I haven't done much thinking on the consequence of unbalanced data for the computation of likelihood ratios (which indeed involve sums of squares), so I'll have to give this some further thought.
| Should I include an argument to request type-III sums of squares in ezANOVA? | CC BY-SA 2.5 | null | 2011-01-13T04:06:59.613 | 2012-07-30T14:48:26.367 | 2011-01-13T04:25:43.920 | 364 | 364 | [
"r",
"anova",
"sums-of-squares"
] |
6209 | 2 | null | 1525 | 0 | null | Proportion and probability, both are calculated from the total but the value of proportion is certain while that of probability is no0t certain..
| null | CC BY-SA 2.5 | null | 2011-01-13T04:13:48.283 | 2011-01-13T04:13:48.283 | null | null | null | null |
6211 | 2 | null | 6208 | 11 | null | Just to amplify - I am the most recent requestor, I believe.
In specific comment on Mike's points:
- It's clearly true that the I/II/III difference only applies with correlated predictors (of which unbalanced designs are the most common example, certainly in factorial ANOVA) - but this seems to me to be an argument that dismisses the analysis of the unbalanced situation (and hence any Type I/II/III debate). It may be imperfect, but that's the way things happen (and in many contexts the costs of further data collection outweigh the statistical problem, caveats notwithstanding).
- This is completely fair and represents the meat of most of the "II versus III, favouring II" arguments I've come across. The best summary I've encountered is Langsrud (2003) "ANOVA for unbalanced data: Use Type II instead of Type III sums of squares", Statistics and Computing 13: 163-167 (I have a PDF if the original is hard to find). He argues (taking the two-factor case as the basic example) that if there's an interaction, there's an interaction, so consideration of main effects is usually meaningless (an obviously fair point) - and if there's no interaction, the Type II analysis of main effects is more powerful than the Type III (undoubtedly), so you should always go with Type II. I've seen other arguments (e.g. Venables, Fox) that emphasize the meaning (or lack of) of considering hypotheses about main effects in the presence of interactions, and/or/equivalently suggesting that the Type III assumptions about the null hypothesis are often not sensible (e.g. Langsrud).
- And I agree with this: if you have an interaction but have some question about the main effect as well, then you're probably into do-it-yourself territory.
Clearly there are those who just want Type III because SPSS does it, or some other reference to statistical Higher Authority. I am not wholly against this view, if it comes down to a choice of a lot of people sticking with SPSS (which I have some things against, namely time, money, and licence expiry conditions) and Type III SS, or a lot of people shifting to R and Type III SS. However, this argument is clearly a lame one statistically.
However, the argument that I found rather more substantial in favour of Type III is that made independently by Myers & Well (2003, "Research Design and Statistical Analysis", pp. 323, 626-629) and Maxwell & Delaney (2004, "Designing Experiments and Analyzing Data: A Model Comparison Perspective", pp. 324-328, 332-335). That is as follows:
- if there's an interaction, all methods give the same result for the interaction sum of squares
- Type II assumes that there's no interaction for its test of main effects; type III doesn't
- Some (e.g. Langsrud) argue that if the interaction is not significant, then you're justified in assuming that there isn't one, and looking at the (more powerful) Type II main effects
- But if the test of the interaction is underpowered, yet there is an interaction, the interaction may come out "non-significant" yet still lead to a violation of the assumptions of the Type II main effects test, biasing those tests to be too liberal.
- Myers & Well cite Appelbaum/Cramer as the primary proponents of the Type II approach, and go on [p323]: "... More conservative criteria for nonsignificance of the interaction could be used, such as requiring that the interaction not be significant at the .25 level, but there is insufficient understanding of the consequences of even this approach. As a general rule, Type II sums of squares should not be calculated unless there is strong a priori reason to assume no interaction effects, and a clearly nonsignificant interaction sum of squares." They cite [p629] Overall, Lee & Hornick 1981 as a demonstration that interactions that do not approach significance can bias tests of main effects. Maxwell & Delaney [p334] advocate the Type II approach if the population interaction is zero, for power, and the Type III approach if it isn't [for the interpretability of means derived from this approach]. They too advocate using Type III in the real-life situation (when you're making inferences about the presence of the interaction from the data) because of the problem of making a type 2 [underpowered] error in the interaction test and thus accidentally violating the assumptions of the Type II SS approach; they then make similar further points to Myers & Well, and note the long debate on this issue!
So my interpretation (and I'm no expert!) is that there's plenty of Higher Statistical Authority on both sides of the argument; that the usual arguments put forward aren't about the usual situation that would give rise to problems (that situation being the common one of interpreting main effects with a non-significant interaction); and that there are fair reasons to be concerned about the Type II approach in that situation (and it comes down to a power versus potential over-liberalism thing).
For me, that's enough to wish for the Type III option in ezANOVA, as well as Type II, because (for my money) it's a superb interface to R's ANOVA systems. R is some way from being easy to use for novices, in my view, and the "ez" package, with ezANOVA and the rather lovely effect plotting functions, goes a long way towards making R accessible to a more general research audience. Some of my thoughts-in-progress (and a nasty hack for ezANOVA) are at [http://www.psychol.cam.ac.uk/statistics/R/anova.html](http://www.psychol.cam.ac.uk/statistics/R/anova.html) .
Would be interested to hear everyone's thoughts!
| null | CC BY-SA 2.5 | null | 2011-01-13T11:14:47.550 | 2011-01-13T11:14:47.550 | null | null | 2761 | null |
6212 | 2 | null | 6208 | 8 | null | Caveat: a purely non-statistical answer. I prefer to work with one function (or at least one package) when doing the same type of analysis (e.g., ANOVA). Up to now, I consistently use `Anova()` since I prefer its syntax for specifying models with repeated measures - compared to `aov()`, and lose little (SS type I) with non-repeated measures. `ezANOVA()` is nice for the added benefit of effect sizes. What I don't particularly like, though, is having to deal with 3 different functions to do essentially the same type of analysis, just because one of them implements feature X (but not Y), and the other one Y (but not X).
For ANOVA, I can choose between `oneway()`, `lm()`, `aov()`, `Anova()`, `ezANOVA()`, and probably others. When teaching R, it already is a pain to explain the different options, how they relate to each other (`aov()` is a wrapper for `lm()`), and which function does what:
- oneway() only for single factor designs but with option var.equal=FALSE. No such option in aov() and others, but those functions also for multifactorial designs.
- syntax for repeated measures a bit complicated in aov(), better in Anova()
- convenient SS type I only in aov(), not in Anova()
- convenient SS type II and III only in Anova(), not in aov()
- convenient effect size measures in ezANOVA(), not in others
It would be neat to only have to teach one function with one consistent syntax that does it all. Without convenient SS type III, `ezANOVA()` can't be that function for me because I know that students will be asked to use them at some point ("just cross-check these results that John Doe got with SPSS"). I feel it's better to have the option to make the choice oneself without having to learn yet another syntax for specifying models. The "I know what's best for you" attitude may have its merits, but can be over-protective.
| null | CC BY-SA 2.5 | null | 2011-01-13T11:33:00.883 | 2011-01-13T12:55:49.043 | 2011-01-13T12:55:49.043 | 1909 | 1909 | null |
6213 | 2 | null | 6169 | -1 | null | I don't think there are any appropriate uses of stacked bar charts; grouped bar charts are better, but both are inferior to other plots, depending on what aspect of your data you want to emphasize, and how much data you have.
| null | CC BY-SA 2.5 | null | 2011-01-13T11:56:13.593 | 2011-01-13T11:56:13.593 | null | null | 686 | null |
6214 | 1 | 6216 | null | 18 | 17106 | How should I define a model formula in R when one (or more) exact linear restrictions binding the coefficients are available? As an example, say that you know that b1 = 2*b0 in a simple linear regression model.
| Fitting models in R where coefficients are subject to linear restriction(s)? | CC BY-SA 4.0 | null | 2011-01-13T12:07:18.603 | 2021-01-11T15:01:30.240 | 2021-01-11T15:01:30.240 | 11887 | 339 | [
"r",
"regression",
"modeling",
"restrictions"
] |
6215 | 2 | null | 6189 | 1 | null | Roughly speaking, I would say to a journalist that I could declare the daily level of nitrogen dioxide being sure, after discarding the highest values and the lowest values, that in each one of one-half of the days in that year the observed value is not beyond a distance of IQR/2 from the declared level.
For example, if your first quartile and third quartile are 100 and 124, you could say that daily level is 112 (average of 100 and 124) and assure your interlocutor that in half of the days the error you make isn't greater than 12.
| null | CC BY-SA 2.5 | null | 2011-01-13T12:51:14.577 | 2011-01-16T14:11:18.313 | 2011-01-16T14:11:18.313 | 1219 | 1219 | null |
6216 | 2 | null | 6214 | 18 | null | Suppose your model is
$ Y(t) = \beta_0 + \beta_1 \cdot X_1(t) + \beta_2 \cdot X_2(t) + \varepsilon(t)$
and you are planning to restrict the coefficients, for instance like:
$ \beta_1 = 2 \beta_2$
Inserting the restriction and rewriting the original regression model, you get
$ Y(t) = \beta_0 + 2 \beta_2 \cdot X_1(t) + \beta_2 \cdot X_2(t) + \varepsilon(t) $
$ Y(t) = \beta_0 + \beta_2 (2 \cdot X_1(t) + X_2(t)) + \varepsilon(t)$
introduce a new variable $Z(t) = 2 \cdot X_1(t) + X_2(t)$ and your model with restriction will be
$ Y(t) = \beta_0 + \beta_2 Z(t) + \varepsilon(t)$
In this way you can handle any exact linear restrictions, because each equality constraint reduces the number of unknown parameters by one.
Playing with R formulas, you can impose the restriction directly with the I() function:
```
lm(formula = Y ~ I(1 + 2*X1) + X2 + X3 - 1, data = <your data>)
lm(formula = Y ~ I(2*X1 + X2) + X3, data = <your data>)
```
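The substitution can also be checked numerically. Below is a minimal sketch of the same idea in Python/NumPy (the algebra is language-agnostic; the data and the coefficient values are made up for illustration):

```python
import numpy as np

# Simulate data in which the restriction b1 = 2*b2 holds exactly
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
b0_true, b2_true = 1.5, 0.7            # so b1_true = 2 * 0.7 = 1.4
y = b0_true + 2 * b2_true * x1 + b2_true * x2 + rng.normal(scale=0.1, size=n)

# Restricted fit: regress y on the combined regressor Z = 2*x1 + x2
Z = 2 * x1 + x2
X = np.column_stack([np.ones(n), Z])
(b0_hat, b2_hat), *_ = np.linalg.lstsq(X, y, rcond=None)
b1_hat = 2 * b2_hat                    # recover b1 from the restriction
```

The fitted coefficients satisfy the restriction by construction; only $\beta_0$ and $\beta_2$ are actually estimated.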
| null | CC BY-SA 2.5 | null | 2011-01-13T12:53:38.177 | 2011-01-13T13:19:49.620 | 2011-01-13T13:19:49.620 | 2645 | 2645 | null |
6217 | 1 | null | null | 4 | 207 | In their great book "Wavelet methods for time series analysis" (2006), Percival & Walden state on p. 83 that the first-round pyramid algorithm scaling filter coefficients $\tilde{V}_{i,t}$ can be approximated by
$$\frac{1}{N} \sum_{k=-\frac{N}{4}+1}^{\frac{N}{4}}{\chi_{k}e^{i 2 \pi t k/N}}$$
arguing that $2^{1/2} \tilde{V}_{i,t}$
>
is formed by filtering [the time series] $\{X_{t}\}$ with the low-pass filter $\{g_l\}$ with nominal pass-band $[-1/4, 1/4]$ ...
Why is it $\tilde{V}_{i,t}$ and not $2^{1/2} \tilde{V}_{i,t}$ which is approximated by the above expression?
My understanding is that the approximation is due to limiting the frequency domain of the inverse DFT according to the pass-band of the filter.
| Wavelet analysis, scaling filter: where does the square root of 2 go to? | CC BY-SA 4.0 | null | 2011-01-13T13:03:43.850 | 2023-01-04T17:26:44.433 | 2023-01-04T17:26:44.433 | 362671 | 2765 | [
"wavelet"
] |
6219 | 1 | 6220 | null | 3 | 6559 | A physics application I'm using reports the first-order fit of the three points below as $11.388612x - 301.878$.
```
x, y
35, 0
430, 4861
656, 7000
```
It also shows a field labeled: "RMS: 329.499"
How is that RMS calculated? I tried RMSD as defined [here](http://en.wikipedia.org/wiki/Root_mean_square_deviation) but didn't get the same value.
| What does RMS stand for? | CC BY-SA 2.5 | null | 2011-01-13T14:51:20.863 | 2017-01-02T20:20:29.053 | 2017-01-02T20:20:29.053 | 12359 | 2767 | [
"regression",
"terminology",
"curve-fitting"
] |
6220 | 2 | null | 6219 | 6 | null | That's the root mean square error (RMSE) of the regression.
$$RMSE = \sqrt{\frac{1}{n-k}\sum{(y_i-\hat{y_i})^2}},$$
where $y_i$ is the observed and $\hat{y_i}$ the fitted value for observation $i$, $n$ is the number of observations, and $k$ is the number of parameters fitted (including the constant).
I just tried fitting a straight line by [simple linear regression](http://en.wikipedia.org/wiki/Simple_linear_regression) in another statistics package and got an RMSE of 329.499751.
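For the three points in the question, the formula is easy to verify from scratch (a sketch in Python; the least-squares slope and intercept are recomputed here rather than taken from the question):

```python
import math

x = [35, 430, 656]
y = [0, 4861, 7000]
n = len(x)

# Ordinary least-squares fit of y = intercept + slope * x
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
slope = sxy / sxx
intercept = ybar - slope * xbar

# RMSE with n - k degrees of freedom (k = 2 parameters fitted)
sse = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
rmse = math.sqrt(sse / (n - 2))
print(slope, intercept, rmse)  # slope ~ 11.3886, intercept ~ -301.878, rmse ~ 329.4998
```

Note that with $n-k=1$ here, the RMSE is simply the square root of the residual sum of squares.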
| null | CC BY-SA 2.5 | null | 2011-01-13T15:55:41.003 | 2011-01-13T16:54:03.113 | 2011-01-13T16:54:03.113 | 449 | 449 | null |
6221 | 2 | null | 6219 | 8 | null | RMS stands for the root mean square error. It's calculated in the following way.
- First we calculate the residuals: -96.72, 265.77, -169.05
- Next we calculate the squared residuals: $(-96.72)^2$, $265.77^2$, $(-169.05)^2$
- Then we sum and divide by $n-2=1$
- Take the square root.
---
Further info
A residual is simply the $observed - fitted$. So when x = 35, the observed is 0 and the fitted value is
\begin{equation}
11.388612\times 35 - 301.878 = 96.72
\end{equation}
The residual is then: $0 - 96.72 = -96.72$
| null | CC BY-SA 2.5 | null | 2011-01-13T15:55:47.193 | 2011-01-13T16:42:04.813 | 2011-01-13T16:42:04.813 | 8 | 8 | null |
6222 | 2 | null | 6219 | 4 | null | It's the [RMS](http://en.wikipedia.org/wiki/Root_mean_square) (root mean square) of the residuals of the linear regression.
In R:
```
> x <-c(35, 430, 656)
> y <- c(0, 4861, 7000)
> mod <- lm(y~x)
> mod
Call:
lm(formula = y ~ x)
Coefficients:
(Intercept) x
-301.88 11.39
> sqrt(sum(resid(mod)^2))
[1] 329.4998
```
| null | CC BY-SA 2.5 | null | 2011-01-13T15:59:58.307 | 2011-01-13T16:07:27.117 | 2011-01-13T16:07:27.117 | 582 | 582 | null |
6224 | 1 | null | null | 5 | 9017 | I used the `lmer` function in the `lme4` package in order to assess the effects of 2 categorical fixed effects (1º Animal Group: rodents and ants; 2º Microhabitat: bare soil and under cover) on seed predation (a count dependent variable). I have 2 Sites, with 10 trees per site and 4 seed stations per tree. Site and Tree are my (philosophically) random factors, but given that I have only two levels for Site, it must be treated as a fixed factor. I have questions about how to interpret the results:
- I performed model selection based on QAICc, but the best model (lowest QAICc) does not show any significant fixed effects, while other models with higher QAICc (e.g. the full model) do. Does this make sense?
- Given a fixed factor that is important to the model, how do I distinguish which level of the fixed factor is influencing the response variable?
Finally, does correlation between the fixed factors imply incorrect estimation of the model?
```
FullModel=lmer(SeedPredation ~ AnimalGroup*Microhabitat*Site + (1|Site:Tree) +
(1|obs), data=datos, family="poisson")
QAICc(FM) 104.9896
Generalized linear mixed model fit by the Laplace approximation
Formula: SP ~ AG * MH * Site + (1 | Site:Tree) + (1 | obs)
Data: datos
AIC BIC logLik deviance
101.8 125.6 -40.9 81.8
Random effects:
Groups Name Variance Std.Dev.
obs (Intercept) 0.20536 0.45317
Site:Tree (Intercept) 1.19762 1.09436
Number of obs: 80, groups: obs, 80; Site:Tree, 20
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.01161 0.47608 0.024 0.9805
AGR -18.97679 3130.76500 -0.006 0.9952
MHUC -1.60704 0.63626 -2.526 0.0115 *
Site2 -0.91424 0.74506 -1.227 0.2198
AGR:MHUC 19.92369 3130.76508 0.006 0.9949
AGR:Site2 1.02241 4431.84919 0.000 0.9998
MHUC:Site2 1.80029 0.86235 2.088 0.0368 *
AGR:MHUC:Site2 -3.49042 4431.84933 -0.001 0.9994
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Correlation of Fixed Effects:
(Intr) AGR MHUC Site2 AGR:MHUC AGR:S2 MHUC:S
AGR 0.000
MHUC -0.281 0.000
Site2 -0.639 0.000 0.180
AGR:MHUC 0.000 -1.000 0.000 0.000
AGR:Site2 0.000 -0.706 0.000 0.000 0.706
MHUC:Site2 0.208 0.000 -0.738 -0.419 0.000 0.000
AGR:MHUC:S2 0.000 0.706 0.000 0.000 -0.706 -1.000 0.000
BestModel=lmer(SP ~ AG * MH + (1|Site:Tree) + (1|obs), data=datos,
family = "poisson")
QAICc(M) 101.4419
Generalized linear mixed model fit by the Laplace approximation
Formula: SP ~ AG + AG:MH + (1 | Site:Tree) + (1 | obs)
Data: datos
AIC BIC logLik deviance
100.3 114.6 -44.15 88.3
Random effects:
Groups Name Variance Std.Dev.
obs (Intercept) 0.76027 0.87194
Site:Tree (Intercept) 1.14358 1.06938
Number of obs: 80, groups: obs, 80; Site:Tree, 20
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.5153 0.4061 -1.269 0.205
AGR -18.7146 2603.4397 -0.007 0.994
AGA:MHUC -0.7301 0.5045 -1.447 0.148
AGR:MHUC 17.7221 2603.4397 0.007 0.995
```
| GLMM output interpretation (correct text) | CC BY-SA 3.0 | null | 2011-01-13T16:45:02.370 | 2013-02-15T21:20:15.500 | 2013-02-15T21:20:15.500 | 7290 | null | [
"r",
"mixed-model",
"glmm"
] |
6225 | 1 | 6240 | null | 38 | 23809 | As the question states - Is it possible to prove the null hypothesis? From my (limited) understanding of hypothesis testing, the answer is no, but I can't come up with a rigorous explanation for it. Does the question have a definitive answer?
| Is it possible to prove a null hypothesis? | CC BY-SA 2.5 | null | 2011-01-13T16:46:34.553 | 2016-02-14T22:23:01.000 | 2011-02-12T05:51:37.453 | 183 | 2198 | [
"hypothesis-testing",
"proof",
"equivalence"
] |
6226 | 1 | 6233 | null | 4 | 869 | I'm doing a time series analysis. I'm doing most of my analysis in R, where I can use "NA" to represent "not available" (e.g. a missing data point). But I'm doing some data preparation in OpenOffice; currently, I'm leaving cells blank for missing data. Is there a better way to "declare" that a cell is NA?
| Representing missing data in an OpenOffice spreadsheet | CC BY-SA 2.5 | null | 2011-01-13T16:52:47.640 | 2011-01-13T19:38:36.953 | 2011-01-13T18:21:28.980 | 660 | 660 | [
"r",
"missing-data"
] |
6227 | 2 | null | 6225 | 11 | null | Yes there is a definitive answer. That answer is: No, there isn't a way to prove a null hypothesis. The best you can do, as far as I know, is throw confidence intervals around your estimate and demonstrate that the effect is so small that it might as well be essentially non-existent.
| null | CC BY-SA 2.5 | null | 2011-01-13T16:54:05.220 | 2011-01-13T16:54:05.220 | null | null | 196 | null |
6229 | 2 | null | 6225 | 16 | null | Answer from the mathematical side: it is possible if and only if the hypotheses are "mutually singular".
If by "prove" you mean have a rule that can "accept" (should I say that? :)) $H_0$ with zero probability of making a mistake, then you are searching for what could be called an "ideal test", and this exists:
If you are testing whether a random variable $X$ is drawn from $P_0$ or from $P_1$ (i.e. testing $H_0: X\leadsto P_0$ versus $H_1: X\leadsto P_1$) then there exists an ideal test if and only if $P_1\bot P_0$ ($P_1$ and $P_0$ are "mutually singular").
If you don't know what "mutually singular" means I can give you an example: $\mathcal{U}[0,1]$ and $\mathcal{U}[3,4]$ (uniforms on $[0,1]$ and $[3,4]$) are mutually singular. This means if you want to test
$H_0: X\leadsto \mathcal{U}[0,1]$ versus $H_1: X\leadsto \mathcal{U}[3,4]$
then there exists an ideal test (guess what it is :)): a test that is never wrong!
If $P_1$ and $P_0$ are not mutually singular, then this does not exist (this follows from the "only if" part)!
In non-mathematical terms this means that you can prove the null if and only if the proof is already in your assumptions (i.e. if and only if you have chosen hypotheses $H_0$ and $H_1$ that are so different that a single observation from $H_0$ cannot be identified as one from $H_1$ and vice versa).
| null | CC BY-SA 3.0 | null | 2011-01-13T17:43:22.683 | 2012-06-06T06:39:26.767 | 2012-06-06T06:39:26.767 | 223 | 223 | null |
6230 | 2 | null | 6195 | 4 | null | It looks like a linear regression problem to me too, but what about K nearest neighbors [KNN](http://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm)? You can come up with a distance formula between cars and compute the price as the average over the K (say 3) nearest. A distance formula can be Euclidean-based, e.g. the difference in cylinders plus the difference in doors plus the difference in horsepower, and so on.
If you go with linear regression I would suggest a couple of things:
- Scale the dollar value up to modern day to account for inflation.
- Divide your data into epochs. I'll bet you'll find you need one model for pre-WW2 and one for post-WW2, for example. This is just a hunch though.
- Cross-validate your model to avoid overfitting. Divide your data into 5 chunks. Train on 4 and run the model on the 5th chunk. Sum up the errors, rinse, repeat for the other chunks.
Another idea is to build a hybrid of the models: use the regression and KNN predictions both as inputs and compute the final price as a weighted average or something.
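To make the KNN idea concrete, here is a minimal Python sketch with hypothetical features (cylinders, doors, horsepower) and made-up prices; a real implementation would scale or weight the features before computing distances:

```python
import math

def knn_price(cars, query, k=2):
    """Predict a price as the mean price of the k nearest cars.

    cars is a list of (features, price) pairs; query is a feature tuple.
    Plain Euclidean distance on the raw features -- horsepower would
    dominate in practice unless the features are scaled first.
    """
    ranked = sorted(cars, key=lambda car: math.dist(car[0], query))
    return sum(price for _, price in ranked[:k]) / k

# (cylinders, doors, horsepower) -> price; all numbers made up
cars = [
    ((4, 2, 100), 10000),
    ((4, 4, 110), 12000),
    ((8, 2, 300), 30000),
]
print(knn_price(cars, (4, 2, 105), k=2))  # averages the two nearby 4-cylinder cars
```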
| null | CC BY-SA 2.5 | null | 2011-01-13T18:40:03.760 | 2011-01-13T18:40:03.760 | null | null | 2769 | null |
6231 | 2 | null | 6226 | 4 | null | This won't be specific to OpenOffice, but I have found the ways different spreadsheet software and more traditional stat packages (such as R or SPSS) handle missing data are not always intuitive or even uniform within the software. Anytime I have missing data I typically check certain functions with toy data to see how they are handled.
In Excel IMO it is a waste of time to make explicit missing data declarations as it makes it more difficult to use the built in functions (especially if you use integers to represent missing data). Hence if you asked for Excel I would just say leave the cells blank (although I can't say for sure if this translates directly to OpenOffice).
As always, you will probably be well served to learn to do the data manipulation in R that you are currently doing in spreadsheet software.
| null | CC BY-SA 2.5 | null | 2011-01-13T18:46:07.780 | 2011-01-13T18:46:07.780 | null | null | 1036 | null |
6232 | 1 | null | null | 4 | 816 | We are analysing the effects of different harvesting intensities on soil nutrients. Data was collected over three different years. The first year was before treatments were applied, and the second and third years were 1 year and 6 years post treatment. There are 4 stand types (each replicated 3 times), 6 treatments (retention harvesting of all different intensities) and 6 plots within each stand type*treatment.
I am having some trouble wrapping my head around this question and am wondering which test I should be using?
| Which Test do I use? ANCOVA, repeated measures, multi-way ANOVA? | CC BY-SA 2.5 | null | 2011-01-13T19:28:37.650 | 2011-01-15T12:30:49.283 | 2011-01-13T19:38:15.753 | 5 | null | [
"anova"
] |
6233 | 2 | null | 6226 | 4 | null | If you want to import the data into R, leave the cells blank. If the file is saved as CSV and imported into R, blank cells will be represented as NA automatically.
If you want to do some analysis in OpenOffice, I think you will find @Andy W's advice useful. Built-in OpenOffice functions may behave weirdly if you use some custom NA declaration.
Finally, as @cgillespie pointed out, it is better to do all data preparation in R, even if it is slightly harder. The foremost reason is that this way you will be able to track changes to the original data, which is highly desirable for debugging purposes. Furthermore, if you work out the preparation with care it will be very easy to include new data. For this reason alone I now always do my data preparation only in R. I advise looking into [package reshape](http://cran.r-project.org/web/packages/reshape/index.html); it saves a lot of time for me.
| null | CC BY-SA 2.5 | null | 2011-01-13T19:38:36.953 | 2011-01-13T19:38:36.953 | null | null | 2116 | null |
6234 | 1 | 6235 | null | 33 | 18869 | Is there a visualization model that is good for showing the intersection overlap of many sets?
I am thinking of something like Venn diagrams, but something that might lend itself better to a larger number of sets, such as 10 or more. Wikipedia does show some higher-order Venn diagrams, but even the 4-set diagrams are a lot to take in.
My guess as to the final result of the data is that many of the sets won't overlap, so it is possible that Venn diagrams would be fine -- but I would like to find a computer tool that can generate them. It looks to me like Google Charts doesn't allow that many sets.
| Visualizing the intersections of many sets | CC BY-SA 2.5 | null | 2011-01-13T20:08:50.767 | 2015-07-21T11:43:24.410 | 2011-01-14T12:51:34.317 | null | 2772 | [
"data-visualization",
"dataset"
] |
6235 | 2 | null | 6234 | 19 | null | When you have a large number of sets, I would try something that is more linear and shows the links directly (like a network graph). Flare and Protovis both have utilities to handle these visualizations.
See [this question for some examples](https://stats.stackexchange.com/questions/3158/what-is-this-type-of-circular-link-visualization-called/3159#3159) like this:

| null | CC BY-SA 2.5 | null | 2011-01-13T20:38:45.660 | 2011-01-13T20:44:23.957 | 2017-04-13T12:44:46.083 | -1 | 5 | null |
6236 | 2 | null | 5682 | 16 | null | Full disclosure: I work at SAS.
The IML blog is [http://blogs.sas.com/iml](http://blogs.sas.com/iml).
Both languages are matrix-vector languages with a rich run-time library and the ability to write your own functions. For data analysis tasks and matrix computations, they both provide the necessary tools to help you analyze your data.
The SAS/IML syntax is very similar to the SAS DATA step, so it appeals to SAS programmers. You can also call all of the SAS DATA step functions, and you can call any SAS procedure from within SAS/IML by using the SUBMIT/ENDSUBMIT statements. The SAS/IML Studio application is very nice for developing programs and for creating graphics.
The R community creates and shares a large number of packages, including packages written by top academic researchers. New statistical methods appear in R very quickly. The R community has many help and discussion lists.
The SAS/IML language does not contain every statistical analysis (as a built-in function) because the assumption is that you will call SAS/STAT or SAS/ETS procedures when you need a specialized analysis. For example, SAS/IML does not have functions for mixed modelling, but you can prepare the data in SAS/IML, call the MIXED or GLIMMIX procedure, and then use IML some more to manipulate or modify the output from the procedure.
In chapter 11 (and 16) of my book, I show how to call R from SAS/IML, transfer data back and forth, and generally show how to get the best of both worlds.
| null | CC BY-SA 2.5 | null | 2011-01-13T20:49:25.670 | 2011-01-13T20:49:25.670 | null | null | 2773 | null |
6237 | 2 | null | 6151 | 3 | null | In the spirit of an [earlier response](https://stats.stackexchange.com/questions/3006/factor-analysis-of-dyadic-data/3010#3010), you might be interested in David A Kenny's webpage on [dyadic analysis](http://davidakenny.net/dyad.htm), and models for matched pairs (See Agresti, [Categorical Data Analysis](http://www.stat.ufl.edu/~aa/cda/cda.html), Chapter 10, or this nice [handout](http://www.ed.uiuc.edu/courses/EdPsy490AT/lectures/9matched_02.pdf), by C J Anderson). There's also a lot of literature on sib-pair study design, in genetics, epidemiology, and psychometrics.
As you may know, studies on sibling rivalry also suggest that parents' attitude might play a role, but also that generally sibling relationships in early adulthood might be characterized by independent dimensions (warmth, conflict, and rivalry, according to Stocker et al., 1997). So it may be interesting for you to look at what has been done in psychometrics, especially whether your items share some similarity with previous studies or not. The very first hit on Google with `siblings rivalry scale statistical analysis` was a study on [The Effects of Working Mothers on Sibling Rivalry](http://www.kon.org/urc/v8/tsang.html) which offers some clues on how to handle such data (although I still think that models for matched pairs are better than the $\chi^2$-based approach used in this study).
References
Stocker, CM, Lanthier, RP, Furman, W (1997). [Sibling Relationships in Early Adulthood](http://abutler.net/trc/publications/pdfs/Sibling%20relationships%20in%20early%20adulthood%2057.pdf). Journal of Family Psychology, 11(2), 210-221.
| null | CC BY-SA 2.5 | null | 2011-01-13T22:33:59.487 | 2011-01-13T22:33:59.487 | 2017-04-13T12:44:55.360 | -1 | 930 | null |
6238 | 2 | null | 6180 | 0 | null | When using confidence intervals, my response is always as follows:
In repeated sampling, ___% of intervals so constructed will contain ___. Thus, we have ___% confidence that the true ___ lies within ___–___.
One example is
In repeated sampling, 95% of all intervals so constructed will contain Mu, the true population mean. Thus we have 95% confidence that the population GPA lies between 2.72 and 3.07.
I don't see anything particularly wrong with your response, except I would imagine the last sentence should say exactly 95%, not "more or less" or "at least".
| null | CC BY-SA 2.5 | null | 2011-01-13T22:52:29.760 | 2011-01-13T22:52:29.760 | null | null | 2644 | null |
6239 | 1 | 6241 | null | 28 | 28862 | I have a time series and I want to subset it while keeping it as a time series, preserving the start, end, and frequency.
For example, let's say I have a time series:
```
> qs <- ts(101:110, start=c(2009, 2), frequency=4)
> qs
Qtr1 Qtr2 Qtr3 Qtr4
2009 101 102 103
2010 104 105 106 107
2011 108 109 110
```
Now I will subset it:
```
> qs[time(qs) >= 2010 & time(qs) < 2011]
[1] 104 105 106 107
```
Notice that I got the correct results, but I lost the "wrappings" from the time series (namely start, end, frequency.)
I'm looking for a function for this. Isn't subsetting a time series a common scenario? Since I haven't found one yet, here is a function I wrote:
```
subset.ts <- function(data, start, end) {
ks <- which(time(data) >= start & time(data) < end)
vec <- data[ks]
ts(vec, start=start(data) + c(0, ks[1] - 1), frequency=frequency(data))
}
```
I'd like to hear about improvements or cleaner ways to do this. In particular, I don't like the way I'm hard-coding start and end. I'd rather let the user specify an arbitrary boolean condition.
| Subsetting R time series vectors | CC BY-SA 2.5 | null | 2011-01-13T23:11:58.450 | 2011-01-14T09:35:20.220 | 2011-01-14T06:24:35.500 | 2116 | 660 | [
"r",
"time-series"
] |
6240 | 2 | null | 6225 | 21 | null | If you are talking about the real world & not formal logic, the answer is of course. "Proof" of anything by empirical means depends on the strength of the inference one can make, which in turn is determined by validity of the testing process as evaluated in light of everything one knows about how the world works (i.e., theory). Whenever one accepts that certain empirical results justify rejecting the "null" hypothesis, one is necessarily making judgments of this sort (validity of design; world works in certain way), so having to make the analogous assumptions necessary to justify inferring "proof of the null" is not problematic at all.
So what are the analogous assumptions? Here is an example of "proving the null" that is commonplace in health science & in social science. (1) Define "null" or "no effect" in some way that is practically meaningful. Let's say that I believe that I should conduct myself as if there is no meaningful difference between 2 treatments, t1 & t2, for a disease unless one gives a 3% better chance of recovery than the other. (2) Figure out a valid design for testing whether there is any effect-- in this case, whether there is a difference in recovery likelihood between t1 & t2. (3) Do a power analysis to determine whether what sample size is necessary to generate a sufficiently high likelihood-- one that I am confident relying on given what's at stake -- that I would see the meaningful effect, 3% in my example, assuming it exists. Usually people say power is sufficient if the likelihood of observing a specified effect at a specified alpha is at least 0.80, but the right level of confidence is really a matter of how averse you are to error -- same as it is when you select p-value threshold for "rejecting the null."(4) Perform the empirical test & observe the effect. If it is below the specified "meaningful difference" value -- 3% in my example -- you've "proven" that there is "no effect."
For a good treatment of this matter, see Streiner, D.L. [Unicorns Do Exist: A Tutorial on “Proving” the Null Hypothesis](http://ww1.cpa-apc.org/Publications/Archives/CJP/2003/december/streiner.pdf). Canadian Journal of Psychiatry 48, 756-761 (2003).
| null | CC BY-SA 2.5 | null | 2011-01-13T23:51:59.263 | 2011-01-14T09:17:37.343 | 2011-01-14T09:17:37.343 | 930 | 11954 | null |
6241 | 2 | null | 6239 | 36 | null | Use the `window` function:
```
> window(qs, 2010, c(2010, 4))
Qtr1 Qtr2 Qtr3 Qtr4
2010 104 105 106 107
```
| null | CC BY-SA 2.5 | null | 2011-01-14T00:01:03.723 | 2011-01-14T00:01:03.723 | null | null | 5 | null |