Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
9367 | 2 | null | 9331 | 1 | null | I learnt about the use of splines in regression from the book "Regression Modeling Strategies" by Frank Harrell. Harrell's R package rms allows you to easily fit regression models in which some predictor variables are represented as splines.
| null | CC BY-SA 3.0 | null | 2011-04-08T20:21:18.013 | 2011-04-08T20:21:18.013 | null | null | 3835 | null |
9368 | 2 | null | 9312 | 1 | null | You should use the point estimate of the reliability, not the lower bound or anything else. I guess by lb/ub you mean the 95% CI for the ICC (I don't have SPSS, so I cannot check myself)? It's unfortunate that we also talk of Cronbach's alpha as a "lower bound for reliability", since this might have confused you.
It should be noted that this formula is not restricted to the use of an estimate of ICC; in fact, you can plug in any "valid" measure of reliability (most of the times, it is Cronbach's alpha that is being used). Apart from the NCME tutorial that I linked to in my comment, you might be interested in this recent article:
>
Tighe et al. The standard error of measurement is a more appropriate measure of quality for postgraduate medical assessments than is reliability: an analysis of MRCP(UK) examinations. BMC Medical Education 2010, 10:40
Although it might seem at first sight to barely address your question, it has some additional material showing how to compute the SEM (here with Cronbach's $\alpha$, but it is straightforward to adapt it to the ICC); and, anyway, it's always interesting to look around to see how people use the SEM.
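For concreteness, here is a minimal sketch of the standard computation, SEM = SD × √(1 − reliability), with a normal-approximation band around an observed score (the numbers are illustrative, not taken from the article):

```python
from math import sqrt

def sem_interval(score, sd, reliability, z=1.96):
    """SEM = SD * sqrt(1 - reliability); reliability may be an ICC or
    Cronbach's alpha. Returns the SEM and an approximate 95% band."""
    sem = sd * sqrt(1 - reliability)
    return sem, (score - z * sem, score + z * sem)

# e.g. observed score 75, scale SD 10, ICC 0.84  ->  SEM = 4.0
sem, (low, high) = sem_interval(75, 10, 0.84)
```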
| null | CC BY-SA 3.0 | null | 2011-04-08T20:40:25.080 | 2011-04-08T20:40:25.080 | null | null | 930 | null |
9369 | 2 | null | 9365 | 12 | null | It's a bit difficult for me to see what paper might be of interest to you, so let me try and suggest the following ones, from the psychometric literature:
>
Borsboom, D. (2006). The attack of the psychometricians. Psychometrika, 71, 425-440.
for setting the scene (Why do we need to use statistical models that better reflect the underlying hypotheses commonly found in psychological research?), and
>
Borsboom, D. (2008). Psychometric perspectives on diagnostic systems. Journal of Clinical Psychology, 64, 1089-1108.
for an applied perspective on diagnostic medicine (transition from yes/no assessment as used in the DSM-IV to the "dimensional" approach intended for the DSM-V). A larger review of latent variable models in biomedical research that I like is:
>
Rabe-Hesketh, S. and Skrondal, A. (2008). Classical latent variable models for medical research. Statistical Methods in Medical Research, 17(1), 5-32.
| null | CC BY-SA 3.0 | null | 2011-04-08T21:02:38.377 | 2011-04-08T21:02:38.377 | null | null | 930 | null |
9370 | 2 | null | 9365 | 12 | null | Here are five highly-cited papers from the last 40 years of the [Journal of the Royal Statistical Society, Series C: Applied Statistics](http://www.blackwellpublishing.com/journal.asp?ref=0035-9254) with a clear application in the title that caught my eye while scanning through the Web of Knowledge search results:
- Sheila M. Gore, Stuart J. Pocock and Gillian R. Kerr (1984). Regression Models and Non-Proportional Hazards in the Analysis of Breast Cancer Survival. Vol. 33, No. 2, pp. 176-195. (Cited 100 times) (Free PDF)
- John Haslett and Adrian E. Raftery (1989). Space-Time Modelling with Long-Memory Dependence: Assessing Ireland's Wind Power Resource. Vol. 38, No. 1, pp. 1-50. (Cited 156 times)
- Stuart G. Coles and Jonathan A. Tawn (1994). Statistical Methods for Multivariate Extremes: An Application to Structural Design. Vol. 43, No. 1, pp. 1-48. (Cited 99 times)
- Nicholas Lange and Scott L. Zeger (1997). Non-linear Fourier time series analysis for human brain mapping by functional magnetic resonance imaging. Vol. 46, No. 1, pp. 1-29. (Cited 94 times)
- James P. Hughes, Peter Guttorp and Stephen P. Charles (1999). A Non-Homogeneous Hidden Markov Model for Precipitation Occurrence. Vol. 48, No. 1, pp. 15-30. (Cited 103 times)
| null | CC BY-SA 3.0 | null | 2011-04-08T21:33:23.330 | 2011-04-13T06:57:43.827 | 2011-04-13T06:57:43.827 | 183 | 449 | null |
9371 | 2 | null | 9329 | 3 | null | The first idea is simply to mimic the knock-out strategy from variable importance: test how permuting each attribute degrades the forest's confidence in classifying the object (on OOB data and with some repetitions, obviously). This requires some coding, but is certainly achievable.
However, I feel it is just a bad idea -- the result will probably be variable like hell (without the stabilizing impact of averaging over objects), noisy (for not-so-confident objects the nonsense attributes could have big impacts) and hard to interpret (rules built cooperatively from two or more attributes will probably result in random impacts for each contributing attribute).
Rather than leave you with a negative answer, I would suggest looking at the proximity matrix and the possible archetypes it may reveal -- this seems much more stable and straightforward.
| null | CC BY-SA 3.0 | null | 2011-04-08T23:21:36.193 | 2011-04-08T23:21:36.193 | null | null | null | null |
9372 | 1 | null | null | 2 | 374 | How would I go about finding the confidence intervals around a set of distinct binary occurrences, where each occurrence has a different associated probability and each occurrence is weighted?
To be more specific, we typically determine a milestone budget by assigning reasonable probabilities to each milestone. Each milestone typically has a different probability of success and a different dollar value. For example, if we need to determine the amount of money we need to hold for 3 milestones, we would do this:
$$P(\text{milestone}_1) \text{cost}(\text{milestone}_1)+P(\text{milestone}_2) \text{cost}(\text{milestone}_2)+\dots $$
So it could look something like this: Budget = (10% x $\$$1,000,000) + (50% x $\$$5,000,000) + (20% x $\$$20,000,000) = $\$$6,600,000.
Each milestone is binary in that it either happens or it doesn't. So for the first milestone, we are saying that there is a 10% chance that we will need to pay $\$$1M and a 90% chance that we will pay nothing.
Obviously, we won't ever actually be paying the probability-weighted amount of $\$$6.6M. How would I go about determining the probability range in which the spend is most likely to fall? In other words, if I wanted to demonstrate where the actual spend might fall with a certainty of 80%, how would I go about calculating this? Note that we have more events in my real-world problem...
Thanks in advance for any help. I am not a statistician at all so please forgive my ignorance and any lack of clarity.
| How do I determine confidence intervals around weighted, probabilistic events? | CC BY-SA 3.0 | null | 2011-04-08T23:28:23.310 | 2011-04-09T00:41:18.617 | 2011-04-09T00:27:18.753 | null | null | [
"probability"
] |
9373 | 2 | null | 9276 | 4 | null | There is one general and "in-universe" criterion for goodness of Monte Carlo -- convergence.
Stick to one M and check how the PG behaves as the number of juries grows -- it should converge, and this will show you the number of repetitions needed to get a reasonable (for your application) number of significant digits. Repeat this benchmark for a few other values of M to be sure you weren't just lucky with the selection of M, then proceed to the whole simulation.
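As a concrete illustration of this kind of convergence benchmark, here is a generic sketch (the jury set-up and all parameter values are invented for the example, not taken from the question): the estimate for one fixed configuration should stabilise as the number of repetitions grows.

```python
import random

def mc_estimate(reps, seed, n_jurors=12, p_correct=0.6, majority=7):
    # Fraction of simulated juries in which at least `majority` of the
    # jurors are independently "correct" with probability p_correct.
    rng = random.Random(seed)
    hits = sum(
        sum(rng.random() < p_correct for _ in range(n_jurors)) >= majority
        for _ in range(reps)
    )
    return hits / reps

# Convergence check: grow the repetitions until the leading digits stop moving.
for reps in (100, 1_000, 10_000, 100_000):
    print(reps, mc_estimate(reps, seed=42))
```

The Monte Carlo standard error shrinks like 1/√R, so each extra significant digit costs roughly 100× more repetitions.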
| null | CC BY-SA 3.0 | null | 2011-04-09T00:04:02.477 | 2011-04-09T00:04:02.477 | null | null | null | null |
9374 | 2 | null | 9372 | 2 | null | Basically you would need to make a [probability tree](http://www.google.com/search?q=probability+tree) with the resulting penalty sums in the leaves and sum the whole thing up for a criterion of your choice, possibly using some software when there are more than a few milestones (there will be $2^N$ leaves for $N$ milestones). You will be able to build the distribution too.
For a really huge number of milestones, you'd need a Monte Carlo simulation: simulate the process a huge number of times (using a random number generator to decide whether each milestone fires or not) and then take a histogram of the outputs as an approximation of the distribution.
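A minimal sketch of that Monte Carlo approach, using the probabilities and dollar values from the question (the repetition count and percentile choice are illustrative):

```python
import random

def simulate_spend(probs, costs, reps=100_000, seed=0):
    # Each repetition: fire each milestone independently with its probability
    # and record the total amount paid.
    rng = random.Random(seed)
    return [
        sum(c for p, c in zip(probs, costs) if rng.random() < p)
        for _ in range(reps)
    ]

draws = sorted(simulate_spend([0.1, 0.5, 0.2], [1e6, 5e6, 20e6]))
low, high = draws[len(draws) // 10], draws[9 * len(draws) // 10]  # ~80% range
```

The empirical 10th and 90th percentiles of `draws` give the kind of 80% range asked about, and a histogram of `draws` approximates the full spend distribution.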
| null | CC BY-SA 3.0 | null | 2011-04-09T00:26:56.037 | 2011-04-09T00:26:56.037 | null | null | null | null |
9375 | 2 | null | 9372 | 1 | null | It rather depends on how many milestones you have. But if this number (call it $n$) is small enough and each one either happens or does not, then you can work out the $2^n$ possibilities, computing their probabilities by multiplying the individual probabilities.
So for example the probability of paying out $20,000,000$ is $0.9 \times 0.5 \times 0.2 = 0.09$ or 9%.
You then sort by value and add up the probabilities, to find for example that the probability of paying $20,000,000$ or less is $0.89$; adding up in reverse you find the probability of paying $20,000,000$ or more is $0.20$.
Your table might look like this
```
Prob CumProb RevCum Amount
0.36 0.36 1.00 0
0.04 0.40 0.64 1,000,000
0.36 0.76 0.60 5,000,000
0.04 0.80 0.24 6,000,000
0.09 0.89 0.20 20,000,000
0.01 0.90 0.11 21,000,000
0.09 0.99 0.10 25,000,000
0.01 1.00 0.01 26,000,000
```
This will then let you make statements about confidence.
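The enumeration can be reproduced mechanically; a sketch using the three milestones from the question:

```python
from itertools import product

probs = [0.10, 0.50, 0.20]
costs = [1e6, 5e6, 20e6]

# Probability of each distinct total payout, over all 2^n fire/no-fire patterns.
outcomes = {}
for fired in product([0, 1], repeat=len(probs)):
    pr = 1.0
    for f, p in zip(fired, probs):
        pr *= p if f else (1 - p)
    total = sum(c for f, c in zip(fired, costs) if f)
    outcomes[total] = outcomes.get(total, 0.0) + pr

# Sort by amount and accumulate probabilities, as in the table above.
cum = 0.0
for amount in sorted(outcomes):
    cum += outcomes[amount]
    print(f"{outcomes[amount]:.2f} {cum:.2f} {amount:>12,.0f}")
```

Reading off the cumulative column gives the confidence statements directly, e.g. the probability of paying $20{,}000{,}000$ or less.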
| null | CC BY-SA 3.0 | null | 2011-04-09T00:41:18.617 | 2011-04-09T00:41:18.617 | null | null | 2958 | null |
9376 | 2 | null | 8669 | 0 | null | Canonical Correlation Analysis was one way to go and it works! Credits to @schenectady for this. Thanks a lot for your help.
I want to write this down for future reference and for others who might have a similar query: if you want to perform a regression analysis in such a situation, you should attempt to minimize the squared errors from each of your equations, i.e.
$$\left\{\begin{array}{l} a_0 + a_1X_1 + a_2X_2 + \dots + a_{11}X_{11}\\ b_0 + b_1Y_1 + b_2Y_2 + b_3Y_3 + b_4Y_4\end{array}\right.$$
The simplest way is to use the Excel Solver to arrive at the coefficients, with the aim of minimizing the mean of the squared errors from the above two equations.
| null | CC BY-SA 3.0 | null | 2011-04-09T07:49:30.520 | 2011-04-09T18:43:50.403 | 2011-04-09T18:43:50.403 | 930 | 3859 | null |
9377 | 1 | 9380 | null | 7 | 841 | In the paper
>
M. Avellaneda and J. H. Lee, Statistical arbitrage in the U.S. equities market, July 2008,
in the Appendix on page 46, how does he get the equilibrium standard deviation as follows:
$$\sigma_{eq} = \sqrt{\frac{\text{Variance}(\zeta)}{1 − b^2}}$$
If anyone knows the paper, please explain.
Much appreciated,
| Origin of strange formula for equilibrium standard deviation | CC BY-SA 3.0 | null | 2011-04-09T11:29:27.753 | 2011-04-19T13:04:46.690 | 2011-04-19T13:04:46.690 | 2970 | 862 | [
"regression",
"probability",
"variance",
"stochastic-processes"
] |
9378 | 1 | 9401 | null | 4 | 4826 | I'm working with a CSV which contains approximately 220,000 entries. My aim is to predict one of the attributes (ATT1) using the other 3 (ATT2, ATT3, ATT4).
I've been able to do this using NaiveBayes, but now I feel unsatisfied with the result. The reason is that ATT1 can take one of 6 values (VAL1-6), but these are not evenly distributed in the dataset. I'm afraid this could lead to an imprecise prediction.
How do I select a given number of entries for each value of ATT1 from within RapidMiner?
| How to choose a data subset in RapidMiner? | CC BY-SA 3.0 | null | 2011-04-09T12:49:36.187 | 2017-05-19T12:31:50.697 | 2017-05-19T12:31:50.697 | 101426 | 1522 | [
"dataset",
"rapidminer"
] |
9380 | 2 | null | 9377 | 15 | null | The authors are providing a simple means for estimating the parameters of a mean-reverting Ornstein-Uhlenbeck process via a regression on returns at discretized points in time.
The model they are considering has a representation as a stochastic differential equation of the form [pg. 16, Eq. (12)]
$$
\newcommand{\rd}{\mathrm{d}}
\rd X(t) = \kappa (m - X(t)) \rd t + \sigma \rd W(t)
$$
where $W(t)$ is a standard Brownian motion.
The solution to this SDE is well known and easy to find via Ito's lemma and a technique analogous to integrating factors in ODEs. The solution is [pg. 17, Eq. (13)]
$$
X(t_0 + \Delta t) = e^{-\kappa \Delta t} X(t_0) + (1-e^{-\kappa \Delta t}) m + \sigma \int_{t_0}^{\,t_0 + \Delta t} e^{-\kappa(t_0+\Delta t - s)} \, \rd W(s) .
$$
This is a Gaussian process and so is characterized by its mean and covariance as a function of time. Letting "time go to infinity" (i.e., $\Delta t \to \infty$), we get an equilibrium mean and variance of
$$
\begin{aligned}
\mathbb{E} X(t) &= m \\
\mathbb{V}\mathrm{ar}(X(t)) &= \frac{\sigma^2}{2 \kappa}
\end{aligned}
$$
Now, skipping to the appendix [bottom of page 45], the authors are trying to estimate the parameters by doing a regression using the discrete values of the process and model
$$
X_{n+1} = a + b X_n + \zeta_{n+1} .
$$
Matching up the parameters $a$ and $b$ with the portions from above, we get that
$$
\begin{aligned}
a &= m (1 - e^{-\kappa \Delta t}) \\
b &= e^{-\kappa \Delta t} \\
\mathbb{V}\mathrm{ar}(\zeta) &= \sigma^2 \frac{1-e^{-2\kappa \Delta t}}{2\kappa}
\end{aligned}
$$
Substituting the second equation into the first and solving for $m$ gives $m = a / (1-b)$. Use the same substitution in the third equation and rearrange to get
$$
\sigma^2 = \frac{\mathbb{V}\mathrm{ar}(\zeta) \cdot 2 \kappa}{1 - b^2} \>,
$$
but, recall that the variance of the equilibrium distribution (by looking far into the future) for $X(t)$ is just $\sigma^2 / 2 \kappa$ and so this gives your result.
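To check the algebra numerically, one can simulate the exact discretisation $X_{n+1} = a + bX_n + \zeta_{n+1}$ for known parameters and recover $\sigma_{eq}$ from the regression. The parameter values below are arbitrary choices for the sketch, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
kappa, m, sigma, dt = 2.0, 1.0, 0.5, 1 / 252
b = np.exp(-kappa * dt)
a = m * (1 - b)
zeta_sd = sigma * np.sqrt((1 - b**2) / (2 * kappa))  # sd of zeta from above

# Simulate X_{n+1} = a + b X_n + zeta_{n+1}
n = 20_000
X = np.empty(n)
X[0] = m
for i in range(n - 1):
    X[i + 1] = a + b * X[i] + zeta_sd * rng.normal()

# Regress X_{n+1} on X_n, then back out the equilibrium quantities
b_hat, a_hat = np.polyfit(X[:-1], X[1:], 1)
resid = X[1:] - (a_hat + b_hat * X[:-1])
m_hat = a_hat / (1 - b_hat)
sigma_eq_hat = np.sqrt(resid.var() / (1 - b_hat**2))
# true equilibrium std dev is sigma / sqrt(2 * kappa) = 0.25
```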
---
Addendum: If you're wondering how the expression
$$
\mathbb{V}\mathrm{ar}(\zeta) = \sigma^2 \frac{1-e^{-2\kappa \Delta t}}{2\kappa}
$$
was obtained, it is via the (remarkable and beautiful!) [Ito isometry](http://en.wikipedia.org/wiki/Ito_isometry) and the fact that an Ito integral is a zero-mean martingale; namely, in this instance,
$$
\mathbb{E}\Big(\sigma \int_{t_0}^{\,t_0 + \Delta t} e^{-\kappa(t_0+\Delta t - s)} \, \rd W(s)\Big)^2 = \sigma^2 \int_{t_0}^{\,t_0 + \Delta t} e^{-2 \kappa(t_0 +\Delta t - s)} \, \rd s
$$
where we note that the integrand has been squared on the right-hand side and we "get to replace" $\rd W(s)$ with $\rd s$, converting the problem into one of solving a standard Riemann integral.
| null | CC BY-SA 3.0 | null | 2011-04-09T13:32:30.010 | 2011-04-11T02:17:49.223 | 2011-04-11T02:17:49.223 | 2970 | 2970 | null |
9381 | 1 | null | null | 4 | 149 | I was wondering, from the point of view of dividing the topics of statistical theory into an inference part and a non-inference part, which inference topics and non-inference topics statistical theory covers.
By inference, I mean the task in logic to reach some conclusion from some premises. Probabilistic inference is a way of logical inference using probability.
Can we say statistical theory is just providing probabilistic ways to accomplish logic inference? Or statistical theory and logic inference are overlapping, but neither falls into the other?
Relating to [my previous post regarding decision theory and statistical theory](https://stats.stackexchange.com/questions/9208/what-is-the-relation-between-statistics-theory-and-decision-theory), how are logical inference, decision theory and statistical theory related to each other, and how do they differ?
| Inference and noninference parts of statistical theory | CC BY-SA 3.0 | null | 2011-04-09T14:00:11.930 | 2017-03-15T19:34:48.247 | 2017-04-13T12:44:33.550 | -1 | 1005 | [
"inference"
] |
9383 | 1 | null | null | 2 | 666 | I'm doing 10-fold cross-validation on a dataset. But in some folds there are edge cases where the denominator in the precision-recall calculation is zero (tp + fp = 0).
What are the correct values for precision and recall in this case? And what is the correct way of doing cross-validation (should I include these results when reporting average precision-recall over 10 folds)?
PS: This question is very similar to [What are correct values for precision and recall when the denominators equal 0?](https://stats.stackexchange.com/questions/8025/what-are-correct-values-for-precision-and-recall-when-the-denominators-equal-0)
Thanks in advance : -)
| What are the correct edge case values of precision and recall and how to integrate them into cross validation? | CC BY-SA 3.0 | null | 2011-04-09T15:24:45.393 | 2011-04-09T20:05:43.907 | 2017-04-13T12:44:39.283 | -1 | 4091 | [
"cross-validation",
"precision-recall"
] |
9384 | 2 | null | 9342 | 11 | null | The Ward clustering algorithm is a hierarchical clustering method that minimizes an 'inertia' criterion at each step. This inertia quantifies the sum of squared residuals between the reduced signal and the initial signal: it is a measure of the variance of the error in an l2 (Euclidean) sense. Actually, you even mention it in your question. This is why, I believe, it makes no sense to apply it to a distance matrix that is not an l2 Euclidean distance.
On the other hand, an average linkage or a single linkage hierarchical clustering would be perfectly suitable for other distances.
| null | CC BY-SA 3.0 | null | 2011-04-09T15:57:28.113 | 2011-04-09T15:57:28.113 | null | null | 1265 | null |
9385 | 1 | null | null | 7 | 11210 | I am using the Holt-Winters exponential smoothing technique to forecast expenditure data 2 years into the future. The monthly data has an increasing trend and annual seasonality.
I'm using MS Excel with the Solver add-in to calculate the optimal values of $\alpha$, $\beta$ and $\gamma$ to give the smallest MSE for the forecasts. The optimal values found for $\alpha$ and $\beta$ lie in (0,1) and $\gamma$ is found to be 1.
I am able to calculate the forecasts for the next year (season) because the seasonal indices from the previous year exist. However, the forecasts for the second year are calculated to be zero, since the seasonal indices do not exist once m is greater than 12.
I have discovered that if $\gamma$ is zero, then the seasonals will be periodic, and so could be replicated after the last observed values. Is this the best way to forecast beyond one season after the last observed values? Any advice would be appreciated.
Example data is below. Forecasts are needed for every month up to December 2011. I cannot see how this is possible unless $\gamma$ is zero.
Numbers of Tourists
Period Month No. Tourists (Yt)
1 Jan-99 500
2 Feb-99 543
3 Mar-99 899
4 Apr-99 835
5 May-99 900
6 Jun-99 881
7 Jul-99 1154
8 Aug-99 1586
9 Sep-99 743
10 Oct-99 1104
11 Nov-99 799
12 Dec-99 560
13 Jan-00 514
14 Feb-00 665
15 Mar-00 949
16 Apr-00 975
17 May-00 924
18 Jun-00 724
19 Jul-00 1155
20 Aug-00 1541
21 Sep-00 746
22 Oct-00 944
23 Nov-00 786
24 Dec-00 652
25 Jan-01 479.4
26 Feb-01 644.4
27 Mar-01 815.8
28 Apr-01 1035.4
29 May-01 1000.9
30 Jun-01 793.8
31 Jul-01 1347.3
32 Aug-01 1378
33 Sep-01 798.1
34 Oct-01 1070.5
35 Nov-01 625.3
36 Dec-01 654
37 Jan-02 477.5
38 Feb-02 656.2
39 Mar-02 888.7
40 Apr-02 926.6
41 May-02 1000.1
42 Jun-02 1030.8
43 Jul-02 1123
44 Aug-02 1473.5
45 Sep-02 717.8
46 Oct-02 974.7
47 Nov-02 761.2
48 Dec-02 641.5
49 Jan-03 501.6
50 Feb-03 588.3
51 Mar-03 917.6
52 Apr-03 990
53 May-03 1051
54 Jun-03 764.4
55 Jul-03 1014.2
56 Aug-03 1313.6
57 Sep-03 736.3
58 Oct-03 1042.9
59 Nov-03 685.9
60 Dec-03 621.5
61 Jan-04 492.8
62 Feb-04 722
63 Mar-04 869.9
64 Apr-04 927.9
65 May-04 1028.1
66 Jun-04 883
67 Jul-04 1097.4
68 Aug-04 1398.9
69 Sep-04 834.4
70 Oct-04 1072.3
71 Nov-04 801.9
72 Dec-04 711.2
73 Jan-05 616.1
74 Feb-05 774
75 Mar-05 1088.5
76 Apr-05 956.2
77 May-05 1175.6
78 Jun-05 949.5
79 Jul-05 1120.8
80 Aug-05 1426.2
81 Sep-05 841.5
82 Oct-05 996.6
83 Nov-05 908
84 Dec-05 696.7
85 Jan-06 606.4
86 Feb-06 771.6
87 Mar-06 967.1
88 Apr-06 1235
89 May-06 1216.1
90 Jun-06 945.1
91 Jul-06 1194.4
92 Aug-06 1433.4
93 Sep-06 830.6
94 Oct-06 984.7
95 Nov-06 880.2
96 Dec-06 668.3
97 Jan-07 644.9
98 Feb-07 808
99 Mar-07 998.2
100 Apr-07 1283.9
101 May-07 1080.9
102 Jun-07 989.9
103 Jul-07 1167
104 Aug-07 1568.9
105 Sep-07 951.7
106 Oct-07 1121.4
107 Nov-07 859
108 Dec-07 660.9
109 Jan-08 647.9
110 Feb-08 911.1
111 Mar-08 1201.2
112 Apr-08 1258.1
113 May-08 1177.8
114 Jun-08 1067.6
115 Jul-08 1349.4
116 Aug-08 1702.1
117 Sep-08 982.8
118 Oct-08 1116.5
119 Nov-08 904.7
120 Dec-08 655.9
121 Jan-09 733.75
122 Feb-09 852.67
123 Mar-09 1049.88
124 Apr-09 1377.11
125 May-09 1344.05
126 Jun-09 1030.95
127 Jul-09 1242.56
128 Aug-09 1542.24
129 Sep-09 1016.42
130 Oct-09 2301.41
131 Nov-09 1138.9
132 Dec-09 1032.87
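For reference, in a textbook additive Holt-Winters implementation the m seasonal indices are recycled cyclically, so forecasts are well defined at any horizon, not just one season ahead. A minimal sketch with toy quarterly data and hand-picked smoothing constants (not the Solver optimum for the data above):

```python
def holt_winters_additive(y, m, alpha, beta, gamma, horizon):
    # Initialise level, trend and seasonal indices from the first two seasons.
    level = sum(y[:m]) / m
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / m**2
    season = [y[i] - level for i in range(m)]
    for t in range(len(y)):
        s = season[t % m]
        new_level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - new_level) + (1 - gamma) * s
        level = new_level
    # Seasonal indices wrap around with period m, so any horizon works.
    return [level + (h + 1) * trend + season[(len(y) + h) % m]
            for h in range(horizon)]

# Two full seasons of forecasts from a perfectly periodic toy series:
y = [10, 20, 30, 40] * 6
fc = holt_winters_additive(y, m=4, alpha=0.3, beta=0.1, gamma=0.2, horizon=8)
```

Note that the second season of forecasts simply reuses the same indices, so nothing forces them to zero regardless of the fitted value of gamma.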
| Forecasting beyond one season using Holt-Winters' exponential smoothing | CC BY-SA 3.0 | null | 2011-04-09T15:59:58.363 | 2013-01-16T11:28:44.610 | 2013-01-16T11:28:44.610 | 1352 | 4092 | [
"time-series",
"forecasting",
"excel",
"exponential-smoothing"
] |
9387 | 2 | null | 9365 | 9 | null | On a broader level I would recommend the ["Statistical Modeling: The Two Cultures"][1] paper by Leo Breiman from 2001 (cited 515 times). I know it was covered by the journal club recently and I found it to be really interesting. I've c&p'd the abstract.
>
Abstract. There are two cultures in the use of statistical modeling to reach conclusions from data. One assumes that the data are generated by a given stochastic data model. The other uses algorithmic models and treats the data mechanism as unknown. The statistical community has been committed to the almost exclusive use of data models. This commitment has led to irrelevant theory, questionable conclusions, and has kept statisticians from working on a large range of interesting current problems. Algorithmic modeling, both in theory and practice, has developed rapidly in fields outside statistics. It can be used both on large complex data sets and as a more accurate and informative alternative to data modeling on smaller data sets. If our goal as a field is to use data to solve problems, then we need to move away from exclusive dependence on data models and adopt a more diverse set of tools.
[1]:
https://doi.org/10.1214/ss/1009213726 (open access)
| null | CC BY-SA 4.0 | null | 2011-04-09T16:49:38.487 | 2019-03-02T12:41:01.320 | 2019-03-02T12:41:01.320 | 166514 | 3597 | null |
9388 | 2 | null | 3898 | 1 | null | If the problem at issue consists of testing for the optimal number of factors, Jushan Bai and Serena Ng provide, in several articles, a test based on AIC/BIC-type criteria that minimizes the variance of the error across the different options. To my knowledge they supply the most up-to-date approach to resolving this issue. See also Alexei Onatski, who uses a different method based on the eigenvalues of the factor covariance matrix.
| null | CC BY-SA 3.0 | null | 2011-04-09T18:09:15.317 | 2011-04-09T18:09:15.317 | null | null | 4093 | null |
9390 | 1 | null | null | 9 | 2126 | An English soccer team plays a series of matches against different opponents of varying ability. A bookmaker offers odds for each match as to whether it will be a home win, away win, or draw. Part-way through the season, the team has played $n$ matches and has drawn $k$ of them, which is more than might be expected from the odds.
What is the probability that the bookmaker is mis-pricing the odds on these matches, rather than just being unlucky? If the bookmaker continues to price the team's remaining matches in a similar way, and I bet $\$1$ that each one will be a draw, what is my expected return?
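For the first part, one way to quantify "unlucky" is the tail probability of observing at least k draws under the bookmaker's implied draw probabilities. These generally differ match by match, so the draw count is Poisson-binomial rather than binomial; the tail can be computed exactly by dynamic programming. A sketch with hypothetical implied probabilities:

```python
def prob_at_least(k, probs):
    # Poisson-binomial tail via dynamic programming:
    # dp[j] = P(exactly j draws among the matches processed so far).
    dp = [1.0] + [0.0] * len(probs)
    for p in probs:
        for j in range(len(dp) - 1, 0, -1):
            dp[j] = dp[j] * (1 - p) + dp[j - 1] * p
        dp[0] *= 1 - p
    return sum(dp[k:])

# e.g. 10 matches with implied draw probabilities around 0.25, 6 draws seen:
implied = [0.22, 0.25, 0.28, 0.24, 0.26, 0.25, 0.23, 0.27, 0.25, 0.25]
tail = prob_at_least(6, implied)
```

A small tail probability suggests the implied probabilities understate the true draw chance, though it does not by itself give the posterior probability of mispricing (that would require a prior, i.e. a Bayesian treatment).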
| What's the probability that a bookmaker is mispricing odds on soccer games? | CC BY-SA 4.0 | null | 2011-04-09T18:24:23.090 | 2018-06-04T01:09:24.283 | 2018-06-04T01:09:24.283 | 116107 | null | [
"probability",
"games",
"gambling"
] |
9391 | 2 | null | 9390 | 0 | null | Bookmakers use an overround, so they don't actually care what the result is because they win either way. That is why you never meet a poor bookie. If a bookmaker is mispricing draws, your ability to make a profit would depend on the odds the bookmaker was offering and on whether the profits generated would cover the times you lose.
| null | CC BY-SA 3.0 | null | 2011-04-09T18:49:59.820 | 2011-04-09T18:49:59.820 | null | null | 3597 | null |
9392 | 1 | null | null | 0 | 237 | My data looks like this (F = feature):
```
F1 F2 F3 F4 F5 F6 F7 F8....
ID1 0.67 0.76 0.3 0.54 0.21 0.88 0.97 0.45....
ID2 0.76 0.68 0.10 0.45 0.12 0.44 0.79 0.54....
ID3 0.67 0.76 0.3 0.54 0.21 0.88 0.68 0.76....
ID4 0.67 0.10 0.3 0.45 0.3 0.88 0.97 0.45....
...
...
...
```
I have about 40 features (I have only shown 8 here). If I set a threshold of, say, 4 features, what I am looking for is a combination of 4 features which, together, are most relevant and significant in the dataset. I need some kind of score that measures how good a combination of features is; this is what I meant by the confidence score (or whatever we may call it). So instead of selecting 1 feature I want to select a combination of 4 features. For example:
```
F1-F4-F9-F12 = 0.92
F2-F3-F7-F6 = 0.85
F5-F3-F4-F8 = 0.667
```
Here, F1-F4 is not subtraction; I am just writing the features together. How to obtain the scores above is exactly my question: I do not know what kind of test can or should be used.
How can I go about it? Thanks
| How to get scored combination of features | CC BY-SA 3.0 | null | 2011-04-09T18:54:22.423 | 2011-04-13T02:59:16.640 | 2011-04-11T20:52:30.830 | 3111 | 3111 | [
"feature-selection",
"text-mining"
] |
9393 | 2 | null | 9383 | 1 | null | Instead of using CV to estimate precision and recall, use it to obtain the expected TP, TN, FP, and FN rate. Then use those values to compute the expected precision and recall and the standard errors. (Taylor expansions come in handy for the latter.)
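A minimal sketch of the pooling idea with hypothetical per-fold counts (summing raw counts across folds before dividing, which avoids the undefined per-fold ratios):

```python
def pooled_precision_recall(fold_counts):
    # fold_counts: iterable of (tp, fp, fn) tuples, one per fold.
    # Pooling the counts first sidesteps the tp + fp == 0 edge case that
    # makes precision undefined within an individual fold.
    tp = sum(c[0] for c in fold_counts)
    fp = sum(c[1] for c in fold_counts)
    fn = sum(c[2] for c in fold_counts)
    precision = tp / (tp + fp) if tp + fp else float("nan")
    recall = tp / (tp + fn) if tp + fn else float("nan")
    return precision, recall

# One fold with tp + fp == 0 no longer poisons the average:
p, r = pooled_precision_recall([(0, 0, 2), (8, 2, 0)])
```

Note that this "micro-averaged" estimate weights folds by their counts, which differs from averaging per-fold precision and recall.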
| null | CC BY-SA 3.0 | null | 2011-04-09T20:05:43.907 | 2011-04-09T20:05:43.907 | null | null | 3567 | null |
9394 | 2 | null | 9365 | 8 | null | From a genetic epidemiology perspective, I would now recommend the following series of papers about [genome-wide association studies](http://en.wikipedia.org/wiki/Genome-wide_association_study):
- Cordell, H.J. and Clayton, D.G. (2005). Genetic association studies. Lancet 366, 1121-1131.
- Cantor, R.M., Lange, K., and Sinsheimer, J.S. (2010). Prioritizing GWAS results: A review of statistical methods and recommendations for their application. The American Journal of Human Genetics 86, 6–22.
- Ioannidis, J.P.A., Thomas, G., Daly, M.J. (2009). Validating, augmenting and refining genome-wide association signals. Nature Reviews Genetics 10, 318-329.
- Balding, D.J. (2006). A tutorial on statistical methods for population association studies. Nature Reviews Genetics 7, 781-791.
- Green, A.E. et al. (2008). Using genetic data in cognitive neuroscience: from growing pains to genuine insights. Nature Reviews Neuroscience 9, 710-720.
- McCarthy, M.I. et al. (2008). Genome-wide association studies for complex traits: consensus, uncertainty and challenges. Nature Reviews Genetics 9, 356-369.
- Psychiatric GWAS Consortium Coordinating Committee (2009). Genomewide Association Studies: History, Rationale, and Prospects for Psychiatric Disorders. American Journal of Psychiatry 166(5), 540-556.
- Sebastiani, P. et al. (2009). Genome-wide association studies and the genetic dissection of complex traits. American Journal of Hematology 84(8), 504-15.
- The Wellcome Trust Case Control Consortium (2007). Genome-wide association study of 14,000 cases of seven common diseases and 3,000 shared controls. Nature 447, 661-678.
- The Wellcome Trust Case Control Consortium (2010). Genome-wide association study of CNVs in 16,000 cases of eight common diseases and 3,000 shared controls. Nature 464, 713-720.
| null | CC BY-SA 3.0 | null | 2011-04-09T20:39:39.403 | 2011-04-09T20:39:39.403 | null | null | 930 | null |
9395 | 1 | null | null | 8 | 429 | I don't know what is the appropriate term for my question. The scenario is described as following.
In the analysis there is one dependent variable Y and two independent variables, X1 and X2.
All three variables are continuous.
I converted X1 into a categorical variable with three levels: A, B, and C. It was found that Y and X2 are positively correlated in groups A and B, but negatively correlated in group C.
I was told that converting a continuous variable into a categorical one is generally a bad idea, and I understand this. My question is, how can I demonstrate the above pattern without breaking X1 into categories? I was advised to go with multiple regression, but I still don't know how to demonstrate this kind of relationship among the three variables with multiple regression.
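One standard way to express "the slope of X2 depends on X1" without categorising X1 is to add an X1:X2 interaction term to the multiple regression; the fitted X2 slope, b2 + b3·X1, then changes continuously (and can change sign) as X1 moves. A sketch on synthetic data built to mimic the described pattern (all numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.uniform(0, 10, n)
x2 = rng.uniform(0, 10, n)
# Simulated truth: the x2 slope is 3 - 0.8*x1, positive for small x1 and
# negative for large x1 (mimicking groups A/B versus group C).
y = 2.0 + 1.0 * x1 + (3.0 - 0.8 * x1) * x2 + rng.normal(0, 0.5, n)

# Fit y ~ 1 + x1 + x2 + x1:x2 by ordinary least squares.
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
b0, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]

def x2_slope(v):
    # Fitted slope of x2 at a given value of x1.
    return b2 + b3 * v
```

Plotting `x2_slope` over the range of X1 then demonstrates the sign change without any thresholds.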
| How can I demonstrate non-linearity without categorising a predictor? | CC BY-SA 4.0 | null | 2011-04-09T23:20:10.947 | 2020-02-17T00:16:09.207 | 2020-02-17T00:16:09.207 | 11887 | 400 | [
"regression",
"categorical-data",
"data-visualization",
"nonlinear-regression",
"continuous-data"
] |
9396 | 1 | null | null | 6 | 2895 | In the paper
>
M. Avellaneda and J. H. Lee, Statistical arbitrage in the U.S. equities market, July 2008,
in the Appendix on page 44, I have some questions.
First he runs the regression of stock-return ($R_n^S$) with index/ETF-return ($R_n^I$).
$R_n^S = \beta_0 + \beta R_n^I + \epsilon_n, ~~~~~n=1,2,...,60$
Then he defines an auxiliary process $X_n$ as sum of the residuals from regression of $R_n$, and estimates it to be a mean reverting process.
$X_n = \sum_{j=1}^{n} \epsilon_j ~~~~~n=1,2,...,60$
I have two related questions.
What is the significance of taking the cumulative sum of the residuals, as opposed to taking just the residuals, as the mean-reverting process? I have a basic intuition but lack a good understanding.
Second, he says that the regression on stock returns "forces" the residuals to have mean zero. Why is this, and how does it imply that the sum of all the residuals, $X_{60}$, equals 0?
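For context on the second point: whenever the regression includes an intercept, the OLS normal equations force the residuals to sum to zero, so the cumulative-residual process ends exactly at zero. A quick numerical check on synthetic returns (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
R_I = rng.normal(0.0, 0.01, 60)                       # ETF returns (synthetic)
R_S = 0.001 + 0.9 * R_I + rng.normal(0.0, 0.005, 60)  # stock returns

beta, beta0 = np.polyfit(R_I, R_S, 1)   # OLS with an intercept term
resid = R_S - (beta0 + beta * R_I)
X = np.cumsum(resid)                    # auxiliary process X_n

# With an intercept, sum(resid) == 0 up to rounding, hence X[59] == 0.
```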
| Why are cumulative residuals from regression on stock and index returns mean reverting | CC BY-SA 3.0 | null | 2011-04-09T23:24:44.063 | 2011-07-09T18:29:14.500 | null | null | 862 | [
"regression",
"mean",
"stochastic-processes"
] |
9397 | 2 | null | 9395 | 5 | null | Converting a continuous variable into categorical may be a bad idea, but may be a good idea as well, this depends on the problem. When the relationships of the variable can be best described using thresholds, categorisation may be one of the best options.
You wrote that in different categories of X1 the correlation between Y and X2 is very different. This is a clear indication of a non-linear relationship between Y, X1 and X2. Thus multiple linear regression is probably not the best method to use here.
In any case I suggest you visualize your data (maybe using a [circles plot](http://addictedtor.free.fr/graphiques/RGraphGallery.php?graph=73) or a [coloured scatterplot](http://www.cas.manchester.ac.uk/restools/instruments/aerosol/sp2/results/singleplot/index.html)). You may then continue with machine learning, or with modelling methods that suit what you know about your data.
| null | CC BY-SA 3.0 | null | 2011-04-09T23:49:17.103 | 2011-04-09T23:49:17.103 | null | null | 3911 | null |
9398 | 1 | 9406 | null | 15 | 3042 | Suppose you get to observe "matches" between buyers and sellers in a market. You also get to observe characteristics of both buyers and sellers which you would like to use to predict future matches & make recommendations to both sides of the market.
For simplicity, assume there are N buyers and N sellers and that each finds a match. Then there are N matches and N(N-1) non-matches. The all-inclusive training dataset has N + N(N-1) = N^2 observations, which can be prohibitively large. It would seem that randomly sampling from the N(N-1) non-matches and training an algorithm on that reduced data could be more efficient. My questions are:
(1) Is sampling from the non-matches to build a training dataset a reasonable way to deal with this problem?
(2) If (1) is true, is there a rigorous way to decide how big a chunk of the N(N-1) non-matches to include?
| Supervised learning with "rare" events, when rarity is due to the large number of counter-factual events | CC BY-SA 3.0 | null | 2011-04-09T23:31:25.733 | 2011-04-10T17:54:19.620 | 2011-04-10T17:54:19.620 | 919 | 4095 | [
"machine-learning"
] |
9399 | 2 | null | 9398 | 1 | null | Concerning (1). You need to keep positive and negative observations if you want meaningful results.
(2) There is no wiser method of subsampling than the uniform distribution if you don't have any a priori knowledge about your data.
| null | CC BY-SA 3.0 | null | 2011-04-09T23:47:33.450 | 2011-04-10T00:28:15.327 | null | null | 3896 | null |
9400 | 1 | 9403 | null | 6 | 387 | My actual project is a bit complicated, but I'll explain by analogy (which I hope facilitates response):
I have 3 substances, say water, motor oil, and ethanol. For each substance, I have 5 samples in a beaker (total 15 beakers). I heat all the beakers on a hot-plate up to 70 degrees Celsius, and over the next hour, I measure the temperature of the fluid in each beaker at 5 minute intervals.
Newtonian cooling provides me with a good prediction about these temperature data, namely that the temperature of the fluid in each beaker should follow an exponential decay: y = a + e^(-kt), where a is room temperature.
I want to estimate the value k for each substance and test the hypothesis that k1 > k2 > k3 (1, 2, 3 corresponding to my three substances). The natural method of estimating k seems to be computing a non-linear regression on each substance's data, or possibly log-transforming all the data and then just computing a simple linear regression. However, there are problems.
Some questions:
- Given the obvious autocorrelation in the longitudinal data (confirmed by my (P)ACF plots of course), must I compute an AR term and filter my data prior to computing the regression?
- Assuming I compute this autoregression term, how do I compute it for five independent sets of data (the five beakers of a given substance)? I could average the five beakers together and then compute the regression, but this screws up my AR term (assuming I need one) and also throws off my estimate of the actual within-beaker variance from the model.
- What completely wrong-headed assumption(s) have I worked in here...?
| How can I compute regression for several longitudinal data sets (thus, with auto-correlated error)? | CC BY-SA 3.0 | null | 2011-04-10T00:58:48.477 | 2011-04-10T17:25:16.590 | 2011-04-10T03:07:18.390 | 3911 | 4096 | [
"regression",
"autocorrelation",
"nonlinear-regression",
"panel-data",
"exponential-distribution"
] |
9401 | 2 | null | 9378 | 4 | null | Use the Sample operator with the Balance checkbox. You can set the sample size per class that way (to a balanced one)
@steffen, the mandate for this site covers stats AND stats software. There are tons of R questions on here, so it's fair to ask questions about other software too.
| null | CC BY-SA 3.0 | null | 2011-04-10T01:21:50.127 | 2011-04-10T01:21:50.127 | null | null | 74 | null |
9402 | 2 | null | 9342 | 2 | null | Another way of thinking about this, which might lend itself to an adaptation for $\ell_1$, is that the choice of the mean comes from the fact that the mean is the point that minimizes the sum of squared Euclidean distances. If you're using $\ell_1$ to measure the distance between time series, then you should be using a center that minimizes the sum of squared $\ell_1$ distances.
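A quick numerical sketch (in Python; the data and function names are made up for illustration): under the closely related criterion of minimizing the sum of plain (unsquared) $\ell_1$ distances, the element-wise median is the minimizing center, which is why the median is often taken as the $\ell_1$ analogue of the mean.

```python
import random

random.seed(0)
# Twenty hypothetical "time series" of length 5, one of them a gross outlier.
series = [[random.gauss(0, 1) for _ in range(5)] for _ in range(20)]
series[0] = [50.0] * 5

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def total_l1_cost(center):
    return sum(l1(s, center) for s in series)

n, m = len(series), len(series[0])
mean_center = [sum(s[j] for s in series) / n for j in range(m)]
median_center = [sorted(s[j] for s in series)[n // 2] for j in range(m)]

# The element-wise median attains a lower total (unsquared) l1 cost,
# since the outlying series drags the mean but barely moves the median.
print(total_l1_cost(median_center) <= total_l1_cost(mean_center))  # True
```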
| null | CC BY-SA 3.0 | null | 2011-04-10T02:10:19.477 | 2011-04-10T02:10:19.477 | null | null | 139 | null |
9403 | 2 | null | 9400 | 3 | null | As we have strong reasons to believe that the cooling will follow the $y(t) = a + e^{-kt}$ function for each beaker I would first check if this model fits the data well indeed.
If it does I wouldn't bother with analysing the autocorrelation at all, but focus on the estimation of $k_1$, $k_2$ and $k_3$, and testing the hypothesis about them.
To estimate $k_1$, $k_2$ and $k_3$ you need a non-linear model. Your idea of log transformation followed by linear modelling is best when the error (difference between the measured $y$ temperature and the one predicted by the formula) is proportional to the temperature. However, I suspect that the error will be primarily due to temperature measurement and thus normally distributed with the same variance for any temperature (you need to check this). If so, a non-linear model would be more appropriate.
A model using the above function will give you estimates for the parameters of the cooling of a single beaker, $a$ and $k$. We may however assume that $a$ should be the same for each beaker, that the $k$s should be similar for the same substance, and that the standard deviation ($\sigma$) is the same across all temperature measurements. These can be expressed in a model accounting for all the beakers at the same time (the second index $j$ is beaker ID):
$$y_j(t) = a + e^{-(k_i + \alpha_j)t} + \epsilon$$
where
$\epsilon$ is normally distributed error of SD $\sigma$, $k_i$ is one of 3 mean $k$ values for substance $i$, $\alpha_j$ is normally distributed random deviation of a specific beaker from the $k_i$ substance mean, with a substance specific SD ($\sigma_{\alpha{}i}$). This is now a non-linear mixed effect model, that can be fitted using various software. After this you have the $k_i$ values and their standard errors.
The next question is how to test the hypothesis that $k_1 > k_2 > k_3$. It may be “cleaner” to formulate such a hypothesis in the Bayesian way. However you used the word test, so you probably want a significance test – but in order to do that you have to have a more specific alternative hypothesis (or family of hypotheses).
| null | CC BY-SA 3.0 | null | 2011-04-10T02:37:10.260 | 2011-04-10T13:02:01.177 | 2011-04-10T13:02:01.177 | 3911 | 3911 | null |
9404 | 2 | null | 9390 | 9 | null | The answer to your question depends intricately on what information and assumptions you are going to use. This is because the result of a game is an extraordinarily complicated process. It can become arbitrarily complicated depending on what information you have about:
- Players in the particular team - perhaps even particular combinations of players may be relevant.
- Players in other teams
- Past history of the league
- How stable the team's players are - do players keep getting selected and dropped, or is it the same 11.
- The time that you place your bet (during the game? before? how much before? what info is lost from betting before to betting on the day?)
- some other relevant feature of soccer which I have omitted.
The odds that a book-maker gives are not a reflection of the book-maker's own probabilities (note that the implied probabilities across all outcomes typically sum to more than one, which is impossible if they are true probabilities). A book-maker will adjust the odds down when someone bets on a draw, and adjust them up when someone bets on a non-draw. Thus, the odds are a reflection of the odds of the gamblers (who use that book-maker) as a whole. So it is not the bookmaker who is miss-pricing per se, it is the gambling collective - or the "average gambler".
Now if you are willing to assume that whatever "causal mechanism" is resulting in a draw remains constant across the season (reasonable? probably not...), then a simple mathematical problem is obtained (but note there is no reason for this to be "more right" than some other simplifying assumption). To remind us that this is the assumption being used, an $A$ will be put on the conditioning side of the probabilities. Under this assumption the binomial distribution applies:
$$P(\text{k Draws in n matches}|\theta,A)={n \choose k}\theta^{k}(1-\theta)^{n-k}$$
And we want to calculate the following
$$P(\text{next match is a draw}|\text{k Draws in n matches},A)$$
$$=\int_{0}^{1}P(\text{next match is a draw}|\theta,A)P(\theta|\text{k Draws in n matches},A)d\theta$$
where
$$P(\theta|\text{k Draws in n matches},A)=P(\theta|A)\frac{P(\text{k Draws in n matches}|\theta,A)}{P(\text{k Draws in n matches}|A)}$$
is the posterior for $\theta$. Now in this case, it is fairly obvious that it is possible for a draw to happen, and also possible for it to not happen, so a uniform prior is appropriate (unless there is extra information we wish to include beyond the results of the season) and we set $P(\theta|A)=1$. The posterior is then given by a beta distribution (where $B(\alpha,\beta)$ is the [beta function](http://en.wikipedia.org/wiki/Beta_function))
$$P(\theta|\text{k Draws in n matches},A)=\frac{\theta^{k}(1-\theta)^{n-k}}{B(k+1,n-k+1)}$$
Given $\theta$ and $A$ the probability that the next match is a draw is just $\theta$ so the integral becomes:
$$\int_{0}^{1}\theta \frac{\theta^{k}(1-\theta)^{n-k}}{B(k+1,n-k+1)}d\theta=\frac{B(k+2,n-k+1)}{B(k+1,n-k+1)}=\frac{k+1}{n+2}$$
and hence the probability is just:
$$P(\text{next match is a draw}|\text{k Draws in n matches},A)=\frac{k+1}{n+2}$$
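As a sanity check on the algebra, the integral can be evaluated numerically and compared with $(k+1)/(n+2)$; a sketch in Python (the values of k and n are arbitrary, chosen only for illustration):

```python
import math

def posterior_predictive_draw(k, n, steps=100_000):
    """Midpoint-rule evaluation of the integral of theta against the
    Beta(k+1, n-k+1) posterior density."""
    # log B(k+1, n-k+1) via log-gamma, for numerical stability
    log_b = math.lgamma(k + 1) + math.lgamma(n - k + 1) - math.lgamma(n + 2)
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += t * t ** k * (1 - t) ** (n - k)
    return total * h / math.exp(log_b)

# e.g. 8 draws observed in 30 matches:
k, n = 8, 30
print(abs(posterior_predictive_draw(k, n) - (k + 1) / (n + 2)) < 1e-6)  # True
```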
But note that it depends on $A$ - the assumptions that were made. Call the "priced odds" a probability conditional on some other unknown complex information, say $B$. So if the published odds are different to the the above fraction, then this says that $A$ and $B$ lead to different conclusions, so both can't be right about the "true outcome" (but both can be right conditional on the assumptions each made).
THE KILLER BLOW
This example showed that the answer to your question boils down to deciding whether $A$ is "more accurate" than $B$ in describing the mechanics of the soccer game. This will happen regardless of what the proposition $A$ happens to be. It always boils down to the question "whose assumptions are right, the gambling collective's or mine?" That question is basically unanswerable until you know exactly what the proposition $B$ consists of (or at least some key features of it). For how can you compare something that is known with something that is not?
UPDATE: An actual answer :)
As @whuber has cheekily pointed out, I haven't actually given an expected value here - so this part simply completes that part of my answer. If one was to assume that $A$ is true with priced odds of $Q$, then you would expect, in the next game to receive
$$Q\times P(\text{next match is a draw}|\text{k Draws in n matches},A)-1$$
$$=Q\times \frac{k+1}{n+2}-1=\frac{Q(k+1)-n-2}{n+2}$$
Now if you assume that the value of $Q$ is based on the same model as yours then we can predict exactly how $Q$ will change into the future. Suppose $Q$ was based on a different prior to the uniform one, say $Beta(\alpha_Q,\beta_Q)$, then the corresponding probability is
$$P(\text{next match is a draw}|\text{k Draws in n matches},A_Q)=\frac{k+\alpha_Q}{n+\alpha_Q+\beta_Q}$$
with expected return of
$$\frac{Q(k+\alpha_Q)-n-\alpha_Q-\beta_Q}{n+\alpha_Q+\beta_Q}$$
Now if we make the "prior weight" $\alpha_Q+\beta_Q = \frac{N}{2}$ where $N$ is the length of the season (this will allow the "miss-pricing" to continue into the remainder of the season) and set the expected return to zero we get:
$$\alpha_Q=\frac{2n+N}{2Q}-k$$
(NOTE: unless this is the actual model, $\alpha_Q$ will depend on when this calculation was done, as it depends on $n,k,Q$ which will vary over time). Now we are able to predict how $Q$ will be adjusted into the future, it will add $1$ to the denominator for each match, and $1$ to the numerator if the match was a draw. So the expected odds after the first match are:
$$(1+\frac{n+\beta_Q-k+1}{k+\alpha_Q})\frac{n-k+\beta_Q}{n+\alpha_Q+\beta_Q}+(1+\frac{n+\beta_Q-k}{k+\alpha_Q+1})\frac{k+\alpha_Q}{n+\alpha_Q+\beta_Q}$$
$$=1+\frac{n+\beta_Q-k}{k+\alpha_Q}\left(1+\frac{2}{(2n+N)(k+\alpha_Q+1)}\right)\approx 1+\frac{n+\beta_Q-k}{k+\alpha_Q}$$
That is the odds won't change much over the season. Using this approximation, we get the expected return over the remainder of the season as:
$$(N-n)\frac{Q(k+1)-n-2}{n+2}$$
But remember that this is based on the overly simplistic model of a draw (note: this does not necessarily mean that it will be a "crap" predictor). There can be no unique answer to your question, because there has been no specified model, and no specified prior information (e.g. how many people use this bookie? what is the bookie's turnover? how will my bets influence the odds they price?). The only thing which has been specified is the data from one season, and that for "some unspecified model" the probabilities are inconsistent with those implied by the odds pricing.
| null | CC BY-SA 3.0 | null | 2011-04-10T03:24:53.640 | 2011-04-14T03:21:44.263 | 2011-04-14T03:21:44.263 | 2392 | 2392 | null |
9405 | 1 | null | null | 0 | 2305 | I have two random Poisson variables $x_1$ and $x_2$ with values 10 and 25 respectively. I am interested in using a likelihood ratio test to test the null hypothesis $\lambda_1=\lambda_2$ versus the alternative hypothesis that $\lambda_1$ is not equal to $\lambda_2$.
I want to use simulation to calculate power and alpha values. I would like to do it in R, so any reference to R code will be appreciated. Thanks in advance
| Simulation of maximum likelihood ratio test to test two poisson random variables | CC BY-SA 3.0 | null | 2011-04-10T05:15:44.633 | 2011-04-11T17:48:03.737 | 2011-04-10T12:18:59.777 | 3911 | 4098 | [
"r",
"maximum-likelihood",
"poisson-distribution",
"statistical-power",
"likelihood-ratio"
] |
9406 | 2 | null | 9398 | 13 | null | If I understand correctly, you have a two class classification problem, where the positive class (matches) is rare. Many classifiers struggle with such a class imbalance, and it is common practice to sub-sample the majority class in order to obtain better performance, so the answer to the first question is "yes". However, if you sub-sample too much, you will end up with a classifier that over-predicts the minority positive class, so the best thing to do is to choose the sub-sampling ratio to maximise performance, perhaps by minimising the cross-validation error where the test data has not been sub-sampled so you get a good indication of operational performance.
If you have a probabilistic classifier, that gives an estimate of the probability of class membership, you can go one better and post-process the output to compensate for the difference between class frequencies in the training set and in operation. I suspect that for some classifiers, the optimal approach is to optimise both the sub-sampling ratio and the correction to the output by optimising the cross-validation error.
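A sketch of that post-processing step (in Python; the formula is the standard prior-correction for sampling-biased training sets, which the answer does not spell out explicitly): rescale the predicted odds by the ratio of operational class odds to training-set class odds.

```python
def correct_probability(p_train, prior_train, prior_true):
    """Re-calibrate a predicted class probability for a change in class prior.

    p_train:     predicted P(positive) under the training-set class balance
    prior_train: fraction of positives in the (sub-sampled) training set
    prior_true:  fraction of positives expected in operation
    """
    # Multiply the predicted odds by the operational-to-training odds ratio.
    odds = (p_train / (1 - p_train)) \
        * (prior_true / (1 - prior_true)) \
        / (prior_train / (1 - prior_train))
    return odds / (1 + odds)

# Trained on a balanced 50/50 sub-sample but deployed where only 1% of
# pairs are matches, a predicted 0.9 deflates to about 0.083:
print(round(correct_probability(0.9, 0.5, 0.01), 3))  # 0.083
```

When the training and operational priors coincide, the correction leaves the probability unchanged.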
Rather than sub-sampling, for some classifiers (e.g. SVMs) you can give different weights to positive and negative patterns. I prefer this to sub-sampling as it means there is no variability in the results due to the particular sub-sample used. Where this is not possible, use bootstrapping to make a bagged classifier, where a different sub-sample of the majority class is used in each iteration.
The one other thing I would say is that commonly where there is a large class imbalance, false negative errors and false positive error are not equally bad, and it is a good idea to build this into the classifier design (which can be accomplished by sub-sampling or weighting patterns belonging to each class).
| null | CC BY-SA 3.0 | null | 2011-04-10T08:29:11.100 | 2011-04-10T08:29:11.100 | null | null | 887 | null |
9407 | 1 | 9408 | null | 4 | 13484 | I'm trying to compute the minimum sample size for a psychometric test based on 7 point Likert scales. I'd like to run ANOVA on each scale to look for differences between groups.
Most online survey sample size calculators seem to be designed for polls, e.g. Yes/No, Agree/Disagree. They take as input population size, a confidence interval and a proportion (50% Yes/50% no) and then return the required sample size.
Most statistical books suggest using power tests (such as R's power.t.test), which take as input a minimum effect size, alpha, beta and a statistical test and then return the required sample size.
For my purposes power tests seems to make the most sense, but what has me concerned is that none of them take into account the population size, which seems like it ought to have at least some effect on the outcome.
So my question is, what is the correct calculation to use in my specific survey situation and more generally what is the connection between power tests and these online survey sample size calculators, does population size matter in some way, perhaps helping to capture the notion of representative sample?
| Statistical power and minimum sample size for ANOVA with likert scale as dependent variable | CC BY-SA 3.0 | null | 2011-04-10T09:58:24.353 | 2011-04-12T06:59:48.757 | 2011-04-11T05:55:35.603 | 183 | 4099 | [
"anova",
"likert",
"statistical-power",
"finite-population"
] |
9408 | 2 | null | 9407 | 5 | null | The commonly used statistical methods assume that you take a sample of an infinite or very large population. ANOVA, too, has this assumption. When the subjects of your survey can be viewed as a representative sample of an existing or hypothetical much larger population, you do not need the finite population methods.
The second question is whether ANOVA is appropriate to analyse the data collected. 7 point Likert scales are, strictly speaking, ordinal scales, so methods for ordinal dependent variables may be best. However, in psychometrics it's usual to assume that the values from a Likert scale will follow a distribution that may be approximated with a normal distribution. In this case ANOVA is an acceptable method; the t-test too, although the latter compares two groups only. (The methods designed for binary (yes/no) outcomes may be used after setting a threshold in your Likert scale and dichotomising your data; however, unless this threshold also exists in the psychological mechanism it will lead to loss of detail in your data and loss of power in your test. So not generally recommended.)
You need to check or think over if the homoscedasticity assumption of ANOVA is likely to be met. If yes, use a power formula for ANOVA and you need not worry about not having to specify the population size.
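To see concretely why the population size never enters, consider the power of a simple two-group comparison under a normal approximation (a Python sketch, not the exact ANOVA power formula; the function name is illustrative). Only the effect size, alpha, and per-group sample size appear as inputs:

```python
from statistics import NormalDist

def two_group_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z test.

    Only the effect size (Cohen's d), alpha, and the per-group sample
    size enter -- the size of the population sampled from does not.
    """
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    shift = effect_size * (n_per_group / 2) ** 0.5
    return 1 - NormalDist().cdf(z_crit - shift)

# The classic benchmark: d = 0.5 with 64 subjects per group gives about
# 80% power, whether the population is 1,000 or 1,000,000.
print(two_group_power(0.5, 64) > 0.8)  # True
```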
| null | CC BY-SA 3.0 | null | 2011-04-10T11:29:49.783 | 2011-04-10T11:29:49.783 | null | null | 3911 | null |
9409 | 2 | null | 9405 | 2 | null | This is a particularly ill-formed question.
If by "alpha" you mean Type I error, you need to go back to Square One and get definitions straight. Type I error is not something inherent in the data, or even in the hypothesis; it's a subjectively and externally applied measure of risk. And without the Type I error, you have no reference point from which to calculate Type II error, the complement of power.
Worse yet, it's not clear--after Adam's question--whether you have JUST TWO OBSERVATIONS (10 and 25), or two distributions with means of 10 and 25, and you're looking for a suitable sample size for a balanced test comparing the means. In the first case, all you can do is a likelihood ratio test that gives an approximate p-value; there's no more information to be had in two observations. In the second case, simulation can give some useful results, but you still need a value for the Type I error to get started.
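In the first case the likelihood ratio test is a one-line calculation; here is a sketch (in Python rather than R, for illustration; note the chi-squared p-value is only an asymptotic approximation, which is doubtful with two observations):

```python
import math

def poisson_lrt_single_obs(x1, x2):
    """Likelihood ratio test of lambda1 == lambda2 from one observation each."""
    pooled = (x1 + x2) / 2  # MLE of the common rate under the null
    # 2*(loglik at separate MLEs - loglik at pooled MLE); the -lambda and
    # -log(x!) terms cancel in the difference.
    stat = 2 * (x1 * math.log(x1 / pooled) + x2 * math.log(x2 / pooled))
    p_value = math.erfc(math.sqrt(stat / 2))  # chi-squared(1) tail area
    return stat, p_value

stat, p = poisson_lrt_single_obs(10, 25)
print(p < 0.05)  # True: the difference is nominally significant
```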
| null | CC BY-SA 3.0 | null | 2011-04-10T12:07:16.413 | 2011-04-10T12:07:16.413 | null | null | 5792 | null |
9410 | 2 | null | 9405 | 2 | null | For the simulation let's first choose sample sizes N1 and N2 for the two Poisson samples:
```
require(lmtest)
N1 = 20; N2 = 15
```
Generate a random sample and run a likelihood ratio test:
```
# CODE BLOCK "A"
x = c(rpois(N1, 10), rpois(N2, 25))
group = factor(c(rep('a', N1), rep('b', N2)))
m1 = glm(x ~ 1, family=poisson)
m2 = glm(x ~ group, family=poisson)
(t = lrtest(m1, m2))
```
Result:
```
Likelihood ratio test
Model 1: x ~ 1
Model 2: x ~ group
#Df LogLik Df Chisq Pr(>Chisq)
1 1 -158.26
2 2 -93.39 1 129.75 < 2.2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
Now let's run many simulations to see the power for this particular N1 and N2:
```
s = 1000 # 10*1000 simulations
sigs = NULL
for (i in 1:10) {
sig = 0
    for (j in 1:s) {
CODE BLOCK "A" COMES HERE
if (t$Pr[2] <= 0.05) sig = sig + 1
}
sigs = c(sigs, sig / s)
}
c(quantile(sigs, c(.025, .5, .975)), mean=mean(sigs), sd=sd(sigs))
```
Result:
```
2.5% 50% 97.5% mean sd
0.991225000 0.995500000 0.999775000 0.995500000 0.003027650
```
Thus the power for N1 = 20; N2 = 15 is 99%.
You can calculate power for various N1 and N2 values.
| null | CC BY-SA 3.0 | null | 2011-04-10T12:09:13.697 | 2011-04-11T17:48:03.737 | 2011-04-11T17:48:03.737 | 3911 | 3911 | null |
9411 | 2 | null | 9400 | 4 | null | If I understand your question correctly, you should be able to achieve what you want to do using a non-linear mixed-effects model. If you use R, you can use the `nlme` package. Basically as fixed factors you have a covariate (a) and a factor (substance or $i$ in $k_{i}$). You also have a random effect (individual measurements units or unitID). The good thing about `nlme` is that it also allows you to model the correlations in the residuals with e.g. an AR covariance structure.
edit: I always like to use a mixed-model when dealing with repeated measures. Still, if you don't want to include a random factor, you can model it with `gnls` in the same package. `gnls` still lets you select AR as the covariance structure of the residuals.
| null | CC BY-SA 3.0 | null | 2011-04-10T13:09:08.573 | 2011-04-10T17:25:16.590 | 2011-04-10T17:25:16.590 | 2020 | 2020 | null |
9412 | 2 | null | 9330 | 1 | null | I believe that this is an experiment where it is safe to assume a monotone relationship: for a longer exposure time the infection probability cannot be smaller. So you can run monotone/isotonic regression.
You can even incorporate into your model that the infection probability at time=0 is 0.
| null | CC BY-SA 3.0 | null | 2011-04-10T13:13:06.367 | 2011-04-13T02:40:18.117 | 2011-04-13T02:40:18.117 | 3911 | 3911 | null |
9413 | 2 | null | 2715 | 11 | null | Think hard about the underlying data generating process (DGP). If the model you want to use doesn't reflect the DGP, you need to find a new model.
| null | CC BY-SA 4.0 | null | 2011-04-10T14:26:46.080 | 2018-06-29T02:38:26.897 | 2018-06-29T02:38:26.897 | 164061 | 3265 | null |
9414 | 2 | null | 2 | 3 | null | Other answers have covered what is normality and suggested normality test methods. Christian highlighted that in practice perfect normality barely exists.
I would highlight that an observed deviation from normality does not necessarily mean that methods assuming normality cannot be used, and that a normality test may not be very useful.
- Deviation from normality may be caused by outliers that are due to errors in data collection. In many cases, by checking the data collection logs you can correct these figures, and normality often improves.
- For large samples a normality test will be able to detect a negligible deviation from normality.
- Methods assuming normality may be robust to non-normality and give results of acceptable accuracy. The t-test is known to be robust in this sense, while the F test is not (source). Concerning a specific method, it's best to check the literature about robustness.
| null | CC BY-SA 4.0 | null | 2011-04-10T14:30:50.133 | 2022-11-23T13:01:37.863 | 2022-11-23T13:01:37.863 | 362671 | 3911 | null |
9415 | 1 | 9418 | null | 14 | 7267 | Covariance between two random variables provides a measure of how closely they are linearly related to each other. But what if the joint distribution is circular? Surely there is structure in the distribution. How is this structure extracted?
| Measuring non-linear dependence | CC BY-SA 3.0 | null | 2011-04-10T14:46:33.510 | 2012-12-05T10:30:41.000 | null | null | 4101 | [
"covariance-matrix"
] |
9416 | 1 | null | null | 1 | 253 | I have done an experiment to find the effective network bandwidth.
The data I got in kbps is
223, 221, 510, 220, 471, 229, 222, 221, 220, 221
How can I find the effective bandwidth? Averaging gives 275.8. But if I have done only first 4 rounds then the average is 293.5. How can I find out a more reasonable value as the effective bandwidth. Or is averaging the correct way of doing this?
| How to find the effective bandwidth correctly using statistics? | CC BY-SA 3.0 | null | 2011-04-10T14:55:57.757 | 2011-04-11T07:51:46.153 | 2011-04-11T07:38:25.130 | 183 | 4102 | [
"estimation"
] |
9417 | 2 | null | 9415 | 5 | null | [Mutual information](http://en.wikipedia.org/wiki/Mutual_information) has properties somewhat analogous to covariance. Covariance is a number which is 0 for independent variables and nonzero for variables which are linearly dependent. In particular, if two variables are the same, then the covariance is equal to variance (which is usually a positive number). One issue with covariance is that it may be zero even if two variables are not independent, provided the dependence is nonlinear.
Mutual information (MI) is a non-negative number. It is zero if and only if the two variables are statistically independent. This property is more general than that of covariance and covers any dependencies, including nonlinear ones.
If the two variable are the same, MI is equal to the variable's entropy (again, usually a positive number). If the variables are different and not deterministically related, then MI is smaller than the entropy. In this sense, MI of two variables goes between 0 and H (the entropy), with 0 only if independent and H only if deterministically dependent.
One difference from covariance is that the "sign" of dependency is ignored. E.g. $Cov(X, -X) = -Cov(X, X) = -Var(X)$, but $MI(X, -X) = MI(X, X) = H(X)$.
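These properties are easy to verify on a small discrete example (a Python sketch; the joint distributions are made up for illustration): MI is zero for an independent joint distribution and equals $H(X) = 1$ bit when $Y = X$ or $Y = -X$.

```python
import math

def mutual_information(joint):
    """Mutual information in bits from a dict {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# X uniform on {-1, +1}, so H(X) = 1 bit.
indep = {(x, y): 0.25 for x in (-1, 1) for y in (-1, 1)}  # Y independent of X
copy = {(-1, -1): 0.5, (1, 1): 0.5}                       # Y = X
neg = {(-1, 1): 0.5, (1, -1): 0.5}                        # Y = -X
print(mutual_information(indep), mutual_information(copy), mutual_information(neg))
# 0.0 1.0 1.0
```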
| null | CC BY-SA 3.0 | null | 2011-04-10T16:30:36.543 | 2011-04-11T18:37:15.167 | 2011-04-11T18:37:15.167 | 3369 | 3369 | null |
9418 | 2 | null | 9415 | 10 | null | By "circular" I understand that the distribution is concentrated on a circular region, as in this contour plot of a pdf.

If such a structure exists, even partially, a natural way to identify and measure it is to average the distribution circularly around its center. (Intuitively, this means that for each possible radius $r$ we should spread the probability of being at distance $r$ from the center equally around in all directions.) Denoting the variables as $(X,Y)$, the center must be located at the point of first moments $(\mu_X, \mu_Y)$. To do the averaging it is convenient to define the radial distribution function
$$F(\rho) = \Pr[(X-\mu_X)^2 + (Y-\mu_Y)^2 \le \rho^2], \rho \ge 0;$$
$$F(\rho) = 0, \rho \lt 0.$$
This captures the total probability of lying between distance $0$ and $\rho$ of the center. To spread it out in all directions, let $R$ be a random variable with cdf $F$ and $\Theta$ be a uniform random variable on $[0, 2\pi]$ independent of $R$. The bivariate random variable $(\Xi, H) = (R\cos(\Theta) + \mu_X, R\sin(\Theta)+\mu_Y)$ is the circular average of $(X,Y)$. (This does the job our intuition demands of a "circular average" because (a) it has the correct radial distribution, namely $F$, by construction, and (b) all directions from the center ($\Theta$) are equally probable.)
At this point you have many choices: all that remains is to compare the distribution of $(X,Y)$ to that of $(\Xi, H)$. Possibilities include an [$L^p$ distance](http://en.wikipedia.org/wiki/Convergence_of_random_variables) and the [Kullback-Leibler divergence](http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) (along with myriad related distance measures: symmetrized divergence, Hellinger distance, mutual information, etc.). The comparison suggests $(X,Y)$ may have a circular structure when it is "close" to $(\Xi, H)$. In this case the structure can be "extracted" from properties of $F$. For instance, a measure of central location of $F$, such as its mean or median, identifies the "radius" of the distribution of $(X,Y)$, and the standard deviation (or other measure of scale) of $F$ expresses how "spread out" $(X,Y)$ are in the radial directions about their central location $(\mu_X, \mu_Y)$.
When sampling from a distribution, with data $(x_i,y_i), 1 \le i \le n$, a reasonable test of circularity is to estimate the central location as usual (with means or medians) and thence convert each value $(x_i,y_i)$ into polar coordinates $(r_i, \theta_i)$ relative to that estimated center. Compare the standard deviation (or IQR) of the radii to their mean (or median). For non-circular distributions the ratio will be large; for circular distributions it should be relatively small. (If you have a specific model in mind for the underlying distribution, you can work out the sampling distribution of the radial statistic and construct a significance test with it.) Separately, test the angular coordinate for uniformity in the interval $[0, 2\pi)$. It will be approximately uniform for circular distributions (and for some other distributions, too); non-uniformity indicates a departure from circularity.
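The test sketched in this last paragraph can be coded in a few lines (a Python sketch with simulated data; the comparison of ratios is illustrative, not a formal significance test): compute the coefficient of variation of the radii about the estimated center, which should be small for a ring-shaped sample and large for a filled blob.

```python
import math
import random

def radial_cv(points):
    """SD/mean of the radii about the centroid; small suggests circularity."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    mean_r = sum(radii) / n
    sd_r = (sum((r - mean_r) ** 2 for r in radii) / (n - 1)) ** 0.5
    return sd_r / mean_r

random.seed(1)
ring, blob = [], []
for _ in range(500):
    theta = random.uniform(0, 2 * math.pi)
    r = random.gauss(5, 0.3)  # concentrated near radius 5
    ring.append((r * math.cos(theta), r * math.sin(theta)))
    blob.append((random.gauss(0, 1), random.gauss(0, 1)))  # filled blob

print(radial_cv(ring) < radial_cv(blob))  # True
```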
| null | CC BY-SA 3.0 | null | 2011-04-10T16:51:07.647 | 2011-04-11T14:20:58.970 | 2011-04-11T14:20:58.970 | 919 | 919 | null |
9420 | 1 | null | null | 2 | 637 | Contingency tables are typically formatted as tables similar to matrices in mathematics, see [this example](http://en.wikipedia.org/wiki/Contingency_table#Example).
Is the equation below an accepted notation of expressing the probabilities of the outcomes as a matrix? If not, what would be the accepted way? Are there any published materials using the same notation?
$$
\widehat{Pr_\text{outcome}} =
\begin{matrix}
&
\begin{matrix}
\text{RH} & \text{LH}
\end{matrix}
\\
\begin{matrix}
\text{male} \\ \text{female}
\end{matrix}
&
\begin{bmatrix}
43 & 9 \\
44 & 4
\end{bmatrix}
\end{matrix}
\cdot \frac{1}{100}
=
\begin{matrix}
&
\begin{matrix}
\text{RH} & \text{LH}
\end{matrix}
\\
\begin{matrix}
\text{male} \\ \text{female}
\end{matrix}
&
\begin{bmatrix}
0.43 & 0.09 \\
0.44 & 0.04
\end{bmatrix}
\end{matrix}
$$
| Notation of probability matrix corresponding to a contingency table | CC BY-SA 4.0 | null | 2011-04-10T17:20:13.737 | 2022-07-11T16:46:21.610 | 2022-07-11T16:46:21.610 | 282433 | 3911 | [
"contingency-tables",
"matrix",
"notation"
] |
9421 | 2 | null | 9420 | 1 | null | It looks reasonable to me. Though realize that with an N larger than say 1,000, your table of probabilities won't represent exact counts because you'll end up truncating to three or so decimal places.
| null | CC BY-SA 3.0 | null | 2011-04-10T19:42:39.777 | 2011-04-10T19:42:39.777 | null | null | 1499 | null |
9422 | 1 | 9443 | null | 6 | 203 | Suppose I have survey responses that look like this:
```
N=60000, Population
n=1000, Total sample
n=800, Users of Company X
n=200, Randomly chosen from 800 and asked about their Future Use of Company X
n=100, Planning to use Company X less in the future
```
The reason that only 200 of 800 users were asked about future use was due to them being asked about other companies as well. The survey would be far too long if they were asked about their future use of all companies that they use.
My goal is to understand the flow of future use. For example, of those individuals who are planning to use Company X less in the future, which other companies are they planning to use more? However, with a sample of 100, there are not enough responses from the same users about other companies to get a usable proportion.
Can I, with some level of accuracy, infer the flow of business from a more robust sample of users of Company X?
Update: I think what I may be referring to is called bootstrapping.
| Resampling within a survey to account for missing data | CC BY-SA 3.0 | null | 2011-04-11T01:00:28.697 | 2011-04-13T04:57:11.190 | 2011-04-13T04:57:11.190 | 776 | 776 | [
"sampling",
"inference",
"resampling"
] |
9423 | 2 | null | 7019 | 3 | null | If you really only have one data point greater than 1000, it would be easiest just to delete that point from the graph. You can make a note in the caption or as a text box that there is an outlier.
| null | CC BY-SA 3.0 | null | 2011-04-11T01:13:15.487 | 2011-04-11T01:13:15.487 | null | null | 1569 | null |
9424 | 2 | null | 9416 | 2 | null | From what I can understand, I think you have the following options:
- Sample more! n = 10 is hardly enough for drawing conclusions
- If you don't/can't do "enough" sampling, you can always try to do some Monte Carlo type study with bootstrapping
| null | CC BY-SA 3.0 | null | 2011-04-11T07:51:46.153 | 2011-04-11T07:51:46.153 | null | null | 3014 | null |
9425 | 1 | null | null | 25 | 112345 | Reading Field's Discovering Statistics Using SPSS (3rd Edition) I was struck by a bit about post-hoc tests in ANOVA. For those wanting to control the Type I error rate he suggests Bonferroni or Tukey and says (p. 374):
>
Bonferroni has more power when the
number of comparisons is small,
whereas Tukey is more powerful when
testing large numbers of means.
Where should the line be drawn between a small and large number of means?
| Bonferroni or Tukey? When does the number of comparisons become large? | CC BY-SA 3.0 | null | 2011-04-11T12:08:13.933 | 2015-12-04T16:04:40.800 | 2015-12-04T16:04:40.800 | 28666 | 3597 | [
"anova",
"multiple-comparisons",
"post-hoc",
"bonferroni",
"tukey-hsd-test"
] |
9426 | 2 | null | 9420 | 2 | null | I'm not sure I can justify what I'm about to say, but I would be uneasy about expressing probabilities in a form like this. The structure is too reminiscent of other things that don't properly apply and suggests you should be able to do stuff like matrix multiplication that wouldn't mean anything here.
If the goal is to express a set of 4 mutually-exclusive outcomes, then your probabilities should be a single vector with 4 entries adding to 1.
OTOH, if it is to express a structure within that -- the difference in the handedness distribution by sex -- then it would make more sense if each of the rows added to 1.
And if what you actually want is a contingency table, use a contingency table. Dividing through by the total count doesn't really gain you anything.
Using a convention whereby all entries in a matrix add to 1 seems to me, well, unconventional. But I admit this objection is a bit subjective and hand-wavy.
| null | CC BY-SA 3.0 | null | 2011-04-11T12:23:03.697 | 2011-04-11T12:23:03.697 | null | null | 174 | null |
9427 | 1 | 9428 | null | 5 | 6929 | >
Possible Duplicate:
Probability distribution value exceeding 1 is OK?
I'm a bit confused how I am getting probabilities greater than 1 when calculating p(x | mu, sigma) when x = mu. For example, if I run:
```
>> gaussProb(0, 0, 0.1)
ans =
1.2616
```
where gaussProb is a matlab function from the PMTK toolbox:
```
function p = gaussProb(X, mu, Sigma)
% Multivariate Gaussian distribution, pdf
% X(i,:) is i'th case
% *** In the univariate case, Sigma is the variance, not the standard
% deviation! ***
% This file is from pmtk3.googlecode.com
d = size(Sigma, 2);
X = reshape(X, [], d); % make sure X is n-by-d and not d-by-n
X = bsxfun(@minus, X, rowvec(mu));
logp = -0.5*sum((X/(Sigma)).*X, 2);
logZ = (d/2)*log(2*pi) + 0.5*logdet(Sigma);
logp = logp - logZ;
p = exp(logp);
end
```
Is this some fundamental property of the Gaussian distribution or an issue with numerical accuracy in the computation?
I've come across this issue by trying to weight samples from a Gaussian distribution obtained from a Gaussian process prediction, where I will get massive probabilities.
Thanks
| Interpreting Gaussian probabilities greater than 1 | CC BY-SA 3.0 | null | 2011-04-11T12:58:01.893 | 2011-04-11T13:01:11.077 | 2017-04-13T12:44:36.927 | -1 | 4108 | [
"normal-distribution",
"matlab"
] |
9428 | 2 | null | 9427 | 11 | null | The code in the question returns values of the [probability density function](http://en.wikipedia.org/wiki/Probability_density_function). The values of a probability density function can be greater than one. The actual probability $P(X<x)$ for a random variable $X$ with probability density function $p(x)$ is the integral $\int_{-\infty}^xp(t)dt$. The values of this integral are of course restricted to the interval $[0,1]$.
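To illustrate the distinction, here is a minimal Python sketch (independent of the PMTK toolbox) reproducing the value from the question: the density of a normal distribution with variance 0.1, evaluated at its mean, exceeds 1, while the corresponding probability never can.

```python
import math

def normal_pdf(x, mu, var):
    """Density of a normal distribution; note var is the variance."""
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def normal_cdf(x, mu, var):
    """P(X < x): the integral of the density, always within [0, 1]."""
    return 0.5 * (1 + math.erf((x - mu) / math.sqrt(2 * var)))

density_at_mean = normal_pdf(0.0, 0.0, 0.1)   # about 1.2616, as in the question
prob_below_mean = normal_cdf(0.0, 0.0, 0.1)   # exactly 0.5
```

The smaller the variance, the taller the density spike at the mean, but the area under the curve stays equal to 1.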
| null | CC BY-SA 3.0 | null | 2011-04-11T13:01:11.077 | 2011-04-11T13:01:11.077 | null | null | 2116 | null |
9429 | 1 | 10415 | null | 6 | 3431 | I am trying to fit a multilevel longitudinal model and I have a question regarding how to specify it.
The data consist of about 8k observations collected from about 3k individuals at four time points. Individuals are nested in groups and there are about 200 groups.
I have two different types of fixed effects: (a) repeated measures at the observation level (e.g. pred1.obs ), and (b) group level predictors that also change over time (e.g. pred2.grp).
Because each group-level fixed effect is also longitudinal, there are 800 values (4x200, which are repeated for each member of the group at that time), but there are only 200 groups.
My question is what would be the correct specification for this model and why?
e.g:
>
1: lmer(outcome ~ time + pred1.obs + pred2.grp + (time|id) + (time|grp))
2: lmer(outcome ~ time + pred1.obs + pred2.grp + (time|id) + (1|grp:time))
3: lmer(outcome ~ time + pred1.obs + pred2.grp + (time|id) + (time|grp) + (1|grp:time))
Thus, would lme4 correctly estimate the model if I use (time|grp), or do I need to use (1|grp:time), or the combination?
Or something else that I haven't thought of?
Many thanks,
George
| Correct specification of longitudinal model in lme4 | CC BY-SA 3.0 | null | 2011-04-11T13:42:47.047 | 2011-05-06T15:27:14.023 | 2020-06-11T14:32:37.003 | -1 | 1871 | [
"r",
"multilevel-analysis",
"panel-data"
] |
9430 | 2 | null | 9385 | 6 | null | I am not very familiar with Holt-Winters; however, I have this [excellent book](http://www.amazon.co.uk/Forecasting-Exponential-Smoothing-Approach-Statistics/dp/3540719164/ref=sr_1_1?ie=UTF8&s=books&qid=1302529451&sr=8-1) by @Rob Hyndman. The forecast package (which is based on the book) for the statistical package R gives the following result on your data:
```
> hw<-read.table("~/R/stackoverflow/hw.txt")
> tt<-ts(hw[,3],start=c(1999,1),freq=12)
> aa<-forecast(tt)
> plot(aa)
> summary(aa)
Forecast method: ETS(M,N,A)
Model Information:
ETS(M,N,A)
Call:
ets(y = object)
Smoothing parameters:
alpha = 0.1701
gamma = 1e-04
Initial states:
l = 870.4847
s = -278.0815 -143.6584 151.959 -135.595 514.2527 236.9216
-32.7679 128.8337 115.0829 47.5922 -234.4105 -370.1288
sigma: 0.1122
AIC AICc BIC
1892.756 1896.346 1933.115
In-sample error measures:
ME RMSE MAE MPE MAPE MASE
18.1543007 121.8594668 70.7086492 0.8480306 7.0006920 0.2893504
```
Here is the graph of the forecast together with the confidence intervals:

Note that the forecast function automatically picks the best exponential smoothing model from 30 models, which are classified by the type of trend model, the type of seasonal model, and whether the error is additive or multiplicative.
The best model found for your data has multiplicative error, no trend, and additive seasonality, which is a less complicated model than the one you are trying to fit. The way the forecast function works, however, means that the more complicated models were considered and rejected in favor of the final model.
If you provide the exact formulas it would be possible to fit the precise model to see whether the problem you described is really a property of the model.
| null | CC BY-SA 3.0 | null | 2011-04-11T13:45:02.443 | 2011-04-11T13:51:21.390 | 2011-04-11T13:51:21.390 | 2116 | 2116 | null |
9431 | 1 | 9528 | null | 9 | 1662 | I'm running a binary logit regression where I know the dependent variable is miscoded in a small percentage of cases. So I'm trying to estimate $\beta$ in this model:
$prob(y_i) = 1/(1 + e^{-z_i})$
$z_i = \alpha + X_i\beta$
But instead of the vector $Y$, I have $\tilde{Y}$, which includes some random errors (i.e. $y_i = 1$, but $\tilde{y_i} = 0$, or vice versa, for some $i$).
Is there a (reasonably) simple correction for this problem?
I know that logit has some nice properties in case-control studies. It seems likely that something similar applies here, but I haven't been able to find a good solution.
A few other constraints: this is a text-mining application, so the dimensions of $X$ are large (in the thousands or tens of thousands). This may rule out some computationally intensive procedures.
Also, I don't care about correctly estimating $\alpha$, only $\beta$.
| How can I correct for measurement error in the dependent variable in a logit regression? | CC BY-SA 3.0 | null | 2011-04-11T14:03:13.367 | 2011-04-13T18:13:24.987 | 2011-04-11T14:38:25.397 | 3911 | 4110 | [
"logistic",
"measurement-error"
] |
9432 | 2 | null | 1610 | 0 | null | This is how I remember the difference between Type I and Type II errors
Type I is a false POSITIVE
Type II is a false NEGATIVE
Type I is so POSITIVE it jumps out of bed first, runs downstairs and finds a significant breakfast while Type II is so NEGATIVE it stays in bed all day so when it eventually crawls out all the food is gone. It can never find anything!
| null | CC BY-SA 3.0 | null | 2011-04-11T14:31:06.300 | 2011-04-11T14:31:06.300 | null | null | 3597 | null |
9434 | 2 | null | 9431 | 2 | null | This situation is often referred to as misclassification error. [This paper](http://www.ncbi.nlm.nih.gov/pubmed/20552681) may help you correctly estimate $\beta$. EDIT: I found relevant-looking papers using [http://www.google.com/search?q=misclassification+of+dependent+variable+logistic](http://www.google.com/search?q=misclassification+of+dependent+variable+logistic).
| null | CC BY-SA 3.0 | null | 2011-04-11T14:41:17.657 | 2011-04-11T15:34:28.983 | 2011-04-11T15:34:28.983 | 3911 | 3911 | null |
9435 | 1 | null | null | 4 | 695 | I'm trying to create something similar to this.

So, 3 different Node classes, and a whole bunch of relationships between them. In my case, there should be roughly half of the number of nodes present at most.
What I'm looking for is recommendations as to the best way to create a similar type of graph. I've spent some time looking at R/ggplot2, but haven't found any solutions so far. I expect it's because I'm not using the correct vocabulary.
The posted image was created using a proprietary app that I unfortunately am not able to leverage, otherwise I'd simply use that.
Any suggestions/solutions would be fantastic!
| Displaying relationships between nodes | CC BY-SA 4.0 | null | 2011-04-11T14:46:09.620 | 2019-03-02T00:40:43.843 | 2019-03-02T00:40:43.843 | 11887 | 4112 | [
"data-visualization"
] |
9436 | 2 | null | 9429 | 3 | null | You have a large number of groups, so I speculate that (depending on the setting) you may think about group as a random effect, so your `(…|grp)` terms are probably justified. It may also be reasonable to associate random effects with individuals (`(…|id)` terms). However, you have `time` as a covariate in all your models, so I assume you are looking for a linear time effect. At the same time, having the `(1|grp:time)` term or any kind of `(…|time)` terms in the model makes the interpretation of the time covariate difficult.
The `(1|…)` terms correspond to random intercepts, e.g. including `(1|grp)` would estimate a “mean” outcome at time=0 for each group, expressed as a deviation from the grand intercept; `(1|id)` would estimate an individual intercept.
The `(time|…)` terms correspond to random slopes of the time covariate, e.g. including `(time|grp)` would estimate a deviation from the grand slope for each group; `(time|id)` would estimate individual slopes. [Edit: Please note the difference between the `(time|id)` and `(0 + time|id)` expressions, see `?lmer`]
In your model specifications you are using both random intercepts and random slopes. Whether you need them or which you need depends on the relationships among the variables you study. If you know the relationship, you should specify the corresponding model. Alternatively you can fit multiple models and explain the differences among the results.
| null | CC BY-SA 3.0 | null | 2011-04-11T15:09:01.237 | 2011-04-11T17:57:45.803 | 2011-04-11T17:57:45.803 | 3911 | 3911 | null |
9437 | 1 | null | null | 2 | 224 | I am quite a newbie in this area:
- What are the boosting methods for regression systems? I know about Gradient boosting; are there any other approaches?
- Are there textbooks or tutorials devoted to this area?
| Boosting for regression systems | CC BY-SA 3.0 | null | 2011-04-11T15:10:56.120 | 2011-04-12T06:49:35.103 | 2011-04-12T04:46:37.933 | 183 | 976 | [
"regression",
"boosting"
] |
9438 | 2 | null | 9276 | 0 | null | It seems to me that the question here is whether the model is too complex to work out without using Monte Carlo simulation.
If the model is relatively simple then it should be possible to analyse it through conventional statistics and derive a solution to the question being asked, without re-running the model multiple times. This is a bit of an oversimplification, but if all your model did was produce points based on a normal distribution, then you could easily derive the sort of answers you are looking for. Of course, if the model is this simple then you are unlikely to need a Monte Carlo simulation to find your answers.
If the problem is complex and it is not possible to break it down into more elementary parts, then Monte Carlo is the right type of model to use, but I don't think there is any way of defining confidence limits without running the model. Ultimately, to get the type of confidence limits described, the model would have to be run a number of times, a probability distribution would have to be fitted to the outputs, and from there the confidence limits could be defined. One of the challenges with Monte Carlo simulation is that models give good and regular answers for distributions in the mid range, but the tails often give much more variable results, which ultimately means more runs are needed to define the shape of the outputs at the 2.5% and 97.5% percentiles.
| null | CC BY-SA 3.0 | null | 2011-04-11T16:37:21.353 | 2011-04-11T16:37:21.353 | null | null | 210 | null |
9441 | 2 | null | 8898 | 2 | null | The ROC curve (Receiver Operating Characteristics) is one of the techniques available. You can check the questions with the tag roc on this site for further details. The wikipedia article
[http://en.wikipedia.org/wiki/Receiver_operating_characteristic](http://en.wikipedia.org/wiki/Receiver_operating_characteristic) and the external links to it may also be useful.
Some other methods can be found here
[http://onbiostatistics.blogspot.com/2011/01/agreement-statistics-and-kappa.html](http://onbiostatistics.blogspot.com/2011/01/agreement-statistics-and-kappa.html)
| null | CC BY-SA 3.0 | null | 2011-04-11T17:33:41.190 | 2011-04-11T18:21:18.727 | 2011-04-11T18:21:18.727 | 4116 | 4116 | null |
9442 | 2 | null | 9190 | 2 | null | [Visual explanations](http://www.edwardtufte.com/tufte/books_visex) or anything else by Tufte is inspirational.
| null | CC BY-SA 3.0 | null | 2011-04-11T18:45:10.307 | 2011-04-11T18:45:10.307 | null | null | 2817 | null |
9443 | 2 | null | 9422 | 2 | null | Your question is above my pay grade, as it were, but I can suggest a first look at [the R survey package](http://faculty.washington.edu/tlumley/survey/), which might implement some of the routines that you'd use to answer your questions.
| null | CC BY-SA 3.0 | null | 2011-04-11T18:52:07.253 | 2011-04-11T18:52:07.253 | null | null | 1764 | null |
9444 | 2 | null | 9220 | 2 | null | Why not test it out?
```
set.seed(347)
x <- rnorm(10000)
y <- rnorm(10000)
x2 <- rnorm(10000)
y2 <- rnorm(10000)
qdf <- data.frame(x,y,x2,y2)
qdf$euclid <- (x-x2)^2+(y-y2)^2  # squared Euclidean distance between point pairs
plot(c(x,y),c(x2,y2))
plot(qdf$euclid)
hist(qdf$euclid)
plot(density(qdf$euclid))  # kernel density estimate of the distances
```




| null | CC BY-SA 3.0 | null | 2011-04-11T19:19:33.157 | 2011-04-11T19:19:33.157 | null | null | 776 | null |
9446 | 1 | null | null | 3 | 2765 |
### Context:
My research aims to assess whether parents who have been involved in a relationship with a social worker are at higher risk of child abuse/neglect.
I am trying to establish whether there is a relationship between social workers' attitudes toward the parents and the overall outcome of the case (positive or negative).
I want to assess whether social worker attitudes have an effect on parental responses.
- The questionnaire was only given to parents who have a relationship with a social worker.
- There were 20 participants (16 females and 4 males).
- I summarised each of my 13 Likert items/questions with a single attitude (word).
- I also reversed the order of the Likert scale when I entered the data, so that a score of 5 stands for "strongly disagree", whereas a score of 1 reads "strongly agree". This was done because of the way my Likert items were asked.
### Question:
- Can I run a test between genders given that there is an uneven number of males and females?
- What analyses can I run to test my research question?
| How to carry out a Likert scale analysis? | CC BY-SA 3.0 | null | 2011-04-11T19:37:39.740 | 2011-04-12T04:28:22.427 | 2011-04-12T04:28:22.427 | 183 | 4119 | [
"self-study",
"psychometrics",
"scales",
"likert"
] |
9447 | 1 | null | null | 6 | 383 | I would like to predict the average number of days in a year for which two conditions are true:
- daily average temperature is below zero celsius
- the day was preceded by at least four days with daily average temperature below zero celsius
I have historical daily average temperature data for the location for about 10 years. My initial approach was to use the [one-sided Chebyshev inequality](http://en.wikipedia.org/wiki/Chebyshev_inequality#Variant%3a_One-sided_Chebyshev_inequality),
which can be used to approximate a probability if the distribution is not known. However, in this application I am interested in the probability of a special condition; can I use the Chebyshev inequality for a dummy time series as well? I.e., 1 if the condition is fulfilled, otherwise 0 --> the dataset would therefore look something like 0,0,0,0,0,1,1,0,0,0,0,0,1,1,1,1,1,1, etc.
How would you approach a problem like this from a different angle? The data clearly have seasonality; is there any distribution I could use to get a better estimate than Chebyshev?
| Estimating event probability from historical time series with clear seasonality | CC BY-SA 3.0 | null | 2011-04-11T20:01:06.680 | 2011-04-12T04:32:44.390 | 2011-04-12T04:32:44.390 | 183 | 4120 | [
"time-series",
"predictive-models",
"seasonality"
] |
9448 | 2 | null | 9447 | 2 | null | I think the joint distribution of temperature data on successive days could be reasonably modelled using a multivariate Gaussian (Gaussian distributions are often used in statistical downscaling of temperature). What I would try would be to regress the mean and covariance matrix of the temperature time series on sine and cosine components of the day of year (to deal with the seasonality). The details on how to do that are given in a paper by [Peter Williams](http://dx.doi.org/10.1162/neco.1996.8.4.843); Williams uses a neural network, but I would start off with just a linear model. This will give you what climatologists would call a "weather generator" (of sorts). Using this you could generate as many synthetic time series as you want with the appropriate statistical properties, from which you could estimate the probabilities you require directly. You would need to estimate the window over which temperatures are usefully correlated - which may be quite high in winter due to blocking patterns (for the U.K. anyway). A bit baroque I suppose, but it would be the thing I would try!
| null | CC BY-SA 3.0 | null | 2011-04-11T20:28:41.743 | 2011-04-11T20:35:07.073 | 2011-04-11T20:35:07.073 | 887 | 887 | null |
9449 | 1 | 9451 | null | 9 | 11446 | How do you compute confidence intervals for positive predictive value?
The standard error is:
$$SE = \sqrt{ \frac{PPV(1-PPV)}{TP+FP}} $$
Is that right? (here my concern is the denominator)
Does that formula work for any similar ratio in a 2x2 table. E.g. for sensitivity, it would be
$$SE = \sqrt{ \frac{SENS(1-SENS)}{FP+TN}} $$
Is that right? (here my concern is that it is generalizable to other ratios as long as you get the denominator right)
And the for the 95% confidence intervals:
$$CI_{PPV} = PPV \pm 1.96*SE$$
Is that right? (my concern here is how to go from SE to the confidence interval)
(of course with all the cell restrictions like $n\cdot p\cdot (1-p) \ge 5$)
| How do you compute confidence intervals for positive predictive value? | CC BY-SA 3.0 | null | 2011-04-11T20:33:14.453 | 2011-04-12T02:45:56.907 | 2011-04-12T02:45:56.907 | 3911 | 3186 | [
"confidence-interval",
"binomial-distribution",
"contingency-tables"
] |
9450 | 2 | null | 9392 | 2 | null | As I understand it (see the comments on the original question), you want to select a subset of the features by two criteria:
- the subset covers most of the information content of the dataset,
- the subset includes as few features as possible.
The paper [Variable selection in large environmental data sets using principal components analysis by King and Jackson in Environmetrics, 1999](http://j.mp/gYJ4sn) compares the methods for this problem.
| null | CC BY-SA 3.0 | null | 2011-04-11T22:43:54.183 | 2011-04-13T02:59:16.640 | 2011-04-13T02:59:16.640 | 3911 | 3911 | null |
9451 | 2 | null | 9449 | 12 | null | Your first SE formula is correct. The second SE formula which concerns sensitivity should have the total number of positive cases in the denominator:
$$SE_\text{sensitivity} = \sqrt{ \frac{SENS(1-SENS)}{TP+FN}} $$
The logic is that sensitivity = $\frac{TP}{TP+FN}$, and the denominator in the SE formula is the same.
As @onestop pointed out in their comment, [methods of calculating a binomial proportion confidence interval](http://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval) can be used here. The method you follow is the normal approximation; however, unless you have really large counts, other methods like the Wilson interval will be more accurate.
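For illustration, a small self-contained Python sketch (the counts and the z = 1.96 cutoff are just examples) comparing the normal-approximation interval with the Wilson interval for a PPV of 5/10:

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) interval for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval; behaves better for small counts."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# e.g. PPV with TP = 5 and FP = 5, i.e. 5 successes out of 10 predicted positives
wald = wald_ci(5, 10)      # roughly (0.190, 0.810)
wilson = wilson_ci(5, 10)  # roughly (0.237, 0.763)
```

Unlike the Wald interval, the Wilson interval always stays inside [0, 1], which matters for proportions near 0 or 1.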
| null | CC BY-SA 3.0 | null | 2011-04-11T23:22:22.870 | 2011-04-11T23:22:22.870 | null | null | 3911 | null |
9452 | 2 | null | 9447 | 0 | null | I know little about meteorology, so my following assumptions may be wrong: today's temperature is similar to yesterday's and the day before yesterday's (maybe more days going back), and also similar to the temperature a year ago, two years ago, three years ago, etc.
If these assumptions got reinforcement I would use an ARMA model using days -1, -2, … and -365, -365*2, -365*3, … as predictors of today's temperature, and maybe a few days looking back in the moving average terms. (You can imagine many variants of this model.)
After fitting the model I would make a large number of model-based simulations predicting the temperatures for each of the following 365 days, and count the cases satisfying the two conditions.
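As a toy illustration of this simulate-and-count idea (not a fitted ARMA model — the sinusoidal seasonal mean, AR(1) residuals, and every parameter value below are invented for the sketch):

```python
import math
import random

def simulate_year(phi=0.8, sigma=3.0, level=8.0, amp=12.0, seed=0):
    """One year of daily mean temperatures: sinusoidal seasonal mean + AR(1) noise."""
    rng = random.Random(seed)
    temps, resid = [], 0.0
    for day in range(365):
        seasonal_mean = level - amp * math.cos(2 * math.pi * day / 365)  # coldest near day 0
        resid = phi * resid + rng.gauss(0.0, sigma)
        temps.append(seasonal_mean + resid)
    return temps

def count_condition_days(temps):
    """Days below 0 degrees C that were preceded by at least four sub-zero days."""
    return sum(
        1
        for i in range(4, len(temps))
        if temps[i] < 0 and all(t < 0 for t in temps[i - 4:i])
    )

# Repeat the simulation many times and average the counts
counts = [count_condition_days(simulate_year(seed=s)) for s in range(200)]
average_days = sum(counts) / len(counts)
```

With a real fitted model in place of `simulate_year`, `average_days` would estimate the expected number of qualifying days per year directly.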
| null | CC BY-SA 3.0 | null | 2011-04-11T23:46:32.527 | 2011-04-11T23:46:32.527 | null | null | 3911 | null |
9454 | 1 | 9458 | null | 10 | 8090 |
### Context:
I have two data sets from the same questionnaire run over two years. Each question is measured using a 5-Likert scale.
### Q1: Coding scheme
At the moment, I have coded my responses on a [0, 1] interval, with 0 meaning "most negative response", 1 meaning "most positive response", and other responses spaced evenly between.
- What is the "best" coding scheme to use for the Likert scale?
I realise that this might be a bit subjective.
### Q2: Significance across years
- What is the best way to determine whether there is statistically significant change across the two years?
That is, looking at the results for question 1 for each year, how do I tell if the difference between the 2011 result and the 2010 result is statistically significant? I've got a vague recollection of the Student's t-test being of use here, but I'm not sure.
| Statistical significance of changes over time on a 5-point Likert item | CC BY-SA 3.0 | null | 2011-04-12T02:44:24.163 | 2011-04-13T03:23:53.867 | 2011-04-12T04:01:04.667 | 183 | 4123 | [
"statistical-significance",
"likert"
] |
9456 | 1 | 32961 | null | 7 | 844 | Over the years, I picked up many statistics concepts in a variety of situations and by a variety of means. I studied some statistics for a couple of semesters almost 10 years back, but I also picked up concepts while doing machine learning work. I only ever understood things in a narrow scope - I always tried to get away with knowing only what I needed to know for a given project.
Consequently, I do not have a big-picture understanding of statistics. For example, I know vaguely what a t-test is and what a chi-square test is, but I can't relate the two concepts solidly in my head. And without relating these concepts to one another, I feel they are useless and not as powerful.
I have come to understand that the only way I understand anything is by working on hard problems chosen by experts. A problem should be hard so that I take some time to work through it, and it should be chosen carefully by experts so that it contains a 'moral' that takes me one step closer to enlightenment.
So, help me assemble a set of problems to work through to seek out zen through statistics. The scope is all of statistics (regression, ANOVA, t-tests, structural equation models with latent variables, etc.).
| Hard exemplary problem sets to work through to solidify my understanding of statistical concepts? | CC BY-SA 3.0 | null | 2011-04-12T03:14:53.957 | 2015-11-11T11:28:53.737 | 2015-11-11T11:28:53.737 | 22468 | 24040 | [
"regression",
"anova",
"t-test",
"references",
"structural-equation-modeling"
] |
9457 | 1 | 9460 | null | 9 | 17075 | This is a question of definition, does the stats community differentiate these terms?
| Is there a difference between seasonality / cyclicality / periodicity | CC BY-SA 3.0 | null | 2011-04-12T03:25:49.320 | 2023-03-10T14:20:04.163 | null | null | 1709 | [
"seasonality"
] |
9458 | 2 | null | 9454 | 7 | null |
### 1. Coding scheme
In terms of assessing statistical significance using a t-test, it is the relative distances between the scale points that matters. Thus, (0, 0.25, 0.5, 0.75, 1) is equivalent to (1, 2, 3, 4, 5).
From my experience an equal-distance coding scheme, such as those mentioned previously, is the most common, and seems reasonable for Likert items.
If you explore optimal scaling, you might be able to derive an alternative coding scheme.
### 2. Statistical test
The question of how to assess group differences on a Likert item has already been answered [here](https://stats.stackexchange.com/questions/203/group-differences-on-a-five-point-likert-item).
The first issue is whether you can link observations across the two time points. It sounds like you had a different sample.
This leads to a few options:
- Independent groups t-test: this is a simple option; it also does test for differences in group means; purists will argue that the p-value may be not entirely accurate; however, depending on your purposes, it may be adequate.
- Bootstrapped test of differences in group means: If you still want to test differences between group means but are uncomfortable with the discrete nature of dependent variable, then you could use a bootstrap to generate confidence intervals from which you could draw inferences about changes in group means.
- Mann-Whitney U test (among other non-parametric tests): Such a test does not assume normality, but it is also testing a different hypothesis.
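As a sketch of the second option, a percentile bootstrap for the difference in group means might look like this in Python (the response vectors are hypothetical Likert data, not from the question):

```python
import random

def bootstrap_mean_diff_ci(a, b, n_boot=10000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for mean(b) - mean(a), independent samples."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        resample_a = [rng.choice(a) for _ in a]
        resample_b = [rng.choice(b) for _ in b]
        diffs.append(sum(resample_b) / len(resample_b)
                     - sum(resample_a) / len(resample_a))
    diffs.sort()
    return diffs[int(alpha / 2 * n_boot)], diffs[int((1 - alpha / 2) * n_boot) - 1]

# Hypothetical Likert responses (coded 1-5) for the two years
y2010 = [1, 2, 2, 3, 3, 3, 4, 4, 5, 2]
y2011 = [2, 3, 3, 4, 4, 4, 5, 5, 5, 3]
lo, hi = bootstrap_mean_diff_ci(y2010, y2011)
# If the interval excludes 0, the change in means is significant at roughly the 5% level
```

The resampling respects the discrete nature of the responses, since each bootstrap draw only ever reuses observed values.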
| null | CC BY-SA 3.0 | null | 2011-04-12T04:16:34.013 | 2011-04-13T03:23:53.867 | 2017-04-13T12:44:54.643 | -1 | 183 | null |
9459 | 2 | null | 9457 | 5 | null | Yes, there is a difference.
A classic time series decomposition model is $$ Y = T + S + C + I, $$
where
\begin{align}
Y & = \text{data,} \\
T & = \text{trend,} \\
S & = \text{seasonal,} \\
C & = \text{cyclical,} \\
I & = \text{irregular (i.e. error left over).}
\end{align}
'seasonal' refers to REGULAR patterns that occur with time, e.g. oatmeal sales higher in winter, or Starbucks coffee sales being highest at 7 a.m. These are usually very predictable.
'cyclical' refers to longer term patterns like business cycles. These aren't as regular as seasonality, and may involve some subjectivity in estimation.
'periodicity' refers to the period of the seasonal component. Periodicity could be monthly, biweekly, hourly, etc.
The equation above has $+$ signs, indicating an additive model. Multiplicative models are also commonly used if the seasonality is multiplicative.
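To make the additive form concrete, here is a toy Python sketch (all numbers invented; the cyclical component is omitted) that builds a monthly series from trend, seasonal, and irregular parts and then recovers the seasonal component with a centred moving average:

```python
import math
import random

period, n_years = 12, 6
rng = random.Random(42)

# Build Y = T + S + I (the cyclical term C is left out for brevity)
trend = [0.5 * t for t in range(period * n_years)]
seasonal = [10 * math.sin(2 * math.pi * m / period) for m in range(period)]
series = [trend[t] + seasonal[t % period] + rng.gauss(0, 1)
          for t in range(period * n_years)]

def centred_ma(xs, period):
    """2x12-style centred moving average; averages out a seasonal of this period."""
    half = period // 2
    out = {}
    for t in range(half, len(xs) - half):
        window = xs[t - half:t + half + 1]
        out[t] = (0.5 * window[0] + sum(window[1:-1]) + 0.5 * window[-1]) / period
    return out

trend_hat = centred_ma(series, period)
detrended = {t: series[t] - trend_hat[t] for t in trend_hat}
seasonal_hat = [
    sum(v for t, v in detrended.items() if t % period == m)
    / sum(1 for t in detrended if t % period == m)
    for m in range(period)
]
```

Averaging the detrended values month by month recovers the seasonal pattern up to the irregular noise, which is exactly the logic behind classical decomposition.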
I took out the '*' signs in deference to comments below ;)
| null | CC BY-SA 4.0 | null | 2011-04-12T04:41:37.783 | 2023-03-10T14:20:04.163 | 2023-03-10T14:20:04.163 | 285236 | 3919 | null |
9460 | 2 | null | 9457 | 3 | null | Perhaps. Though my take could easily be construed as a bit too anal retentive:
I tend to use the term seasonality as a metaphor for the 'seasons' of the year: i.e. Spring, Summer, Fall, Winter (or 'Almost Winter', Winter, 'Still Winter', and 'Construction' if you live in Pennsylvania...). In other words, I would expect a seasonal trend to have a periodicity of roughly 365 days.
I tend to use the term 'cyclicality' to refer to a response which, when decomposed in frequency space, has a single dominant peak. Or, a bit more generally, much as one could stare at an engine, 'cyclicality' implies a dominant cycle -- the piston moves up, and then it moves down, and then it moves up again. Numerically, I would expect low, high, low, high, low, high, etc. So two things: (1) magnitude &/or sign switches from low to high and (2) these switches occur with a predictable frequency. This rigor naturally evaporates when talking about business cycles -- however, I often find that a dominant frequency remains, e.g. every business quarter, or every year, things are slow for the first few weeks and high pressure the last few weeks... So there is a dominant period, but it could be very different from 'seasonality', which to me implies a year.
Lastly, I tend to use 'periodicity' when referring to the frequency of collecting measurements. Differing from cyclicality, the term 'periodicity' for me implies no expectation for the magnitude or sign of the data collected.
But this is just my $0.02. And I'm just a stat student -- take from this what you will.
| null | CC BY-SA 3.0 | null | 2011-04-12T05:41:23.373 | 2011-04-12T05:41:23.373 | null | null | 1499 | null |
9461 | 1 | null | null | 5 | 201 | I'm wondering how to test the significance of factor(s) and/or covariate(s) along with modeling the causal relationship among responses.
Let me explain this with a concrete example.
### Example:
Suppose a researcher observed four responses Y1, Y2, Y3, and Y4 along with three covariates X1, X2, and X3 from an experiment involving $ab$ treatment combinations of a fixed factor $A$ with $a$ levels and a random factor $B$ with $b$ levels. Based on past experience, it is assumed that the four responses are correlated and that Y1 is also influenced by the other three (Y2, Y3, and Y4).
### Question:
- How can I test the significance of Factors (A and B) and covariates (X1, X2, and X3) and causality among responses (Y1, Y2, Y3, and Y4) using a single model?
| Testing significance of factors and covariates along with modeling causality among responses | CC BY-SA 3.0 | null | 2011-04-12T05:42:54.500 | 2011-09-30T01:33:27.793 | 2011-04-12T06:48:16.857 | 3903 | 3903 | [
"multivariate-analysis",
"experiment-design",
"structural-equation-modeling"
] |
9462 | 2 | null | 9422 | 2 | null |
- Standard formulas for standard errors of a proportion would be suitable. With regards to your question about which companies the "n=100 sample" plan to use in the future, these standard errors would be based on n = 100. If this yields standard errors that are too large for your liking, then you need to increase your sample size.
- In some cases you might be able to increase your effective sample size by engaging in more targeted sampling of the subset of the population that interests you (i.e., with company X, but planning to use company X less in the future).
| null | CC BY-SA 3.0 | null | 2011-04-12T05:54:54.803 | 2011-04-13T03:30:18.077 | 2011-04-13T03:30:18.077 | 183 | 183 | null |
9463 | 2 | null | 9437 | 4 | null | While I haven't seen anything specifically, I doubt that it would achieve much, at least for linear regression. Each regression equation is just a linear combination of the predictors:
$$\hat{y}^{(j)} = \sum_i \hat{\beta}_i^{(j)} x_i$$
Most boosting algorithms in turn combine multiple predictors by taking a weighted average, which is another linear combination:
$$\hat{y} = \sum_j \hat{w}^{(j)} \hat{y}^{(j)}$$
So the final output is still a linear combination of the predictors, only with the coefficients obtained by a complicated process.
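A small numeric sketch of this point (the coefficients and weights below are made up for illustration): a weighted average of two linear models is itself a single linear model whose coefficients are the weighted average of the individual coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))

beta1 = np.array([1.0, -2.0, 0.5])   # coefficients of model 1 (assumed)
beta2 = np.array([0.3, 1.5, -1.0])   # coefficients of model 2 (assumed)
w1, w2 = 0.7, 0.3                    # boosting-style weights (assumed)

# Ensemble prediction: weighted average of the two linear predictions
y_ens = w1 * X @ beta1 + w2 * X @ beta2

# Equivalent single linear model with averaged coefficients
beta_comb = w1 * beta1 + w2 * beta2
y_single = X @ beta_comb

print(np.allclose(y_ens, y_single))  # True
```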
| null | CC BY-SA 3.0 | null | 2011-04-12T06:49:35.103 | 2011-04-12T06:49:35.103 | null | null | 1569 | null |
9464 | 1 | 9466 | null | 4 | 495 | I'm trying to fully understand the confidence interval formula given on [this site](http://www.iro.umontreal.ca/~lisa/twiki/bin/view.cgi/Public/DeepVsShallowComparisonICML2007#Results):
$$\hat{\mu}\pm z_{1-\alpha/2}\sqrt{\frac{\hat{\mu}(1-\hat{\mu})}{n}}$$
so I can reproduce the same type of intervals for my own data. But I don't quite understand what the parameters such as $\alpha$ and $Z$ mean. I'm guessing they're related to defining a 95% confidence interval if your data were distributed normally.
Can someone explain to me how that formula works, or point me to a reference where I can find a description of it?
| Can someone give me details about a particular confidence interval formula? | CC BY-SA 3.0 | null | 2011-04-12T06:54:42.497 | 2012-01-10T15:00:45.907 | 2012-01-10T15:00:45.907 | 919 | 4127 | [
"confidence-interval",
"binomial-distribution"
] |
9465 | 2 | null | 9407 | 4 | null |
### 1. Power analysis for one-way ANOVA:
Download [G-Power 3](http://www.psycho.uni-duesseldorf.de/abteilungen/aap/gpower3/download-and-register). It allows you to do the following for a range of statistical tests, including ANOVA:
- a priori power analysis (sample required given effect size, desired power, and alpha), and
- post hoc power analysis (observed power given sample size, effect size, and alpha)
[Here's a G Power 3 tutorial for ANOVA](http://www.psycho.uni-duesseldorf.de/abteilungen/aap/gpower3/user-guide-by-distribution/f/anova_fixed_effects_omnibus).
### 2. Power analysis and Likert items
Treating the 7-point Likert item as a continuous variable might be a useful simplification. This is especially true given that the effect sizes assumed for power analysis are only guesses (if they weren't, then you wouldn't need to do the study).
### 3. Finite populations
Also, even when there is a finite population, you often want to determine whether any differences reflect a theoretically meaningful difference or just random sampling. For example, assume that you measured the whole population of both groups, and found that group A had a mean of 6.12 and group B a mean of 6.11. You could conclude that there is a difference between the groups. Alternatively, you could assume that the participants in each group are drawn from some theoretical data-generating process, and that you are interested in drawing inferences about it. Then you would apply standard t-tests or ANOVAs to test whether the observed difference is statistically significant. If this were your purpose, you would not need to apply corrections for finite populations.
| null | CC BY-SA 3.0 | null | 2011-04-12T06:59:48.757 | 2011-04-12T06:59:48.757 | null | null | 183 | null |
9466 | 2 | null | 9464 | 2 | null | If $\hat{\mu}$ is the mean error rate computed by averaging the error rates from $N$ different tests, an explanation could be:
Let $X$ be the number of errors on $N$ tests, so $X$ is a binomially distributed random variable with mean $N\hat{\mu}$ and variance $N\hat{\mu}(1-\hat{\mu})$ (it is a sum of $N$ Bernoulli random variables).
Thus $X/N$ has mean $\hat{\mu}$ and variance $\frac{\hat{\mu}(1-\hat{\mu})}{N}$.
By the central limit theorem, $X/N$ can be approximated by a normal random variable with the same mean and variance. Then you can compute the $1-\alpha$ confidence interval with:
$$P\bigg(-z_{1-\alpha/2}\leq\frac{\mu-\hat{\mu}}{\sqrt{\hat{\mu}(1-\hat{\mu})/N}}\leq z_{1-\alpha/2}\bigg) = 1 - \alpha$$
Bibliography:
It is similar to estimating a confidence interval for accuracy using a test set of $N$ values in a classification problem. You should take a look at P.N. Tan, M. Steinbach, V. Kumar, Introduction to Data Mining. Addison Wesley, 2006.
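As a quick illustration, here is a minimal Python sketch of this normal-approximation (Wald) interval; the error rate and $N$ below are made-up numbers.

```python
import math
from statistics import NormalDist

def wald_interval(mu_hat, n, alpha=0.05):
    """Normal-approximation (Wald) interval for an error rate mu_hat
    estimated from n tests, as in the formula above."""
    z = NormalDist().inv_cdf(1 - alpha / 2)          # z_{1-alpha/2}
    half = z * math.sqrt(mu_hat * (1 - mu_hat) / n)  # half-width of the interval
    return mu_hat - half, mu_hat + half

# Hypothetical numbers: a 15% error rate over N = 100 tests.
lo, hi = wald_interval(0.15, 100)
print(round(lo, 3), round(hi, 3))  # roughly 0.08 0.22
```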
| null | CC BY-SA 3.0 | null | 2011-04-12T07:45:02.760 | 2011-07-21T13:21:28.917 | 2011-07-21T13:21:28.917 | 2719 | 2719 | null |
9467 | 1 | 9473 | null | 9 | 568 | Following on from my [earlier question](https://stats.stackexchange.com/questions/9341/regularized-fit-from-summarized-data), the solution to the normal equations for ridge regression is given by:
$$\hat{\beta}_\lambda = (X^TX+\lambda I)^{-1}X^Ty$$
Could you offer any guidance for choosing the regularization parameter $\lambda$? Additionally, since the diagonal of $X^TX$ grows with the number of observations $m$, should $\lambda$ also be a function of $m$?
| Regularized fit from summarized data: choosing the parameter | CC BY-SA 3.0 | null | 2011-04-12T08:21:07.817 | 2011-04-12T13:05:16.340 | 2017-04-13T12:44:52.660 | -1 | 439 | [
"regression",
"regularization",
"ridge-regression"
] |
9468 | 1 | 9471 | null | 4 | 422 | Suppose I have a model of stock prices developed using Brownian motion. I have a second time series derived from the first. At each price point in the first time series, I take the arithmetic mean up to that point and that is the data point for my second time series. The volatility of the second time series is lower because the arithmetic mean has a dampening effect. Are there any statistical tools to model the volatility of the second time series?
| How do I model the volatility of an arithmetic mean? | CC BY-SA 3.0 | null | 2011-04-12T10:33:33.660 | 2011-07-11T14:10:11.577 | null | null | 4128 | [
"standard-deviation"
] |
9469 | 2 | null | 9454 | 3 | null | Wilcoxon Ranksum Test aka Mann-Whitney is the way to go in the case of ordinal data. The bootstrapping solution is also elegant albeit not the "classic" way to go. The Bootstrapping method might also be valuable in case you aim for other things like factor analysis. In case of regression analysis you might chose ordered probit or ordered logit as a model specification.
BTW: If your scale has a larger range (>10 values per variable) you might use the results as a metric variable, which makes a t-test a safe choice. Be advised that this is a little dirty and may be considered devil's work by some.
stephan
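To illustrate what the Mann-Whitney / rank-sum test computes, here is a tiny pure-Python sketch of the U statistic on hypothetical 5-point Likert data (in practice you would use a library routine, e.g. `wilcox.test` in R).

```python
# Hypothetical 5-point Likert responses from two groups.
group_a = [3, 4, 2, 5, 4, 3, 4]
group_b = [2, 3, 1, 2, 3, 2, 4]

# U counts, over all cross-group pairs, how often an A value exceeds
# a B value (ties count as 1/2). Only the ordering of values matters,
# so no metric assumption about the scale is needed.
U = sum((a > b) + 0.5 * (a == b) for a in group_a for b in group_b)
print(U)  # 39.0 out of a maximum of 7 * 7 = 49
```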
| null | CC BY-SA 3.0 | null | 2011-04-12T10:39:25.950 | 2011-04-12T10:39:25.950 | null | null | null | null |
9470 | 1 | null | null | 1 | 813 | I'm trying to replicate the paper of Blundell et al. (2008) to separate permanent and transitory shocks on income on a panel dataset.
He solves the non-linear system of equations using Chamberlain's minimum distance estimator (it is shown in the appendix of the paper), but I haven't found a library, in either R or Stata, that does the trick. Can someone help me?
| Minimum distance estimator | CC BY-SA 3.0 | null | 2011-04-12T12:12:56.453 | 2012-12-07T23:03:30.100 | 2011-04-12T12:23:58.203 | 2116 | null | [
"nonlinear-regression"
] |
9471 | 2 | null | 9468 | 2 | null | Let's say your stock price series is $S_t$ so that your arithmetic average series is $X_t = \frac{1}{t}\sum_{j=1}^t S_j$. The series of returns that you use to calculate the volatility can be defined in several ways, the most popular being the log-returns series
$R_t = \frac{1}{\delta t} \log \frac{S_t}{S_{t-1}}$
and the simple returns series
$R_t = \frac{1}{\delta t} \left( \frac{S_t}{S_{t-1}} - 1 \right)$
where $\delta t$ is the time between observations (measured as a fraction of a year). Both of these involve the ratio $S_i / S_{i-1}$, so to look at the volatility of the series $X$ we should calculate this ratio:
$\frac{X_t}{X_{t-1}} = \frac{t-1}{t} \left( 1 + \frac{S_t}{S_1+\cdots+S_{t-1}} \right)$
Either way you cut it, the returns series for $X$ is not going to be pretty. The most straightforward approach is to look at the simple returns series for $X$, let's call it $Q$:
$Q_t = \frac{1}{\delta t} \frac{t-1}{t} \left( \frac{S_t}{S_1+\cdots+S_{t-1}} - \frac{1}{t-1} \right)$
The volatility of the series $X$ is the standard deviation of this series. Note that even if the volatility of $S$ is constant, the volatility of $X$ won't be constant, because of the dependence of the series on $t$. In fact, under some mild assumptions on the nature of the series $S$ (eg bounded returns) you should be able to show that the volatility of $X$ decreases to zero as $t\to\infty$.
This doesn't fully answer your question, but it points out some of the difficulties involved in what you're trying to do.
Perhaps a better idea (depending on what you're trying to achieve) is to take a moving average of your stock price series, taking the arithmetic average of the previous $T$ observations. At least this way your averaged series won't have an explicit dependence on the time index $t$.
You might also consider taking the geometric mean rather than the arithmetic mean. Although conceptually less intuitive, it has the advantage that it is equivalent to taking the arithmetic average of the log-returns series, which greatly simplifies the algebra later.
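To see the dampening effect numerically, here is a rough Python simulation; the geometric-Brownian-motion parameters are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)
n, dt, sigma = 1000, 1 / 252, 0.2                 # assumed GBM parameters
log_ret = sigma * np.sqrt(dt) * rng.standard_normal(n)
S = 100 * np.exp(np.cumsum(log_ret))              # price series S_t

X = np.cumsum(S) / np.arange(1, n + 1)            # running arithmetic mean X_t
Q = (X[1:] / X[:-1] - 1) / dt                     # simple returns of X (annualized)
R = (S[1:] / S[:-1] - 1) / dt                     # simple returns of S

# The averaged series is much less volatile than the price series,
# and its volatility keeps shrinking as t grows.
print(np.std(Q) < np.std(R))
print(np.std(Q[-200:]) < np.std(Q[:200]))
```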
| null | CC BY-SA 3.0 | null | 2011-04-12T12:57:20.417 | 2011-04-12T13:32:31.587 | 2011-04-12T13:32:31.587 | 2425 | 2425 | null |
9472 | 2 | null | 9464 | 3 | null | Answering the following part of your question:
>
I don't quite understand what the
parameters such as alpha and Z mean
$\alpha$ is the parameter that defines the confidence level of the interval. Specifically, the confidence level will be $100(1-\alpha)$%, so to get a 95% confidence interval, set $\alpha=0.05$.
$Z$ is a reference to the standard normal distribution, and in this context $z_q$ means its $q$-th quantile, that is, the value for which $P(Z < z_q) = q$ where $Z$ is standard normal. This can be looked up in tables or calculated by computers. For example, when $\alpha=0.05$, the formula needs the 0.975-th quantile, that is, the value which exceeds 97.5% of the normal distribution. Its value is $z_{0.975}=1.96$.
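For instance, the quantile can be computed directly rather than looked up in a table; a small Python sketch using the standard library's normal distribution:

```python
from statistics import NormalDist

alpha = 0.05
z = NormalDist().inv_cdf(1 - alpha / 2)  # the 0.975-th quantile of N(0, 1)
print(round(z, 2))  # 1.96
```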
| null | CC BY-SA 3.0 | null | 2011-04-12T13:01:17.443 | 2011-04-12T13:01:17.443 | null | null | 279 | null |
9473 | 2 | null | 9467 | 7 | null | My answer will be based on a nice review of the problem by Anders Bjorkstorm [Ridge regression and inverse problems](http://www2.math.su.se/matstat/reports/seriea/2000/rep5/report.pdf) (I would recommend to read the whole article).
Part 4 in this review is dedicated to the selection of a parameter $\lambda$ in ridge regression introducing several key approaches:
- ridge trace corresponds to graphical analysis of $\hat{\beta}_{i,\lambda}$ against $\lambda$. A typical plot will depict unstable behavior of the different $\hat{\beta}_{i,\lambda}$ estimates for $\lambda$ close to zero (for a truly ill-posed problem; you have to be sure you need this regularization in any case), and almost constant behavior from some point onward (roughly, we have to detect the region where this constant behavior sets in for all of the parameters). However, the decision regarding where this almost constant behavior starts is somewhat subjective. Good news for this approach: it does not require observing $X$ and $y$.
- $L$-curve: plot the Euclidean norm of the vector of estimated parameters $|\hat{{\beta}}_\lambda|$ against the residual norm $|y - X\hat{\beta}_\lambda|$. The shape is typically close to the letter $L$, so there is a corner that determines where the optimal parameter lies (one may choose the point on the $L$-curve where it reaches maximum curvature; see Hansen's articles for more details).
- For cross-validation, a simple "leave-one-out" approach is often chosen, seeking the $\lambda$ that maximizes (or minimizes) some forecasting accuracy criterion (you have a wide range of them; RMSE and MAPE are two to begin with). The difficulty with 2. and 3. is that you have to observe $X$ and $y$ to implement them in practice.
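As a sketch of option 3: for ridge, the leave-one-out residuals can be computed cheaply from the hat matrix $H_\lambda = X(X^TX+\lambda I)^{-1}X^T$ via the standard shortcut $e_i/(1-H_{ii})$, so no refitting is needed. The data and the $\lambda$ grid below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 60, 5
X = rng.normal(size=(n, p))
beta = np.array([2.0, -1.0, 0.0, 0.5, 0.0])       # assumed true coefficients
y = X @ beta + rng.normal(scale=0.5, size=n)

def loo_rmse(lmbda):
    # Ridge hat matrix H(lambda) = X (X'X + lambda I)^{-1} X'
    H = X @ np.linalg.solve(X.T @ X + lmbda * np.eye(p), X.T)
    resid = y - H @ y
    # Leave-one-out shortcut: LOO residual_i = e_i / (1 - H_ii)
    loo = resid / (1 - np.diag(H))
    return np.sqrt(np.mean(loo ** 2))

grid = [0.01, 0.1, 1.0, 10.0, 100.0]              # assumed lambda grid
best = min(grid, key=loo_rmse)
print(best)
```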
| null | CC BY-SA 3.0 | null | 2011-04-12T13:05:16.340 | 2011-04-12T13:05:16.340 | null | null | 2645 | null |
9474 | 1 | 9527 | null | 1 | 450 | I'm looking for an implementation of a fast maximum rank correlation (MRC) estimator. This will be applied to large-ish sparse matrices (~100,000 by 10,000) in a text-mining application.
I'm working in python and R, so it would be nice to find something in those languages. Failing that, I could probably convert code from some other language.
Any suggestions?
Quick note: the best algorithm I've seen described is in [Abrevaya (1999)](http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V84-3W7XC0R-4&_user=99318&_coverDate=03%2F01%2F1999&_rdoc=1&_fmt=high&_orig=gateway&_origin=gateway&_sort=d&_docanchor=&view=c&_searchStrId=1715406837&_rerunOrigin=scholar.google&_acct=C000007678&_version=1&_urlVersion=0&_userid=99318&md5=2818ab10a7b249b05d8a0d0a24c2ff82&searchtype=a), which runs in $n\log(n)$ time. [Wang (2007)](http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V8V-4M813VJ-1&_user=99318&_coverDate=03%2F01%2F2007&_rdoc=1&_fmt=high&_orig=gateway&_origin=gateway&_sort=d&_docanchor=&view=c&_searchStrId=1715396891&_rerunOrigin=scholar.google&_acct=C000007678&_version=1&_urlVersion=0&_userid=99318&md5=ed8045c282ad6ba5635207366e765e53&searchtype=a) has an "IMO" algorithm that he claims runs in $n^2\log(n)$ -- much worse.
Note: I've [cross-posted](https://stackoverflow.com/questions/5636093/is-there-a-library-that-implements-a-fast-maximum-rank-correlation-estimator) this question on stackoverflow because I'm not sure which community is the right place to ask.
| Is there a library that implements a fast maximum rank correlation estimator? | CC BY-SA 3.0 | null | 2011-04-12T13:42:54.853 | 2011-04-13T18:07:07.070 | 2017-05-23T12:39:26.523 | -1 | 4110 | [
"r",
"estimation",
"algorithms",
"python"
] |
9475 | 1 | null | null | 11 | 5239 | I have many time series in this format: one column containing dates (d/m/yr format) and many columns representing different time series, like here:
```
DATE TS1 TS2 TS3 ...
24/03/2003 0.00 0.00 ...
17/04/2003 -0.05 1.46
11/05/2003 0.46 -3.86
04/06/2003 -2.21 -1.08
28/06/2003 -1.18 -2.16
22/07/2003 0.00 0.23
```
With R, how can I group the time series that show similar trends?
| Time series clustering | CC BY-SA 3.0 | null | 2011-04-12T14:33:45.357 | 2020-05-05T04:21:24.840 | 2011-11-28T09:32:21.230 | 2116 | 4133 | [
"r",
"time-series",
"clustering"
] |
9476 | 2 | null | 9385 | 5 | null | The formulae for Holt-Winters' method include forecasting the seasonal component. You don't need $\gamma=0$. See a forecasting textbook for the details.
| null | CC BY-SA 3.0 | null | 2011-04-12T14:48:00.777 | 2011-04-12T14:48:00.777 | null | null | 159 | null |
9477 | 1 | 9480 | null | 13 | 40144 | My attempts:
- I couldn't get confidence intervals in interaction.plot()
- and on the other hand plotmeans() from package 'gplots' wouldn't display two graphs. Furthermore, I couldn't superimpose two plotmeans() graphs because by default the axes are different.
- I had some success using plotCI() from package 'gplots' and superimposing two graphs, but the axes still didn't match perfectly.
Any advice on how to make an interaction plot with confidence intervals?
Either by one function, or advice on how to superimpose `plotmeans()` or `plotCI()` graphs.
code sample
```
br=structure(list(tangle = c(140L, 50L, 40L, 140L, 90L, 70L, 110L,
150L, 150L, 110L, 110L, 50L, 90L, 140L, 110L, 50L, 60L, 40L,
40L, 130L, 120L, 140L, 70L, 50L, 140L, 120L, 130L, 50L, 40L,
80L, 140L, 100L, 60L, 70L, 50L, 60L, 60L, 130L, 40L, 130L, 100L,
70L, 110L, 80L, 120L, 110L, 40L, 100L, 40L, 60L, 120L, 120L,
70L, 80L, 130L, 60L, 100L, 100L, 60L, 70L, 90L, 100L, 140L, 70L,
100L, 90L, 130L, 70L, 130L, 40L, 80L, 130L, 150L, 110L, 120L,
140L, 90L, 60L, 90L, 80L, 120L, 150L, 90L, 150L, 50L, 50L, 100L,
150L, 80L, 90L, 110L, 150L, 150L, 120L, 80L, 80L), gtangles = c(141L,
58L, 44L, 154L, 120L, 90L, 128L, 147L, 147L, 120L, 127L, 66L,
118L, 141L, 111L, 59L, 72L, 45L, 52L, 144L, 139L, 143L, 73L,
59L, 148L, 141L, 135L, 63L, 51L, 88L, 147L, 110L, 68L, 78L, 63L,
64L, 70L, 133L, 49L, 129L, 100L, 78L, 128L, 91L, 121L, 109L,
48L, 113L, 50L, 68L, 135L, 120L, 85L, 97L, 136L, 59L, 112L, 103L,
62L, 87L, 92L, 116L, 141L, 70L, 121L, 92L, 137L, 85L, 117L, 51L,
84L, 128L, 162L, 102L, 127L, 151L, 115L, 57L, 93L, 92L, 117L,
140L, 95L, 159L, 57L, 65L, 130L, 152L, 90L, 117L, 116L, 147L,
140L, 116L, 98L, 95L), up = c(-1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
-1L, -1L, 1L, 1L, 1L, 1L, -1L, -1L, -1L, -1L, 1L, 1L, -1L, -1L,
1L, 1L, -1L, 1L, 1L, -1L, 1L, 1L, 1L, 1L, 1L, -1L, -1L, 1L, 1L,
1L, 1L, -1L, -1L, 1L, 1L, -1L, -1L, -1L, -1L, -1L, -1L, -1L,
1L, -1L, -1L, -1L, -1L, -1L, 1L, -1L, 1L, 1L, -1L, -1L, -1L,
-1L, 1L, -1L, 1L, -1L, -1L, -1L, 1L, -1L, 1L, -1L, 1L, 1L, 1L,
-1L, -1L, -1L, -1L, -1L, -1L, 1L, -1L, 1L, 1L, -1L, -1L, 1L,
1L, 1L, -1L, 1L, 1L, 1L)), .Names = c("tangle", "gtangles", "up"
), class = "data.frame", row.names = c(NA, -96L))
plotmeans2 <- function(br, alph) {
dt=br; tmp <- split(br$gtangles, br$tangle);
means <- sapply(tmp, mean); stdev <- sqrt(sapply(tmp, var));
n <- sapply(tmp,length);
ciw <- qt(alph, n) * stdev / sqrt(n)
plotCI(x=means, uiw=ciw, col="black", barcol="blue", lwd=1,ylim=c(40,150), xlim=c(1,12));
par(new=TRUE); dt <- subset(br, up==1);
tmp <- split(dt$gtangles, dt$tangle);
means <- sapply(tmp, mean);
stdev <- sqrt(sapply(tmp, var));
n <- sapply(tmp,length);
ciw <- qt(0.95, n) * stdev / sqrt(n)
plotCI(x=means, uiw=ciw, type='l',col="black", barcol="red", lwd=1,ylim=c(40,150), xlim=c(1,12),pch='+');
abline(v=6);abline(h=90);abline(30,10); par(new=TRUE);
dt=subset(br,up==-1);
tmp <- split(dt$gtangles, dt$tangle);
means <- sapply(tmp, mean);
stdev <- sqrt(sapply(tmp, var));
n <- sapply(tmp,length);
ciw <- qt(0.95, n) * stdev / sqrt(n)
plotCI(x=means, uiw=ciw, type='l', col="black", barcol="blue", lwd=1,ylim=c(40,150), xlim=c(1,12),pch='-');abline(v=6);abline(h=90);
abline(30,10);
}
plotmeans2(br,.95)
```
| How to draw an interaction plot with confidence intervals? | CC BY-SA 3.0 | null | 2011-04-12T16:07:15.017 | 2011-12-09T15:59:19.010 | 2011-12-09T15:59:19.010 | 930 | 1084 | [
"r",
"data-visualization",
"confidence-interval",
"interaction"
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.