Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
9811 | 2 | null | 5396 | 1 | null | For your first question:
this should be possible with `Table`/`Create Table`.
(I'm using the German version of `JMP7`, so I'm not sure what the menu commands are called in the English version.)
| null | CC BY-SA 3.0 | null | 2011-04-20T23:57:53.813 | 2011-04-20T23:57:53.813 | null | null | 3760 | null |
9813 | 1 | null | null | 5 | 114 | Consider data where each observation was generated as follows.
- We draw $Z_1,...,Z_m$ from some distribution. (Possibly they're independent or related in some other simple way.)
- Next, based on the $Z_1,...,Z_m$, we choose a sequence $0=I_0 < I_1 < ... < I_N=m$ so that, for each $k$, (i) $I_k-I_{k-1}$ is not too small and (ii) the sample variance within $Z_{I_{k-1}+1},...,Z_{I_k}$ is small. (I am intentionally somewhat vague here - I am open to making various different assumptions along these lines.)
- We generate the observed variables $X_1,...,X_N$ as $X_k=$ the average of $Z_{I_{k-1}+1},...,Z_{I_k}$.
For example, the hidden sequence $Z=(0.1, 0.3, 0.2, 1.3, 1.2, 0.1)$ might lead to the observed sequence $X=(0.2, 1.25, 0.1)$ [or perhaps to $X=(0.2,0.86)$ due to (i) above].
Does anyone here know whether this type of setup has been studied before, and if so, what are some keywords to search for or papers/books to look at?
Thanks in advance for any answers!
Added on Apr 21: The motivation is as follows. Think of each $Z$ sequence as SNP data from a single patient. In order to anonymize the data for a public release, a procedure like the one I described above can be performed. Based on the anonymized data $X$, I want to predict survival and/or to identify SNPs that are relevant to survival.
Note that $I$, $N$, and $X$ are all functions of $Z$, so they will be different for each patient. Also note that the $I$'s are observed, i.e., I know which $Z_j$'s were averaged to produce each $X_k$.
| Averages of random subsets of variables | CC BY-SA 3.0 | null | 2011-04-21T04:00:49.700 | 2011-04-22T09:29:57.620 | 2011-04-21T13:57:43.063 | 3891 | 3891 | [
"modeling",
"censoring"
] |
9814 | 1 | null | null | 1 | 681 | I have some nice multi-environment trial (MET) data for genotype evaluation and would like to use some newly developed techniques as discussed in [Smith et al. 2005](http://ro.uow.edu.au/cgi/viewcontent.cgi?article=9411&context=infopapers). I'm specifically interested in the Factor Analytic (FA) structure. The authors mentioned that code for these methods is available on request.
>
There are now several statistical packages (including ASReml, GENSTAT, S-language packages and SAS; Littel et al. 1996) that allow REML estimation of a range of mixed models. The present authors have found the packages ASReml and GENSTAT and the samm functions (through S-language environments) to be the most suitable for the analysis of MET data, both in terms of the generality of models that can be fitted and the ease with which predictions and inference about varietal effects can be made. All models in the current paper are easily fitted and summarized using these software (code is available from the authors on request).
Even after multiple requests I haven't heard back from the authors. If someone has tried these models and would be kind enough to share worked examples, that would be great. I'm looking forward to a positive response.
Smith et al. (2001) used the following mixed-model version of multiplicative models:
${\small \begin{eqnarray*}
(\boldsymbol{I}_{e}\otimes\boldsymbol{I}_{g})\boldsymbol{\eta}_{eg\times1} & = & (\boldsymbol{1}_{e}\otimes\boldsymbol{1}_{g})\mu+(\boldsymbol{I}_{e}\otimes\boldsymbol{1}_{g})\boldsymbol{E}_{e\times1}+(\boldsymbol{1}_{e}\otimes\boldsymbol{I}_{g})\boldsymbol{G}_{g\times1}+\underbrace{(\boldsymbol{I}_{e}\otimes\boldsymbol{I}_{g})(\boldsymbol{GE})_{eg\times1}}\\
& = & (\boldsymbol{1}_{e}\otimes\boldsymbol{1}_{g})\mu+(\boldsymbol{I}_{e}\otimes\boldsymbol{1}_{g})\boldsymbol{E}_{e\times1}+(\boldsymbol{1}_{e}\otimes\boldsymbol{I}_{g})\boldsymbol{G}_{g\times1}\\
& & +\underbrace{(\boldsymbol{\Lambda}_{E_{e\times k}}\otimes\boldsymbol{I}_{g})\:\boldsymbol{f}_{G_{kg\times1}}+(\boldsymbol{I}_{e}\otimes\boldsymbol{I}_{g})\delta_{eg\times1}}\end{eqnarray*}
}$
where $\boldsymbol{\Lambda}_{E_{e\times k}}$
is a matrix of environment loadings, $f_{G_{kg\times1}}$ is the associated
vector of genotype scores, and $k$ is the number of components (multiplicative terms) included in the model.
The authors assumed that the environments are fixed and $\boldsymbol{G}_{g\times1}$, $\boldsymbol{f}_{G_{kg\times1}}$, and $\delta_{eg\times1}$ are random effects with
$\small \left(\begin{array}{l}
\boldsymbol{G}_{g\times1}\\
\boldsymbol{f}_{G_{kg\times1}}\\
\boldsymbol{\delta}_{eg\times1}\end{array}\right)\sim\mathcal{N}\left(\begin{array}{ccc}
\left[\begin{array}{c}
\boldsymbol{0}\\
\boldsymbol{0}\\
\boldsymbol{0}\end{array}\right] & , & \left[\begin{array}{ccc}
\sigma_{g}^{2}\,\boldsymbol{I}_{g} & \boldsymbol{0} & \boldsymbol{0}\\
\boldsymbol{0} & \boldsymbol{I}_{k}\otimes\boldsymbol{I}_{g} & \boldsymbol{0}\\
\boldsymbol{0} & \boldsymbol{0} & \boldsymbol{\Psi}_{e}\otimes\boldsymbol{I}_{g}\end{array}\right]\end{array}\right)$
$\small E((\boldsymbol{I}_{e}\otimes\boldsymbol{I}_{g})\boldsymbol{\eta}_{eg\times1})=(\boldsymbol{1}_{e}\otimes\boldsymbol{1}_{g})\mu+(\boldsymbol{I}_{e}\otimes\boldsymbol{1}_{g})\boldsymbol{E}_{e\times1}$
${\small \begin{eqnarray*}
\mathrm{var}((\boldsymbol{I}_{e}\otimes\boldsymbol{I}_{g})\boldsymbol{\eta}_{eg\times1}) & = & \sigma_{g}^{2}\,(\boldsymbol{J}_{e}\otimes\boldsymbol{I}_{g})+(\boldsymbol{\Lambda}_{E_{e\times k}}\otimes\boldsymbol{I}_{g})(\boldsymbol{I}_{k}\otimes\boldsymbol{I}_{g})(\boldsymbol{\Lambda}_{E_{e\times k}}\otimes\boldsymbol{I}_{g})^{T}+\boldsymbol{\Psi}_{e}\otimes\boldsymbol{I}_{g}\\
& = & \sigma_{g}^{2}\,(\boldsymbol{J}_{e}\otimes\boldsymbol{I}_{g})+(\boldsymbol{\Lambda}_{E_{e\times k}}\boldsymbol{\Lambda}_{E_{e\times k}}^{T}\otimes\boldsymbol{I}_{g})+\boldsymbol{\Psi}_{e}\otimes\boldsymbol{I}_{g}\\
& = & \sigma_{g}^{2}\,(\boldsymbol{J}_{e}\otimes\boldsymbol{I}_{g})+(\boldsymbol{\Lambda}_{E_{e\times k}}\boldsymbol{\Lambda}_{E_{e\times k}}^{T}+\boldsymbol{\Psi}_{e})\otimes\boldsymbol{I}_{g}\\
& = & (\underbrace{\sigma_{g}^{2}\,\boldsymbol{J}_{e}+\boldsymbol{\Lambda}_{E_{e\times k}}\boldsymbol{\Lambda}_{E_{e\times k}}^{T}}+\boldsymbol{\Psi}_{e})\otimes\boldsymbol{I}_{g}\\
& = & (\underbrace{\boldsymbol{\Lambda}_{E}^{*}\boldsymbol{\Lambda}_{E}^{*T}}+\boldsymbol{\Psi}_{e})\otimes\boldsymbol{I}_{g}\end{eqnarray*}
}$
where $\boldsymbol{\Psi}_{e}$ is a diagonal $e\times e$ matrix with elements commonly referred to as specific variances.
| Multiplicative mixed models for analyzing variety-by-environment data | CC BY-SA 3.0 | null | 2011-04-21T05:33:27.247 | 2016-01-04T04:59:54.717 | 2016-01-04T04:59:54.717 | 21599 | 3903 | [
"r",
"mixed-model",
"factor-analysis",
"experiment-design",
"sas"
] |
9815 | 2 | null | 9801 | 13 | null | You can also use formulas from [Matrix cookbook](http://web.archive.org/web/20110430075537/http://matrixcookbook.com/). We have
$$(y-X\beta)'(y-X\beta)=y'y-\beta'X'y-y'X\beta+\beta'X'X\beta$$
Now take derivatives of each term. You might want to notice that $\beta'X'y=y'X\beta$. The derivative of the term $y'y$ with respect to $\beta$ is zero. The remaining term
$$\beta'X'X\beta-2y'X\beta$$
is of the form of the function
$$f(x)=x'Ax+b'x,$$
given in formula (88) on page 11 of the book, with $x=\beta$, $A=X'X$ and $b=-2X'y$. The derivative is given in formula (89):
$$\frac{\partial f}{\partial x}=(A+A')x+b$$
so
$$\frac{\partial}{\partial \beta}(y-X\beta)'(y-X\beta)=(X'X+(X'X)')\beta-2X'y$$
Now since $(X'X)'=X'X$ we get the desired solution:
$$X'X\beta=X'y$$
| null | CC BY-SA 4.0 | null | 2011-04-21T06:26:04.143 | 2021-11-22T04:52:33.890 | 2021-11-22T04:52:33.890 | 311814 | 2116 | null |
9816 | 2 | null | 9794 | 6 | null | The canonical way is probably [MIDAS](http://en.wikipedia.org/wiki/Mixed_data_sampling) regression. There is a Matlab toolbox for estimation, available upon request from the [author Eric Ghysels](http://www.unc.edu/~eghysels/). You might look into the [user guide](http://www.unc.edu/~eghysels/papers/MIDAS_Usersguide_Version8.pdf) of this toolbox, since it has a review of all the literature on MIDAS.
The Wikipedia page also talks about the connection with Kalman filters, so @F. Tussel's observation is spot on.
Update: There is now also an R package, [midasr](http://mpiktas.github.io/midasr/), to estimate MIDAS regressions.
| null | CC BY-SA 3.0 | null | 2011-04-21T06:50:58.667 | 2013-11-27T14:46:12.393 | 2013-11-27T14:46:12.393 | 2116 | 2116 | null |
9817 | 1 | 9830 | null | 4 | 4974 | I am trying to use Maximum Likelihood Estimation to learn the structure of a DAG, G.
How is the number of free parameters of G calculated to compare the complexity of different graphical models?
Is it based on one of the following, or something else?
- Number of edges in the graph
- Maximum number of possible edges in the graph
- Maximum number of parents (children)
| What is the number of free parameters for a directed acyclic graph? | CC BY-SA 3.0 | null | 2011-04-21T08:56:58.870 | 2011-04-21T15:07:33.127 | null | null | 3595 | [
"bayesian",
"graphical-model"
] |
9818 | 2 | null | 9801 | 7 | null | One way which may help you understand is to not use matrix algebra, and instead differentiate with respect to each component, then "store" the results in a column vector. So we have:
$$\frac{\partial}{\partial \beta_{k}}\sum_{i=1}^{N}\left(Y_{i}-\sum_{j=1}^{p}X_{ij}\beta_{j}\right)^{2}=0$$
Now you have $p$ of these equations, one for each beta. This is a simple application of the chain rule:
$$\sum_{i=1}^{N}2\left(Y_{i}-\sum_{j=1}^{p}X_{ij}\beta_{j}\right)^{1}\left(\frac{\partial}{\partial \beta_{k}}\left[Y_{i}-\sum_{j=1}^{p}X_{ij}\beta_{j}\right]\right)=0$$
$$-2\sum_{i=1}^{N}X_{ik}\left(Y_{i}-\sum_{j=1}^{p}X_{ij}\beta_{j}\right)=0$$
Now we can re-write the sum inside the bracket as $\sum_{j=1}^{p}X_{ij}\beta_{j}=\bf{x}_{i}^{T}\boldsymbol{\beta}$ So you get:
$$\sum_{i=1}^{N}X_{ik}Y_{i}-\sum_{i=1}^{N}X_{ik}\bf{x}_{i}^{T}\boldsymbol{\beta}=0$$
Now we have $p$ of these equations, and we will "stack them" in a column vector. Notice how $X_{ik}$ is the only term which depends on $k$, so we can stack this into the vector $\bf{x}_{i}$ and we get:
$$\sum_{i=1}^{N}\bf{x}_{i}\rm{Y}_{i}=\sum_{i=1}^{N}\bf{x}_{i}\bf{x}_{i}^{T}\boldsymbol{\beta}$$
Now we can take $\boldsymbol{\beta}$ outside the sum (but it must stay on the right of the sum), and then take the inverse:
$$\left(\sum_{i=1}^{N}\bf{x}_{i}\bf{x}_{i}^{T}\right)^{-1}\sum_{i=1}^{N}\bf{x}_{i}\rm{Y}_{i}=\boldsymbol{\beta}$$
| null | CC BY-SA 3.0 | null | 2011-04-21T09:35:00.280 | 2011-04-21T09:35:00.280 | null | null | 2392 | null |
9819 | 2 | null | 9779 | 7 | null | No Model
Wayne points out that you do not say anything about the process generating the data. In particular, you don't say whether the quantity you are tracking, the 'state' is fixed or moving. If it's fixed - the case Wayne considers - then you may as well keep a running average of all the observations and hope for the best because that's all a full-blown state space model with state estimated by KF would do for you anyway. If it's moving then you need to ask yourself how it moves. Is it a random walk? Is it constantly increasing? Does it have cyclical or other recurring structure? You need those assumptions to define the state space model for which the KF supplies state estimates.
Off by 2
When you say 'let us say that measurements can be off by 2 points' you may think you are making things easier, either to explain or implement, but you aren't. If we take the 'off by 2' idea literally, then Kalman filtering cannot do what you want (although I suppose it might possibly approximate it). This is because the KF assumes your observations are conditionally Normally distributed. Your measurement error assumption would instead be Uniform. This will lead to incorrect inferences about the state if you apply the KF directly.
Thinking About the Problem Statistically
You ask 'how do I partition the data such that I get only distinct measurements'. That's a good question if all your data are either good measurements or bad ones. However, when considering KF we assume rather that all measurements have some error, about which we have a small theory - the state space model - that contains a sub-model of this error and another sub-model of the evolution of an underlying state that generates the measurements. The KF makes inferences on the basis of this theory. Consequently, in this framework you don't privilege any measurements but rather look to the estimated state for your answers.
Suggestion
If you don't feel like specifying, or simply cannot specify, as much detail as is necessary for a complete state space model to which you can apply a KF, it might be better to back off to more 'empirical' (and easier to implement) methods based on smoothing and weighted averages of recent data. Exponential smoothing might be a helpful place to start, e.g. as described fairly clearly here: [http://www.duke.edu/~rnau/411avg.htm](http://www.duke.edu/~rnau/411avg.htm). This approach has quite close connections to KF approaches, so you could return to them easily if necessary.
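For illustration, a minimal sketch of simple exponential smoothing in R (the observations and the smoothing constant here are made up):
```
# simple exponential smoothing: s_t = alpha * x_t + (1 - alpha) * s_{t-1}
exp_smooth <- function(x, alpha = 0.3) {
  s <- numeric(length(x))
  s[1] <- x[1]                           # initialise with the first observation
  for (t in 2:length(x))
    s[t] <- alpha * x[t] + (1 - alpha) * s[t - 1]
  s
}

obs <- c(10, 12, 9, 11, 14, 13, 12)      # hypothetical noisy measurements
exp_smooth(obs)                          # smoothed estimates of the underlying level
```
Smaller values of `alpha` give smoother (longer-memory) estimates; larger values follow the latest measurements more closely.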
| null | CC BY-SA 3.0 | null | 2011-04-21T09:39:02.513 | 2011-04-21T09:39:02.513 | null | null | 1739 | null |
9820 | 1 | 9823 | null | 2 | 501 | Assume I have a trading system that I'm evaluating over a three-year period. The returns are 25%, -40% and 25%. Empirically, I can see that this system loses because at the end of three years, I have less than when I started.
Wikipedia defines `expected return` as follows:
```
E(R)= Sum: probability (in scenario i) * the return (in scenario i)
```
If we insert our values into this formula, we get the following:
```
E(R) = (.33 * .25) + (.33 * -.40) + (.33 * .25) = .033
```
So we get a positive expected return for a system that loses every time, no matter how you re-arrange the order of the returns.
What's wrong here?
To further explain how the game works, consider an initial start value of 100. What can you expect at the end of the game? There are no re-investments, no withdrawals, no dividends, and no broker or SEC fees. It is a simple game. Here is some R code to illustrate the game.
```
first <- c(.25, .25, -.4)
second <- c(-.4, .25, .25)
third <- c(.25, -.4, .25)
```
Pass any of the above sequence of returns into this function:
```
game <- function(x){
  # compound a starting value of 100 through the sequence of returns x
  start <- 100
  for(i in 1:NROW(x))
    start <- start + start*x[i]
  return(start)
}
```
NOTE: I asked a similar question on quantexchange, but I'm interested here in the math behind the expected return equation.
| Calculating expected return | CC BY-SA 3.0 | null | 2011-04-21T12:42:03.650 | 2011-04-21T13:53:43.563 | 2011-04-21T13:53:43.563 | 3306 | 3306 | [
"expected-value"
] |
9821 | 2 | null | 9813 | 2 | null | You could apply linear regression with regularization ([Lasso](http://www-stat.stanford.edu/~tibs/lasso.html)) to solve this problem. The idea would be to fit the data with piece-wise constant functions adding a penalty for every jump that occurs. The objective you have to minimize is
$x^* = \arg\min_{x\in\mathbb{R}^m} \|z - x\|_2^2 + \lambda \|\nabla x\|_1$,
where $(\nabla x)_i = x_{i-1} - x_{i}$ is the (backward) difference operator on the grid. The parameter $\lambda$ controls the tradeoff between small- vs. large intervals and low vs. high variance within the intervals.
From the solution $x^*$ you can then reconstruct the intervals $I$.
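Just as a rough illustration (not a serious solver), the objective can be minimised numerically in base R by smoothing the absolute values; the data and $\lambda$ below are made up:
```
set.seed(1)
z <- c(rnorm(20, 0.2, 0.1), rnorm(15, 1.2, 0.1), rnorm(10, 0.1, 0.1))  # piecewise-constant signal + noise
lambda <- 2
eps <- 1e-6   # smoothing constant so that optim can handle the |.| terms

# objective: ||z - x||_2^2 + lambda * sum_i |x_i - x_{i-1}| (smoothed)
obj <- function(x) sum((z - x)^2) + lambda * sum(sqrt(diff(x)^2 + eps))

fit <- optim(par = z, fn = obj, method = "BFGS", control = list(maxit = 1000))
x_star <- fit$par                                  # approximately piecewise constant
plot(z); lines(x_star, type = "s", col = "red")    # jumps in x_star give the intervals I
```
Dedicated solvers for the fused-lasso / total-variation penalty will be faster and more accurate, but the idea is the same.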
| null | CC BY-SA 3.0 | null | 2011-04-21T12:43:00.117 | 2011-04-21T12:43:00.117 | null | null | 4272 | null |
9822 | 2 | null | 9820 | 2 | null | Wikipedia is right. So are your calculations: this system should win.
However: it also assumes that your investments in each part are equal in size (because you give equal probability 1/3 to each). If this is not true, that may explain the difference from your empirical observations (perhaps you should share those numbers with us). E.g., if you invested twice as much in the part that has a negative return as in the two other parts, the probabilities become 1/4, 1/2, 1/4 and you'd have a losing system.
This is also ignoring any extra costs, so if these extra costs are bigger than your meager (?) 1/30 return, this could be another explanation (I'm not familiar with the practicalities of trading).
| null | CC BY-SA 3.0 | null | 2011-04-21T13:25:33.207 | 2011-04-21T13:25:33.207 | null | null | 4257 | null |
9823 | 2 | null | 9820 | 2 | null | The difference between the two ways to look at the return is whether you are reinvesting your gains. If you start with 100 dollars and reinvest all of the money the next year, your balances will be: 125, 75, 93.75 dollars - you lost money. However if you invest 100 dollars every year, then you get +25 - 40 +25 = +10 dollars gain. That's one way you could think about the Wikipedia formula. A better way to interpret it, though, is that you are investing in one of the three years chosen at random, and it gives you the expected return.
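A quick R illustration of the two interpretations, using the returns from the question:
```
r <- c(0.25, -0.40, 0.25)

100 * prod(1 + r)   # reinvesting the full balance each year: 93.75, a loss
sum(100 * r)        # investing a fresh 100 each year and pocketing each gain/loss: +10
mean(r)             # expected single-period return, as in the Wikipedia formula: ~0.033
```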
| null | CC BY-SA 3.0 | null | 2011-04-21T13:39:55.973 | 2011-04-21T13:39:55.973 | null | null | 279 | null |
9824 | 2 | null | 9801 | 9 | null | Here is a technique for minimizing the sum of squares in regression that actually has applications to more general settings and which I find useful.
Let's try to avoid vector-matrix calculus altogether.
Suppose we are interested in minimizing
$$
\newcommand{\err}{\mathcal{E}}\newcommand{\my}{\mathbf{y}}\newcommand{\mX}{\mathbf{X}}\newcommand{\bhat}{\hat{\beta}}\newcommand{\reals}{\mathbb{R}}
\err = (\my - \mX \beta)^T (\my - \mX \beta) = \|\my - \mX \beta\|_2^2 \> ,
$$
where $\my \in \reals^n$, $\mX \in \reals^{n\times p}$ and $\beta \in \reals^p$. We assume for simplicity that $p \leq n$ and $\mathrm{rank}(\mX) = p$.
For any $\bhat \in \reals^p$, we get
$$
\err = \|\my - \mX \bhat + \mX \bhat - \mX \beta\|_2^2 = \|\my - \mX \bhat\|_2^2 + \|\mX(\beta-\bhat)\|_2^2 - 2(\beta - \bhat)^T \mX^T (\my - \mX \bhat) \>.
$$
If we can choose (find!) a vector $\bhat$ such that the last term on the right-hand side is zero for every $\beta$, then we would be done, since that would imply that $\min_\beta \err \geq \|\my - \mX \bhat\|_2^2$.
But, $(\beta - \bhat)^T \mX^T (\my - \mX \bhat) = 0$ for all $\beta$ if and only if $\mX^T (\my - \mX \bhat) = 0$ and this last equation is true if and only if $\mX^T \mX \bhat = \mX^T \my$. So $\err$ is minimized by taking $\bhat = (\mX^T \mX)^{-1} \mX^T \my$.
---
While this may seem like a "trick" to avoid calculus, it actually has wider application and there is some interesting geometry at play.
One example where this technique makes a derivation much simpler than any matrix-vector calculus approach is when we generalize to the matrix case. Let $\newcommand{\mY}{\mathbf{Y}}\newcommand{\mB}{\mathbf{B}}\mY \in \reals^{n \times p}$, $\mX \in \reals^{n \times q}$ and $\mB \in \reals^{q \times p}$. Suppose we wish to minimize
$$
\err = \mathrm{tr}( (\mY - \mX \mB) \Sigma^{-1} (\mY - \mX \mB)^T )
$$
over the entire matrix $\mB$ of parameters. Here $\Sigma$ is a covariance matrix.
An entirely analogous approach to the above quickly establishes that the minimum of $\err$ is attained by taking
$$
\hat{\mB} = (\mX^T \mX)^{-1} \mX^T \mY \>.
$$
That is, in a regression setting where the response is a vector with covariance $\Sigma$ and the observations are independent, the OLS estimate is attained by doing $p$ separate linear regressions on the components of the response.
| null | CC BY-SA 3.0 | null | 2011-04-21T14:10:55.193 | 2011-04-21T14:32:57.543 | 2011-04-21T14:32:57.543 | 2970 | 2970 | null |
9825 | 1 | null | null | 21 | 8224 | I have some data that is highly correlated. If I run a linear regression I get a regression line with a slope close to one (= 0.93). What I'd like to do is test if this slope is significantly different from 1.0. My expectation is that it is not. In other words, I'd like to change the null hypothesis of the linear regression from a slope of zero to a slope of one. Is this a sensible approach? I'd also really appreciate it if you could include some R code in your answer so I could implement this method (or a better one you suggest!). Thanks.
| Changing null hypothesis in linear regression | CC BY-SA 3.0 | null | 2011-04-21T14:12:41.697 | 2021-06-18T22:39:00.810 | 2011-04-21T14:16:00.223 | null | 4274 | [
"regression",
"correlation",
"hypothesis-testing"
] |
9826 | 2 | null | 7505 | 1 | null | When you move from testing a difference in the conversion rate to testing the difference in volume, the characteristics of your underlying data are changing.
The conversion rates you were testing were proportions based on the number of "successes" out of the number of "trials", which means they lie in the interval [0,1]. Such rates can be approximated by binomial distributions. As such, the sample variance/standard deviation required for confidence intervals and hypothesis testing simplifies to a term related to the probability of a conversion, p*(1-p).
On the other hand, comparing two sales volumes, which would not lie on the [0,1] interval, means you would not be using binomials. Therefore you would want to test the difference between two means rather than two proportions. You would need to define the means you wish to test based on the business question you want to answer (for example: is the test group's daily average of revenue, visitors, or some other metric significantly different from the control's?).
Because the data you are using are not on the [0,1] interval, your variance will not be limited in the same manner as it is when using proportions. Therefore, you will need to find the sample variances of the two groups using whatever data you were using to construct the means (both the data and the time period). Depending on the circumstances of your data acquisition, you may be able to continue to use Excel, or it may be better to compute your variances with another package.
Assuming the two groups have the same sample sizes, you should be able to average the two variances together to find the variance that would be used in either the confidence interval or the test statistic calculation.
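For example, a Welch two-sample t-test on daily revenue could be run in R along these lines (the revenue figures below are made up):
```
set.seed(42)
control_rev <- rnorm(30, mean = 5000, sd = 800)   # hypothetical daily revenue, control group
test_rev    <- rnorm(30, mean = 5300, sd = 800)   # hypothetical daily revenue, test group

# Welch's t-test does not assume equal variances in the two groups
t.test(test_rev, control_rev)
```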
| null | CC BY-SA 3.0 | null | 2011-04-21T14:16:03.350 | 2011-04-21T15:05:07.710 | 2011-04-21T15:05:07.710 | 4260 | 4260 | null |
9827 | 2 | null | 9825 | 11 | null | ```
set.seed(20); y = rnorm(20); x = y + rnorm(20, 0, 0.2) # generate correlated data
summary(lm(y ~ x)) # original model
summary(lm(y ~ x, offset= 1.00*x)) # testing against slope=1
summary(lm(y-x ~ x)) # testing against slope=1
```
Outputs:
```
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.01532 0.04728 0.324 0.75
x 0.91424 0.04128 22.148 1.64e-14 ***
```
```
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.01532 0.04728 0.324 0.7497
x -0.08576 0.04128 -2.078 0.0523 .
```
```
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.01532 0.04728 0.324 0.7497
x -0.08576 0.04128 -2.078 0.0523 .
```
| null | CC BY-SA 3.0 | null | 2011-04-21T14:25:14.817 | 2011-04-21T14:38:38.310 | 2011-04-21T14:38:38.310 | 3911 | 3911 | null |
9828 | 2 | null | 9825 | 7 | null | Your hypothesis can be expressed as $R\beta=r$, where $\beta$ is the vector of your regression coefficients and $R$ is the restriction matrix with $r$ the restrictions. If our model is
$$y=\beta_0+\beta_1x+u$$
then for the hypothesis $\beta_1=1$, $R=[0,1]$ and $r=1$.
For these types of hypotheses you can use the `linearHypothesis` function from the package car:
```
set.seed(20); y = rnorm(20); x = y + rnorm(20, 0, 0.2) # generate correlated data
library(car) # provides linearHypothesis()
mod <- lm(y ~ x) # original model
> linearHypothesis(mod,matrix(c(0,1),nrow=1),rhs=c(1))
Linear hypothesis test
Hypothesis:
x = 1
Model 1: restricted model
Model 2: y ~ x
Res.Df RSS Df Sum of Sq F Pr(>F)
1 19 0.96022
2 18 0.77450 1 0.18572 4.3162 0.05234 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
| null | CC BY-SA 3.0 | null | 2011-04-21T14:43:27.883 | 2011-04-21T14:43:27.883 | null | null | 2116 | null |
9830 | 2 | null | 9817 | 2 | null | The answer depends on your likelihood for the data $X$
1) Joint Gaussian
You can fit the model by sequentially fitting the conditional distribution of each node $v$ given its parents $\mathrm{pa}(v)$, in this case:
$X_v | X_{\mathrm{pa}(v)} \sim N\big(\mu_v + \beta_v^\top [X_{\mathrm{pa}(v)} - \mu_{\mathrm{pa}(v)}], \sigma_v^2 \big)$
Then for each node you need 1 parameter $\mu_v$ for the mean, 1 parameter $\sigma_v^2$ for the conditional variance and a vector $\beta_v$ of length $|\mathrm{pa}(v)|$.
So the total number of parameters needed for a graph $G=(V,E)$ is $2|V| + |E|$.
2) Discrete
Suppose the variable $X_v$ for each node $v$ is discrete with $n_v$ possible outcomes. Then for each possible outcome in the parent space, you require $n_v-1$ parameters. So for each node, you will need:
$(n_v-1) \prod_{u \in \mathrm{pa}(v)} n_u$
parameters. If each node has 2 possible outcomes, you will need:
$\sum_{v} 2^{|\mathrm{pa}(v)|}$
total parameters, unless you make some simplifying assumptions, such as proportional-odds (i.e. logistic regression).
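A small R sketch of these two counts (the example graph and node cardinalities are made up):
```
# parents is a named list: parents[[v]] gives the parent nodes of v
parents <- list(A = character(0), B = "A", C = c("A", "B"))

# Gaussian case: 2|V| + |E|
n_edges <- sum(lengths(parents))
n_gaussian <- 2 * length(parents) + n_edges        # 2*3 + 3 = 9

# Discrete case: sum over nodes of (n_v - 1) * prod of parent cardinalities
card <- c(A = 2, B = 3, C = 2)                     # number of outcomes per node
n_discrete <- sum(sapply(names(parents), function(v)
  (card[v] - 1) * prod(card[parents[[v]]])))       # 1 + 2*2 + 1*(2*3) = 11

n_gaussian; n_discrete
```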
| null | CC BY-SA 3.0 | null | 2011-04-21T15:07:33.127 | 2011-04-21T15:07:33.127 | null | null | 495 | null |
9831 | 2 | null | 9825 | 3 | null | The point of testing is that you want to reject your null hypothesis, not confirm it. The fact that you find no significant difference is in no way proof that there is no difference. For that, you'll have to define what effect size you deem reasonable to reject the null.
Testing whether your slope is significantly different from 1 is not that difficult; you just test whether the difference $slope - 1$ differs significantly from zero. By hand this would be something like:
```
set.seed(20); y = rnorm(20); x = y + rnorm(20, 0, 0.2)
model <- lm(y~x)
coefx <- coef(summary(model))[2,1]
seslope <- coef(summary(model))[2,2]
DF <- model$df.residual
# normal test
p <- (1 - pt(coefx/seslope,DF) )*2
# test whether different from 1
p2 <- (1 - pt(abs(coefx-1)/seslope,DF) )*2
```
Now you should be aware of the fact that the effect size at which a difference becomes significant is
```
> qt(0.975,DF)*seslope
[1] 0.08672358
```
provided that we have a decent estimator of the standard error of the slope. Hence, if you decide that a significant difference should only be detected from 0.1 onwards, you can calculate the necessary DF as follows:
```
optimize(
function(x)abs(qt(0.975,x)*seslope - 0.1),
interval=c(5,500)
)
$minimum
[1] 6.2593
```
Mind you, this is pretty dependent on the estimate of seslope. To get a better estimate of seslope, you could do a resampling of your data. A naive way would be:
```
n <- length(y)
seslope2 <-
mean(
replicate(n,{
id <- sample(seq.int(n),1)
model <- lm(y[-id]~x[-id])
coef(summary(model))[2,2]
})
)
```
Putting seslope2 into the optimization function returns:
```
$minimum
[1] 6.954609
```
All this will tell you that your dataset will return a significant result faster than you deem necessary, and that you only need 7 degrees of freedom (in this case 9 observations) if you want to be sure that non-significant means what you want it to mean.
| null | CC BY-SA 3.0 | null | 2011-04-21T15:08:24.687 | 2011-04-21T15:08:24.687 | null | null | 1124 | null |
9832 | 2 | null | 9825 | 6 | null | It seems you're still trying to reject a null hypothesis. There are loads of problems with that, not the least of which is that it's possible that you don't have enough power to see that you're different from 1. It sounds like you don't care that the slope is 0.07 different from 1. But what if you can't really tell? What if you're actually estimating a slope that varies wildly and may actually be quite far from 1, with something like a confidence interval of ±0.4? Your best tactic here is not changing the null hypothesis but actually speaking reasonably about an interval estimate. If you apply the command confint() to your model you can get a 95% confidence interval around your slope. Then you can use this to discuss the slope you did get. If 1 is within the confidence interval you can state that it is within the range of values you believe likely to contain the true value. But more importantly you can also state what that range of values is.
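For example, with the simulated data used in the other answers:
```
set.seed(20); y = rnorm(20); x = y + rnorm(20, 0, 0.2)
model <- lm(y ~ x)
confint(model)           # 95% confidence intervals for the intercept and the slope
confint(model)["x", ]    # check whether 1 lies inside the slope interval
```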
| null | CC BY-SA 3.0 | null | 2011-04-21T15:08:36.183 | 2011-04-21T15:08:36.183 | null | null | 601 | null |
9833 | 1 | null | null | 27 | 15003 | In my elementary statistics course, I learnt how to construct a 95% confidence interval, e.g. for the population mean $\mu$, based on asymptotic normality for "large" sample sizes. Apart from resampling methods (such as the bootstrap), there is another approach based on the "profile likelihood". Could someone elucidate this approach?
In what situations are the 95% CIs constructed from asymptotic normality and from the profile likelihood comparable? I could not find any references on this topic; any suggested references, please? Why isn't it more widely used?
| Constructing confidence intervals based on profile likelihood | CC BY-SA 3.0 | null | 2011-04-21T15:11:29.673 | 2016-09-02T13:49:32.110 | 2016-09-02T13:49:32.110 | 28666 | null | [
"confidence-interval",
"profile-likelihood"
] |
9835 | 1 | 9847 | null | 4 | 226 | Do there exist any systems for symbolically solving expectations?
This is sort of a follow-up to my question [List of Tricks for Solving Messy Expectations?](https://stats.stackexchange.com/q/9809/3577) Basically, I'm looking for ways to solve a messy expectation after I've exhausted all obvious routes.
EDIT: BACKGROUND
I'm trying to solve the following for $\alpha$ (constrained to be greater than 0) as a function of $\sigma_X^2$, $\sigma_Y^2$, and $p$
$E\left[\ln(F) F^\alpha X^2 (X + Y)^2 + \ln(F) F^{2\alpha}(X + Y)^4\right] = 0$
where:
$Y \sim N(0,\sigma_Y^2)$
$ X = \left\{ \begin{array}{cc}
N(0,\sigma_X^2) & p \\
0 & (1-p)
\end{array}\right.$ where it's assumed $\sigma_X^2 \gg \sigma_Y^2$ and that $p$ is very small. (i.e., $X$ is a jump process with most of its mass at 0)
$F = F_{|Z|}(|X+Y|)$ where $Z \sim N(0,\sigma_Y^2)$ (i.e., $F$ is the CDF for the absolute value of a normal)
EDIT: EVEN MORE BACKGROUND
The equation I'm trying to solve above is the first order condition for the min-MSE problem:
$\min_{\alpha > 0} E\left[\left(X^2 - \widehat{X^2}\right)^2\right]$
where $ \widehat{X^2} = F_{|Z|}\left(|R|\right)^{\alpha}R^2$ and $R = X + Y\,$ is the only observed variable.
Basically, I'm trying to estimate the square of the jump, $X^2$ (given that I can only observe the aggregated process $R$) by smoothing down $R^2$. If $|R|$ is large, the smoothing function $F_{|Z|}\left(|R|\right)^{\alpha}$ should be close to 1 and the estimate of $X^2$ would be close to $R^2$. If $|R|$ is small, the estimate of $X^2$ would be close to 0.
| Systems for symbolically solving expectations? | CC BY-SA 3.0 | null | 2011-04-21T16:31:04.693 | 2012-09-07T23:52:43.753 | 2017-04-13T12:44:46.680 | -1 | 3577 | [
"expected-value"
] |
9836 | 1 | null | null | 8 | 12819 | Can somebody give me references (book/online resource) on using R for Marketing Mix Modelling?
| Market mix modelling with R | CC BY-SA 3.0 | null | 2011-04-21T16:44:01.063 | 2022-08-30T14:59:54.507 | 2022-08-30T14:59:54.507 | 11887 | 4278 | [
"r",
"references",
"marketing"
] |
9837 | 2 | null | 9833 | 27 | null | In general, the confidence interval based on the standard error strongly depends on the assumption of normality for the estimator. The "profile likelihood confidence interval" provides an alternative.
I am pretty sure you can find documentation for this. For instance, [here](http://people.upei.ca/hstryhn/stryhn208.pdf) and references therein.
Here is a brief overview.
Let us say the data depend upon two (vectors of) parameters, $\theta$ and $\delta$, where $\theta$ is of interest and $\delta$ is a nuisance parameter.
The profile likelihood of $\theta$ is defined by
$L_p(\theta) = \max_{\delta} L(\theta, \delta)$
where $L(\theta, \delta)$ is the 'complete likelihood'. $L_p(\theta)$ no longer depends on $\delta$ since it has been profiled out.
Let a null hypothesis be $H_0 : \theta = \theta_0$ and the likelihood ratio statistic be
$LR = 2 (\log L_p(\hat{\theta}) - \log L_p(\theta_0))$
where $\hat{\theta}$ is the value of $\theta$ that maximises the profile likelihood $L_p(\theta)$.
A "profile likelihood confidence interval" for $\theta$ consists of those values $\theta_0$ for which the test is not significant.
| null | CC BY-SA 3.0 | null | 2011-04-21T17:12:45.203 | 2011-04-21T17:34:22.940 | 2011-04-21T17:34:22.940 | 3019 | 3019 | null |
9838 | 2 | null | 8784 | 3 | null | Why not? The models are estimating how much 1 unit of change in any model predictor will influence the probability of "1" for the outcome variable. I'll assume the models are the same, that is, that they have the same predictors in them. The most informative way to compare the relative magnitudes of any given predictor in the 2 models is to use the models to calculate (either deterministically or, better, by simulation) how much some meaningful increment of change (e.g., +/- 1 SD) in the predictor affects the probabilities of the respective outcome variables, and compare them! You'll want to determine confidence intervals for the two estimates as well, so you can satisfy yourself that the difference is "significant," practically and statistically.
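A rough R sketch of the deterministic version of that comparison (the data are simulated only to show the mechanics; in practice you would use your two fitted models):
```
set.seed(1)
x  <- rnorm(200)
y1 <- rbinom(200, 1, plogis(-0.5 + 0.8 * x))   # outcome for model 1
y2 <- rbinom(200, 1, plogis(-0.5 + 0.3 * x))   # outcome for model 2

m1 <- glm(y1 ~ x, family = binomial)
m2 <- glm(y2 ~ x, family = binomial)

# change in predicted probability for a +1 SD move in x from its mean
prob_change <- function(m) {
  predict(m, data.frame(x = mean(x) + sd(x)), type = "response") -
    predict(m, data.frame(x = mean(x)), type = "response")
}
prob_change(m1); prob_change(m2)   # compare the two effects on the probability scale
```
Bootstrapping this quantity would give the confidence intervals mentioned above.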
| null | CC BY-SA 3.0 | null | 2011-04-21T17:48:27.180 | 2011-04-22T23:30:14.527 | 2011-04-22T23:30:14.527 | 11954 | 11954 | null |
9839 | 1 | null | null | 5 | 2612 | I am trying to write a prediction algorithm for a set of temperature data. I settled on Holt-Winters since it seemed to be a simple time series prediction algorithm and I can easily code it up in python to understand what is going on with it.
When I plot the smoothing function as it learns, this is how it looks. As you can see, it follows the original curve pretty well.

But when I try to plot a future curve for one year (365 days) -- it really falls down and dies.

And to me, intuitively, it makes sense why it dies like that: if you look at the final prediction equation of Holt-Winters, it really only makes use of the very last point in both the curve smoothing and the trend smoothing. And we know that exponential smoothing has a very short memory because of the whole exponential thing.
So I am wondering how one actually goes about using Holt-Winters for prediction (specifically for 365-day seasonal data like this).
If you know of any other methods I can look at for this domain (temperature prediction), please let me know. I come from Python, so it would be very useful if you could point me to resources such as libraries, etc., to get the job done.
| Predicting temperature time series with Holt-Winters | CC BY-SA 3.0 | null | 2011-04-21T17:49:53.597 | 2017-12-11T23:35:11.010 | 2017-12-11T23:35:11.010 | 128677 | null | [
"time-series",
"exponential-smoothing"
] |
9840 | 2 | null | 9835 | 2 | null | My favourite and free symbolic algebra system is [Sage](http://www.sagemath.org/), currently available as a Linux installer and via a [web interface](http://www.sagenb.org/). It is powerful, but I haven't tested how good it is in solving expectations.
| null | CC BY-SA 3.0 | null | 2011-04-21T18:26:17.807 | 2011-04-21T22:08:27.317 | 2011-04-21T22:08:27.317 | 3911 | 3911 | null |
9841 | 1 | 9902 | null | 5 | 218 | We have data on the day in which a butterfly pupates (forms a cocoon) in the summer/fall of 2 different pairs of years (2005-2006 vs 2009-2010). At the time that the pupa forms it can either be in diapause (suspended animation) or not.
Individual butterfly caterpillars were randomly selected from a wild population and observed in outdoor enclosures until they pupated. At this time the pupa was determined to be either in diapause or not.
We would like to know if there is a shift in the probability that a butterfly pupa will be in diapause or not on a given day between years. In other words, does a butterfly pupa have a significantly different probability of being in diapause later (or earlier) in 2005 vs 2006?
We have analyzed each pair of years using separate multiple logistic regressions with Day of pupation and Year as factors and are interpreting a significant Year effect as a significant shift in diapause probability on a given day between years.
Is this the appropriate way to approach this problem?
Thank you.
Data Format:
```
Individual Diapause Pupation_Day Year
1 1 200 2005
2 0 201 2005
. . . .
. . . .
. . . .
n 1 300 2006
```
| What is the appropriate way to test for a shift in probability using multiple logistic regression? | CC BY-SA 3.0 | null | 2011-04-21T18:42:44.193 | 2011-04-23T11:22:22.667 | 2011-04-21T20:11:02.477 | 4048 | 4048 | [
"logistic"
] |
9842 | 1 | 9869 | null | 16 | 8409 | I need some resources to get started on using neural networks for time series forecasting. I am wary of implementing some paper and then finding out that they have greatly overstated the potential of their methods. So if you have experience with the methods you are suggesting, that would be even more awesome.
| Getting started with neural networks for forecasting | CC BY-SA 3.0 | null | 2011-04-21T19:36:55.910 | 2019-05-29T09:24:16.947 | 2019-05-29T09:24:16.947 | 53690 | null | [
"time-series",
"neural-networks",
"forecasting",
"references"
] |
9843 | 2 | null | 9839 | 3 | null | Jason,
Holt-Winters is a particular model form, normally additive or multiplicative, and apparently may not be applicable to your particular time series. In general a Transfer Function incorporating both stochastic and deterministic structure has been found to be a powerful way of handling problems like this. The problem you face, I believe, may be better handled with a mixed-frequency model that might include one or more trends and/or level shifts and perhaps either weekly or monthly effects to deal with the "seasonal component". Furthermore the model might exclude any identifiable anomalies so as to get a more robust "signal". I suggest that you post your original data to the web and have some of the readers use their methods/software to try to model the data. I will try and do the same. It might make for a very interesting comparison!
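For what it's worth, one point of comparison could be a baseline Holt-Winters fit and one-year forecast in R, roughly as follows (the daily temperature series here is simulated, just to show the mechanics):
```
set.seed(1)
days  <- 1:(365 * 3)
temps <- 15 + 10 * sin(2 * pi * days / 365) + rnorm(length(days), 0, 2)  # three years of fake daily temps

temps_ts <- ts(temps, frequency = 365)   # declare the yearly seasonality
fit <- HoltWinters(temps_ts)             # level + trend + seasonal smoothing
fc  <- predict(fit, n.ahead = 365)       # forecast one year ahead
plot(fit, fc)
```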
| null | CC BY-SA 3.0 | null | 2011-04-21T19:40:45.750 | 2011-04-21T19:40:45.750 | null | null | 3382 | null |
9844 | 1 | 19132 | null | 7 | 593 | While studying machine learning, I've read the following statement:
>
The kernel $K(x,y)=(x\cdot y+1)^d$ , for $x, y \in \mathbb{R}^p$, has $M={p+d \choose d}$ eigenfunctions that span the space of polynomials in $\mathbb{R}^p$ of total degree $d$.
I do not understand where the $M={p+d \choose d}$ comes from. What exactly does the degree $d$ mean here?
| Number of eigenfunctions for kernel | CC BY-SA 3.0 | null | 2011-04-21T20:05:29.060 | 2015-04-19T20:51:00.977 | 2015-04-19T20:51:00.977 | 9964 | 3026 | [
"machine-learning",
"svm",
"kernel-trick"
] |
9845 | 1 | null | null | -1 | 935 | I have 1000 data points for two continuous variables (pressure and temperature). I'd like to calculate a Bayesian probability relating the two variables.
In other words, I would like to determine the probability that temperature will increase/decrease if pressure is changed.
| How to calculate Bayesian probability between two variables? | CC BY-SA 3.0 | null | 2011-04-21T21:22:26.710 | 2012-09-02T02:55:25.620 | 2012-09-02T02:55:25.620 | 3826 | 4282 | [
"probability",
"correlation",
"bayesian"
] |
9846 | 2 | null | 9845 | 2 | null | Relationships between physical quantities (like pressure and temperature) may often be described using functions or equations, and sometimes the measurement error is small – this might be the case here as well. If so, you could derive the type of relationship and the specific function from physics knowledge, and then use statistical (including Bayesian) techniques to determine the values and uncertainties of the parameters.
If the error is large, the relationship between the variables will be less apparent and a simpler function (compared to the physical equation behind the phenomenon) could be fitted to the data. Linear regression is a widely used, simple method.
You tagged your question “machine-learning”, so the relationship between your variables may be more complex, but you may not be able to derive it from physics. In this case you can choose from a wide spectrum of non-linear machine learning and statistical techniques.
In any case, I suggest you plot your data using a scatterplot and think about the possible mechanisms relating the variables. Although machine learning can be Bayesian too, the best Bayesian approach may be to fit plausible models to the data.
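For that first look (the scatterplot and a simple fit), in R this could be something like the following, with made-up pressure and temperature readings:
```
set.seed(1)
pressure_kpa <- runif(1000, 90, 110)
temp_c <- 0.8 * pressure_kpa + rnorm(1000, 0, 5)   # hypothetical noisy relationship

plot(pressure_kpa, temp_c)            # look at the shape of the relationship first
fit <- lm(temp_c ~ pressure_kpa)      # simple linear regression as a starting point
summary(fit)
```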
| null | CC BY-SA 3.0 | null | 2011-04-21T21:54:04.843 | 2011-04-21T21:54:04.843 | null | null | 3911 | null |
9847 | 2 | null | 9835 | 4 | null | Mathematica will do integrals (and simplify the results) like no tomorrow. You have to be a little careful specifying your assumptions - that is, you should specify all of them - but it works quite well. If you're a student then your university may well have a site license but if you're just using it for a couple of problems then it's probably not worth picking it up.
Wolfram Alpha (http://www.wolframalpha.com/) is free and has many of the same facilities but it will choke on some heavier problems. You can also pose your problems in more natural language ("integrate ... wrt x from 0 to infinity") so it's nice for one-offs.
| null | CC BY-SA 3.0 | null | 2011-04-21T23:13:24.443 | 2011-04-21T23:13:24.443 | null | null | 26 | null |
9848 | 2 | null | 9845 | 3 | null | I think there is a relationship between pressure, temperature and volume (if my memory of high school chemistry serves me correctly - confirmed by [wikipedia](http://en.wikipedia.org/wiki/Gas_laws)):
$$T=\frac{PV}{kN}$$
$$\begin{array}{l}
P=\text{Pressure} \\
V=\text{Volume} \\
k=\text{Boltzmann's constant}\\
N=\text{Number of molecules}\\
T=\text{Temperature}
\end{array}
$$
Note that this is an approximate relation; [this page](http://en.wikipedia.org/wiki/Real_gas) shows some more complex ways of modeling the relationship between pressure and temperature. The principle would be the same as below, just with different (harder) equations. To remind us that this is the background information, an $I$ will be put as part of the conditions.
Now you have some data on part of this equation, but I'm sure you would also have some less precise information on Volume. If you were to take the logarithm of both sides, you get:
$$log(T)=log(P)+log(V)-log(k)-log(N)$$
A very simple procedure is to assume that the volume and number of molecules are approximately constant, and thus we can model this part as a normal distribution. So we have:
$$log(T_{i})=log(P_{i})-log(k)+n_{i}$$
where we assume $n_{i}\sim N(\mu,\sigma)$. You can look at the noise level estimate $\sigma$ to decide if it's worthwhile fitting a more complex model. So this model has two parameters which you need to estimate.
So you have a new starting pressure $P_{old}$ and starting temperature $T_{old}$ and the pressure then changes to $P_{new}=P_{old}(1+\delta_{P})$, with a corresponding change in temperature $T_{new}=T_{old}(1+\delta_{T})$. You want the probability that $-1<\delta_{T}<0$. Note that this would be conditional on $I$, that you are making predictions in similar Volumes and number of molecules to that which was present when you took the data. The golden rule is to calculate this probability, conditional on what you know, integrating or averaging over what you don't know. I'll add more details later.
MORE DETAILS
Any $T_{new}$ will be greater than $T_{old}$ when $log(T_{new})>log(T_{old})$, substituting in the model equation and simplifying gives:
$$log(1+\delta_{P})>n_{old}-n_{new}$$
Thus we require the probability
$$P(log(1+\delta_{P})>n_{old}-n_{new}|n_{1},\dots,n_{N},I)$$
Fitting the model is a straightforward exercise. You may have reasonable prior knowledge about the parameters, but since you have lots of data, it isn't necessary and a non-informative prior will do fine. This is $p(\mu,\sigma|I)\propto\frac{1}{\sigma}$. And the likelihood of the data is just the probability that the noise will make up the difference:
$$p(n_{1},\dots,n_{N}|\mu,\sigma,I)\propto\frac{1}{\sigma^{N}}exp\left(-\frac{\sum_{i}n_{i}^{2}}{2\sigma^{2}}\right)=\frac{1}{\sigma^{N}}exp\left(-\frac{\sum_{i}\left(d_{i}-\mu\right)^{2}}{2\sigma^{2}}\right)$$
Where
$$d_{i}=log\left[\frac{T_{i}k}{P_{i}}\right]$$
Now you will only need 3 numbers from your data in this solution: the sample size $N$, the mean of the $d_{i}$ denoted by $\overline{d}=\frac{1}{N}\sum_{i}d_{i}$, and their variance, $s_{d}^{2}=\frac{1}{N}\sum_{i}(d_{i}-\overline{d})^{2}=\overline{d^{2}}-\overline{d}^{2}$. The posterior predictive distribution for the noise is given by:
$$p(n_{old},n_{new}|N,\overline{d},s_{d}^{2},I)=\int_{0}^{\infty}\int_{-\infty}^{\infty}p(n_{old},n_{new}|\mu,\sigma,I)p(n_{1},\dots,n_{N}|\mu,\sigma,I)p(\mu,\sigma|I)d\mu d\sigma$$
$$=St_{2}\left(\begin{pmatrix}n_{new}\\n_{old}\end{pmatrix}|\begin{pmatrix}\overline{d}\\\overline{d}\end{pmatrix},\frac{s_{d}^{2}}{N-1}\begin{pmatrix} N+1 & 1\\ 1 & N+1 \end{pmatrix},N-1\right)$$
where $St(x|\mu,\Sigma,\nu)$ is a [bivariate student distribution](http://en.wikipedia.org/wiki/Multivariate_Student_distribution) with location $\mu$, scatter matrix $\Sigma$, and $\nu$ degrees of freedom. Now we simply integrate over the region that we want the probability
$$P(log(1+\delta_{P})>n_{old}-n_{new}|n_{1},\dots,n_{N},I)$$
$$=\int_{-\infty}^{\infty}\int_{-\infty}^{log(1+\delta_{P})+t}St_{2}\left(\begin{pmatrix}t\\s\end{pmatrix}|\begin{pmatrix}\overline{d}\\\overline{d}\end{pmatrix},\frac{s_{d}^{2}}{N-1}\begin{pmatrix} N+1 & 1\\ 1 & N+1 \end{pmatrix},N-1\right)dsdt$$
This integral can be done using the t-tables, but at the moment I can't remember exactly how to do it. You get a 1-D integral over a regularised incomplete beta function.
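Alternatively, the probability can be approximated by Monte Carlo; a sketch in R using `rmvt` from the mvtnorm package (the summary statistics and $\delta_{P}$ below are made up):
```
library(mvtnorm)

N <- 1000; dbar <- 0.5; s2_d <- 0.04   # hypothetical sample size, mean and variance of the d_i
delta_P <- -0.1                        # proposed relative change in pressure

S <- s2_d / (N - 1) * matrix(c(N + 1, 1, 1, N + 1), 2, 2)          # scatter matrix
draws <- rmvt(1e5, sigma = S, df = N - 1, delta = c(dbar, dbar))   # columns: (n_new, n_old)

# P( log(1 + delta_P) > n_old - n_new )
mean(log(1 + delta_P) > draws[, 2] - draws[, 1])
```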
| null | CC BY-SA 3.0 | null | 2011-04-22T01:20:12.637 | 2011-04-23T14:37:22.967 | 2011-04-23T14:37:22.967 | 2392 | 2392 | null |
9850 | 1 | null | null | 16 | 47783 | I tried clustering a set of data (a set of marks) and got 2 clusters. I would like to represent it graphically, but I'm a bit confused about the representation since I don't have (x,y) coordinates.
I'm also looking for a MATLAB/Python function for doing so.
EDIT
I think posting the data makes the question clearer. I have two clusters that I made using k-means clustering in Python (not using SciPy). They are:
```
class 1: a=[3222403552.0, 3222493472.0, 3222491808.0, 3222489152.0, 3222413632.0,
3222394528.0, 3222414976.0, 3222522768.0, 3222403552.0, 3222498896.0, 3222541408.0,
3222403552.0, 3222402816.0, 3222588192.0, 3222403552.0, 3222410272.0, 3222394560.0,
3222402704.0, 3222298192.0, 3222409264.0, 3222414688.0, 3222522512.0, 3222404096.0,
3222486720.0, 3222403968.0, 3222486368.0, 3222376320.0, 3222522896.0, 3222403552.0,
3222374480.0, 3222491648.0, 3222543024.0, 3222376848.0, 3222403552.0, 3222591616.0,
3222376944.0, 3222325568.0, 3222488864.0, 3222548416.0, 3222424176.0, 3222415024.0,
3222403552.0, 3222407504.0, 3222489584.0, 3222407872.0, 3222402736.0, 3222402032.0,
3222410208.0, 3222414816.0, 3222523024.0, 3222552656.0, 3222487168.0, 3222403728.0,
3222319440.0, 3222375840.0, 3222325136.0, 3222311568.0, 3222491984.0, 3222542032.0,
3222539984.0, 3222522256.0, 3222588336.0, 3222316784.0, 3222488304.0, 3222351360.0,
3222545536.0, 3222323728.0, 3222413824.0, 3222415120.0, 3222403552.0, 3222514624.0,
3222408000.0, 3222413856.0, 3222408640.0, 3222377072.0, 3222324304.0, 3222524016.0,
3222324000.0, 3222489808.0, 3222403552.0, 3223571920.0, 3222522384.0, 3222319712.0,
3222374512.0, 3222375456.0, 3222489968.0, 3222492752.0, 3222413920.0, 3222394448.0,
3222403552.0, 3222403552.0, 3222540576.0, 3222407408.0, 3222415072.0, 3222388272.0,
3222549264.0, 3222325280.0, 3222548208.0, 3222298608.0, 3222413760.0, 3222409408.0,
3222542528.0, 3222473296.0, 3222428384.0, 3222413696.0, 3222486224.0, 3222361280.0,
3222522640.0, 3222492080.0, 3222472144.0, 3222376560.0, 3222378736.0, 3222364544.0,
3222407776.0, 3222359872.0, 3222492928.0, 3222440496.0, 3222499408.0, 3222450272.0,
3222351904.0, 3222352480.0, 3222413952.0, 3222556416.0, 3222410304.0, 3222399984.0,
3222494736.0, 3222388288.0, 3222403552.0, 3222323824.0, 3222523616.0, 3222394656.0,
3222404672.0, 3222405984.0, 3222490432.0, 3222407296.0, 3222394720.0, 3222596624.0,
3222597520.0, 3222598048.0, 3222403552.0, 3222403552.0, 3222403552.0, 3222324448.0,
3222408976.0, 3222448160.0, 3222366320.0, 3222489344.0, 3222403552.0, 3222494480.0,
3222382032.0, 3222450432.0, 3222352000.0, 3222352528.0, 3222414032.0, 3222728448.0,
3222299456.0, 3222400016.0, 3222495056.0, 3222388848.0, 3222403552.0, 3222487568.0,
3222523744.0, 3222394624.0, 3222408112.0, 3222406496.0, 3222405616.0, 3222592160.0,
3222549360.0, 3222438560.0, 3222597024.0, 3222597616.0, 3222598128.0, 3222403552.0,
3222403552.0, 3222403552.0, 3222499056.0, 3222408512.0, 3222402064.0, 3222368992.0,
3222511376.0, 3222414624.0, 3222554816.0, 3222494608.0, 3222449792.0, 3222351952.0,
3222352272.0, 3222394736.0, 3222311856.0, 3222414288.0, 3222402448.0, 3222401056.0,
3222413568.0, 3222298848.0, 3222297184.0, 3222488000.0, 3222490528.0, 3222394688.0,
3222408224.0, 3222406672.0, 3222404896.0, 3222443120.0, 3222403552.0, 3222596400.0,
3222597120.0, 3222597712.0, 3222400896.0, 3222403552.0, 3222403552.0, 3222403552.0,
3222299200.0, 3222321296.0, 3222364176.0, 3222602208.0, 3222513040.0, 3222414656.0,
3222564864.0, 3222407904.0, 3222449984.0, 3222352096.0, 3222352432.0, 3222452832.0,
3222368560.0, 3222414368.0, 3222399376.0, 3222298352.0, 3222573152.0, 3222438080.0,
3222409168.0, 3222523488.0, 3222394592.0, 3222405136.0, 3222490624.0, 3222406928.0,
3222407104.0, 3222442464.0, 3222403552.0, 3222596512.0, 3222597216.0, 3222597968.0,
3222438208.0, 3222403552.0, 3222403552.0, 3222403552.0]
class 2: b=[3498543128.0, 3498542920.0, 3498543252.0, 3498543752.0, 3498544872.0,
3498544528.0, 3498543024.0, 3498542548.0, 3498542232.0]
```
I would like to plot it. I tried the following and got the following result when I plot `a` and `b`.
```
pylab.plot(a,'x')
pylab.plot(b,'o')
pylab.show()
```

Can I get a better visualization of the clustering?
| How to plot data output of clustering? | CC BY-SA 3.0 | null | 2011-04-22T02:22:15.400 | 2017-08-11T09:10:39.017 | 2014-06-19T03:16:03.887 | 7290 | 2721 | [
"clustering",
"data-visualization",
"python"
] |
9851 | 1 | null | null | 5 | 432 | Where can I obtain more weather data?
NOAA has some:
- http://weather.noaa.gov/pub/SL.us008001/DF.an/DC.sflnd/DS.synop/
- http://weather.noaa.gov/pub/SL.us008001/DF.an/DC.sflnd/DS.metar/
- http://weather.noaa.gov/pub/SL.us008001/DF.an/DC.sfmar/DS.dbuoy/
- http://weather.noaa.gov/pub/SL.us008001/DF.an/DC.sfmar/DS.ships/
but I also want amateur observations. Specifically:
- Where can I get APRS WX data? I know that "telnet rotate.aprs.net 10152" with "user READONLY pass -1" will give me lots of information, but: Can I filter it down to just weather reports? Is this all APRS WX data, or just a subset?
- wunderground.com has over 17K stations at http://www.wunderground.com/weatherstation/index.asp reporting to them. Is this data available publicly on some "free weather exchange network"? I realize I can get it from wunderground.com, but would prefer a "legit" source.
- Are there other sources of weather data out there? Is there any one source that conglomerates multiple other sources?
I recently found RAWS: [http://raws.wrh.noaa.gov/rawsobs.html](http://raws.wrh.noaa.gov/rawsobs.html), but I sense there is more out there.
| Where can I obtain more weather data? | CC BY-SA 3.0 | null | 2011-04-22T02:35:47.893 | 2013-09-09T01:17:23.823 | 2013-09-09T01:17:23.823 | 7290 | null | [
"data-mining",
"dataset"
] |
9852 | 1 | 9953 | null | 9 | 5994 | Let's say I have two regression models, one with three variables and one with four. Each spits out an adjusted r^2, which I can compare directly.
Obviously, the model with the higher adjusted r^2 is the better fit, but is there a way to test the difference between the two adjusted r^2 values and get a p-value?
I know you can do a Chow test to test the difference between slopes, but this is about variance, so I don't think that's what I'm looking for.
Edit: One model does not simply contain a subset of variables from the other model, or else I would probably use stepwise regression.
In model 1, I have four variables: W, X, Y, and Z.
In model 2, I have three variables: W, X, and (Y+Z)/2.
The idea is that if Y and Z are conceptually similar, the model may make better predictions by grouping these two variables together prior to entering them into the model.
| Testing difference between two (adjusted) r^2 | CC BY-SA 3.0 | null | 2011-04-22T02:41:34.453 | 2017-05-11T14:55:18.430 | 2011-04-22T03:40:03.617 | 1977 | 1977 | [
"regression"
] |
9853 | 2 | null | 9852 | 0 | null | Take a look at Mallows' Cp:
[Mallows' Cp](http://en.wikipedia.org/wiki/Mallows%27_Cp)
Here's a related question:
[Is there a way to optimize regression according to a specific criterion?](https://stats.stackexchange.com/questions/8918/is-there-a-way-to-optimize-regression-according-to-a-specific-criterion)
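As a rough R sketch of how $C_p$ could be computed for the two candidate models from the question ($C_p = RSS_p/\hat{\sigma}^2 - n + 2p$, with $\hat{\sigma}^2$ taken from a full reference model; the data below are simulated just for illustration):
```
set.seed(1)
n <- 100
W <- rnorm(n); X <- rnorm(n); Y <- rnorm(n); Z <- Y + rnorm(n, 0, 0.5)
resp <- 1 + W + X + Y + Z + rnorm(n)

full <- lm(resp ~ W + X + Y + Z)               # reference model supplying sigma^2
mod1 <- lm(resp ~ W + X + Y + Z)               # model 1: four separate variables
mod2 <- lm(resp ~ W + X + I((Y + Z) / 2))      # model 2: Y and Z averaged

cp <- function(m, full) {
  s2 <- summary(full)$sigma^2                  # residual variance of the reference model
  p  <- length(coef(m))                        # number of parameters, including the intercept
  sum(resid(m)^2) / s2 - nobs(m) + 2 * p
}
cp(mod1, full); cp(mod2, full)                 # smaller Cp (close to p) is preferred
```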
| null | CC BY-SA 3.0 | null | 2011-04-22T02:51:35.427 | 2011-04-22T02:51:35.427 | 2017-04-13T12:44:48.343 | -1 | 2775 | null |
9854 | 1 | null | null | 1 | 5300 | I tried `CorrelationFunction[Transpose[{data,data}]][[All,1,2]]` but it doesn't work! By that I mean the results are identical to those I get if I run `CorrelationFunction[data]`.
| How can I calculate the autocorrelation of a signal in Mathematica environment? | CC BY-SA 3.0 | null | 2011-04-22T03:26:07.057 | 2012-09-13T19:02:10.430 | 2011-04-22T06:17:24.753 | 2116 | 4286 | [
"autocorrelation",
"mathematica"
] |
9855 | 5 | null | null | 0 | null | Mathematica is a software package for symbolic and numerical computation. It has a graphical user interface with high quality graphics output. It is used for numerical, mathematical, statistical and other calculations.
| null | CC BY-SA 3.0 | null | 2011-04-22T06:52:58.393 | 2013-06-27T09:54:27.757 | 2013-06-27T09:54:27.757 | 805 | 805 | null |
9856 | 4 | null | null | 0 | null | Mathematica is a software package for symbolic mathematical computations. | null | CC BY-SA 3.0 | null | 2011-04-22T06:52:58.393 | 2012-04-23T01:38:57.783 | 2012-04-23T01:38:57.783 | 919 | 2116 | null |
9857 | 3 | null | null | 0 | null | null | CC BY-SA 3.0 | null | 2011-04-22T06:59:10.240 | 2011-04-22T06:59:10.240 | 2011-04-22T06:59:10.240 | -1 | -1 | null | |
9858 | 3 | null | null | 0 | null | Linear regression is a type of regression when regression function is linear. It is most widely used regression type. | null | CC BY-SA 3.0 | null | 2011-04-22T06:59:10.240 | 2011-04-22T09:24:38.193 | 2011-04-22T09:24:38.193 | 2116 | -1 | null |
9859 | 1 | 14654 | null | 22 | 3189 | Consider a vector of parameters $(\theta_1, \theta_2)$, with $\theta_1$ the parameter of interest, and $\theta_2$ a nuisance parameter.
If $L(\theta_1, \theta_2 ; x)$ is the likelihood constructed from the data $x$, the profile likelihood for $\theta_1$ is defined as $L_P(\theta_1 ; x) = L(\theta_1, \hat{\theta}_2(\theta_1) ; x)$ where $ \hat{\theta}_2(\theta_1)$ is the MLE of $\theta_2$ for a fixed value of $\theta_1$.
$\bullet$ Maximising the profile likelihood with respect to $\theta_1$ leads to same estimate $\hat{\theta}_1$ as the one obtained by maximising the likelihood simultaneously with respect to $\theta_1$ and $\theta_2$.
$\bullet$ I think the standard deviation of $\hat{\theta}_1$ may also be estimated from the second derivative of the profile likelihood.
$\bullet$ The likelihood statistic for $H_0: \theta_1 = \theta_0$ can be written in terms of the profile likelihood: $LR = 2 \log( \tfrac{L_P(\hat{\theta}_1 ; x)}{L_P(\theta_0 ; x)})$.
So, it seems that the profile likelihood can be used exactly as if it was a genuine likelihood. Is it really the case ? What are the main drawbacks of that approach ? And what about the 'rumor' that the estimator obtained from the profile likelihood is biased (edit: even asymptotically) ?
| What are the disadvantages of the profile likelihood? | CC BY-SA 3.0 | null | 2011-04-22T07:01:16.563 | 2021-03-15T07:10:51.920 | 2011-04-22T07:24:18.283 | 3019 | 3019 | [
"maximum-likelihood",
"likelihood",
"profile-likelihood"
] |
9860 | 5 | null | null | 0 | null | null | CC BY-SA 3.0 | null | 2011-04-22T07:09:42.677 | 2011-04-22T07:09:42.677 | 2011-04-22T07:09:42.677 | -1 | -1 | null | |
9861 | 4 | null | null | 0 | null | Dynamic regression is a type of regression, where one of the independent variables is a lagged dependent variable. | null | CC BY-SA 3.0 | null | 2011-04-22T07:09:42.677 | 2011-04-22T07:24:23.333 | 2011-04-22T07:24:23.333 | 2116 | 2116 | null |
9862 | 2 | null | 9854 | 6 | null | It's been a long time since I last played with Mathematica, and I just had a quick look on Google, but can't you just use (here with some fake data)
```
x = Table[Sin[x] + 0.2 RandomReal[], {x, -4, 4, .1}];
ListPlot[x, DataRange -> {-4, 4}]
```

the function [ListCorrelate](http://reference.wolfram.com/mathematica/ref/ListCorrelate.html)?
```
acf = ListCorrelate[x, x, {1, 1}, 0]
ListPlot[acf, Filling -> Axis]
```

| null | CC BY-SA 3.0 | null | 2011-04-22T07:15:33.557 | 2011-04-22T07:23:44.970 | 2011-04-22T07:23:44.970 | 930 | 930 | null |
9863 | 2 | null | 9807 | 8 | null |
### 1. How do I input them in SPSS?
You can open an Excel file in SPSS.
Use the standard file open option, and select `file type = *xls`.
Try to ensure that the first row has the variable names.
### 2. How do I work out the frequency of replies for each recipient?
- Do you mean the frequency of responses for each question?
- Check out the menu Descriptive Statistics - Frequencies
### 3. How do I work out frequency of replies i.e agrees/disagrees etc for each group?
- Check out Descriptive Statistics - Crosstabs
### 4. How can I rank each individual question (12 of them)? Remember, there are 3 individual statements to each question.
- Rank them in terms of what?
- If you intend to rank each question in terms of their mean (e.g., on a one to five scale). One way would be to run Descriptive Statistics - Descriptives and get the mean for each item. Then copy and paste the table of item means into Excel and sort by the Mean column.
### 5. How do I compare UK architects to US architects to show congruence or not?
- Check out Descriptive Statistics - Explore; you could also look at some of the compare mean options.
### 6. How would show correlation between the two groups UK and US?
- These are different participants so I don't know what you mean by asking for correlations.
### 7. Will SPSS develop graphs etc for me showing frequency or correlation?
- Yes, it will.
- Just have a play around with the Graphs menu (e.g., Legacy - Scatter or Legacy - Bar)
### General Suggestions
It sounds like you need a basic book explaining how to use SPSS.
A good one is the SPSS Survival Manual.
I also wrote a [120 page PDF Introduction to SPSS](http://web.psych.unimelb.edu.au/jkanglim/Anglim2006-SPSSIntroductionWorkshop.pdf) several years back which explains all the things mentioned above with examples.
| null | CC BY-SA 3.0 | null | 2011-04-22T07:55:35.147 | 2011-04-22T08:19:58.057 | 2011-04-22T08:19:58.057 | 183 | 183 | null |
9864 | 1 | 9873 | null | 6 | 827 | I'm trying to use the EM cluster algorithm, provided by the software Weka, to classify my data and it only finds one cluster.
- Could I interpret this as meaning there is no way to distinguish the instances in my sample?
This result is consistent with other analyses that I'm doing on the data, but I don't know if I can use a cluster analysis to state that.
| Interpretation of a one cluster solution using the EM cluster algorithm | CC BY-SA 3.0 | 0 | 2011-04-22T08:34:21.153 | 2011-04-24T18:02:58.863 | 2011-04-24T18:02:58.863 | 4203 | 4203 | [
"machine-learning",
"clustering",
"weka"
] |
9865 | 2 | null | 9813 | 1 | null | Your problem would fall under the category of "missing data." Ultimately, one way or another you are going to have to infer the hidden variables $Z$. This can be done using the [Expectation-Maximization Algorithm](http://en.wikipedia.org/wiki/Expectation-maximization_algorithm).
| null | CC BY-SA 3.0 | null | 2011-04-22T09:29:57.620 | 2011-04-22T09:29:57.620 | null | null | 3567 | null |
9866 | 2 | null | 9735 | 2 | null | I will take this as an opportunity to explain some fundamental issues regarding the difference between frequentist and Bayesian statistics, by interpreting frequentist practices from a Bayesian standpoint.
In this example, we have observed data $D_1$ for the original and data $D_2$ for the combination case. One assumes that these are generated by Bernoulli random variables with parameters $p_1$ and $p_2$, respectively, and that these parameters come from the priors, $f_i(p_i)$ (with cdfs $F_i(p_i)$). The probability $p_1 > p_2$ can be calculated, as you pointed out. It is:
$$
P[p_1 > p_2;f_1,f_2] = \frac{\int_0^1 \int_0^1 I(p_1 > p_2)\, P[D_1|p_1]\, P[D_2|p_2]\, dF_1(p_1)\, dF_2(p_2)}{\int_0^1 \int_0^1 P[D_1|p_1]\, P[D_2|p_2]\, dF_1(p_1)\, dF_2(p_2) }
$$
Here the Bayesian chooses priors $f_1(p_1)$ and $f_2(p_2)$ (and will usually choose the same prior for both, due to exchangeability) and proceeds with inference.
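As an aside, here is a minimal Monte Carlo sketch of that Bayesian calculation in R, assuming conjugate Beta(1, 1) priors and binomial data; the counts are invented purely for illustration:
```
# invented counts and uniform Beta(1, 1) priors -- both are assumptions for illustration
set.seed(1)
s1 <- 30; n1 <- 100                      # successes / trials observed for the original
s2 <- 22; n2 <- 100                      # successes / trials observed for the combination
p1 <- rbeta(1e5, 1 + s1, 1 + n1 - s1)    # posterior draws for p1
p2 <- rbeta(1e5, 1 + s2, 1 + n2 - s2)    # posterior draws for p2
mean(p1 > p2)                            # Monte Carlo estimate of P(p1 > p2 | D1, D2)
```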
The frequentist takes a "conservative" approach when choosing a prior. The possible values of the parameters are assumed to be known, but the frequentist has so little confidence in their ability to assign a meaningful prior that they effectively look at all possible priors and only make an inferential statement when it is true under every one of them. When no inference is valid under all possible priors, the frequentist remains silent.
That is the situation in this case. When one considers the priors $g_{\theta_i}(p_i)$ given by:
$$
g_{\theta_i}(p_i) = \delta(p_i - \theta_i),
$$
that is, the point mass concentrated at $\theta_i$, then one can easily see that the probability desired is
$$
P[p_1 > p_2;g_{\theta_1},g_{\theta_2}] = I(\theta_1 > \theta_2),$$
that is, 1 when $\theta_1 > \theta_2$ and 0 otherwise.
Thus the frequentist remains silent. (Or, alternatively, makes the trivial statement: "The probability is between 0 and 1...")
| null | CC BY-SA 3.0 | null | 2011-04-22T09:59:08.940 | 2011-04-22T10:05:54.733 | 2011-04-22T10:05:54.733 | 3567 | 3567 | null |
9867 | 1 | null | null | 7 | 3010 | I have performed a Cochran's Q test for a within-subjects experimental design with 3 conditions and 36 participants with a dichotomous dependent variable.
I found a (just) statistically significant effect ($\chi^2$ = 6.00, df = 2, p = 0.04979) and would like to also report the effect size, but haven't been able to find any information as to which effect measure to use and how to calculate it.
Any pointers would be gratefully received - the domain is human factors (psychology).
| Effect size of Cochran's Q | CC BY-SA 3.0 | null | 2011-04-22T11:44:46.420 | 2018-04-04T16:19:40.230 | 2014-07-25T15:04:43.617 | 44269 | 4258 | [
"categorical-data",
"repeated-measures",
"effect-size",
"cochran-q"
] |
9868 | 1 | null | null | 0 | 5141 | Hi, I'm a beginner to R and am trying to run pvclust to test a cluster solution.
I've managed to load the data and run the hierarchical clustering; however, the code I found online for running pvclust keeps producing errors. I'm just wondering if someone can point out where I'm going wrong...
Here is my code (the data are already transposed; my dataset is called "transpose" below).
```
##loaddata
transpose <-
read.table("C:/Users/Tim/University/Advanced Design and Data Analysis/Assignment/transposed.csv",
header=TRUE, sep=",", na.strings="NA", dec=".", strip.white=TRUE)
View(transpose)
hello <- hclust(dist(model.matrix(~-1 +
var001+var002+var003+var004+var005+var006+var007+var008+var009+var010+var011+var012+var013+var014+var015+var016+var017+var018+var019+var020+var021+var022+var023+var024+var025+var026+var027+var028+var029+var030+var031+var032+var033+var034+var035+var036+var037+var038+var039+var040+var041+var042+var043+var044+var045+var046+var047+var048+var049+var050+var051+var052+var053+var054+var055+var056+var057+var058+var059+var060+var061+var062+var063+var064+var065+var066+var067+var068+var069+var070+var071+var072+var073+var074+var075+var076+var077+var078+var079+var080+var081+var082+var083+var084+var085+var086+var087+var088+var089+var090+var091+var092+var093+var094+var095+var096+var097+var098+var099+var100+var101+var102+var103+var104+var105+var106+var107+var108+var109+var110+var111+var112+var113+var114+var115+var116+var117+var118+var119+var120+var121+var122+var123+var124+var125+var126+var127+var128+var129+var130+var131+var132+var133+var134+var135+var136+var137+var138+var139+var140+var141+var142+var143+var144+var145+var146+var147+var148+var149+var150+var151+var152+var153+var154+var155+var156+var157+var158+var159+var160+var161+var162+var163+var164+var165+var166+var167+var168+var169+var170+var171+var172+var173+var174+var175+var176,
transpose)) , method= "ward")
plot(hello, main= "Cluster Dendrogram for Solution hello", xlab=
"Observation Number in Data Set transpose", sub="Method=ward;
Distance=euclidian")
```
all the above works fine, except the below where i try the pvclust
```
library(pvclust)
fit <- pvclust(transpose, method.hclust="ward",
method.dist="euclidean")
plot(fit)
pvrect(fit, alpha=.95)
```
the error comes back=
```
library(pvclust)
> fit <- pvclust(transpose, method.hclust="ward",
+ method.dist="euclidean")
Warning in dist(t(x), method) : NAs introduced by coercion
Error in hclust(distance, method = method.hclust) :
NA/NaN/Inf in foreign function call (arg 11)
> plot(fit)
Error in plot(fit) : object 'fit' not found
> pvrect(fit, alpha=.95)
Error in nrow(x$edges) : object 'fit' not found
```
| Problem with pvclust in R | CC BY-SA 3.0 | null | 2011-04-22T12:00:12.613 | 2014-09-29T11:21:29.533 | 2011-04-22T16:05:37.807 | null | null | [
"r",
"clustering"
] |
9869 | 2 | null | 9842 | 11 | null | Here's a good quick introduction:
[intro to neural networks.](http://arxiv.org/pdf/cs/0308031)
Note that R has neural-network functionality, so no need to spend any time implementing NN yourself until you've given it a spin and decided it looks promising for your application.
Neural networks are not obsolete, but they have gone through a couple of hype cycles: after people realize they don't do everything that was claimed, their reputation goes into a trough for a while (we're currently in one of those). Neural networks are good at certain tasks, and are generally better for tasks that a human can do but cannot explain exactly how they do it.
Neural networks do not give you much insight into the system you're using them to analyze, even after they are trained and operating well. That is, they can predict what will happen (for some systems), but not tell you why. In some cases, that is fine. In others, that is not fine. Generally, if you want or especially if you already have an understanding of the rules of how something works, you can use other techniques.
But, for certain tasks, they work well.
For time-series in particular, see this question's discussion:
[Proper way of using recurrent neural network for time series analysis](https://stats.stackexchange.com/questions/8000/proper-way-of-using-recurrent-neural-network-for-time-series-analysis)
| null | CC BY-SA 3.0 | null | 2011-04-22T12:58:02.280 | 2011-04-22T12:58:02.280 | 2017-04-13T12:44:46.083 | -1 | 2917 | null |
9870 | 2 | null | 9842 | 6 | null | While it is focussed on statistical pattern recognition, rather than time series forecasting, I would strongly recommend Chris Bishop's book [Neural Networks for Pattern Recognition](http://www.oup.com/us/catalog/general/subject/Medicine/Neuroscience/?view=usa&ci=9780198538646) because it is the best introduction to neural networks in general, and I think it is a good idea to get to grips with the potential pitfalls of neural networks in a simpler context, where the problems are more easily visualised and understood. Then move on to the book on recurrent neural networks by [Mandic and Chambers](http://rads.stackoverflow.com/amzn/click/0471495174). The Bishop book is a classic; nobody should use neural nets for anything until they feel confident that they understand the material contained in that book, as ANNs make it all too easy to shoot yourself in the foot!
I also disagree with mbq: NNs are not obsolete. While many problems are better solved with linear models or more modern machine learning techniques (e.g. kernel methods), there are some problems where they work well and other methods don't. They are still a tool that should be in our toolboxes.
| null | CC BY-SA 3.0 | null | 2011-04-22T13:18:17.590 | 2011-04-22T13:18:17.590 | null | null | 887 | null |
9871 | 1 | 9875 | null | 14 | 3818 | I have a "basic statistics" concept question. As a student I would like to know if I'm thinking about this totally wrong and why, if so:
Let's say I am hypothetically trying to look at the relationship between "anger management issues" and say divorce (yes/no) in a logistic regression and I have the option of using two different anger management scores -- both out of 100.
Score 1 comes from questionnaire rating instrument 1; my other choice, score 2, comes from a different questionnaire. Hypothetically, we have reason to believe from previous work that anger management issues give rise to divorce.
If, in my sample of 500 people, the variance of score 1 is much higher than that of score 2, is there any reason to believe that score 1 would be a better score to use as a predictor of divorce based on its variance?
To me, this instinctively seems right, but is it so?
| Is a predictor with greater variance "better"? | CC BY-SA 3.0 | null | 2011-04-22T13:28:43.467 | 2011-04-23T15:48:17.837 | 2011-04-22T16:11:54.387 | null | 4054 | [
"regression",
"logistic"
] |
9872 | 1 | null | null | 6 | 5018 | I have a set of variables for building credit scorecards with logistic regression. I need to bin some variables, e.g. years of credit history. What is the method for determining how many bins to use and what the interval for each bin should be?
| Binning raw data prior to building a logistic regression model | CC BY-SA 4.0 | null | 2011-04-22T13:42:38.583 | 2020-03-18T01:04:57.843 | 2020-03-18T01:04:57.843 | 11887 | null | [
"regression",
"logistic",
"binning"
] |
9873 | 2 | null | 9864 | 1 | null | Two assumptions here: 1) Weka's finding the number of clusters (k) without issues, and 2) I believe EM uses mixtures of Gaussians which means the clusters need to be round/elliptical.
So, given that Weka's algorithm is finding the best k, the answer would be that using round/elliptical clusters, the most likely clustering is one group. That doesn't mean that your data doesn't cluster at all (using other shapes, essentially).
| null | CC BY-SA 3.0 | null | 2011-04-22T13:58:04.903 | 2011-04-22T13:58:04.903 | null | null | 1764 | null |
9874 | 2 | null | 9871 | 1 | null | Always check the assumptions for the statistical test you're using!
One of the assumptions of logistic regression is independence of errors, which means that cases should not be related. E.g., you can't measure the same people at different points in time, which I fear you may have done with your anger management surveys.
I would also be worried that with 2 anger management surveys you're basically measuring the same thing and your analysis could suffer from multicollinearity.
| null | CC BY-SA 3.0 | null | 2011-04-22T14:05:57.473 | 2011-04-22T14:05:57.473 | null | null | 3597 | null |
9875 | 2 | null | 9871 | 11 | null | A few quick points:
- Variance can be arbitrarily increased or decreased by adopting a different scale for your variable. Multiplying a scale by a constant greater than one would increase the variance, but not change the predictive power of the variable.
- You may be confusing variance with reliability. All else being equal (and assuming that there is at least some true score prediction), increasing the reliability with which you measure a construct should increase its predictive power. Check out this discussion of correction for attenuation.
- Assuming that both scales were made up of twenty 5-point items, and thus had total scores that ranged from 20 to 100, then the version with the greater variance would also be more reliable (at least in terms of internal consistency).
- Internal consistency reliability is not the only standard by which to judge a psychological test, and it is not the only factor that distinguishes the predictive power of one scale versus another for a given construct.
| null | CC BY-SA 3.0 | null | 2011-04-22T14:11:05.953 | 2011-04-23T15:48:17.837 | 2011-04-23T15:48:17.837 | 2669 | 183 | null |
9876 | 1 | 9914 | null | 4 | 1703 | Suppose that I am conducting a questionnaire study that is trying to measure level of awareness of subjects about a programming language and find the relation of those level of awareness to working conditions and methods etc.
To improve my precision I decided to go with stratified sampling. If I have 1 criterion for stratification, such as geographical distribution (to make sure I don't over-represent subjects from areas that have fewer programmers), then I end up with 6 distinct strata (country provinces).
I know how to go about analysing these to find the margin of error, standard error, etc., but I realised that is not good enough and I need to introduce more criteria for stratification, such as level of education (so I don't over-represent a group that is under-represented among programmers), level of seniority, etc.
I have the proportions (in `%`) for all these criteria, but I don't know how to go about sampling when I have more than one criterion.
| Stratified sampling question | CC BY-SA 3.0 | null | 2011-04-22T14:31:04.797 | 2017-08-10T10:34:13.537 | 2017-08-10T10:34:13.537 | 173120 | 4043 | [
"stratification"
] |
9877 | 2 | null | 9872 | 6 | null | Binning will result in a more complex model, i.e., you will need more terms in the model to predict the outcome as well as a model that treats the predictors as continuous. Bins also bring a degree of arbitrariness into the model. Take a look at regression splines as an alternative. Notes about this may be found at [http://biostat.mc.vanderbilt.edu/rms](http://biostat.mc.vanderbilt.edu/rms). Also make sure that your outcome is truly dichotomous, i.e., that the time until the event is irrelevant and you have no censoring.
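As an illustration of the spline alternative, here is a minimal sketch in base R using a natural spline (the data are simulated and the variable names are hypothetical; with the rms package linked above you would use `lrm` with `rcs` instead):
```
library(splines)
set.seed(1)
# hypothetical data: years of credit history and a binary default outcome
d <- data.frame(years = runif(500, 0, 30))
d$default <- rbinom(500, 1, plogis(-2 + 0.15 * pmin(d$years, 10)))
fit <- glm(default ~ ns(years, df = 3), family = binomial, data = d)  # spline instead of bins
summary(fit)
```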
| null | CC BY-SA 3.0 | null | 2011-04-22T15:32:13.207 | 2011-04-22T15:32:13.207 | null | null | 4253 | null |
9878 | 2 | null | 9868 | 2 | null | It's difficult to answer without seeing the data itself, but my best guess is that you have some non numerical entries in the matrix/dataframe (which is what is expected by `pvclust`). For example,
```
> as.numeric(c(1,2,"NA"))
[1] 1 2 NA
```
or
```
> dist(c(1,2,"NA"))
1 2
2 1
3 NA NA
```
will produce the same warning message ('NAs introduced by coercion'). I deliberately used `"NA"`, but any element that is not numerical will result in the same warning message.
So,
- A warning message is issued when trying to compute a distance matrix from the non-numerical input
- Then, `hclust` fails when it is called within `pvclust`. Again, `hclust(dist(c(1,2,"NA")))` will throw the same error message.
In your first try, you called `hclust` using a matrix. Can you check that you use the same variables in both cases, that there are no strange values in your data (e.g., `summary(transpose)`), and that there are no missing values coded as the character `"NA"` instead of `NA`, as below:
```
> xx <- data.frame(replicate(3, sample(c(1,2,3), 3)))
> xx[2,3] <- "NA"
> is.na(xx[2,3])
[1] FALSE
> sapply(xx, is.character)
X1 X2 X3
FALSE FALSE TRUE
> apply(xx, 2, function(x) sum(is.na(x)))
X1 X2 X3
0 0 0
# Now if we had a true NA value, we would see
> xx[2,3] <- NA
> apply(xx, 2, function(x) sum(is.na(x)))
X1 X2 X3
0 0 1
```
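As an aside, here is a small self-contained sketch of how such a column can be detected and coerced, so that values like the character string "NA" become real missing values:
```
# a column read in as character because it contains the string "NA"
df <- data.frame(a = c(1, 2, 3), b = c("4", "NA", "6"), stringsAsFactors = FALSE)
sapply(df, is.numeric)                                        # b is not numeric
df[] <- lapply(df, function(x) as.numeric(as.character(x)))   # "NA" becomes a real NA (with a warning)
colSums(is.na(df))                                            # remaining missing values per column
```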
| null | CC BY-SA 3.0 | null | 2011-04-22T15:58:34.223 | 2011-04-22T15:58:34.223 | null | null | 930 | null |
9879 | 1 | 9882 | null | 11 | 24728 | Given two multivariate Gaussians (say in 2D, with mean $\mu$ a 2D point and covariance matrix $\Sigma$ a $2\times 2$ matrix), $N_1(\mu_1,\Sigma_1)$ and $N_2(\mu_2,\Sigma_2)$, I would like to derive the pdf of $N_1+N_2$.
Can anyone point me to a reference where I can find the pdf derivation of $N_1 + N_2$?
Thanks in advance
| Addition of multivariate gaussians | CC BY-SA 3.0 | null | 2011-04-22T16:49:50.407 | 2022-02-25T12:53:49.053 | 2011-04-23T08:07:08.500 | null | 4290 | [
"multivariate-analysis",
"normal-distribution"
] |
9880 | 1 | 9958 | null | 3 | 336 | I'd like to model a set of processes. The processes in question are related to human decision-making, so the model will need a measure of input, processing, and, finally, output.
Ideally, the model would be implemented in something like R (which I know quite well) or Python (which I know less well).
### Questions:
- Where is the best place to start with something like this?
- What tools are available?
- What software or language is suited to writing the model in?
- What method would be suited to testing that model against data I've collected from real humans?
| What models and software are suited to modelling human decision making? | CC BY-SA 3.0 | null | 2011-04-22T17:40:49.883 | 2011-04-25T16:55:53.633 | 2011-04-23T07:09:31.090 | 183 | 4204 | [
"r",
"modeling"
] |
9881 | 2 | null | 9868 | 0 | null | Pvclust is a bit odd in that it expects your data to be organized with the observations in the rows, rather than the columns. This is what the documentation in `?pvclust` tries to explain, just not very well.
try transposing your original data matrix in the call to `pvclust()` and see if it runs.
i.e.,
```
fit <- pvclust(t(transpose), method.hclust="ward",method.dist="euclidean")
```
As an aside, it's not good R practice to give objects names that are also the names of R functions, e.g. 'transpose'.
Good luck
| null | CC BY-SA 3.0 | null | 2011-04-22T17:42:06.957 | 2011-04-22T17:42:06.957 | null | null | 1475 | null |
9882 | 2 | null | 9879 | 19 | null |
## Method 1: characteristic functions
Referring to (say) the Wikipedia article on the [multivariate normal distribution](http://en.wikipedia.org/wiki/Multivariate_normal_distribution) and using the 1D technique to compute sums in the article on [sums of normal distributions](http://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables), we find the log of its characteristic function is
$$i t' \mu - \tfrac{1}{2}\, t' \Sigma t.$$
The cf of a sum is the product of the cfs, so the logarithms add. This tells us the cf of the sum of two independent MVN distributions (indexed by 1 and 2) has a logarithm equal to
$$i t' (\mu_1 + \mu_2) - \tfrac{1}{2}\, t' (\Sigma_1 + \Sigma_2) t.$$
Because the cf uniquely determines the distribution we can immediately read off that the sum is MVN with mean $\mu_1 + \mu_2$ and variance $\Sigma_1 + \Sigma_2$.
## Method 2: Linear combinations
View the pair of MVN distributions as being a single MVN with mean $(\mu_1, \mu_2)$ and covariance $\Sigma_1 \oplus \Sigma_2$. In block matrix form this is
$$\Sigma_1 \oplus \Sigma_2 = \pmatrix{\Sigma_1 & 0 \\ 0 & \Sigma_2}$$
where the zeros represent square matrices of zeros (indicating all covariances between any component of distribution 1 and any component of distribution 2 are zero).
The sum is given by a [linear transformation](http://en.wikipedia.org/wiki/Multivariate_normal_distribution#Affine_transformation) and therefore is MVN. The covariance again works out to $\Sigma_1 + \Sigma_2$. (See p. 2 #4 in course notes by the late Dr. E.B. Moser, LSU EXST 7037. Edit Jan 2017: alas, the university appears to have removed them from its Web site. A copy of the original PDF file is available on [archive.org](https://web.archive.org/web/20060901231551/http://www.stat.lsu.edu/faculty/moser/exst7037/mvnprop.pdf).)
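A quick simulation check of this result (a sketch using `MASS::mvrnorm`; the means and covariance matrices below are made up):
```
library(MASS)
set.seed(1)
mu1 <- c(0, 1);  S1 <- matrix(c(2, 0.5, 0.5, 1), 2)     # made-up parameters
mu2 <- c(3, -1); S2 <- matrix(c(1, -0.3, -0.3, 2), 2)
x <- mvrnorm(1e5, mu1, S1) + mvrnorm(1e5, mu2, S2)      # sum of independent MVN draws
colMeans(x)   # approximately mu1 + mu2
cov(x)        # approximately S1 + S2
```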
| null | CC BY-SA 4.0 | null | 2011-04-22T19:46:13.887 | 2022-02-25T12:53:49.053 | 2022-02-25T12:53:49.053 | 919 | 919 | null |
9883 | 2 | null | 9871 | 9 | null | A simple example helps us identify what is essential.
Let
$$Y = C + \gamma X_1 + \varepsilon$$
where $C$ and $\gamma$ are parameters, $X_1$ is the score on the first instrument (or independent variable), and $\varepsilon$ represents unbiased iid error. Let the score on the second instrument be related to the first one via
$$X_1 = \alpha X_2 + \beta.$$
For example, scores on the second instrument might range from 25 to 75 and scores on the first from 0 to 100, with $X_1 = 2 X_2 - 50$. The variance of $X_1$ is $\alpha^2$ times the variance of $X_2$. Nevertheless, we can rewrite
$$Y = C + \gamma(\alpha X_2 + \beta) + \varepsilon = (C + \beta \gamma) + (\gamma \alpha) X_2 + \varepsilon = C' + \gamma' X_2 + \varepsilon.$$
The parameters change, and the variance of the independent variable changes, yet the predictive capability of the model remains unchanged.
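A small numerical illustration of this point, using the rescaling $X_1 = 2X_2 - 50$ from the example above (the data are simulated):
```
set.seed(1)
x2 <- runif(500, 25, 75)                 # score on the second instrument
x1 <- 2 * x2 - 50                        # same information on a different scale
y  <- 3 + 0.1 * x1 + rnorm(500)
c(var(x1), var(x2))                      # the variances differ by a factor of 4
c(summary(lm(y ~ x1))$r.squared,         # ...but the two fits are identical
  summary(lm(y ~ x2))$r.squared)
```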
In general the relationship between $X_1$ and $X_2$ may be nonlinear. Which is a better predictor of $Y$ will depend on which has a closer linear relationship to $Y$. Thus the issue is not one of scale (as reflected by the variance of the $X_i$) but has to be decided by the relationships between the instruments and what they are being used to predict. This idea is closely related to one explored in a recent question about [selecting independent variables in regression](https://stats.stackexchange.com/q/9590/919).
There can be mitigating factors. For instance, if $X_1$ and $X_2$ are discrete variables and both are equally well related to $Y$, then the one with larger variance might (if it is sufficiently uniformly spread out) allow for finer distinctions among its values and thereby afford more precision. E.g., if both instruments are questionnaires on a 1-5 Likert scale, both are equally well correlated with $Y$, and the answers to $X_1$ are all 2 and 3 and the answers to $X_2$ are spread among 1 through 5, $X_2$ might be favored on this basis.
| null | CC BY-SA 3.0 | null | 2011-04-22T20:15:55.110 | 2011-04-22T20:15:55.110 | 2017-04-13T12:44:24.947 | -1 | 919 | null |
9884 | 1 | null | null | 3 | 1830 |
### Question on pairs()
I'd like to use `pairs()` to choose a functional form for modeling a set of data. I know that several of my independent and dependent variables are probably lognormally distributed, so I'd like to produce `pairs()` plots where some of my variables are plotted on log axes and some are not.
It seems like the par() command might hold the key, but I don't know how to implement it.
- Can I specify log/linear axes behavior when I call pairs()?
- Do I plot pairs() first and then modify the resulting graph using par()? Am I on the wrong track entirely?
I'm sure I could accomplish the task more easily by transforming some of my variables to log space, but I'm trying to learn something along the way here.
| Selectively tweaking pairs() axes? | CC BY-SA 3.0 | null | 2011-04-22T21:52:28.570 | 2011-04-23T06:57:50.600 | 2011-04-23T06:57:50.600 | 183 | 4237 | [
"r",
"data-visualization",
"scatterplot"
] |
9885 | 1 | 9915 | null | 39 | 1089 | I have a Machine Learning course this semester, and the professor asked us to find a real-world problem and solve it with one of the machine learning methods introduced in class, such as:
- Decision Trees
- Artificial Neural Networks
- Support Vector Machines
- Instance-based Learning (kNN, LWL)
- Bayesian Networks
- Reinforcement learning
I am a fan of stackoverflow and stackexchange and know that [database dumps](http://blog.stackoverflow.com/category/cc-wiki-dump/) of these websites are provided to the public, which is awesome! I hope I can find a good machine learning challenge based on these databases and solve it.
My idea
One idea that came to my mind is predicting tags for questions based on the words entered in the question body. I think a Bayesian network is the right tool for learning tags for a question, but it needs more research.
Anyway, after the learning phase, when a user finishes entering a question, some tags should be suggested to them.
Please tell me:
I want to ask the stats community, as people experienced with ML, two questions:
- Do you think tag suggestion is a problem that has any chance of being solved? Do you have any advice about it? I am a little worried because stackexchange does not implement such a feature yet.
- Do you have any other/better idea for an ML project based on the stackexchange databases? I find it really hard to find something to learn from the stackexchange databases.
---
Consideration about database errors:
I would like to point out that although the databases are huge and have many instances, they are not perfect and are prone to error. The obvious one is the age of users, which is unreliable. Even the tags selected for a question are not 100% correct. Anyway, we should take the correctness of the data into account when selecting a problem.
Consideration about the problem itself: My project should not be about `data-mining` or something like that. It should just be an application of ML methods to a real-world problem.
| Application of machine learning methods in StackExchange websites | CC BY-SA 3.0 | null | 2011-04-22T22:27:24.467 | 2011-11-10T17:36:47.890 | 2011-04-23T22:30:19.920 | 2970 | 2148 | [
"machine-learning"
] |
9886 | 1 | null | null | 6 | 2841 | I have a set of data that looks at the number of "hits" a specific program makes over the course of time. The data goes back to September 2010, and includes data up to March 2011, so the data points are monthly. What I want to see is whether the most recent data point (March 2011) shows a statistically significant decrease in the number of "hits" this program makes.
I have a feeling there might not be a test that would fit this perfectly, as the data is a bit limited. I can also pull data weekly for the same time frame, which would build 31 points (at which point I would still want to look at the most recent unit for comparison). There hasn't been a population mean built for this data as of yet, as the data can only be pulled as far back as Jan 2010 (but the data from then is not reliable).
For reference, just 9 weeks of data (as I pulled that first) reveals:
mean= 1013.67
n=9
st.dev= 53.57
Most recent week= 991
Just eyeballing it, the drop in "hits" does not appear statistically significant; however, I'll need to perform this analysis every few weeks, and I'm wondering if there's something reliable I can use. Thanks ahead of time for the input!
| Statistical test for a series of data over time | CC BY-SA 3.0 | null | 2011-04-22T23:01:54.647 | 2013-02-25T20:26:55.607 | null | null | 4292 | [
"time-series",
"statistical-significance"
] |
9887 | 2 | null | 9885 | 9 | null | I was thinking about tag prediction too; I like the idea. I have the feeling that it is possible, but you may need to overcome many issues before you arrive at your final dataset, so I suspect tag prediction may take a lot of time. In addition to incorrect tags, the limit of at most 5 tags may play a role, as may the fact that some tags are subcategories of others (e.g. “multiple comparisons” can be viewed as a subcategory of “significance testing”).
I did not check whether up-vote times are included in the downloadable database, but a simpler and still interesting project could be to predict the “final” number of votes on a question (maybe after 5 months) from the initial votes and the timing of accepting an answer.
| null | CC BY-SA 3.0 | null | 2011-04-22T23:16:59.540 | 2011-04-22T23:16:59.540 | null | null | 3911 | null |
9890 | 2 | null | 9809 | 2 | null | If you are just looking to simplify an expression involving expectations, Economists’ Mathematical Manual has a nice concise list of identities. You can find a copy online [here](http://www.tbparis.com/Econ230/References%20from%20Web/Sydsaeter%20etal%20-%20Economists%27%20Mathematical%20Manual,%204th%20Edition,%20Springer%20%282005%29.pdf).
| null | CC BY-SA 3.0 | null | 2011-04-23T02:12:41.907 | 2011-04-23T02:12:41.907 | null | null | 4281 | null |
9892 | 1 | null | null | 9 | 10359 | I am trying to perform logistic regression with lasso. For the logistic regression part I am using `PROC LOGISTIC` but I am not sure how to do lasso with `PROC LOGISTIC`. I searched online and found that `PROC GLMSELECT` allows us to do lasso. But I am not sure how to do logistic regression with lasso using `PROC GLMSELECT`.
Note: I posted this question in the [SAS Discussion Forum](http://support.sas.com/forums/thread.jspa?threadID=13929&tstart=0).
| How to perform logistic regression with lasso using GLMSELECT? | CC BY-SA 3.0 | null | 2011-04-23T02:46:51.203 | 2011-06-21T01:31:29.493 | 2011-04-23T07:56:14.857 | null | 3897 | [
"logistic",
"sas",
"lasso"
] |
9893 | 1 | 9896 | null | 2 | 4885 | Context: To recommend a minimum sample size when performing multivariate testing of a web page. The sample size would vary based on the number of factors being tested (e.g. a heading and an image) and the number of variations of a factor (e.g. two different headings and three different images). The goal could be to see which combination caused the most purchases of a product.
Based on a recommended minimum sample size of 100 for a single factor with perhaps two variations, I'm trying to work out a formula that recommends a sample size with multiple factors and variations.
The formula I first came up with is shown below where n is the number of variations for that factor.
$$\text{samplesize} = 100\left((n_{f1}-1)^{n_{f1}-1}\,(n_{f2}-1)^{n_{f2}-1}\cdots\right)$$
Does this seem reasonable and/or is there a simpler formula that would be as reasonable? The intended audience are online business owners who aren't necessarily strong at maths.
Thanks for reading!
| Formula for recommended sample size for multivariate testing | CC BY-SA 3.0 | null | 2011-04-23T00:55:25.810 | 2011-04-23T06:48:32.757 | 2011-04-23T06:48:32.757 | 930 | 4346 | [
"multivariate-analysis",
"sample-size"
] |
9895 | 1 | 55221 | null | 2 | 6693 | I've had tough luck with the use of nls() in R for the following model
$$N_e = N_o\left\{1-\exp\left[\frac{(d+bN_o)(T_h N_e - T)}{1+c N_o}\right]\right\}$$
where $b>0$, $c\geq 0$, $T_h>0$, and $T=72$.
This code
```
T <- 72
NLS.Fit3 <- nls(Ne~No*(1-exp((d+b*No)*(Th*Ne-T)/(1+c*No))), data = Data,
start = list(d = 0.01, b = 0.01, Th = 0.01, c = 0.01),
control = nls.control(maxiter=50, tol=1e-05, minFactor=1/1024))
```
gives the following error message:
```
Error in nls(Ne ~ No * (1 - exp((d + b * No) * (Th * Ne - T)/(1 + c * :
  singular gradient
```
And the following
```
NLS.Fit31 <- nls(Ne~No*(1-exp((d+b*No)*(Th*Ne-T)/(1+c*No))), data = Data,
start = list(d = 0.01, b = 0.01, Th = 0.01, c = 0.01),
control = nls.control(maxiter=50, tol=1e-05, minFactor=1/1024),
algorithm = "port", lower=c(0, 0, 0, 0))
summary(NLS.Fit31)
```
code converges but provides the wrong results (drastically different from PROC NLIN)
```
Formula: Ne ~ No * (1 - exp((d + b * No) * (Th * Ne - T)/(1 + c * No)))
Parameters:
Estimate Std. Error t value Pr(>|t|)
d 0.008325 0.003488 2.387 0.0192 *
b 0.000000 0.000064 0.000 1.0000
Th 0.000000 0.614220 0.000 1.0000
c 0.020670 0.034439 0.600 0.5500
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 4.631 on 85 degrees of freedom
Algorithm "port", convergence message: relative convergence (4)
```
I'd prefer to do this in R rather than SAS. Also, how can constraints be placed on only a few of the parameters? Any help in this regard will be highly appreciated. Thanks.
Data is here:
```
No Ne
5 0
5 1
5 1
5 2
5 2
5 2
5 2
5 3
7 0
7 0
7 1
7 1
7 2
7 2
7 2
7 3
10 1
10 1
10 2
10 2
10 3
10 3
10 3
10 4
10 7
15 1
15 1
15 3
15 3
15 4
15 5
15 5
15 5
20 3
20 4
20 7
20 7
20 8
20 8
20 9
20 11
25 4
25 5
25 6
25 7
25 9
25 9
25 13
25 14
30 5
30 8
30 10
30 11
30 11
30 12
30 14
30 20
45 4
45 7
45 8
45 10
45 11
45 14
45 15
45 19
60 9
60 14
60 14
60 16
60 18
60 21
60 24
60 26
80 7
80 11
80 12
80 15
80 17
80 12
80 21
80 23
100 7
100 8
100 10
100 11
100 15
100 24
100 26
100 33
```
| Having trouble with nls function in R | CC BY-SA 3.0 | null | 2011-04-23T03:51:41.710 | 2013-04-05T08:06:00.117 | 2011-04-23T09:16:25.693 | 3903 | 3903 | [
"r",
"nonlinear-regression"
] |
9896 | 2 | null | 9893 | 2 | null | Commonly, the different values that a factor can attain in an experiment are called "levels". So let's say there are $k$ factors, and factor $j$ has $n_j$ levels.
There are $n_{f1}\cdot n_{f2}\cdot \dots \cdot n_{fk}$ possible factor combinations, i.e. possible versions of web pages that could be viewed. To answer the question of whether any one of these versions is better than any other, each has to be viewed a certain number of times, let's say $N$ for simplicity (so that each pairwise comparison uses a sample of size $2N$). (You assumed $N = 100$.) So the total sample size required (the total number of pairs of eyes that you'll need for all versions) is
$$ N \cdot n_{f1}\cdot n_{f2}\cdot \dots \cdot n_{fk} $$
which can become pretty large, although it's generally smaller than your formula.
The size of $N$ in turn depends on the separation of the purchase probabilities that you want to distinguish. If all purchase probabilities are close to each other, then $N$ would have to be quite large to pick the larger probability reliably even in a simple pairwise comparison. Example: if you use $N = 100$, one particular page design has purchase probability $p = .5$, and you are using a test at the 0.05 significance level, then you'll have a better than even chance of correctly identifying another design as better only if that design has purchase probability of at least $p = .62$ or so. If that other design has $p = .55$, you won't be able to tell with $N = 100$ ... although it means 10% more revenue. You would be forced to work with an even larger sample size if the differences in probabilities were smaller.
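For reference, numbers like these can be checked in R with `power.prop.test` (a sketch using the probabilities from the example above):
```
# power to detect p = .50 vs p = .55 with 100 views per version
power.prop.test(n = 100, p1 = 0.50, p2 = 0.55, sig.level = 0.05)
# sample size per version needed for 80% power at that separation
power.prop.test(power = 0.80, p1 = 0.50, p2 = 0.55, sig.level = 0.05)
```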
In practice, one would not use all possible level combinations for all factors, because experience shows that interactions between multiple factors rarely matter. For example if you have four factors (say number of headings, number of images, number of columns, background color), then it is likely that once two have been set (say number of headings and number of images), the other two factors don't matter that much any more. This can be used to reduce the total number of level combinations. Google "fractional factorial design".
| null | CC BY-SA 3.0 | null | 2011-04-23T04:20:25.040 | 2011-04-23T04:20:25.040 | null | null | 4062 | null |
9897 | 2 | null | 9884 | 2 | null | With pairs, you can set the upper, lower and diagonal panels to display differently by passing functions which setup the layout of the plot (so, use `text()` or `points()` or `lines()` rather than `plot()` which makes a new plot).
So something like this:
```
set.seed(123)
#fake some data
exp1 <- rnorm(1000,5,1)
exp2 <- rlnorm(1000,0.5,1)
response <- exp1+exp2+rnorm(1000)
panel.loglog <- function(x,y, ...){
#"usr" is a par attribute that sets the plot area
par(usr = c(0, max(log(x)+3), 0, max(log(y)+3) ))
points(log(x), log(y))
}
pairs(response~exp1+exp2, lower.panel=panel.loglog)
```
But it seems you're more interested in the distribution of your variables than the relationship? In that case you could add histograms to the diagonal (the panel.hist function is from the help for `pairs`)
```
panel.hist <- function(x, ...){
usr <- par("usr"); on.exit(par(usr))
par(usr = c(usr[1:2], 0, 1.5) )
h <- hist(x, plot = FALSE)
breaks <- h$breaks; nB <- length(breaks)
y <- h$counts; y <- y/max(y)
rect(breaks[-nB], 0, breaks[-1], y, col="cyan", ...)
}
pairs(response~exp1+exp2, lower.panel=panel.loglog, diag.panel=panel.hist)
```
Which will give you this:

| null | CC BY-SA 3.0 | null | 2011-04-23T05:27:54.883 | 2011-04-23T05:27:54.883 | null | null | 3732 | null |
9898 | 1 | 9900 | null | 20 | 33089 | Could someone come up with R code to plot an ellipse from the eigenvalues and the eigenvectors of the following matrix
$$
\mathbf{A} = \left(
\begin{array} {cc}
2.2 & 0.4\\
0.4 & 2.8
\end{array} \right)
$$
| How to plot an ellipse from eigenvalues and eigenvectors in R? | CC BY-SA 4.0 | null | 2011-04-23T06:48:25.657 | 2020-03-20T13:03:06.523 | 2020-03-20T02:39:36.017 | 11887 | 3903 | [
"r",
"multivariate-analysis",
"matrix",
"matrix-decomposition",
"geometry"
] |
9899 | 2 | null | 9898 | 9 | null | I think this is the R code that you want. I borrowed the R code from this [thread](https://stat.ethz.ch/pipermail/r-help/2006-October/114652.html) on the r-mailing list. The idea basically is: the major and minor half-diameters are the square roots of the two eigenvalues, and you rotate the ellipse by the angle between the first eigenvector and the x-axis.
```
mat <- matrix(c(2.2, 0.4, 0.4, 2.8), 2, 2)
eigens <- eigen(mat)
evs <- sqrt(eigens$values)
evecs <- eigens$vectors
a <- evs[1]
b <- evs[2]
x0 <- 0
y0 <- 0
alpha <- atan(evecs[ , 1][2] / evecs[ , 1][1])
theta <- seq(0, 2 * pi, length=(1000))
x <- x0 + a * cos(theta) * cos(alpha) - b * sin(theta) * sin(alpha)
y <- y0 + a * cos(theta) * sin(alpha) + b * sin(theta) * cos(alpha)
png("graph.png")
plot(x, y, type = "l", main = expression("x = a cos " * theta * " + " * x[0] * " and y = b sin " * theta * " + " * y[0]), asp = 1)
arrows(0, 0, a * evecs[ , 1][1], a * evecs[ , 1][2])
arrows(0, 0, b * evecs[ , 2][1], b * evecs[ , 2][2])
dev.off()
```

| null | CC BY-SA 3.0 | null | 2011-04-23T09:18:26.517 | 2011-04-23T18:20:42.837 | 2011-04-23T18:20:42.837 | 1307 | 1307 | null |
9900 | 2 | null | 9898 | 19 | null | You could extract the eigenvectors and -values via `eigen(A)`. However, it's simpler to use the Cholesky decomposition. Note that when plotting confidence ellipses for data, the ellipse-axes are usually scaled to have length = square-root of the corresponding eigenvalues, and this is what the Cholesky decomposition gives.
```
ctr <- c(0, 0) # data centroid -> colMeans(dataMatrix)
A <- matrix(c(2.2, 0.4, 0.4, 2.8), nrow=2) # covariance matrix -> cov(dataMatrix)
RR <- chol(A) # Cholesky decomposition
angles <- seq(0, 2*pi, length.out=200) # angles for ellipse
ell <- 1 * cbind(cos(angles), sin(angles)) %*% RR # ellipse scaled with factor 1
ellCtr <- sweep(ell, 2, ctr, "+") # center ellipse to the data centroid
plot(ellCtr, type="l", lwd=2, asp=1) # plot ellipse
points(ctr[1], ctr[2], pch=4, lwd=2) # plot data centroid
library(car) # verify with car's ellipse() function
ellipse(c(0, 0), shape=A, radius=0.98, col="red", lty=2)
```
Edit: in order to plot the eigenvectors as well, you have to use the more complicated approach. This is equivalent to suncoolsu's answer, it just uses matrix notation to shorten the code.
```
eigVal <- eigen(A)$values
eigVec <- eigen(A)$vectors
eigScl <- eigVec %*% diag(sqrt(eigVal)) # scale eigenvectors to length = square-root
xMat <- rbind(ctr[1] + eigScl[1, ], ctr[1] - eigScl[1, ])
yMat <- rbind(ctr[2] + eigScl[2, ], ctr[2] - eigScl[2, ])
ellBase <- cbind(sqrt(eigVal[1])*cos(angles), sqrt(eigVal[2])*sin(angles)) # normal ellipse
ellRot <- eigVec %*% t(ellBase) # rotated ellipse
plot((ellRot+ctr)[1, ], (ellRot+ctr)[2, ], asp=1, type="l", lwd=2)
matlines(xMat, yMat, lty=1, lwd=2, col="green")
points(ctr[1], ctr[2], pch=4, col="red", lwd=3)
```

| null | CC BY-SA 3.0 | null | 2011-04-23T09:25:59.283 | 2011-04-23T11:04:09.037 | 2011-04-23T11:04:09.037 | 1909 | 1909 | null |
9901 | 2 | null | 9895 | 2 | null | I now think you have a problem with the data. Why?
First of all, let's get rid of the exponential by solving for it and taking the $\log$ of both sides:
$$\log\left(\frac{N_0-N_e}{N_0}\right)=\frac{(d+bN_0)(T_h N_e -T)}{1+cN_0}.$$
Now, this log should be a linear function of $N_e$ for fixed $N_0$. As I understand it, $T - T_h N_e$ is the time available to the predator to predate, and $-\frac{d+bN_0}{1+cN_0}$ (let's call it $\lambda$) is the frequency of attacks, which is said to be somehow related to the number of prey. Putting this $\lambda$ into the model, we have
$$\log\left(\frac{N_0-N_e}{N_0}\right)=\lambda T- \lambda T_h N_e,$$
linear function of $N_e$. Is this in data? It seems so:

Thus, we fit lines, get line parameters
```
lamT -lamTh No
1 0.05753546 -0.29739723 5
2 0.01408135 -0.18090769 7
3 0.13772614 -0.18005256 10
4 0.02065162 -0.08429763 15
5 0.09416886 -0.07751886 20
6 0.11148562 -0.06477383 25
7 0.19165200 -0.06134858 30
8 0.04328295 -0.03025566 45
9 0.06706140 -0.02399182 60
10 0.02236404 -0.01553757 80
11 0.01947130 -0.01247877 100
```
and try to calculate the $\lambda$ and $T_h$; but something is wrong at this point:

$-\lambda T_h$ is clearly of the form $(\alpha N_0+\beta)^{-1}$ (which corresponds to $b=0$), but $T/T_h$ is a total mess; thus my guess is that $T$ was not equal to 72 for all samples, and this is the main origin of your problems.
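For completeness, a hedged sketch of how those per-$N_0$ straight-line fits can be reproduced in R, assuming the data frame from the question is loaded as `Data` with columns `No` and `Ne`:
```
# fit log((No - Ne)/No) = lamT - lamTh * Ne separately for each value of No
fits <- lapply(split(Data, Data$No), function(d) {
  d <- d[d$Ne < d$No, ]                            # drop rows where the log would be undefined
  coef(lm(log((No - Ne) / No) ~ Ne, data = d))
})
do.call(rbind, fits)                               # intercept = lamT, slope = -lamTh
```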
| null | CC BY-SA 3.0 | null | 2011-04-23T10:40:49.003 | 2011-04-24T18:04:48.237 | 2011-04-24T18:04:48.237 | null | null | null |
9902 | 2 | null | 9841 | 2 | null | Your initial logistic regression approach seems reasonable, assuming good diagnostics. However, if I understand the scientific question right, you'll need an interaction between year and day to see differences between years in the tendency to be in diapause as day increases. The main effect of year only says that one year has a higher per day probability of being in diapause, which is not, I think, what you care about.
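A minimal sketch of such a model in R (the data frame and the simulated numbers below are purely illustrative):
```
set.seed(42)
d <- data.frame(day  = rep(1:30, 2),
                year = factor(rep(c(2005, 2006), each = 30)))
d$diapause <- rbinom(nrow(d), 1, plogis(-2 + 0.10 * d$day + 0.05 * d$day * (d$year == "2006")))
fit <- glm(diapause ~ day * year, family = binomial, data = d)
summary(fit)$coefficients   # the day:year2006 row tests whether the per-day effect differs by year
```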
Alternatively
The other way to think about the problem is to have days as a dependent variable. This means you would reverse the conditional probability formulation that you implicitly started with, by not asking 'are later pupae more likely to be in diapause?', i.e. P(diapause | day, year), but rather 'do pupae in diapause tend to occur later in the year?', i.e. P(day | diapause, year).
In such a model the observations are Pupation_Day which is classified by Diapause status which is nested within Year. The effect of interest is a difference of differences, specifically: the difference between the difference in mean Pupation_Day for Diapause==1 and Diapause==0 in Year==2005 versus Year=2006. (I'm not still sure what role the 'pairs of years' play in your design though.)
The only potentially tricky bits I see are remembering that Diapause is a random effect, and making a sensible assumption about the conditional distribution of Pupation_Day. Frankly, I'd try Normal with shared variance first and then see if the model diagnostics object. If they do, I'd crack open JAGS/BUGS, and just write out the full model, which would make inference about difference in differences easier too.
This alternative approach may be too much machinery for the scientific question though. If we could tell what the ideal inferential endpoint was it would be easier to recommend an approach.
| null | CC BY-SA 3.0 | null | 2011-04-23T11:22:22.667 | 2011-04-23T11:22:22.667 | null | null | 1739 | null |
9903 | 2 | null | 9886 | 1 | null | As GaBorgulya pointed out, one needs a model to detect the potential anomaly. This model needs to generate a "white noise" error series, or at least be sufficient to separate signal and noise. With such a model in hand, based upon the older data, one could then compare the new value with a prediction interval. This is the classical, albeit limited, approach called an "out-of-model test".
A more comprehensive approach is to include a "pulse variable", i.e. zeros with a 1 for the new data point, and to estimate the coefficients of the augmented model using all of the data. The probability of observing what you observed, before you observed it (i.e. the new value), is then available from the "t value" of the "pulse variable" in this augmented model. In general this approach is referred to as intervention detection, which scans (data mines) the time periods to detect points where pulses, level shifts, seasonal pulses and local time trends are significantly evidenced. In your case you are not searching over all periods, but simply asking whether there is a potential change point at the last observation, i.e. the period coded with the 1. Your question also suggests solutions we have seen that detect a significant change in the mean of the last K periods, alerting the analyst to the innovation.
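A minimal sketch of the pulse-variable idea in R (the weekly counts below are invented to mimic the figures in the question):
```
set.seed(1)
hits  <- c(round(rnorm(8, 1014, 54)), 991)    # eight earlier weeks plus the most recent week
week  <- seq_along(hits)
pulse <- as.numeric(week == length(hits))     # 0, ..., 0, 1 indicator for the last observation
fit <- lm(hits ~ week + pulse)                # augmented model: local trend plus pulse
summary(fit)$coefficients["pulse", ]          # t value for the suspected change point
```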
| null | CC BY-SA 3.0 | null | 2011-04-23T12:39:47.857 | 2011-04-23T13:08:48.037 | 2011-04-23T13:08:48.037 | 3382 | 3382 | null |
9904 | 1 | 9906 | null | 6 | 4415 | The location and scale of a normally distributed data can be estimated by sampling the data then taking the mean of the sample means and standard deviations, respectively. For non-normal (heavy-tailed) data, is it correct to take the median of the sample medians and IQR/MAD, instead? That is, is it correct to use the median of sample medians as a robust estimator of location similar to the mean of sample means for normal data?
| Median of medians as robust mean of means? | CC BY-SA 3.0 | null | 2011-04-23T13:28:08.003 | 2019-07-08T19:16:50.617 | 2012-06-11T12:42:35.523 | 4856 | 3074 | [
"robust",
"mad"
] |
9905 | 2 | null | 9880 | 2 | null | Have you thought about agent-based modelling (ABM)/social simulation? It's not that clear from your question what exactly you're trying to achieve, but ABM may be suitable for your purposes, as it lends itself to modelling human decision making: you can programme agents with different characteristics. It's also very good for spatial problems.
You could use the data you have already gathered on the population under study to programme your agents, then forecast into the future and compare with what happens in the real-life population. This can be tested repeatedly with different environmental factors, say with an economic model of the decision to invest under different tax rates. Alternatively, you could set up a model and compare it to your real-life data.
While chapter 26 of The R Book is on simulation models, you may be better off with NetLogo than with R; NetLogo is also free and is specifically designed for agent-based modelling.
[http://ccl.northwestern.edu/netlogo/](http://ccl.northwestern.edu/netlogo/)
The Journal of Artificial Societies and Social Simulation is a great open source journal covering this area.
[http://jasss.soc.surrey.ac.uk/JASSS.html](http://jasss.soc.surrey.ac.uk/JASSS.html)
| null | CC BY-SA 3.0 | null | 2011-04-23T13:36:38.763 | 2011-04-23T13:36:38.763 | null | null | 3597 | null |
9906 | 2 | null | 9904 | 3 | null | If all the samples come from the same distribution, then yes the median of the sample medians is a fairly robust estimate of the median of the underlying distribution (though this need not be the same as the mean), since the median of a sample from a continuous distribution has probability 0.5 of being below (or above) the population median.
Added
Here is some illustrative R code. It takes a sample from a normal distribution and a case with outliers where 1% of data is 10,000 times bigger than it should be. It looks at the various statistics for the overall sample data (50,000 points) and then by the centre (mean or median) of the statistics of the 10,000 samples with 5 points in each sample.
```
library(matrixStats)
wholestats <- function(x,n) {
mea <- sum(x)/n
var <- sum((x-mea)^2)/(n-1)
sdv <- sqrt(var)
qun <- quantile(x, probs=c(0.25,0.5,0.75))
mad <- median(abs(x-qun[2]))
c(mean=mea, variance=var, st.dev=sdv,
median=qun[2], IQR=qun[3]-qun[1],
MAD=mad)
}
rowstats <- function(x,b) {
rmea <- rowSums(x)/b
rvar <- rowSums((x-rmea)^2)/(b-1)
rsdv <- sqrt(rvar)
rqun <- rowQuantiles(x, probs=c(0.25,0.5,0.75))
rmad <- rowMedians(abs(x-rqun[,2]))
c(mean=mean(rmea), variance=mean(rvar), st.dev=mean(rsdv),
median=median(rqun[,2]), IQR=median(rqun[,3]-rqun[,1]),
MAD=median(rmad))
}
a <- 10000 # number of samples
b <- 5 # samplesize
set.seed(1)
d <- array(rnorm(a*b), dim=c(a,b))
doutlier <- array(d * ifelse(runif(a*b)>0.99, 10000, 1) , dim=c(a,b))
```
The median based statistics as expected are more robust, though they fail to show that the heavy tailed outlier variant is heavy tailed.
```
> wholestats(d,a*b)
mean variance st.dev median.50% IQR.75% MAD
-0.002440456 1.011306552 1.005637386 -0.001610677 1.357029247 0.678706371
> wholestats(doutlier,a*b)
mean variance st.dev median.50% IQR.75% MAD
-3.425664e+00 9.591583e+05 9.793663e+02 -1.610677e-03 1.373658e+00 6.871415e-01
> rowstats(d,b)
mean variance st.dev median IQR MAD
-0.002440456 1.014611308 0.947630870 0.003460172 0.917642167 0.510115277
> rowstats(doutlier,b)
mean variance st.dev median IQR MAD
-3.425664e+00 9.607212e+05 1.685929e+02 3.460172e-03 9.301795e-01 5.175084e-01
```
| null | CC BY-SA 4.0 | null | 2011-04-23T13:42:53.630 | 2019-07-08T19:16:50.617 | 2019-07-08T19:16:50.617 | 2958 | 2958 | null |
9907 | 1 | null | null | 2 | 477 | This questions consists of two parts that are quite similar and concern conditional probability.
Firstly, I would like to confirm the calculation of the conditional probability when we know the value of a random variable. Let's assume that we have a conditional matrix which gives us the values for $P(D|A\&B)$ and that B can take the values $(b0, b1)$. If we know that $B = b0$, then if we have to calculate $P(D|A)$, it would just be $P(D|A\&b0)$, right?
Let's now assume that we have a conditional probability matrix giving us $P(F|G\&H)$. If we assume that $F$ is independent from $H$, how can we calculate $P(F|G)$ only using $P(F|G\&H)$?
| Conditional probability and instantiation | CC BY-SA 3.0 | 0 | 2011-04-23T17:03:22.900 | 2014-05-03T18:27:51.233 | 2011-04-23T18:04:16.397 | null | 4297 | [
"conditional-probability"
] |
9911 | 1 | null | null | 2 | 237 | I'm new to stats, and maybe this is a duplicate question, but I could not find a similar one.
I'm trying to reduce a dimension of my dataset. Maybe reduce is not a good word. I need to sample some of my dimensions.
Setup
For example:
- I have $M$ events (let's say $M \approx 60$). They are ALL labeled.
- I have $K$ trials/repetitions (let's say $K \approx 10$ or more). $K$ different sets of $M$ events.
Therefore, I have a matrix $K\times M$
Now I need to choose $N$ events among $M$ of them as my representatives. I don't need to reduce the dataset,
I just need to pick $N$ columns (events) from the $K\times M$ matrix. These $N$ events should reflect the behavior of a particular trial. I'm interested in the corresponding labels of the $N$ chosen events.
I know that the events are quite different; if there is some correlation, it is probably safe to assume it is nonlinear, or in the best case maybe linear.
Further, I want the best representation, meaning $N$ is not an input (of course $N < M$).
Discussion
I considered PCA, kernel PCA and sampling.
After some reading, I found the following:
- PCA will reduce my dataset, but I just need to pinpoint the events. Even if I break down the algorithm, just choose the appropriate eigenvalues, and map back to the original dataset, PCA still remains a linear projection.
- kernel PCA is basically nonlinear, but it will still reduce my dataset instead of choosing particular events.
- sampling: I went through different types (random, systematic, stratified, cluster, ...) but I could not find the right one. I need something that captures nonlinear connections between the events.
Any pointer or explanation of the problem would be much appreciated.
Thanks.
| Sampling dataset, choosing among N dimensions | CC BY-SA 3.0 | null | 2011-04-23T20:27:33.923 | 2011-04-25T18:15:06.373 | 2011-04-25T08:24:48.713 | 2148 | 1313 | [
"pca",
"sampling",
"large-data"
] |
9913 | 1 | null | null | 5 | 20816 | In the log-log regression case,
$$\log(Y) = B_0 + B_1 \log(X) + U \>,$$
can you show that $B_1$ is the elasticity of $Y$ with respect to $X$, i.e., that $E_{yx} = \frac{dY}{dX}\left(\frac{X}{Y}\right)$?
| Elasticity of log-log regression | CC BY-SA 3.0 | null | 2011-04-23T18:08:25.280 | 2020-04-04T22:33:35.513 | 2011-04-24T23:06:07.013 | 2970 | null | [
"regression",
"self-study"
] |
9914 | 2 | null | 9876 | 7 | null | There isn't going to be one best answer for this kind of sampling. It depends on the observable covariates in your sampling frame, the variables you expect to be important determinants of survey response, and the analysis you want to run once the survey is complete.
With that said, there are a couple of general principles that can help guide your sampling strategy.
For descriptive surveys, you generally want your sample to closely resemble the population of interest in as many ways as possible. This will help [keep your weights even](http://en.wikipedia.org/wiki/Weighted_mean#Dealing_with_variance), in order to maximize your effective sample size.
If you intend to do multivariate analysis, you may want to stratify on important variables of interest. This will increase variance in your IVs and DVs, and can help increase your [statistical power](http://en.wikipedia.org/wiki/Statistical_power) in later analysis. This is why some studies conduct oversamples of minority populations -- because race and ethnicity are important IVs in many analyses. [Case-control studies](http://en.wikipedia.org/wiki/Case-control_study) follow a similar logic for stratifying on the dependent variable.
If you intend to do both description and analysis, then these goals will be partly at odds. No matter what, you need to follow the basic principle of sampling and make sure that every individual in the population has a known, non-zero chance of being selected into the sample. Advanced topics worth looking up in this area include [propensity scores](http://en.wikipedia.org/wiki/Propensity_score_matching) and sample weighting via raking.
Closing thought: these are general principles for sampling design and sample weighting. You don't say much about your specific application, but I'm guessing that most of this is overkill. The main reason to stratify a sample is if you have reason to believe a simple random sample will miss out on some important group of interest (geographic, demographic, or otherwise). That is, the sample from such a group would be too small for useful analysis. If you don't have that problem, then you don't need to worry too much about stratification.
| null | CC BY-SA 3.0 | null | 2011-04-23T21:47:19.023 | 2011-04-25T18:05:38.407 | 2011-04-25T18:05:38.407 | 4110 | 4110 | null |
9915 | 2 | null | 9885 | 28 | null | Yes, I think tag prediction is an interesting one and one for which you have a good shot at "success".
Below are some thoughts intended to potentially aid in brainstorming and further exploration of this topic. I think there are many potentially interesting directions that such a project could take. I would guess that a serious attempt at just one or two of the below would make for a more than adequate project and you're likely to come up with more interesting questions than those I've posed.
I'm going to take a very wide view as to what is considered machine learning. Undoubtedly some of my suggestions would be better classified as exploratory data analysis and more traditional statistical analysis. But, perhaps, it will help in some small way as you formulate your own interesting questions. You'll note, I try to address questions that I think would be interesting in terms of enhancing the functionality of the site. Of course, there are many other interesting questions as well that may not be that related to site friendliness.
- Basic descriptive analysis of user behavior: I'm guessing there is a very clear cyclic weekly pattern to user participation on this site. When does the site get the most traffic? What does the graph of user participation on the site look like, say, stratified by hour over the week? You'd want to adjust for potential changes in overall popularity of the site over time. This leads to the question, how has the site's popularity changed since inception? How does the participation of a "typical" user vary with time since joining? I'm guessing it ramps up pretty quickly at the start, then plateaus, and probably heads south after a few weeks or so of joining.
- Optimal submission of questions and answers: Getting insight on the first question seems to naturally lead to some more interesting (in an ML sense) questions. Say I have a question I need an answer to. If I want to maximize my probability of getting a response, when should I submit it? If I am responding to a question and I want to maximize my vote count, when should I submit my answer? Maybe the answers to these two are very different. How does this vary by the topic of the question (say, e.g., as defined by the associated tags)?
- Biclustering of users and topics: Which users are most alike in terms of their interests, again, perhaps as measured by tags? What topics are most similar according to which users participate? Can you come up with a nice visualization of these relationships? Offshoots of this would be to try to predict which user(s) is most likely to submit an answer to a particular question. (Imagine providing such technology to SE so that users could be notified of potentially interesting questions, not simply based on tags.)
- Clustering of answerers by behavior: It seems that there are a few different basic behavioral patterns regarding how answerers use this site. Can you come up with features and a clustering algorithm to cluster answerers according to their behavior. Are the clusters interpretable?
- Suggesting new tags: Can you come up with suggestions for new tags based on inferring topics from the questions and answers currently in the database. For example, I believe the tag [mixture-model] was recently added because someone noticed we were getting a bunch of related questions. But, it seems an information-retrieval approach should be able to extract such topics directly and potentially suggest them to moderators.
- Semisupervised learning of geographic locations: (This one may be a bit touchy from a privacy perspective.) Some users list where they are located. Others do not. Using usage patterns and potentially vocabulary, etc, can you put a geographic confidence region on the location of each user? Intuitively, it would seem that this would be (much) more accurate in terms of longitude than latitude.
- Automated flagging of possible duplicates and highly related questions: The site already has a similar sort of feature with the Related bar in the right margin. Finding nearly exact duplicates and suggesting them could be useful to the moderators. Doing this across sites in the SE community would seem to be new.
- Churn prediction and user retention: Using features from each user's history, can you predict the next time you expect to see them? Can you predict the probability they will return to the site conditional on how long they've been absent and features of their past behavior? This could be used, e.g., to try to notice when users are at risk of "churn" and engage them (say, via email) in an effort to retain them. A typical approach would shoot out an email after some fixed period of inactivity. But, each user is very different and there is lots of information about lots of users, so a more tailored approach could be developed.
| null | CC BY-SA 3.0 | null | 2011-04-23T22:26:56.633 | 2011-04-25T23:08:59.523 | 2011-04-25T23:08:59.523 | 2970 | 2970 | null |
9916 | 2 | null | 9876 | 4 | null | You basically run a pilot study so you can guess which strata are more variable and then oversample on those strata.
Acquire a book on sampling. (Maybe check out the QA 278 section of a university library. I have Sampling of Populations by Levy and Lemeshow, but I'm sure others are fine.) Look at sample size estimation for stratified sampling. The simpler approaches are
- Equal sample size in each stratum
- Sample size proportional to stratum size
You're more interested in these fancier sample-size allocation schemes (a small allocation sketch follows this list):
- Variance minimization (Neyman allocation)
- Cost minimization (in case it costs more money to sample one stratum than another)
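A minimal sketch of those two allocations, assuming a pilot study gave you per-stratum standard deviations $S_h$ (the stratum sizes, SDs, and costs below are made up for illustration; round the results to integers in practice):

```python
import numpy as np

def neyman_allocation(n_total, N, S):
    """Variance-minimizing allocation: n_h proportional to N_h * S_h."""
    N, S = np.asarray(N, float), np.asarray(S, float)
    share = N * S / (N * S).sum()
    return n_total * share

def cost_optimal_allocation(n_total, N, S, cost):
    """Allocation minimizing variance for a fixed budget:
    n_h proportional to N_h * S_h / sqrt(c_h)."""
    N, S, c = np.asarray(N, float), np.asarray(S, float), np.asarray(cost, float)
    share = N * S / np.sqrt(c)
    return n_total * share / share.sum()

# e.g. three strata, the second much more variable in the pilot study
print(neyman_allocation(200, N=[5000, 2000, 3000], S=[1.0, 4.0, 1.5]))
```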
| null | CC BY-SA 3.0 | null | 2011-04-23T23:48:58.730 | 2011-04-24T00:05:45.970 | 2011-04-24T00:05:45.970 | 3874 | 3874 | null |
9917 | 1 | 9924 | null | 4 | 268 | I was actually looking at this [problem](http://math.mit.edu/~jsoto/localpapers/spams2010.pdf) on slide 12. I will write it here briefly:
Problem:
An unknown number of people arrive in a fixed time period, and my goal is to maximize the probability of picking the best candidate.
Assumptions:
- Assume people arrive in interval [0,1] independently
- May assume also uniformly
- No strategy can beat a winning probability of $\frac{1}{e}$.
Proof:
Fix a "wall" at time $T$ and, after $T$, select the first candidate who is the best seen so far. Let $t$ be the time at which the best candidate arrives; then:
$$\Pr(\text{Win}) \;\geq\; E_{t}\!\left[\mathbf{1}_{\{t>T\}}\frac{T}{t}\right] \;=\; \int_{T}^{1}\frac{T}{t}\,dt \;=\; -T\ln T$$
This is maximized at $T=\frac{1}{e}$, giving $\Pr(\text{Win}) = \frac{1}{e} \approx 36.8\%$.
I have a few questions. Can someone please explain how to interpret the expectation equation? In addition, what is the significance of the statement "Assume people arrive in interval $[0,1]$ independently."? Does it mean that I cannot consider some other time interval, say $[0, \alpha]$ or do I need to just multiply $\alpha$ by $\frac{1}{e}$ to get the actual time of making the decision?
I am just trying to figure out what I should do to convert this into an actual real-world implementation. Any suggestions?
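For what it's worth, here is the quick Monte Carlo sketch I have been using to build intuition (my own code; it assumes the rule "wait until the wall $T$, then accept the first candidate who beats everyone seen so far", and that rescaling the interval to $[0,\alpha]$ simply rescales the wall to $T=\alpha/e$):

```python
import numpy as np

def simulate_win_rate(n_candidates=20, alpha=1.0, trials=100_000, seed=0):
    """Estimate P(win) for: observe until T = alpha/e, then accept the first
    candidate who beats everyone seen so far.  Arrivals ~ Uniform(0, alpha)."""
    rng = np.random.default_rng(seed)
    T = alpha / np.e
    wins = 0
    for _ in range(trials):
        arrival = np.sort(rng.uniform(0.0, alpha, size=n_candidates))
        quality = rng.permutation(n_candidates)   # qualities in arrival order; larger = better
        best_overall = quality.max()
        best_so_far, chosen = -1, None
        for t, q in zip(arrival, quality):
            if q > best_so_far:
                if t > T:            # past the wall: best seen so far, take it
                    chosen = q
                    break
                best_so_far = q      # before the wall: only observe
        wins += (chosen == best_overall)
    return wins / trials

print(simulate_win_rate())           # roughly 1/e ~ 0.37 (a bit higher for small n)
```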
| Meaning of this expectation equation? | CC BY-SA 3.0 | null | 2011-04-24T03:17:39.473 | 2011-05-24T13:49:46.970 | 2011-04-24T03:25:03.007 | 2164 | 2164 | [
"probability",
"algorithms",
"stochastic-processes",
"expected-value"
] |
9918 | 1 | 9944 | null | 14 | 34484 | I have a matrix of 1000 observations and 50 variables, each measured on a 5-point scale. These variables are organized into groups, but the groups do not contain equal numbers of variables.
I'd like to calculate two types of correlations:
- Correlation within groups of variables (among characteristics): some measure of whether the variables within the group of variables are measuring the same thing.
- Correlation between groups of variables: some measure, assuming that each group reflects one overall trait, of how each trait (group) is related to every other trait.
These characteristics have been previously classified into groups. I'm interested in finding the correlation between the groups - i.e., assuming that the characteristics within a group are measuring the same underlying trait (having completed #1 above - Cronbach's alpha), are the traits themselves related?
Does anybody have suggestions for where to start?
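To make the question concrete, here is the kind of thing I have in mind (my own sketch, with made-up data and a made-up grouping; my real matrix and classification would be swapped in): Cronbach's alpha within each group for #1, and correlations between the group mean scores for #2.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Fake 1000 x 50 five-point data; replace with the real matrix.
data = pd.DataFrame(rng.integers(1, 6, size=(1000, 50)),
                    columns=[f"v{i}" for i in range(50)])
# Hypothetical grouping of columns into traits -- use the existing classification.
groups = {"trait_A": ["v0", "v1", "v2"], "trait_B": ["v3", "v4", "v5", "v6"]}

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# 1. within-group consistency
for name, cols in groups.items():
    print(name, round(cronbach_alpha(data[cols]), 3))

# 2. between-group correlations of the trait (group mean) scores
scores = pd.DataFrame({name: data[cols].mean(axis=1) for name, cols in groups.items()})
print(scores.corr())
```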
| How to compute correlation between/within groups of variables? | CC BY-SA 3.0 | null | 2011-04-24T03:41:44.080 | 2013-01-22T18:52:18.873 | 2011-04-25T08:26:32.587 | 930 | 4301 | [
"correlation",
"psychometrics",
"scales"
] |
9919 | 2 | null | 4609 | 0 | null | Double-click the output to obtain the test statistics box.
| null | CC BY-SA 3.0 | null | 2011-04-24T04:00:58.850 | 2011-04-24T04:00:58.850 | null | null | 2025 | null |
9920 | 1 | null | null | 3 | 1624 | My question concerns the assumption of additivity for intraclass correlation. I shall first explain what I have done and then end with my questions.
I want to calculate inter-rater reliability using intra-class correlation so I can report an overall coefficient (as done in previous similar research), and perhaps replace a rater if their judgements correspond poorly to the other raters. I have five raters and they have each rated video recordings of facial and vocal expressions of the same (randomly sampled) 4 participants in an experiment where the participants watched different emotional films.
Raters make 18 ratings per film. These ratings are Likert-type (generally ranging from 1-6, but 1-4 for some measures) of the intensity of 6 different facial emotional expressions (anger, fear, etc.), the intensity of facial expression overall, the number (frequency ratings) and intensity (Likert ratings) of positive and negative words and sounds, and the level of overall vocal expressiveness.
There are 16 films, so there is a total of 288 variables per rater, per participant rated. I have organised my data into four files, one per participant being rated, with each rater as a column and the 288 variables as rows. As I am calculating inter-rater reliability, I am interested in the similarity of the raters overall, and not any other (e.g. film) effects.
I have calculated the ICC using the mixed model because all judges rate all targets, which are a random sample (as per [http://faculty.chass.ncsu.edu/garson/PA765/reliab.htm#rater](http://faculty.chass.ncsu.edu/garson/PA765/reliab.htm#rater))
Questions:
The assumption of additivity states that each item should be linearly related to the total score. However I don’t think that the concept of a total score really applies, although I may be wrong. Tukey’s test of non-additivity tests the null hypothesis that there is no multiplicative interaction between cases and items.
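For concreteness, this is my current (possibly wrong) understanding of what the test computes, sketched for a single items-by-raters table (my own code, not necessarily exactly what SPSS does); as I read it, a significant result says the differences between raters scale with the level of the item (a multiplicative case-by-rater pattern) rather than being a constant shift:

```python
import numpy as np
from scipy import stats

def tukey_nonadditivity(y):
    """Tukey's one-degree-of-freedom test for non-additivity.
    y: two-way table (rows = items/cases, columns = raters), one value per cell."""
    y = np.asarray(y, dtype=float)
    r, c = y.shape
    grand = y.mean()
    a = y.mean(axis=1) - grand                   # case (row) effects
    b = y.mean(axis=0) - grand                   # rater (column) effects
    resid = y - grand - a[:, None] - b[None, :]  # residuals from the additive model
    ss_resid = (resid ** 2).sum()
    ss_nonadd = (y * np.outer(a, b)).sum() ** 2 / ((a ** 2).sum() * (b ** 2).sum())
    df_resid = (r - 1) * (c - 1)
    F = ss_nonadd / ((ss_resid - ss_nonadd) / (df_resid - 1))
    return F, stats.f.sf(F, 1, df_resid - 1)
```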
- Could somebody please explain this to me in simple terms?
I found a significant Tukey’s test value, so I tried removing the overall facial and vocal ratings for each film, as I thought this perhaps violated the requirement that each item contributes to the total score. However Tukey’s test remained significant. So just as a little experiment, I removed 282 variables, leaving me with ratings of the 6 possible facial emotions for a single film. Tukey’s test was still significant!
- Is Tukey’s test of non-additivity relevant to my problem?
- If yes, what should I do about it being significant?
| Assumption of additivity for intra-class correlation | CC BY-SA 3.0 | null | 2011-04-24T05:03:58.663 | 2020-11-16T23:00:43.997 | null | null | 2025 | [
"reliability",
"agreement-statistics",
"intraclass-correlation"
] |
9922 | 2 | null | 856 | 2 | null | One quick thing about sample weights - they are usually a way to incorporate some information about the population that one is sampling from - but usually they are based on "big sample" type scenarios (typically constrained BLUP or BLUE prediction in disguise). So I would imagine that sample weights will probably do no better than no weights. What would be better I think is to use the information about the population that the sample design was based on directly.
For example, on what basis were the selection probabilities calculated? My bet is that you knew a population total or some kind of population break-down which does not involve A or B (say age by sex groups). If this is not correct then I am about to waste some space, but if it is correct, and supposing you had population totals $R_{1},\dots,R_{k}$ for $k$ groups (or strata), and within each group you had a "mini" 2 by 2 contingency table. So we can now write $R_{1;11},R_{1;12},R_{1;21},R_{1;22},\dots$ as the "target" of our inference. Or perhaps it is the sum $\sum_{l=1}^{k}R_{l;ij}$ that is the target of inference (how many in the population give response N/N??). You are then trying to reason about $R_{l;ij}$ from the sampled numbers $r_{l;ij}$ subject to the constraint that $\sum_{i,j}R_{l;ij}=R_{l}$ for $(l=1,\dots,k)$. (maxent anyone?)
Note that if the sampling probabilities were based only on what data you were likely to receive, then they are irrelevant (and Fisher's exact test applies), because once you receive the data, you know what sample you received. So the coherent thing to do is to update the sampling probability to $P(D_{m})=1$ if the $m$th unit is in the sample, and $P(D_{m})=0$ if it wasn't in the sample. However, the design is usually based on more information than just the data one is likely to observe. But note that it is the information, rather than the survey design per se, that is important. Design-based inference is just a rather efficient way to incorporate all that information into your analysis.
| null | CC BY-SA 3.0 | null | 2011-04-24T08:04:05.637 | 2011-04-24T08:04:05.637 | null | null | 2392 | null |
9923 | 2 | null | 9233 | 2 | null | The required number of observations to identify a model depends on the ratio of signal to noise in the data and on the form of the model. If I am given the numbers 1, 2, 3, 4, 5, I will predict 6, 7, 8, .... Box-Jenkins model identification is an approach to determining the underlying general term, much like the tests of "numerical intelligence" that we give to children. If the signal is strong we need fewer observations, and vice versa. If the observed frequency suggests a possible "seasonal structure", then as a rule of thumb we need repetitions of this phenomenon (at least 3 seasons, preferably more) to extract/identify it from the basic descriptive statistics (the ACF/PACF).
| null | CC BY-SA 3.0 | null | 2011-04-24T11:34:08.887 | 2011-04-24T11:34:08.887 | null | null | 3382 | null |