Dataset schema: idx (int64, 1 to 56k); question (string, 15 to 155 chars); answer (string, 2 to 29.2k chars); question_cut (string, 15 to 100 chars); answer_cut (string, 2 to 200 chars); conversation (string, 47 to 29.3k chars); conversation_cut (string, 47 to 301 chars).
42,001
What is the motivation for the entropy term in the proof of EM algorithm?
A bit late to add my contribution. I think there is another (longer) scheme to demonstrate the EM algorithm, one that uses the KL divergence. What is sure is that the Jensen inequality is also used in that scheme. The EM algorithm seeks to maximize a lower bound of a likelihood. Let $Y=(Y_1, ..., Y_n)$ be the set of possible observations. We want to discover hidden or latent variables $X=(X_1, ..., X_n)$. Each $X_i$ is associated with an observation $Y_i$, and we suppose here that the hidden variables are drawn from $\{1,...,c\}$. We introduce $q=(q_1,...,q_n)$ s.t. $\forall i \in \{1,...,n\}$, $q_i$ approximates $P(X_i|Y_i, \theta)$. In the following, $P(X_{ij}|Y_i, \theta)$ is shorthand for $P(X_{i}=j|Y_i, \theta)$. We denote by $\theta$ the parameters of our probability model. We would like to maximize the following log-likelihood: \begin{equation} \mathcal{L}(\theta) = \sum\limits_{i=1}^n \text{log}(P(Y_i|\theta)) \end{equation} However, solving directly for all the unknowns would be intractable, since: \begin{equation} \mathcal{L}(\theta) = \sum\limits_{i=1}^n \text{log}( \sum\limits_{j=1}^c P(Y_i, X_{ij}|\theta)) \end{equation} Fortunately, it is possible to show that: \begin{equation} \mathcal{L}(\theta) \geq \text{LB}(\theta) \end{equation} with \begin{equation} \text{LB} = \mathcal{H}(q) + \mathcal{E}(Y, X, q, \theta) \end{equation} Here $\mathcal{H}(q)$ is an entropy term that forces the mass of $q$ to spread out, so it does not concentrate on a single location, while $\mathcal{E}(Y, X, q, \theta)$ is an energy term that encourages $q$ to put its mass where the model $P(Y,X|\theta)$ assigns high probability. I'd say the entropy term can be seen as a regularizer that encourages solutions that do not overfit, and that introducing the $q_i$'s helps separate the unknowns so they can be optimized alternately. I think you can find more here; that is where I found a reliable explanation of those terms. 
Expressions of those two terms can be found below. Proof Sketch We would like to maximize the following log-likelihood: \begin{equation} \mathcal{L}(\theta) = \sum\limits_{i=1}^n \text{log}(P(Y_i|\theta)) \end{equation} Now let's focus on each term; using the lemma provided at the end, we have: \begin{equation} \text{log}(P(Y_i|\theta)) =\text{KL}(q_i(X_i) || P(X_i|Y_i, \theta)) + \sum\limits_{j=1}^c q_{i}(X_{ij}) \, \text{log}(\frac{P(Y_i, X_{ij}|\theta)}{q_{i}(X_{ij})}) \end{equation} Then: \begin{equation} \mathcal{L}(\theta) = \sum\limits_{i=1}^n \text{KL}(q_i || P(X_{i}|Y_i, \theta)) + \sum\limits_{i=1}^n \sum\limits_{j=1}^c q_{i}(X_{ij}) \, \text{log}(\frac{P(Y_i, X_{ij}|\theta)}{q_{i}(X_{ij})}) \end{equation} Now the KL divergence is always non-negative, due to the Jensen inequality, so: \begin{equation} \mathcal{L}(\theta) \geq \sum\limits_{i=1}^n \sum\limits_{j=1}^c q_{i}(X_{ij}) \, \text{log}(\frac{P(Y_i, X_{ij}|\theta)}{q_{i}(X_{ij})}) \end{equation} We can finally split this lower bound, let us call it LB, into two terms: \begin{equation} \text{LB} = \sum\limits_{i=1}^n \text{H}(q_i) + \sum\limits_{i=1}^{n} \sum\limits_{j=1}^{c} q_i(X_{ij}) \text{log}(P(Y_i, X_{ij}|\theta)) \end{equation} \begin{equation} \text{LB} = \mathcal{H}(q) + \mathcal{E}(Y, X, q, \theta) \end{equation} Lemma Let's show that the following holds: \begin{equation} \text{log}(p(x)) = \text{KL}(q(z) || p(z|x)) + \mathbb{E}_{q(z)} [\text{log}(p(x,z)) - \text{log}(q(z))] \end{equation} where $p$ and $q$ are probability distributions. 
Now let's work through each step. Since $\sum_z q(z)=1$, we have: \begin{equation} \text{log}(p(x)) = \sum\limits_{z} q(z) \, \text{log}(p(x)) \end{equation} We next make use of Bayes' rule, $p(x) = p(x,z)/p(z|x)$: \begin{equation} \text{log}(p(x)) = \sum\limits_{z} q(z) \, \text{log}(\frac{p(x,z)}{p(z|x)}) \end{equation} We can make use of a little trick, multiplying and dividing by $q(z)$: \begin{equation} \text{log}(p(x)) = \sum\limits_{z} q(z) \, \text{log}(\frac{q(z) \, p(x,z)}{p(z|x) \, q(z)}) \end{equation} We can separate this into two terms: \begin{equation} \text{log}(p(x)) = \sum\limits_{z} q(z) \, \text{log}(\frac{q(z)}{p(z|x)}) + \sum\limits_{z} q(z) \, \text{log}(\frac{p(x,z)}{q(z)}) \end{equation} We recognize the KL divergence: \begin{equation} \text{log}(p(x)) = \text{KL}(q(z)||p(z|x)) + \mathbb{E}_{q(z)} [\text{log}(p(x,z)) - \text{log}(q(z))] \end{equation}
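The lemma and the resulting bound can be checked numerically. Here is a minimal Python sketch with a hypothetical three-state latent variable and made-up joint probabilities (all values are illustrative, not from the answer):

```python
import math

# Check: log p(x) = KL(q(z) || p(z|x)) + E_q[log p(x,z) - log q(z)],
# for a made-up discrete example with z in {1, 2, 3}.
p_xz = [0.10, 0.25, 0.15]                    # joint p(x, z), assumed values
p_x = sum(p_xz)                              # marginal p(x)
p_z_given_x = [v / p_x for v in p_xz]        # posterior p(z|x)
q = [0.5, 0.3, 0.2]                          # an arbitrary approximating q(z)

kl = sum(qi * math.log(qi / pi) for qi, pi in zip(q, p_z_given_x))
elbo = sum(qi * (math.log(pj) - math.log(qi)) for qi, pj in zip(q, p_xz))

assert abs(math.log(p_x) - (kl + elbo)) < 1e-12  # the lemma holds exactly
assert kl >= 0                                   # hence log p(x) >= ELBO
```

Because the KL term is non-negative for any valid $q$, the second term is always a lower bound on $\text{log}(p(x))$, which is the LB used above.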
42,002
Understanding of the specification of the Johansen Cointegration test in R
The lag selection for the cointegration test is the same as selecting lags for a VAR model, since cointegration is actually a special feature of a VAR model. Use VARselect to choose the number of lags. The two statistics (trace and maximum eigenvalue) test the same thing and are constructed from the same eigenvalues of a certain matrix; for practical purposes there is no difference between the two. Cointegration means that a linear combination of unit-root processes is a stationary process. It is usually assumed that this stationary process has zero mean. However, it is entirely possible that it has a non-zero mean and that a trend is added to the process. In the case of a trend and two unit-root processes, this means that the difference $y_t-\alpha x_t$ has a trend, which means that the two processes are pushed apart over time. Judging from your graph, it would be difficult to argue whether this is really the case.
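To illustrate what "a linear combination of unit-root processes is stationary" means, here is a small simulation sketch (in Python with made-up data, not the R workflow from the answer): two random walks sharing a stochastic trend wander without bound, while their cointegrating combination stays put.

```python
import numpy as np

# Simulate two cointegrated unit-root series: both inherit the same
# stochastic trend, so y - a*x removes it and is stationary.
rng = np.random.default_rng(0)
n = 5000
trend = np.cumsum(rng.normal(size=n))    # shared random-walk trend
x = trend + rng.normal(size=n)           # unit-root series
a = 2.0
y = a * x + rng.normal(size=n)           # cointegrated with x

spread = y - a * x                       # stationary by construction
assert np.std(spread) < np.std(x)        # x wanders; the spread does not
assert np.std(spread) < 3                # spread stays near its (zero) mean
```

If instead a deterministic trend entered the cointegrating relation, the spread would drift over time rather than hover around a constant, which is the "pushed apart" case described above.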
42,003
Can I fit logistic regression over a dataset with only categorical data?
Yes, of course you can. Just be aware of the nature of your categorical data - is it ordered or unordered? If ordered (e.g. small, medium, large) you might want a single feature X1 with values like (1, 1, 3, 2, 3, 1, ...), where 1 represents small, 2 represents medium, etc. If unordered (e.g. red, blue, green) you'll want multiple features, like X1 = (0, 0, 1, 0) representing "is red?", X2 = (1, 0, 0, 1) representing "is blue?", and so forth.
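A minimal Python sketch of the two encodings, with made-up data chosen to match the indicator vectors in the answer:

```python
# Ordered categories: a single integer-coded feature.
sizes = ["small", "small", "large", "medium", "large", "small"]
size_rank = {"small": 1, "medium": 2, "large": 3}
x1_ordered = [size_rank[s] for s in sizes]

# Unordered categories: one 0/1 indicator feature per level (one-hot).
colors = ["blue", "green", "red", "blue"]
levels = ["red", "blue", "green"]
one_hot = {lvl: [1 if c == lvl else 0 for c in colors] for lvl in levels}

assert x1_ordered == [1, 1, 3, 2, 3, 1]
assert one_hot["red"] == [0, 0, 1, 0]    # "is red?"
assert one_hot["blue"] == [1, 0, 0, 1]   # "is blue?"
```

In practice a library routine such as pandas' get_dummies does the one-hot step; the sketch just makes the mapping explicit.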
42,004
Can I fit logistic regression over a dataset with only categorical data?
Yes, this is doable. The (potentially) unseen pitfall is that your model may require a great deal more data than you expect. A general rule of thumb for logistic regression is that you need at least $15$ observations in the less commonly occurring category (i.e., either $0$s or $1$s) for each variable in the model (cf., here). You may think that you have just $2$ variables (viz., X_1 and X_2), and thus, you will be OK as long as you have at least $30$ 'successes' and $30$ 'failures'. However, there is a subtle inconsistency between how we interpret your variables and how a statistical model will use them. You will quite naturally think of X_1 as a single variable, but the model will treat it as $3$. Likewise, the model will treat X_2 as $7$ (!) additional variables, not one. More specifically, you are using the number of levels minus one ($4-1=3$ and $8-1=7$) in your model for every categorical variable you add. The upshot of this is that you want to have at least $150$ 'successes' and $150$ 'failures' ($N>300$) in your dataset to fit a model with just your X_1 and X_2 variables. A related issue is that you want to be sure there are sufficient data in each of those levels. Obviously, if no one chose X_2 = G, you won't be able to estimate anything about the effect of that level of X_2, but you will also have a problem if some did choose G, but everyone who did has Y = 1. That would lead to the problem of separation. Moreover, if you want to fit the interaction, you will need sufficient data in every combination of levels ($32$, in your case). To read more about these topics, you may want to peruse some of our threads categorized under hauck-donner-effect and many-categories.
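The counting in this answer can be written out directly (a sketch of the arithmetic only, not something any library enforces):

```python
# Dummy-variable count behind the events-per-variable rule of thumb.
levels_x1, levels_x2 = 4, 8
params = (levels_x1 - 1) + (levels_x2 - 1)   # 3 + 7 = 10 model terms
min_events = 15 * params                      # 15 events per model term

assert params == 10
assert min_events == 150      # per outcome class, hence N > 300 overall

# With the X1:X2 interaction, every combination of levels needs data:
assert levels_x1 * levels_x2 == 32
```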
42,005
Can I fit logistic regression over a dataset with only categorical data?
Of course it is possible. You just need to transform your categorical variables into binary (dummy) variables and drop one level from each. For instance, if the variable X takes two values A and B, you create a variable equal to 1 if X == A and 0 otherwise. Since X == A implies X != B, you would have collinearity in your model if you also added the variable equal to 1 if X == B and 0 otherwise.
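A tiny Python sketch of that collinearity: with both indicators kept, their sum reproduces the intercept column exactly, so one must be dropped.

```python
# Made-up two-level variable X with values A and B.
xs = ["A", "B", "B", "A", "B"]
is_a = [1 if v == "A" else 0 for v in xs]
is_b = [1 if v == "B" else 0 for v in xs]

# is_a + is_b is the all-ones column, i.e. the intercept: perfect collinearity.
assert all(a + b == 1 for a, b in zip(is_a, is_b))
assert is_a == [1, 0, 0, 1, 0]   # keeping is_a alone carries all the information
```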
42,006
Exponential family form of multinomial distribution
Exponential families are characterised by their densities, which are such that the interaction between the outcome $x$ of the random variable and the parameter $\theta$ occurs in an exponentiated scalar product, $$\exp\{T(\theta)^\text{T} S(x)\}$$ The other terms in the density are the normalising constant $C(\theta)=\exp\{-\psi(\theta)\}$ and a function of $x$, $h(x)$, which complements the dominating measure $\text{d}\nu(x)$. But all that matters is the product $h(x)\,\text{d}\nu(x)$, which can be seen as a new measure $\text{d}\nu'(x)$. So in the multinomial example, the term $${n \choose x_1 \cdots x_m}=\frac{n!}{\prod_{i=1}^m x_i!}$$ can either be seen as $h(x_1,\ldots,x_m)$ completing the counting measure [which qualifies as "standard" in your terms] on $$\left\{(x_1,\ldots,x_m)\in\mathbb{N}^m;\sum_{i=1}^m x_i=n\right\}$$ or as part of a new measure on that set.
42,007
Exponential family form of multinomial distribution
I was trying to build softmax regression from scratch and got stuck here too: why do all the posts silently omit the coefficient $\frac{n!}{x_1! x_2! \dots x_k!}$ entirely? After some research and thinking, I share my understanding here. In practice we always use a special case of the multinomial distribution with $k>2$ and $n=1$, that is, the categorical distribution. Since $n=1$, the coefficient $\frac{n!}{x_1! x_2! \dots x_k!}$ of the multinomial distribution's PMF is always 1, which is why we can omit it. In more detail: $$ \begin{cases} \text{class 1 is chosen: } \frac{1!}{1! \cdot 0! \cdot 0! \dots 0!} = 1 \\ \text{class 2 is chosen: } \frac{1!}{0! \cdot 1! \cdot 0! \dots 0!} = 1 \\ \text{class 3 is chosen: } \frac{1!}{0! \cdot 0! \cdot 1! \dots 0!} = 1 \\ \vdots \end{cases} $$ For more details and a fuller picture you can refer to my post GLM and exponential family distributions -> Why the PMF has no coefficient, and especially to the section explaining what the $x_i$ stand for in the multinomial distribution.
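A quick Python check of the claim (the helper function is just for illustration): with $n=1$ the outcome vector is one-hot, so the coefficient is always 1, while for $n>1$ it is not.

```python
from math import factorial

def multinomial_coef(n, xs):
    # n! / (x_1! * x_2! * ... * x_k!) for counts xs summing to n.
    c = factorial(n)
    for x in xs:
        c //= factorial(x)
    return c

# n = 1: every outcome is one-hot, so the coefficient is always 1.
k = 4
for j in range(k):
    one_hot = [1 if i == j else 0 for i in range(k)]
    assert multinomial_coef(1, one_hot) == 1

# For n > 1 the coefficient does matter, e.g. counts (2, 1, 0, 0):
assert multinomial_coef(3, [2, 1, 0, 0]) == 3
```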
42,008
Exponential family form of multinomial distribution
Let $Y_{i}$ be a random variable, $i=1\dots k$, $n\in \mathbb{N}$ the number of trials, $y_{i}\in \mathbb{N}$ the number of occurrences of the $i$-th event in a sequence of $n$ trials, and $p_{i}\in [0,1]$ the probability of the $i$-th event in each trial. The probability mass function of the multinomial distribution is $\begin{equation} \displaystyle P(Y_{1}=y_{1},\dots,Y_{k}=y_{k}) =\frac{n!}{y_{1}!\dots y_{k}!}p_{1}^{y_{1}}\dots p_{k}^{y_{k}}, \end{equation}$ with $\displaystyle\sum^{k}_{i=1}y_{i}=n$ and $\displaystyle\sum^{k}_{i=1}p_{i}=1$. The multinomial distribution is in the exponential family. Proof. Using $y_{k}=n-\sum_{i=1}^{k-1}y_{i}$, $\begin{align} \displaystyle P(Y_{1}=y_{1},\dots,Y_{k}=y_{k}) &=\frac{n!}{y_{1}!\dots y_{k}!}p_{1}^{y_{1}}\dots p_{k}^{y_{k}}\\ &=\left(\frac{n!}{y_{1}!\dots y_{k}!}\right)p_{1}^{y_{1}}\dots p_{k}^{n-\sum_{i=1}^{k-1}y_{i}}\\ &=\left(\frac{n!}{y_{1}!\dots y_{k}!}\right)\exp\left[y_{1}\log p_{1}+\dots+\left(n-\sum_{i=1}^{k-1}y_{i}\right)\log p_{k}\right]\\ &=\left(\frac{n!}{y_{1}!\dots y_{k}!}\right)\exp\left[y_{1}\log \left(\frac{p_{1}}{p_{k}}\right)+\dots+y_{k-1}\log\left(\frac{p_{k-1}}{p_{k}}\right)+n\log p_{k}\right]\\ &=b(y_{1},\dots ,y_{k})\exp\left[\eta^{T}T(y_{1},\dots ,y_{k})-a(\eta)\right] \end{align}$ , where $\begin{align} \eta&= \begin{bmatrix} \log(p_{1}/p_{k}) \\ \log(p_{2}/p_{k}) \\ \vdots\\ \log(p_{k-1}/p_{k})\\ \end{bmatrix}\\ T(y_{1},\dots ,y_{k})&= \begin{bmatrix} y_{1} \\ y_{2} \\ \vdots\\ y_{k-1}\\ \end{bmatrix}\\ a(\eta)&=-n\log p_{k}\\ b(y_{1},\dots ,y_{k})&=\frac{n!}{y_{1}!\dots y_{k}!}. \end{align}$
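A numerical sanity check of this factorization in Python, with arbitrary made-up values ($k=3$, $n=5$), using $a(\eta)=-n\log p_{k}$ as the log-partition term:

```python
from math import factorial, log, exp

# Compare the plain multinomial PMF with the exponential-family form
# b(y) * exp(eta . (y_1..y_{k-1}) - a(eta)).
p = [0.5, 0.3, 0.2]                  # event probabilities (sum to 1)
y = [2, 2, 1]                        # counts (sum to n)
n = sum(y)

b = factorial(n) / (factorial(y[0]) * factorial(y[1]) * factorial(y[2]))
pmf = b * p[0]**y[0] * p[1]**y[1] * p[2]**y[2]

eta = [log(p[i] / p[-1]) for i in range(len(p) - 1)]   # natural parameters
a = -n * log(p[-1])                                     # log-partition term
exp_family = b * exp(sum(e * yi for e, yi in zip(eta, y[:-1])) - a)

assert abs(pmf - exp_family) < 1e-12
assert abs(pmf - 0.135) < 1e-9       # 30 * 0.25 * 0.09 * 0.2
```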
42,009
Feature importance for random forest classification of a sample
Variable importance accounts for the increase in out-of-bag cross-validated prediction error. It would be possible, but not meaningful, to account for the change of prediction error by one sample only: as a single sample can only be correctly or wrongly predicted, such a term would be very unstable and crude. You could check out 'local variable importance', 'partial dependence plots' or 'feature contributions'. Here's an example from my package forestFloor using feature contributions. Each plot shows the change of predicted class probability as a function of each variable. For the iris data set, there are no strong variable interactions; therefore, the model structure can be boiled down to a 2D visualization. The R-squared terms quantify how much the model structure deviates from this main-effect-only interpretation/visualization.

library(forestFloor)
library(randomForest)
data(iris)
X = iris[, !names(iris) %in% "Species"]
Y = iris[, "Species"]
rf = randomForest(X, Y,
                  keep.forest = TRUE,  # mandatory for classification
                  replace = FALSE,     # if TRUE, use trimTrees::cinbag, not randomForest
                  keep.inbag = TRUE,   # mandatory always for forestFloor
                  sampsize = 15)       # optional: smaller trees, smoother model structure
ff = forestFloor(rf.fit = rf,            # mandatory
                 X = X,                  # mandatory
                 calc_np = "sad monkey", # this input takes no effect for classification
                 binary_reg = FALSE)     # can change two-class classification to regression,
                                         # thus cannot be TRUE for iris (three classes)
plot(ff, plot_GOF = TRUE, cex = .7,
     colLists = list(c("#FF0000A5"), c("#00FF0050"), c("#0000FF35")))
42,010
Comparing & clustering time series with unequal lengths
It just so happened that a few days ago I read Marco Cuturi's paper on "Fast Global Alignment Kernels" [1]. The idea is to cast the well-known DTW distances as similarities eligible for use in kernel machines, e.g. SVM. You cannot directly transform a DTW distance into a similarity (e.g. a negative exponential of the distance) and hope it will work - you will get a non-positive-definite kernel. The author proposed a novel technique with Global Alignment kernels such that the nice DTW properties are preserved and the kernel is positive definite. I have experimented with his code on several standard time-series classification datasets [2] and was pleasantly surprised by the performance. The beauty and power of his approach is that it works for multivariate time-series as well - and for time-series of different lengths. Ten time-series might be too few to train a good classifier; perhaps getting more examples is possible. The paper and implementations (C, MATLAB and Python) can be found [here]. UPDATE: The author's website has moved. Please use the following link to get the paper and implementations [4].
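The GA-kernel code itself is linked above; as background, here is a minimal Python sketch of plain DTW, the classic dynamic program that lets series of unequal lengths be compared in the first place:

```python
def dtw(a, b):
    # Dynamic-time-warping distance between sequences of possibly
    # different lengths, with absolute-difference local cost.
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# Series of different lengths tracing the same shape align perfectly:
assert dtw([0, 1, 2, 1, 0], [0, 1, 1, 2, 2, 1, 0]) == 0.0
assert dtw([0, 0, 0], [1, 1]) == 3.0
```

Plugging $e^{-\text{DTW}}$ into an SVM as a kernel is exactly the step the answer warns against; the GA kernel replaces the min over alignments with a soft sum so that positive definiteness holds.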
42,011
Comparing & clustering time series with unequal lengths
You can simply extend what I responded in How to statistically compare two time series? to more than two series. Essentially, identify a common model and estimate the parameters both globally and locally. Perform an F test and either accept or reject the hypothesis of common parameters. Another post gives a similar view: Proving similarities of two time series
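The global-vs-local idea can be sketched numerically for a simple common model. This is an illustration with synthetic data and a linear trend as the assumed common model (the setup is mine, not from the answer): fit the model pooled over all series (globally) and per series (locally), then F-test whether the pooled restriction is acceptable.

```python
import numpy as np

def rss_linear(t, y):
    # residual sum of squares of an OLS fit y ~ 1 + t
    X = np.column_stack([np.ones_like(t), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

rng = np.random.default_rng(0)
t = np.arange(30, dtype=float)
y1 = 1.0 + 0.5 * t + rng.normal(0, 1, 30)
y2 = 1.0 + 0.5 * t + rng.normal(0, 1, 30)   # same parameters -> expect no rejection

rss_local = rss_linear(t, y1) + rss_linear(t, y2)        # separate (local) fits
rss_global = rss_linear(np.concatenate([t, t]),          # one common (global) fit
                        np.concatenate([y1, y2]))
k, n = 2, 60                                             # params per fit, total obs
F = ((rss_global - rss_local) / k) / (rss_local / (n - 2 * k))
print(F)   # compare to an F(k, n - 2k) critical value
```

A small F fails to reject the hypothesis of common parameters; with more than two series the restricted and unrestricted fits extend in the obvious way.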
42,012
Reporting the Actual Formula/Equation of an LME model (with factors) used in R?
Your second formulation, $$ y_{imj} = \beta_0 + \sum\beta_{1m}[year]_{im} + b_{0j}[building.id]_j + \epsilon_{imj}, $$ is correct. Depending on your audience, it might be clearer to use the slightly more general mixed model notation and write this as $$ \begin{split} y_i & \sim N(\eta_i,\sigma^2) \\ \eta_{i} & = \beta_0 + \beta_{1,m(i)} + b_{j(i)} \\ b_j & \sim N(0,\sigma^2_b) \end{split} $$ where $m(i)$ gives the year and $j(i)$ gives the building corresponding to the $i^{\textrm{th}}$ observation.
42,013
regression analysis with confounding variables, how to interpret your main coefficient when controlling for confounders
One purpose of regression is to control for the effects of covariates. This question is predicated on the (correct) understanding that this purpose should not be confused with testing the significance of those covariates. In a linear multiple regression model $$\mathbb{E}(y) = \alpha + \beta_1 x_1 + \cdots + \beta_k x_k,$$ the $F$-test compares the null hypothesis $$H_0: \beta_1 = \beta_2 = \cdots = \beta_k = 0$$ to the alternative $$H_1: \beta_j \ne 0\text{ for at least one }j.$$ In your case, you're not interested in this hypothesis because most of those coefficients are associated with covariates. Letting $j$ be the index of the single predictor in which you are interested and $n$ be the amount of data, your test should be based on comparing $$H_0: \beta_j = 0$$ to $$H_1: \beta_j \ne 0.$$ This is usually done with a t-test in which the estimate $\hat \beta_j$ is divided by its standard error $se(\hat\beta_j)$ and the resulting t-statistic is referred to the Student t distribution with $n-k-1$ degrees of freedom. If you consider that result to be significant, then you will reject this null hypothesis (rather than the omnibus null hypothesis of the F test) and conclude that after controlling for all covariates, variable $x_j$ was found to be significantly associated with $y$. Additional considerations Note that if you intended to conduct several such tests separately, involving several variables, then this procedure would no longer be correct for any one of them. Context matters! You would need first to perform a test to see whether any of that set of variables is significant. The usual procedure is an F test based on the "extra sum of squares" associated with the variables of interest. In the case of a single variable, this F test is mathematically equivalent to the Student t test. More subtly, note that what matters is the number of tests you planned to make before seeing the data. 
If first you examined the data and then based on that examination you selected $x_j$ as the sole variable of interest, then you would somehow have to figure out how to account for the additional information you used in order to narrow the model down to this single variable. You might, for instance, attempt (as honestly as possible) to enumerate all the variables you could possibly ever have been interested in testing, then treat them as a group as just described. Reference Montgomery, Peck, and Vining, Introduction to Linear Regression Analysis. Fifth Edition, 2012. John Wiley & Sons. Section 3.3.
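The single-coefficient t-test described above can be sketched numerically. This is an illustration with synthetic data (the variable names and data-generating values are made up, not from the answer): the estimate of the coefficient of interest is divided by its standard error and referred to the t distribution with n - k - 1 degrees of freedom.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100
xj = rng.normal(size=n)                           # predictor of interest
c1, c2 = rng.normal(size=n), rng.normal(size=n)   # covariates to control for
y = 2.0 + 0.8 * xj + 0.3 * c1 + rng.normal(size=n)

X = np.column_stack([np.ones(n), xj, c1, c2])
k = X.shape[1] - 1                                # number of predictors
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
sigma2 = resid @ resid / (n - k - 1)              # residual variance estimate
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
t_j = beta[1] / se[1]                             # t statistic for x_j given c1, c2
p_j = 2 * stats.t.sf(abs(t_j), df=n - k - 1)      # two-sided p-value
print(t_j, p_j)
```

Note that rejecting here answers only the hypothesis about x_j after controlling for the covariates; it says nothing about the omnibus F-test hypothesis.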
42,014
regression analysis with confounding variables, how to interpret your main coefficient when controlling for confounders
The null hypothesis under the overall F-test is the following:
H0: β1 = β2 = ... = βm = 0
Ha: At least one of the slope parameters is not equal to 0.
Looking at the alternative hypothesis (Ha), I doubt you can say anything about individual coefficients once you fail to reject the null. Having said that, failing to reject the null only means that the relationship between X and Y cannot be explained by the given model (in this case linear). You might try another specification, or a non-linear specification of the relationship between X and Y.
42,015
How sum of squares is calculated by the R ANOVA function for non-factor variables in a linear model
One method (the easiest to grasp in one sentence) is to look at the increment in sums of squares due to regression when a covariate is added. This is R's anova (or aov) strategy, which implies that the order of addition of variables is important:

> anova( lm(mpg ~ cyl, mtcars))
Analysis of Variance Table

Response: mpg
          Df Sum Sq Mean Sq F value    Pr(>F)
cyl        1 817.71  817.71  79.561 6.113e-10
Residuals 30 308.33   10.28
---

When we add another variable the regression sums of squares stays the same for the cyl variable:

> anova( lm(mpg ~ cyl+disp, mtcars))
Analysis of Variance Table

Response: mpg
          Df Sum Sq Mean Sq F value    Pr(>F)
cyl        1 817.71  817.71 87.5883 2.903e-10
disp       1  37.59   37.59  4.0268   0.05419
Residuals 29 270.74    9.34

If disp is added first, its SS-regression is maintained and the incremental SS-regression is attributed to the next covariate, this time cyl.

> anova( lm(mpg ~ disp+cyl, mtcars))
Analysis of Variance Table

Response: mpg
          Df Sum Sq Mean Sq F value    Pr(>F)
disp       1 808.89  808.89  86.643 3.271e-10 ***
cyl        1  46.42   46.42   4.972   0.03366 *
Residuals 29 270.74    9.34

There is an ongoing holy war between the proponents of this method as the default and the SAS authors, who want to use a method that allocates sums of squares differently (and I don't think I can state in one sentence what they do, except to say that the regression sums of squares using so-called "type-III" ANOVA for each variable at any given level of complexity are not affected by the order of addition or removal of variables). The proponents of the R approach think that the theory-agnostic application of stepwise methods is bad statistics. They think you should be setting up your models based on what is known or established by existing science and then adding variables that represent any new hypotheses. I'm not sure who invented the "typing" system for sums-of-squares strategies, but R uses type-I (sequential) while SAS uses type-III sums of squares in their respective default regression methods.
There are R packages that can provide a type-III calculation if that's what you need to attempt replication of SAS results. My memory is that the car package has an Anova function that will allow specification of the desired type.
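The order dependence of sequential sums of squares can also be verified by hand. Here is a sketch with synthetic correlated predictors (numpy, not R, and not the mtcars data): each variable's SS is the drop in residual sum of squares when it is added to the model, so with correlated predictors the value depends on entry order.

```python
import numpy as np

def rss(X, y):
    # residual sum of squares of an OLS fit
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    r = y - X @ beta
    return float(r @ r)

rng = np.random.default_rng(2)
n = 200
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)   # correlated with x1
y = 1.0 + x1 + x2 + rng.normal(size=n)
one = np.ones((n, 1))

# sequential SS: drop in RSS at each addition
ss_x1_first = rss(one, y) - rss(np.column_stack([one, x1]), y)
ss_x2_after = rss(np.column_stack([one, x1]), y) - rss(np.column_stack([one, x1, x2]), y)
ss_x2_first = rss(one, y) - rss(np.column_stack([one, x2]), y)

print(ss_x2_first, ss_x2_after)   # differ: x2 first absorbs variance shared with x1
```

This is exactly what the three anova() calls above show with cyl and disp.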
42,016
Hypothesis test based on entropy
A fantastic reference that I have been using for self-study on this topic is Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach by Kenneth Burnham. In brief, a hypothesis test compares a test statistic, $T$, calculated from the data on hand (assumed to be from some distribution, usually Gaussian) to a critical value, e.g. $t_{crit}$ or $F_{crit}$ or $\chi^2_{crit}$, to establish the veracity of a hypothesis, e.g. $\mu=0$. Entropy, on the other hand (better said, relative entropy), is an estimate of the expected Kullback-Leibler distance between the hypothesized model (could be Gaussian, chi-square, whatever) and the true, generating model, i.e. nature, denoted by Burnham simply as $f$. What does estimated distance mean? Well, if the relative entropy, a.k.a. expected K-L distance, between your candidate model, $g$, and full reality (nature) is large, then you have lost information (measured in bits) by using your candidate to try to represent reality. There is no hypothesis being tested in information-theoretic approaches, only (the distance from) truth to be discovered. The quantity used in statistics most often to measure this distance is the AIC. A quote from the textbook on the AIC: Thus, rather than having a simple measure of the directed distance between two models (i.e. the K-L distance), one has instead an estimate of the expected, relative distance between the fitted model and the unknown true mechanism (perhaps of infinite dimension) that actually generated the observed data. (Page 61)
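As a hedged numeric sketch of using the AIC to compare candidate models $g$ (the data-generating setup below is invented for illustration; AIC = 2k - 2 log L, with Gaussian errors and the variance counted as a parameter):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.normal(size=200)

def gauss_loglik(resid):
    # maximized Gaussian log-likelihood, with the MLE of the error variance
    n = len(resid)
    s2 = resid @ resid / n
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1)

# Candidate g1: intercept only; candidate g2: intercept + slope
r1 = y - y.mean()
X = np.column_stack([np.ones_like(x), x])
b = np.linalg.lstsq(X, y, rcond=None)[0]
r2 = y - X @ b

aic1 = 2 * 2 - 2 * gauss_loglik(r1)   # k = 2 (mean, variance)
aic2 = 2 * 3 - 2 * gauss_loglik(r2)   # k = 3 (intercept, slope, variance)
print(aic1, aic2)   # the lower AIC estimates the shorter K-L distance to f
```

No critical value is consulted and no hypothesis is rejected; the models are simply ranked by estimated expected K-L distance to truth.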
42,017
Hypothesis test based on entropy
To test for the significance of the difference between two entropy values, compute the maximum possible entropy, log2(N), then take the ratio of the actual entropy to it. The Z-test for proportions is an appropriate test.
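A small sketch of the quantities in this answer (the category counts below are illustrative): the Shannon entropy of an observed distribution over N categories, its maximum log2(N), and the ratio that would feed the proportion test.

```python
import math

def entropy_bits(counts):
    # Shannon entropy (in bits) of the empirical distribution of the counts
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

counts = [50, 30, 15, 5]           # illustrative category counts, N = 4
H = entropy_bits(counts)
H_max = math.log2(len(counts))     # maximum entropy: uniform over N categories
print(H, H_max, H / H_max)         # the ratio lies in [0, 1]
```

The ratio equals 1 only for a uniform distribution, which is when the observed entropy attains its maximum.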
42,018
GARCH vs SV for Forecasting
This is more like an extended comment than an answer, as the answer really depends on the data series at hand.

1) For the purpose of probabilistic forecasting of financial and macro-economic time series: Is it known whether one class of model has a tendency to perform better than the other in general?

There are quite a few papers comparing the forecast abilities of a broad class of different volatility models. My recommendation is to look around and read some papers in order to determine which models have shown good abilities for the kind of series you are interested in.

Hansen and Lunde (2005): A Forecast Comparison of Volatility Models: Does Anything Beat a GARCH(1,1)? In this article they compare 330 different volatility models on exchange rate and IBM return data and find that no model performs significantly better than the GARCH(1,1) model. They do, however, not consider SV models.

Hansen, Lunde and Nason (2003): Choosing the Best Volatility Models: The Model Confidence Set Approach. They compare 55 different volatility models, including fractional GARCH and SV models.

Fleming and Kirby (2003): A Closer Look at the Relation between GARCH and Stochastic Autoregressive Volatility. They compare the two model classes and find similar results.

Giot and Laurent (2003): Value-at-Risk for Long and Short Trading Positions. They compare the abilities of different univariate and multivariate ARCH-class models in VaR modelling.

Giot and Laurent (2004): Modelling Daily Value-at-Risk Using Realized Volatility and ARCH Type Models. They compare ARCH models and daily realized volatility in VaR modelling.

These were just a few papers, and there are loads more out there.

2) From a computational standpoint, is one class of model significantly more convenient to estimate in a Bayesian paradigm than the other? Specifically for computing the posterior predictive log score above?
I cannot answer this, as I have no experience in estimating these models using Bayesian techniques; however, ARCH-type models are easily estimated by MLE, while SV models can be estimated using the EM algorithm or the Kalman filter. There are also several different ways to evaluate them. One such way is mentioned in the Hansen, Lunde and Nason (2003) paper above.

3) Will a GARCH and SV model tend to produce similar forecasts in general, or will these differ drastically between the 2 classes of models? Moreover, are the consequences of misspecifying an SV process by modeling it as a GARCH, or vice versa, likely to be very significant in a univariate time series, or is it usually "not that big of a deal"? (I know this one is probably a real big "it depends", but if there is any general consensus, practical experience or a paper related to this topic it would be greatly appreciated.)

Often when modelling financial data with a GARCH model you see that the ARCH and GARCH coefficients sum to a value very close to 1, indicating an IGARCH (integrated GARCH). This could be a sign of very persistent data but also of mis-specification, as changing parameters/breaks would result in an IGARCH (residuals could also be non-normal etc.). In this case you would get better results by estimating an SV model, as these can capture the changes. This is a massive literature, so you will need to read up a bit on this topic yourself.
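The persistence remark can be made concrete with a tiny simulation (a sketch of mine, not from any of the cited papers): in a GARCH(1,1), persistence is governed by alpha + beta, and even well below 1 the squared returns are autocorrelated, i.e. volatility clusters.

```python
import numpy as np

def simulate_garch11(n, omega, alpha, beta, seed=0):
    # r_t = sigma_t * z_t,  sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2
    rng = np.random.default_rng(seed)
    r = np.empty(n)
    sigma2 = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    for t in range(n):
        r[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2
    return r

# persistence alpha + beta = 0.9: clearly stationary, visible clustering
r = simulate_garch11(5000, omega=0.05, alpha=0.10, beta=0.80)
sq = r ** 2
lag1_acf = np.corrcoef(sq[:-1], sq[1:])[0, 1]   # autocorrelation of squared returns
print(lag1_acf)
```

As alpha + beta approaches 1 the unconditional variance omega / (1 - alpha - beta) blows up, which is the IGARCH boundary the answer describes; estimates piling up there can signal breaks rather than genuine persistence.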
42,019
VGAM fitting a betabinomial model
The binomial distribution is the distribution of the number of 'successes' out of a known, finite number of 'trials' (e.g., heads on a certain number of coin flips). With a fixed probability of success, $\pi$, and a fixed number of trials, $n$, the variance of the number of successes is fixed as well. A typical logistic regression scenario has Bernoulli data (a single coin flip) as its response, but when you have binomial data with $n>1$ per observation, you can find that the response data vary more than they ought to. In that case, the assumptions of a binomial GLiM will be violated. The beta binomial distribution relaxes that assumption. It contains three parameters, $n, \alpha, \& \beta$, which gives it additional flexibility to address the overdispersion in the situation described above. The important point here, though, is that the overdispersion / greater variance can only exist with data that are counts of successes out of $n>1$ trials. Thus, R (or any other software) needs the data to be in that form to fit the model. SAS, for example, uses events/trials; R uses cbind(successes, failures), which is equivalent. (For what it's worth, in the documentation page you link to, I see only cbind(successes, failures) in the examples listed.)
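The overdispersion point can be checked numerically with the closed-form variances (the counts and parameter values below are made up for illustration): at the same mean, the beta-binomial variance exceeds the binomial one whenever n > 1, and the two coincide at n = 1, which is why Bernoulli-coded data cannot exhibit it.

```python
def binom_var(n, p):
    # variance of a Binomial(n, p) count
    return n * p * (1 - p)

def betabinom_var(n, a, b):
    # closed-form variance of a BetaBinomial(n, a, b) count
    p = a / (a + b)
    return n * p * (1 - p) * (a + b + n) / (a + b + 1)

n, a, b = 10, 2.0, 3.0
p = a / (a + b)                    # both models have mean n * p
print(binom_var(n, p), betabinom_var(n, a, b))   # 2.4 vs 6.0: overdispersed
print(binom_var(1, p), betabinom_var(1, a, b))   # equal at n = 1
```

This mirrors why the software needs the data as cbind(successes, failures): only then is the per-observation n available to the likelihood.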
42,020
Variable importance in party vs randomForest
The two importances agree for a and c actually. There is a good tutorial about variable importance here. These are some of the guys who worked in cforest. Also, I think the values are different in absolute value because if you check the documentation of the randomForest package it says: importance(x, type=NULL, class=NULL, scale=TRUE, ...) Which means that by default the importance of feature $\mathbf{x}_j$ is standardized: $\frac{\mbox{VI}(\mathbf{x}_j)}{\hat{\sigma}/\sqrt{\mbox{ntree}}}$ Where $$\mbox{VI}(\mathbf{x}_j) = \frac{\sum_{t}^{\mbox{ntree}}\mbox{VI}^{(t)}(\mathbf{x}_j)}{\mbox{ntree}}$$
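The standardization above is easy to reproduce by hand. A small Python sketch with hypothetical per-tree importances (not output from randomForest; just the scaling formula quoted above):

```python
import math
from statistics import mean, stdev

# Hypothetical per-tree permutation importances VI^(t)(x_j) for one feature.
per_tree_vi = [0.8, 1.2, 1.0, 0.9, 1.1]
ntree = len(per_tree_vi)

vi = mean(per_tree_vi)                      # VI(x_j): average over trees
se = stdev(per_tree_vi) / math.sqrt(ntree)  # sigma-hat / sqrt(ntree)
scaled_vi = vi / se                         # what scale=TRUE would report
```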
Variable importance in party vs randomForest
Traditional Random Forest uses the "Gini gain" splitting criterion in assessing variable importance, which is biased towards factor variables with many levels/categories. In contrast, the cforest function creates random forests not from CART trees, but from unbiased classification trees based on conditional inference, which gives much more robust results when multifactorial variables are involved, particularly when the function is used with subsampling without replacement. Here's a good paper that explains the differences more in depth: Bias in random forest variable importance measures: Illustrations, sources and a solution. Strobl et al. (2007)
How to build a confidence interval with only binary test results?
Interesting question. Your data are Bernoulli, which is a binary distribution with probability of success equal to $p$. The simplest thing you can do is compute the proportion of successes from your sample and use the Central Limit Theorem to arrive at an asymptotic 95% confidence interval. If we denote the sample proportion of successes by $\widehat{p}$, then the interval for the true probability $p$ would be

$$\widehat{p}\pm 1.96 \times \sqrt{\frac{\widehat{p} \left(1-\widehat{p} \right)}{n}}$$

I have to emphasize that this is not a probability interval, since the true parameter $p$ is considered fixed and does not have a sampling distribution. A long-run interpretation of this interval is that if you gather $100$ samples and compute for each the confidence interval above, then about 95 of these will contain the true parameter.

Now you also mentioned that you would like to compare probabilities of success. Let us use the indices $1$ and $2$ for the first and second sample, respectively. Assuming independent samples, an extension of the above procedure would be to compute the confidence interval

$$\widehat{p}_1-\widehat{p}_2\pm 1.96 \times \sqrt{\frac{\widehat{p}_1 \left(1-\widehat{p}_1 \right)}{n_1}+\frac{\widehat{p}_2 \left(1-\widehat{p}_2 \right)}{n_2}}$$

Notice that this interval now concerns the difference in the population probabilities, i.e. the difference of the true probabilities. The interpretation is precisely the same, nevertheless: gather 100 samples (from each population), compute the confidence intervals, and in about 95 cases you will find that they contain the population difference. You decide that the population probabilities are not equal if the interval does not contain $0$.

This is just one way to compare population probabilities, though. In practice this method might not be the one which you want to use. The reason is that these intervals are not always very informative. 
Instead one might want to find an interval for the relative risk $\frac{p_1}{p_2}$ or the odds ratio $\displaystyle{\frac{\frac{p_1}{1-p_1}}{\frac{p_2}{1-p_2}}}$. This is also possible using an asymptotic approximation. If you think that this will be of interest to you, here is a relevant question concerning the relative risk How to calculate the relative risk based on two independent confidence intervals
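The two intervals above are straightforward to compute. A Python sketch (the counts are hypothetical, chosen only to illustrate the formulas):

```python
import math

def wald_ci(successes, n, z=1.96):
    """Asymptotic (Wald) 95% CI for a single proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

def diff_ci(s1, n1, s2, n2, z=1.96):
    """Asymptotic 95% CI for the difference of two independent proportions."""
    p1, p2 = s1 / n1, s2 / n2
    half = z * math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) - half, (p1 - p2) + half

# Hypothetical data: 40/100 successes in sample 1 vs 25/100 in sample 2.
lo, hi = diff_ci(40, 100, 25, 100)
# Since the interval excludes 0 here, one would conclude the two
# population probabilities differ (at the 5% level, asymptotically).
```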
On the sample complexity of mean estimation in $\ell_p$-norm
A closely related topic is that of concentration inequalities, which give you a bound (of the sort you are looking for) that also depends on the number of samples (among other things). Concretely, the concept of Rademacher complexity is a standard tool to address this sort of problem. The Rademacher complexity can be understood as a permutation test, where you change your labels randomly. When applied to the problem of estimating the mean, the bound tells you how likely it is that you get close to the actual mean by chance (how concentrated the samples are around the mean, and thus how stable your estimates based on different samples are).

To be more specific, for a sample $X=(x_{i})$ of size $l$, drawn i.i.d. from a probability distribution $D$, and for a real-valued function class $F$ with domain $X$, the empirical Rademacher complexity is the random variable

$$ \hat{R}_{l}(F) = E_{\sigma}\left[\sup_{f \in F}\left|\frac{2}{l}\sum_{i=1}^{l}\sigma_{i}f(x_{i})\right| \;\middle|\; X\right] $$

where $\sigma = (\sigma_{1},...,\sigma_{l})$ are independent uniform $\pm1$-valued random variables. The Rademacher complexity is

$$ R_{l}(F) = E_{S \sim D}[\hat{R}_{l}(F)] = E_{S\sigma}\left[\sup_{f \in F}\left|\frac{2}{l}\sum_{i=1}^{l}\sigma_{i}f(x_{i})\right|\right] $$

The $\sup$ means that it looks for the highest correlation possible with random noise. Now, this concept is relevant because of the following theorem.

Given the above conditions, assume that $F$ is the class of mappings from $X$ to the interval $[0,1]$, and let $(z_{i})$ be a sample of size $l$. If you fix $\delta \in (0,1)$, then with probability $1-\delta$ over random draws of size $l$, every $f \in F$ satisfies

$$ E[f(z)] \leq \hat{E}[f(z)] + R_{l}(F) + \sqrt{\frac{\ln(2/\delta)}{2l}} \leq \hat{E}[f(z)] + \hat{R}_{l}(F) + 3\sqrt{\frac{\ln(2/\delta)}{2l}} $$

Notice that the hat is used to indicate the empirical expectation measured on a particular sample. 
The idea is to find such a family of $f$'s and use the theorem. Since $D$ has compact support, you know that $(W-E[W])^{2}/R$ is bounded in $[0,1]$, where $R$ is the radius of the ball. Using the properties of the Rademacher complexity and a second theorem which gives you the Rademacher complexity of linear prediction (details can be found here and in great detail here), you get the following bound for your probability

$$ \sqrt{\frac{2R^{2}}{l}}\left(\sqrt{2} + \sqrt{\ln\frac{1}{\delta}}\right) $$

P.S. I just realized you referred to the $p$-norm. But still, you can use the Khintchine inequality to bound that quantity with the 2-norm.
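To get a feel for the final bound, here is a small Python evaluation at some hypothetical values of the radius $R$, sample size $l$, and confidence level $1-\delta$ (the numbers are illustrative only):

```python
import math

def rademacher_bound(R, l, delta):
    """The closed-form bound sqrt(2 R^2 / l) * (sqrt(2) + sqrt(ln(1/delta)))."""
    return math.sqrt(2 * R ** 2 / l) * (math.sqrt(2) + math.sqrt(math.log(1 / delta)))

b_small = rademacher_bound(R=1.0, l=100, delta=0.05)
b_large = rademacher_bound(R=1.0, l=10000, delta=0.05)
# The bound shrinks like 1/sqrt(l): 100x more samples tightens it by 10x.
```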
On the sample complexity of mean estimation in $\ell_p$-norm
Let me follow up on this question and answer. Indeed, the connection to the Rademacher complexity of the linear functions from the dual body can be used to provide upper bounds for the problem. But this is not quite what Cristobal and I are asking about. (Not to mention that the question we ask is even more fundamental.) Rademacher complexity characterizes the convergence rate of the empirical mean to the true mean, so it can give an upper bound. This upper bound is tight in many cases, but we are interested in bounds that apply to any mean estimator. We are also interested in results beyond the straightforward $L_2$ (or even $L_p$ for $p>2$) cases covered in the answer: general norms defined by a convex origin-centered body.
On the sample complexity of mean estimation in $\ell_p$-norm
Thanks everybody for the answers. Rademacher complexity is in fact a useful tool to derive upper bounds. However, the sample complexity can also depend on the geometry of the convex body we are interested in. In this regard, one can use ideas of uniform smoothness and uniform convexity from Banach space theory to get the right rates. This is something well-known in some fields, but I haven't found a concise reference so we included the analysis in our paper (see Appendix B in http://arxiv.org/pdf/1512.09170v1.pdf) Two questions that still remain for me are, first: How to derive lower bounds on sample complexity of empirical mean based on Rademacher complexity? This I suppose is standard, but I haven't found a reference. The second question is: Are there examples where empirical mean does not provide the best sample complexity for mean estimation?
Interpretation of Saturated Model vs. Model with Interaction and One Main Effect
Throughout my answer, the usual conditional mean independence $\mathbb{E}(\varepsilon_{i}\vert X_{i},Z_{i})=0$ is maintained. It is instructive to consider a concrete example. Let $X_{i}$ be a dummy for college education, such that $X_{i}=1$ if worker $i$ is a college graduate, and $X_{i}=0$ otherwise; and let $Z_{i}$ be a dummy for gender, such that $Z_{i}=1$ if $i$ is male, and $0$ if $i$ is female. And suppose $Y_{i}$ is the observed income. Hence $\mathbb{E}(Y_{i}\vert X_{i}=1,Z_{i}=1)$ is the expected income of a male college graduate, and $\mathbb{E}(Y_{i}\vert X_{i}=1,Z_{i}=0)$ is the expected income of a female college graduate. Other conditional expectations, such as $\mathbb{E}(Y_{i}\vert X_{i}=0,Z_{i}=0)$, have similar interpretations.

First, it is not hard to verify that the coefficient $\alpha_{2}$ equals $$ \alpha_{2}=\mathbb{E}(Y_{i}\vert X_{i}=1,Z_{i}=1)-\mathbb{E}(Y_{i}\vert X_{i}=1,Z_{i}=0). $$ This is the difference of the expected incomes of male and female college graduates. The significance of $\alpha_{2}$ may indicate gender discrimination among college graduates.

Next, we have $$ \beta_{2}+\beta_{3}=\alpha_{2}=\mathbb{E}(Y_{i}\vert X_{i}=1,Z_{i}=1)-\mathbb{E}(Y_{i}\vert X_{i}=1,Z_{i}=0). $$ And $$ \beta_{0}+\beta_{2}=\mathbb{E}(Y_{i}\vert X_{i}=0,Z_{i}=1),\ \beta_{0}=\mathbb{E}(Y_{i}\vert X_{i}=0,Z_{i}=0). $$ So $$ \beta_{2}=\mathbb{E}(Y_{i}\vert X_{i}=0,Z_{i}=1)-\mathbb{E}(Y_{i}\vert X_{i}=0,Z_{i}=0), $$ which measures the gender discrimination among workers without college degrees. And $\beta_{3}=(\beta_{2}+\beta_{3})-\beta_{2}$, that is, $$ \beta_{3}=\{\mathbb{E}(Y_{i}\vert X_{i}=1,Z_{i}=1)-\mathbb{E}(Y_{i}\vert X_{i}=1,Z_{i}=0)\}-\{\mathbb{E}(Y_{i}\vert X_{i}=0,Z_{i}=1)-\mathbb{E}(Y_{i}\vert X_{i}=0,Z_{i}=0)\}. $$ So $\beta_{3}$ can be understood as the difference in the magnitude of gender discrimination between two cohorts: workers with college education and workers without a college degree. 
The positive sign of $\beta_{3}$ indicates that gender discrimination among more highly educated workers is greater than among less educated workers.

Last but not least, one important assumption made implicitly by model (1) is the following: $$ \mathbb{E}(Y_{i}\vert X_{i}=0,Z_{i}=0)=\mathbb{E}(Y_{i}\vert X_{i}=0,Z_{i}=1)=\mathbb{E}(Y_{i}\vert X_{i}=0)=\alpha_{0}. $$ That is, by specifying model (1), one has assumed that there is no wage discrimination against gender for those who have no college degree. The expectations $\mathbb{E}(Y_{i}\vert X_{i}=0,Z_{i}=0)$ and $\mathbb{E}(Y_{i}\vert X_{i}=0,Z_{i}=1)$ are the expected incomes of female and male workers without college education, respectively. Such an assumption in general may or may not hold, depending on your empirical exercise.
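The coefficient identities above can be verified numerically. A Python sketch with made-up cell means $\mathbb{E}(Y\vert X=x, Z=z)$ (the numbers are hypothetical, chosen only so the algebra is easy to follow):

```python
# Hypothetical cell means E[Y | X=x, Z=z], say in thousands of dollars.
mu = {(0, 0): 30, (0, 1): 34, (1, 0): 45, (1, 1): 55}

# Coefficients of the saturated model Y = b0 + b1*X + b2*Z + b3*X*Z + e:
b0 = mu[(0, 0)]
b1 = mu[(1, 0)] - mu[(0, 0)]
b2 = mu[(0, 1)] - mu[(0, 0)]                                 # gender gap, no college
b3 = (mu[(1, 1)] - mu[(1, 0)]) - (mu[(0, 1)] - mu[(0, 0)])   # difference-in-differences

# b2 + b3 recovers the gender gap among college graduates (alpha_2 above):
alpha2 = mu[(1, 1)] - mu[(1, 0)]
assert b2 + b3 == alpha2
```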
Why is the Hessian of the log likelihood function in the logit model not negative *semi*definite?
When they say the Hessian is always negative definite, they are assuming that $X$ is a full rank matrix, which is a very typical assumption for regression models. So it is true that in your example, the Hessian would not be negative definite. However, they assume that you have informative data: you can't expect to estimate the effect of $x$ if you don't observe different levels of $x$.
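A toy numeric illustration of the rank condition (in Python, hand-computing the 2x2 case rather than fitting anything): with an intercept plus a covariate that never varies, $X$ is rank-deficient and the logistic Hessian $-X^{\top}WX$ is singular, hence only negative semidefinite.

```python
# Every observation has the same covariate value: X has columns (1, 2), rank 1.
rows = [(1.0, 2.0)] * 5
w = [0.25] * 5                 # weights p_i(1 - p_i), e.g. at beta = 0

# Entries of the 2x2 Hessian H = -sum_i w_i x_i x_i^T:
h11 = -sum(wi * x1 * x1 for wi, (x1, x2) in zip(w, rows))
h12 = -sum(wi * x1 * x2 for wi, (x1, x2) in zip(w, rows))
h22 = -sum(wi * x2 * x2 for wi, (x1, x2) in zip(w, rows))

det = h11 * h22 - h12 * h12    # zero determinant: singular, not negative definite
```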
R: how to interpret mosaic and association plots
In the mosaic plot you see that the first split is wrt gender, with about 2/3 female and about 1/3 male. The second split is wrt alcohol (conditional on gender), showing that only about 1/6 of females drink alcohol while it is about 3/4 of the males. The final split is wrt cigarettes (conditional on gender and alcohol), showing a clear association: persons who drink tend to smoke and vice versa.

The shadings are based on the Pearson residuals of an independence model - by default complete independence of all factors, but this can also be changed to other independence models. The cutoffs of 2 and 4 are based on certain heuristics and are meant to bring out patterns in the Pearson residuals. Here, the default cutoffs do not work very well and you could consider changing them. The association plot shows the Pearson residuals directly, highlighting in which cells there are more or fewer observations than expected.

For further details on the methods, I would recommend reading the references listed in ?mosaic. A starting point could be Michael Friendly's 1994 JASA paper or our JSS and JCGS papers on the vcd package and its shadings.
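The Pearson residuals driving the shading are easy to compute by hand. A Python sketch for a hypothetical 2x2 table under complete independence (not the survey data from the question; residual = (observed - expected) / sqrt(expected)):

```python
import math

# Hypothetical 2x2 counts, e.g. drinker (rows) x smoker (columns).
table = [[60, 20],
         [15, 45]]
n = sum(sum(row) for row in table)
row_tot = [sum(row) for row in table]
col_tot = [sum(col) for col in zip(*table)]

# Pearson residuals under independence: expected cell count is
# (row total * column total) / n.
resid = [[(table[i][j] - row_tot[i] * col_tot[j] / n)
          / math.sqrt(row_tot[i] * col_tot[j] / n)
          for j in range(2)] for i in range(2)]
# Cells with |residual| > 2 (or > 4) cross the default shading cutoffs.
```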
R: how to interpret mosaic and association plots
For reporting, I think the following statement may be right: the association of smoking and alcohol intake in males was the most statistically significant finding.
How to Balance my Dataset?
You can oversample your minority class examples by simply duplicating them, or you can use the SMOTE algorithm (DMwR package in R, function SMOTE), that generates synthetic minority class examples while downsampling the majority category at the same time. Since you have a pretty high number of cases, downsampling should not lead to too much concept loss, but of course, you'll still be losing a bit of information, which is not ideal.

Note that as already mentioned by Analyst, 1300 minority cases is relative rarity but not absolute rarity. That is, if the minority class is represented by strong concepts, your classifier should be able to pick that up (see this paper for a good discussion of absolute and relative rarity). So maybe your predictors are not that good at discriminating between classes in the first place, or maybe you have some concept overlap, which makes learning difficult.

Also, which learning algorithm are you using? For instance, Stochastic Gradient Tree Boosting is a little bit less sensitive to relative class imbalance than Random Forests (since the focus is gradually put on misclassified cases). With Random Forests, two strategies have been developed to cope with class imbalance; they involve resampling and weighting. Some methods similar in spirit have also been introduced for Boosting (e.g.).

EDIT: added references (in case links die in the future):

Chawla, Nitesh V., et al. "SMOTE: synthetic minority over-sampling technique." Journal of Artificial Intelligence Research (2002): 321-357.

Weiss, G. M. (2004). Mining with rarity: a unifying framework. ACM SIGKDD Explorations Newsletter, 6(1), 7-19.

Chen, Chao, Andy Liaw, and Leo Breiman. "Using random forest to learn imbalanced data." University of California, Berkeley (2004).

Sun, Yanmin, et al. "Cost-sensitive boosting for classification of imbalanced data." Pattern Recognition 40.12 (2007): 3358-3378.
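To make SMOTE's core idea concrete, here is a minimal Python sketch (not the DMwR implementation, and with made-up 2-D minority points): each synthetic example is placed at a random position on the segment between a minority example and one of its $k$ nearest minority-class neighbours.

```python
import random

def smote_like(minority, n_new, k=3, seed=0):
    """SMOTE-style oversampling sketch: interpolate between a minority point
    and a randomly chosen one of its k nearest minority-class neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbours of x, by squared Euclidean distance.
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: sum((a - b) ** 2
                                              for a, b in zip(x, p)))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # random position along the segment [x, nb]
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

# Hypothetical minority-class points in the unit square.
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
new_points = smote_like(minority, n_new=5)
```

Synthetic points are convex combinations of existing minority examples, so here they all stay inside the unit square; a real pipeline would then combine them with (a possibly downsampled) majority class before training.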
42,031
How to Balance my Dataset?
You failed to tell us about your classification approach and your dataset characteristics. If you are using a method that takes ages to train and tune, I would recommend cost-sensitive learning. Oversampling burdens the training without adding much in return, though it is very simple. Undersampling, on the other hand, has failed to impress me so far. You could also look into vector quantization. Try a bunch of strategies and find what works for your data...
42,032
R-squared value when using offset -- how is it calculated?
$R^2$ is computed in terms of the sum of squares of fitted values $\text{MSS}=\sum (\hat{y}_i - \bar y)^2$ (assuming an intercept term is present) and the sum of squares of residuals $\text{RSS} = \sum \left(y_i - \hat{y}_i\right)^2$ as $$R^2 = \frac{\text{MSS}}{\text{MSS} + \text{RSS}}:$$ it is the fraction of the total sum of squares "explained" by the fit. Whether you subtract an offset $z_i$ or declare it as a variable in the offset parameter, the model will be equivalent--it produces the same residuals--but in the former case the values to be predicted are those of $y_i-z_i$; that is, $z_i$ has been subtracted from $\hat y_i$. The sum of squares to be "explained" is thereby changed when the offset is manually subtracted (and the software has no way of knowing that). $\text{MSS}$ could increase or decrease, resulting in a corresponding increase or decrease in $R^2$.
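A small numerical check makes the point concrete (pure Python, hypothetical data): fitting the offset-subtracted response gives one set of residuals, but the $R^2$ computed against $y_i - z_i$ differs from the one computed against the original $y_i$:

```python
def ols_fit(x, y):
    """Simple regression with intercept; returns (intercept, slope)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    return ybar - b * xbar, b

x = [1.0, 2.0, 3.0, 4.0, 5.0]
z = [10.0, 0.0, 5.0, 2.0, 8.0]            # known offset
y = [2.1, 3.9, 6.2, 8.1, 9.8]

# Subtracting the offset manually: the software's response is y - z.
ymz = [yi - zi for yi, zi in zip(y, z)]
a, b = ols_fit(x, ymz)
resid = [v - (a + b * xi) for xi, v in zip(x, ymz)]
rss = sum(r ** 2 for r in resid)

# R^2 as reported for the y - z response ...
mbar = sum(ymz) / len(ymz)
mss_shifted = sum((a + b * xi - mbar) ** 2 for xi in x)
r2_shifted = mss_shifted / (mss_shifted + rss)

# ... versus R^2 on the original y scale (fitted values z + a + b*x).
ybar = sum(y) / len(y)
mss_orig = sum((zi + a + b * xi - ybar) ** 2 for zi, xi in zip(z, x))
r2_orig = mss_orig / (mss_orig + rss)
```

The residuals (and hence RSS) are identical either way; only MSS, and therefore $R^2$, changes with the choice of response scale.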
42,033
Comparing and visualising highly skewed distributions
Note that your gamma data are not at zero, they are near zero, so you can work with logarithms. If your actual data do contain zeros and you only used skewed gammas to create an example here, you could shift your data by a small $\epsilon$ to make them positive and then take logs. set.seed(1) gamma1 <- rgamma(10000, shape=0.05, rate=1) gamma2 <- rgamma(10000, shape=0.055, rate=0.98) gamma3 <- rgamma(10000, shape=0.06, rate=0.95) I see two possibilities. You could use beanplots with logged data, aligning them via the ylim argument. library(beanplot) opar <- par(mfrow=c(1,3)) beanplot(gamma1,what=c(1,1,0,0),log="y",ylim=range(c(gamma1,gamma2,gamma3)),col="grey") beanplot(gamma2,what=c(1,1,0,0),log="y",ylim=range(c(gamma1,gamma2,gamma3)),col="grey") beanplot(gamma3,what=c(1,1,0,0),log="y",ylim=range(c(gamma1,gamma2,gamma3)),col="grey") par(opar) Or, if you really want to compare your three distributions, you could do pairwise qq plots on log scales: pairs(cbind(sort(gamma1),sort(gamma2),sort(gamma3)),log="xy", panel=function(x,y,...){points(x,y,pch=19,cex=0.2,...); abline(a=0,b=1)}) The beanplot emphasizes the extreme skewness of all three datasets (note the logs), while the pairwise qq plots emphasize that the datasets are different, while losing the skewness.
42,034
What are the mean and variance of the ratio of two normal variables, with non-zero means?
Since, as pointed out by Alexander Chervov, the mean of $1/X$ does not exist when $X\sim\mathcal{N}(\mu,\sigma^2)$, the mean of $Y/X$ (which, were it to exist, would by independence equal the mean of $Y$ times the mean of $1/X$) does not exist either. Since the mean does not exist, the variance does not exist either. To make the above more precise (in connection with whuber's criticism), the integral $$\int_{\mathbb{R}^2} \frac{y}{x}\,\varphi(x-a;\sigma)\,\varphi(y-b;\tau)\,\text{d}x\text{d}y$$ is defined iff the integral $$\int_{\mathbb{R}^2} \frac{|y|}{|x|}\,\varphi(x-a;\sigma)\,\varphi(y-b;\tau)\,\text{d}x\text{d}y$$ is finite, which is not the case since $$\int_{\mathbb{R}} |y|\,\varphi(y-b;\tau)\,\text{d}y\,\int_{\mathbb{R}} \frac{1}{|x|}\,\varphi(x-a;\sigma)\,\text{d}x=+\infty\,.$$
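The divergence of that last integral can be checked numerically (a midpoint-rule sketch in Python with the illustrative choice $a=1$, $\sigma=1$): the truncated integral $\int_\epsilon^1 \varphi(x-1;1)/|x|\,\mathrm{d}x$ keeps growing roughly like $\varphi(-1;1)\log(1/\epsilon)$ as $\epsilon \to 0$, so the full integral is infinite:

```python
import math

def phi(x, a=1.0, sigma=1.0):
    """Normal density varphi(x - a; sigma)."""
    return math.exp(-0.5 * ((x - a) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def truncated_integral(eps, n=100000):
    """Midpoint rule for the integral of phi(x - 1; 1) / x over [eps, 1]."""
    h = (1.0 - eps) / n
    return h * sum(phi(eps + (i + 0.5) * h) / (eps + (i + 0.5) * h)
                   for i in range(n))

# Each extra decade of epsilon adds roughly phi(0) = varphi(-1; 1) times log(10).
vals = [truncated_integral(10.0 ** (-k)) for k in range(1, 5)]
```

The values increase without bound as $\epsilon$ shrinks, which is exactly the non-integrability of $1/|x|$ against the normal density near the origin.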
42,035
OLS with ordinal dependent variable - do the coefficients mean anything?
Interpretive issues for the OLS estimator notwithstanding, the real issue here is in the treatment of an ordinal variable as if it were a variable on the ratio scale. By using standard linear regression analysis, the researchers are essentially treating the ordinal response as if it were a continuous quantity. By averaging three ratings they are also implicitly treating these life satisfaction measures as continuous measures of equal weighting in a continuous aggregated measure. This involves a lot of potentially dubious assumptions about the nature of the rating scale, so you could reasonably be skeptical of the legitimacy of this measure. At a minimum, such a treatment obscures a great deal of information in the specific effects of the explanatory variables on the ordinal categories in the individual response measures. In any case, if we let $\bar{Y}$ denote the response variable in this case (i.e., the average of the three ratings for life satisfaction) then we have a model of the form: $$\bar{Y}_i = u(\boldsymbol{\beta}, \mathbf{x}_i) + \varepsilon_i,$$ where the true regression function has the linear form: $$u(\boldsymbol{\beta}, \mathbf{x}_i) = \beta_0 + \beta_1 x_{i,1} + \cdots + \beta_K x_{i,k}.$$ As usual, each slope coefficient $\beta_k$ (with $k=1,...,K$) is the rate-of-change of the conditional expected response with respect to the corresponding explanatory variable: $$\beta_k = \frac{\partial u}{\partial x_{i,k}} (\boldsymbol{\beta}, \mathbf{x}_i).$$ As you can see, the coefficient values in the regression look at rates-of-change of the conditional expected value of the averaged life-satisfaction rating, which you may or may not regard as a dubious measure. The fact that all individual life-satisfaction ratings are ordinal integer values means that the averaged value is restricted to the support $\{ 1, \tfrac{4}{3}, \tfrac{5}{3}, \cdots , \tfrac{11}{3}, 4 \}$, and so the expected value is a convex combination of these possible values. 
With regard to your follow-on questions: (1) the OLS estimator is unbiased and consistent (under broad limiting conditions on the explanatory variables) for the true coefficient values in the model, which in this case may be of dubious meaning to begin with; and (2) standardisation of the response values will merely transform them via a linear transformation, which will alter all the slope coefficients by the corresponding linear transformation; it does not fundamentally change the information coming out of the model.
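The support claim for the averaged rating is easy to verify by brute force (a Python check using exact fractions, added here for illustration): averaging three ratings, each in $\{1,2,3,4\}$, yields exactly the ten values from $1$ to $4$ in steps of $\tfrac{1}{3}$:

```python
from fractions import Fraction

ratings = range(1, 5)  # each of the three items is rated 1..4
support = sorted({Fraction(a + b + c, 3)
                  for a in ratings for b in ratings for c in ratings})
# support == [1, 4/3, 5/3, 2, 7/3, 8/3, 3, 10/3, 11/3, 4]
```

So the conditional expectation modelled by OLS is a convex combination of these ten grid points, as stated above.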
42,036
OLS with ordinal dependent variable - do the coefficients mean anything?
Q1: The OLS is consistent and unbiased by standard theory. There are assumptions for this, but these don't seem more problematic for the outcome variables you have here than for "standard" applications of linear regression. These assumptions in particular do not say anything about how the quantitative outcome variable is obtained (and do not require it to be normally distributed). Q2: As far as I can see, what you propose here is just a linear transformation of the outcome variables. Due to its affine equivariance (*), linear regression using this will be technically equivalent to using the original data, which, if I see it correctly, are scaled between 1 and 4 (I'm assuming here that you use $X_{min}=1,\ X_{max}=4$; equivalence may not hold if you use the minimum and maximum achieved in the data, which may not generally be 1 and 4). Regression coefficients tell you, as always, the (estimated) expected change of the response variable if the value of an explanatory variable is changed by 1. I don't see much difference between whether this has to be interpreted on a $[1,4]$-scale or on a $[0,1]$-scale, but if you feel more comfortable with the latter, nobody would stop you from using it. As said before, technically it's equivalent (for example, $\hat\beta=0.12$ on the $[1,4]$-scale with range 3 should change to $0.12/3=0.04$ on the $[0,1]$-scale). (*) Affine equivariance means roughly that if the data are linearly transformed, the estimated regression parameters will change in the appropriate way implied by this transformation, so that they have the same meaning after transformation. Addendum: To what extent it is appropriate to use ordinal responses in this way as if they were meaningful quantitative numbers is a controversial issue that may be worth some thought but for which there is no generally accepted true answer. 
In any case it doesn't have implications on your questions (other than knowing that background knowledge about how measurements were obtained is generally valuable for assessing model assumptions such as independence, and interpretative meaning of the results, but this is not specific to these data).
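The affine equivariance in Q2 is easy to verify numerically (a Python sketch with made-up data, fitting simple regressions directly): rescaling the response from the $[1,4]$ scale to $[0,1]$ via $(y-1)/3$ divides the slope by exactly 3 and transforms the intercept accordingly:

```python
def ols_fit(x, y):
    """Simple regression with intercept; returns (intercept, slope)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    return ybar - b * xbar, b

x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.2, 1.9, 2.4, 3.1, 3.8]                    # response on the [1, 4] scale

a1, b1 = ols_fit(x, y)                           # original scale
a2, b2 = ols_fit(x, [(yi - 1) / 3 for yi in y])  # rescaled to [0, 1]
```

The two fits carry identical information; only the units of the coefficients change.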
42,037
Has there been a project to apply machine learning to generation of indices for books?
To add on to the answer by @denis-tarasov, I would refer you to this amazing article by the Guardian. Take a look at this passage: One of the things that’s commonly imagined is that indexing is, in the age of Google, something that can be outsourced to a computer algorithm. Dead wrong. A concordance – essentially, an alphabetical list of all the words in a book with page references – can be done by a computer. But an index, to be useful, needs to be done by a human. In a book about the Middle East, say, an entry that said: “Syria 2, 3, 5, 6, 7, 10, 23, 25, 26, 27 … ” would be no use at all. Indeed, the main argument is that keyword/keyphrase extraction cannot replace the human element of wittiness, critical thinking, and engagement with the reader of a book.
42,038
Has there been a project to apply machine learning to generation of indices for books?
I think this problem is very similar to the keyword/keyphrase extraction problem. Keyword extraction is a well-studied task (see for example this paper for a review). Possible approaches include heuristics, supervised machine learning (with a number of special features like TF-IDF, position in the sentence, and others), and language models. Powerful keyword/keyphrase extraction tools exist, so one can try them first and see if they do the job well.
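As a concrete illustration of one such feature, here is a toy TF-IDF scorer in Python (the corpus and tokenisation are invented for the example; real extraction tools combine many more signals):

```python
import math
from collections import Counter

def tfidf_scores(doc_tokens, corpus):
    """TF-IDF scores for the words of one document against a small corpus
    (a toy sketch of one common keyword-extraction feature)."""
    tf = Counter(doc_tokens)
    n_docs = len(corpus)
    scores = {}
    for word, count in tf.items():
        df = sum(1 for d in corpus if word in d)  # document frequency
        idf = math.log(n_docs / df)
        scores[word] = (count / len(doc_tokens)) * idf
    return scores

corpus = [
    ["bayesian", "inference", "with", "mcmc"],
    ["keyword", "extraction", "with", "tfidf"],
    ["mcmc", "sampling", "with", "priors"],
]
scores = tfidf_scores(corpus[1], corpus)
```

Words appearing in every document (here "with") score zero, while terms specific to one document score highest, which is exactly what makes TF-IDF a useful feature for candidate index or keyword terms.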
42,039
Translate glmer (lme4) model specification into MCMCglmm
According to the MCMCglmm documentation: "Multiple random terms can be passed using the + operator" (as you already do). Having said that, you appear to be defining multiple random intercepts and no random slopes. In general you need to use a syntax similar to ~us(1+x1):x2, where x2 is your discrete variable (for the intercepts), x1 is your continuous variable (for the slopes), and us denotes an unstructured random covariance. You may want to check other covariance structures (e.g. idh for an identity one). Please check chapters 3 and 4, dealing with Categorical Random Interactions and Continuous Random Interactions respectively, in the related MCMCglmm Course Notes. The general overview document is also very helpful (and more concise). I cannot emphasise enough how helpful these notes are; using MCMCglmm without reading them would be nearly impossible for me. I suspect you want a structure similar to: ~us(1+x1+x2):class + obs but please check this twice before using it. Note that a global intercept is not fitted by default for variance structure models, so you need the +1 that was otherwise redundant for lmer. Good luck specifying your priors! :D
42,040
Translate glmer (lme4) model specification into MCMCglmm
To my understanding, MCMCglmm Poisson models include an observation-level random effect by default, as the author seems to believe that over-dispersion is the regular case rather than the exception. In the manual it is called additive over-dispersion; in MCMCglmm summaries it is called the unit effect.
42,041
Parameters in a non-parametric model
Nonparametric methods don't specify something (maybe the distribution, maybe a relationship between two variables*) with a fixed finite number of parameters; they're (potentially) infinite-parametric. Consider a loess curve (a form of nonparametric regression), for example; the parameters not only aren't explicit, but if you attempt to count them, the number isn't even an integer. On the other hand, you never need more than $n$ parameters to define $n$ observations; presumably in at least that sense** the number of parameters grows as you increase $n$. Consider, for example, a kernel density estimate using the usual AIMSE-optimal bandwidth selection; that has an effective number of parameters (e.g. as measured by Ye's generalized degrees of freedom) that grows as $n$ increases (but not in proportion to $n$). However, since the statement is referenced (Murphy, Kevin (2012). Machine Learning: A Probabilistic Perspective. MIT Press), you should probably consult that work for the complete context. * the second can also be regarded as not specifying something about the distribution, such as the functional form of a conditional mean ** though in other senses as well
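To make the growing-flexibility idea concrete, here is a minimal Gaussian kernel density estimator in Python with the rule-of-thumb bandwidth $h = 1.06\,\hat\sigma\,n^{-1/5}$: the smoothing window shrinks as $n$ grows, so the fit becomes progressively more flexible. (This is an illustrative sketch, not a computation of Ye's generalized degrees of freedom.)

```python
import math
import random

def silverman_bandwidth(data):
    """Rule-of-thumb bandwidth h = 1.06 * sd * n^(-1/5)."""
    n = len(data)
    m = sum(data) / n
    sd = (sum((x - m) ** 2 for x in data) / (n - 1)) ** 0.5
    return 1.06 * sd * n ** (-0.2)

def kde(x, data, h):
    """Gaussian kernel density estimate at the point x."""
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data) \
        / (len(data) * h * math.sqrt(2 * math.pi))

rng = random.Random(0)
sample = [rng.gauss(0, 1) for _ in range(10000)]
# Bandwidth shrinks with n: the estimator can resolve ever finer structure.
bandwidths = [silverman_bandwidth(sample[:n]) for n in (100, 1000, 10000)]
```

A shrinking bandwidth means each kernel influences a narrower region, which is one way to see the effective parameter count growing with $n$ while never being a fixed finite number.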
42,042
Why is the complement to Power not $\alpha$?
if I had to verbalize the complement of Power given its definition in statement iii), I would arrive at "wrongly rejecting H0" (simply by kind of flipping "correctly" to "wrongly"). The problem is that it's not that simple. Everyday English (at least when expressed that way) doesn't quite so simply take complements of conditional events. Let's look at the four events:

                  H0 true                    H0 false
    not reject    "correctly not reject"     Type II error
    reject        Type I error               "correctly reject"

The column headings are the events being conditioned on. We don't change those when taking the complement. The row headings are the actions. What the event "correctly rejecting $H_0$" means is "reject $H_0|H_0$ false". (What "simply flipping correctly to wrongly" does is change the part after the "|".) Taking the complementary event changes the part before "|" and leaves the conditioning event alone, which gives "fail to reject $H_0|H_0$ false", which colloquially might be rendered as "wrongly failing to reject $H_0$". So the first thing is that it's the event "reject $H_0$" that has to flip - that's what represents taking the complement. However, because we characterized it as "correctly" or "wrongly" instead of specifying the conditioning event (resulting in a simpler English expression, but no longer one that's easily flipped), we have to also flip that characterization.
42,043
Why is the complement to Power not $\alpha$?
I think if you form these statements into statements of probability it will make more sense. It is much harder for me to "take the complement" of a sentence than it is to take the complement of a conditional probability and then form it as a sentence. Below I'm using the fact that the complement of $P(A|B)$ is $P(A^c|B)$.

i) Wrongly rejecting $H_0$ is called a type I error (controlled by $\alpha$). Forming this as a probability statement, $P(\text{reject } H_0| H_0 \text{ is true})$. The complement of this is $P(\text{fail to reject } H_0| H_0 \text{ is true})$. In other words, correctly failing to reject the null.

ii) Wrongly accepting $H_0$ is called a type II error (the probability of which is indicated by $\beta$). Forming this as a probability statement, $P(\text{fail to reject } H_0| H_0 \text{ is false})$. The complement of this is $P(\text{reject } H_0| H_0 \text{ is false})$. This is power.

iii) Power is the probability of correctly rejecting $H_0$ (equal to $1-\beta$). Then $1 = \text{Power} + \beta$, i.e. $\beta$ is the complement to Power. Forming this as a probability statement, $P(\text{reject } H_0| H_0 \text{ is false})$. The complement of this is $P(\text{fail to reject } H_0| H_0 \text{ is false})$. Verbalizing this, you would arrive at "wrongly failing to reject the null".
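The identity $\text{Power} + \beta = 1$ in statement iii) can be checked numerically. A Python sketch (an illustration, not part of the original answer; the effect size, $n$, and $\alpha$ are arbitrary choices) simulates a one-sided z-test under a false null and counts rejections and misses:

```python
import numpy as np

rng = np.random.default_rng(42)
alpha, n, mu_true = 0.05, 25, 0.5    # H0: mu = 0, which is false here
z_crit = 1.6449                      # one-sided 5% critical value

reps = 10000
rejections = 0
for _ in range(reps):
    x = rng.normal(mu_true, 1, size=n)
    z = x.mean() * np.sqrt(n)        # test statistic under H0: mu = 0
    rejections += z > z_crit

power = rejections / reps            # P(reject | H0 false)
beta = 1 - power                     # P(fail to reject | H0 false)
print(round(power + beta, 10))       # 1.0: beta is power's complement
```

Here $\beta$ comes out around 0.2, nowhere near $\alpha = 0.05$ — the two live under different conditioning events.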
42,044
Combining Poisson estimates
You can use maximum likelihood. Let $X$ be the number of buses, $Y$ be the number of cars. Then, with $\lambda$ the observation period in hours, we have $X\sim \text{Po}(10\lambda), Y\sim \text{Po}(60\lambda)$, so the likelihood function based on assuming that $X, Y$ are independent, is proportional to $$ L(\lambda)= e^{-70\lambda} \lambda^{x+y} $$ (factors which do not depend on $\lambda$ can be omitted) so the maximum likelihood estimator is $$ \hat{\lambda} = \frac{x+y}{70} $$ We can check that that gives an unbiased estimate, that is, $E \hat{\lambda}=\lambda$, below: $$ E \hat{\lambda} = \frac{E(X+Y)}{70}= \frac{10\lambda+60\lambda}{70}=\lambda $$
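The unbiasedness claim is easy to check by Monte Carlo. A Python sketch (not part of the original answer; the true $\lambda$ below is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 0.5                                # true value of lambda
reps = 200000

x = rng.poisson(10 * lam, size=reps)     # X ~ Po(10*lambda)
y = rng.poisson(60 * lam, size=reps)     # Y ~ Po(60*lambda)
lam_hat = (x + y) / 70                   # the MLE, for each simulated pair

print(lam_hat.mean())                    # close to 0.5, as unbiasedness predicts
```

The average of the simulated estimates sits right on the true value, matching the algebraic check $E\hat{\lambda} = \lambda$.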
42,045
How bad can heteroscedasticity be before causing problems?
It is true that heteroscedasticity reduces your power (see: Efficiency of beta estimates with heteroscedasticity), but it can also inflate type I errors. Consider the following simulation (coded in R):

set.seed(1044)                           # this makes the example exactly reproducible
b0 = 10                                  # these are the true values of the intercept
b1 = 0                                   # & the slope
x = rep(c(0, 2, 4), each=10)             # these are the X values
hetero.p.vector = vector(length=10000)   # these vectors are to store the results
homo.p.vector   = vector(length=10000)   # of the simulation
for(i in 1:10000){                       # I simulate this 10k times
  y.homo = b0 + b1*x + rnorm(30, mean=0, sd=1)  # these are the homoscedastic y's
  y.x0 = b0 + b1*0 + rnorm(10, mean=0, sd=1)    # these are the heteroscedastic y's
  y.x2 = b0 + b1*2 + rnorm(10, mean=0, sd=2)    # (notice the SDs of the error
  y.x4 = b0 + b1*4 + rnorm(10, mean=0, sd=4)    #  term go from 1 to 4)
  y.hetero = c(y.x0, y.x2, y.x4)
  homo.model   = lm(y.homo~x)            # here I fit 2 models & get the
  hetero.model = lm(y.hetero~x)          # p-values
  homo.p.vector[i]   = summary(homo.model)$coefficients[2,4]
  hetero.p.vector[i] = summary(hetero.model)$coefficients[2,4]
}
mean(homo.p.vector<.05)    # there are ~5% type I errors in the homoscedastic case
# 0.049                    # (as there should be)
mean(hetero.p.vector<.05)  # but there are ~8% type I errors w/ heteroscedasticity
# 0.0804

Linear models (such as multiple regression) tend to be fairly robust, though. In general, a rule of thumb is that you are OK as long as the largest variance is not more than four times the lowest variance. This is a rule of thumb, so it should be taken for what it's worth. However, notice that in the simulation above, in the heteroscedastic model, the highest variance is $16\times$ the smallest variance ($4^2=16$, vs $1^2 = 1$) and the resulting type I error rate is $8\%$ instead of $5\%$.
42,046
Comparing inter-rater agreement between classes of raters
Here is an approach you could take. I'm going to first assume you need agreement, not consistency, but I'll show you how you can use consistency afterwards. For a great review of the difference, see this paper.

Agreement. This focuses on absolute agreement between raters - if I give it a 2, you will give it a 2. Here are the steps I would take:

1) Krippendorff's $\alpha$ across both groups. This is going to be an overall benchmark.

2) Krippendorff's $\alpha$ for each group separately. Compare the two coefficients, and see which group has a higher reliability. You can calculate confidence intervals for both and see if they cross; see Hayes and Krippendorff (2007). For an implementation in R, look at the irr package, kripp.alpha, and kripp.boot. The general approach is to use bootstrapping, though I haven't implemented it myself. If the reliability of the semi-experts is statistically equivalent or close enough for your purposes, then you could proceed to consider it. If it is substantially lower, you would need to justify it by a) significantly lower costs to using semi-experts; and b) identifying ways to improve it.

3) Sufficient inter-rater reliability for semi-experts is not adequate, of course, if they are not in agreement with the experts. Here you could do a statistical comparison of the two groups' distributions and central tendencies - if you are comfortable comparing means on ordinal data, you have sufficient observations, and the data look normal, use standard tests like a t-test or ANOVA. Otherwise a crosstab and $\chi^2$ test may be more appropriate (just keep in mind sample size sensitivities of these tests). If there is a statistical and substantive difference between the groups, and their reliability differs substantively, then the semi-experts won't likely give you the same "quality" as the experts.

Consistency. This looks at whether the two groups are aligned, though not necessarily in agreement - if I rate highly, you will too, even if we don't both rate it the same. One common way to do this is with the intra-class correlation coefficient; the classic reference is: Shrout, P. and Fleiss, J. L. (1979) "Intraclass correlation: uses in assessing rater reliability" in Psychological Bulletin, Vol. 86, No. 2, pp. 420–428. The psych package in R has formulas for this. This basically relies on a nested ANOVA model - you could treat the reviewers as nested in two groups, and look at how much of the variance is attributed to the groupings relative to the overall variance. If you are familiar with ANOVA models it should be fairly straightforward to do (you may want to use an lmer model in the lme4 package to run a mixed-effects regression and extract the variance components from there; that's how I've done it before).
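To make the variance-components idea concrete, here is a Python sketch (the answer's actual suggestion is R's psych or lme4; this implements only the simplest one-way ICC(1) formula from Shrout & Fleiss, case 1, with made-up ratings):

```python
import numpy as np

def icc1(ratings):
    """One-way ICC(1) = (MSB - MSW) / (MSB + (k-1)*MSW)
    for an n-targets x k-raters matrix (Shrout & Fleiss case 1)."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    row_means = ratings.mean(axis=1)
    grand = ratings.mean()
    msb = k * np.sum((row_means - grand) ** 2) / (n - 1)               # between-target MS
    msw = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))  # within-target MS
    return (msb - msw) / (msb + (k - 1) * msw)

perfect = [[1, 1], [2, 2], [3, 3]]   # the two raters agree exactly
noisy   = [[1, 3], [2, 1], [3, 2]]   # the two raters disagree a lot

print(icc1(perfect))                 # 1.0
print(icc1(noisy) < icc1(perfect))   # True
```

The between/within decomposition here is the same logic the nested ANOVA extends when you add the expert vs semi-expert grouping as another variance component.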
42,047
Find Neural Network Inputs Given Outputs
I'm no expert in this field, so I might be wrong. Therefore, correct me if I'm wrong. Consider this neural network (which I suppose is equivalent to yours):

A---H1
 \ /  \
  X    C
 / \  /
B---H2

Consider that the activation function of H1, H2 and C is the bipolar sigmoid, to which we'll refer as "bsig(x)". Also, we'll name the connections as follows:

A, H1: wa1;  A, H2: wa2;  B, H1: wb1;  B, H2: wb2;  H1, C: wh1;  H2, C: wh2

Now the values of H1, H2 and C can be defined as:

H1 = bsig(wa1 * A + wb1 * B)
H2 = bsig(wa2 * A + wb2 * B)
C = bsig(wh1 * H1 + wh2 * H2)

So, C can be written as:

C = bsig(wh1 * bsig(wa1 * A + wb1 * B) + wh2 * bsig(wa2 * A + wb2 * B))

All you need to do is solve this equation for A or B, depending on which of the values is unknown.
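There is usually no closed-form solution to that equation, but it can be solved numerically. A Python sketch (the weights and target below are made up for illustration; bisection works here because C is monotone in B for fixed A when wh1*wb1 and wh2*wb2 share a sign):

```python
import math

def bsig(x):
    """Bipolar sigmoid, range (-1, 1)."""
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

# made-up "trained" weights, chosen so C is increasing in B
wa1, wa2, wb1, wb2, wh1, wh2 = 0.5, -0.3, 0.8, 0.4, 1.0, 0.7

def forward(A, B):
    h1 = bsig(wa1 * A + wb1 * B)
    h2 = bsig(wa2 * A + wb2 * B)
    return bsig(wh1 * h1 + wh2 * h2)

def solve_B(A, C_target, lo=-50.0, hi=50.0, iters=200):
    """Bisection: find B such that forward(A, B) = C_target, with A known."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if forward(A, mid) < C_target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

A, B_true = 1.0, 2.0
C = forward(A, B_true)                       # pretend only A and C are observed
print(abs(solve_B(A, C) - B_true) < 1e-6)    # True: B is recovered
```

If the relevant weights have mixed signs, C need not be monotone in the unknown input, and a root-finder over multiple starting brackets (or the gradient approach from the other answers) is needed instead.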
42,048
Find Neural Network Inputs Given Outputs
The answer is simple: backpropagation. Say you have a trained net $f$ which maps some $x$ to some $y$, which you'd ideally want to be $z$. You then use a different loss, which is the deviation from $y$ to $z$, e.g.: $$ \mathcal{C} = ||z - y||_2^2. $$ In back propagation, you typically follow the gradient of the loss $\mathcal{L}$ with respect to the weights via stochastic gradient descent. Say you have a weight matrix $W$, then you do: $$ W \leftarrow W - \eta {\partial \mathcal{L} \over \partial W} $$ where $\eta$ is some learning rate. Now, to obtain the right $x$ to get out $z$, you do the same thing with the inputs on $\mathcal{C}$ for several iterations: $$ x \leftarrow x - \eta {\partial \mathcal{C} \over \partial x}, $$ where you have initialised $x$ randomly. Since this optimisation will typically be non convex, you might want to start this with different initialisations. Practically, you will have to compute the derivatives for the input layer (e.g. the $\delta$ in most text books).
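A minimal Python sketch of this input-optimisation loop (the tiny 2-3-1 tanh net, its frozen weights, the target, and the learning rate are all made up for illustration; the gradient of $\mathcal{C}$ with respect to $x$ is written out by hand):

```python
import numpy as np

# a tiny "trained" 2-3-1 tanh net with hand-picked, frozen weights
W1 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, 1.0]])
W2 = np.array([[0.5, 0.5, 0.5]])

def f(x):
    return W2 @ np.tanh(W1 @ x)

z = np.array([0.3])           # the output we would like the net to produce
x = np.array([0.5, -0.5])     # an (arbitrary) initialisation of the input
eta = 0.05                    # learning rate

for _ in range(3000):
    h = np.tanh(W1 @ x)
    y = W2 @ h
    delta = 2.0 * (y - z)                             # dC/dy for C = ||z - y||^2
    grad_x = W1.T @ ((W2.T @ delta).ravel() * (1.0 - h ** 2))
    x = x - eta * grad_x                              # gradient step on the INPUT

print(float(abs(f(x) - z)))   # near 0: found an x that (almost) maps to z
```

As the answer notes, the objective is generally non-convex, so with a larger net you would repeat this from several random initialisations and keep the best $x$.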
42,049
Find Neural Network Inputs Given Outputs
You could use reconstruction error and consider your second input as a parameter. Suppose your fixed input is $x_1$ and your latent input is $x_2$. The output $y$ can be: $$y = s(W_1 \cdot x_1 + W_2 \cdot x_2 + b)$$ $s$ being your bipolar sigmoid. You then use a decoder to map back into a reconstruction $z$: $$z = s(W' \cdot y + b')$$ Then the error function to minimize could be the squared error: $$L(x, z) = \| x - z \|^2$$ You can now train your network the way you want, with $W_1, W_2, b, W', b'$ AND $x_2$ being parameters of your gradient.
42,050
Find Neural Network Inputs Given Outputs
I agree with the backpropagation. However, if the question is more general, especially for bigger nets, you can train another network on many generated input-output pairs from your original network and approximate the inverse mapping with the 2nd network. Hope it helps
42,051
Find Neural Network Inputs Given Outputs
I believe bernardo may have made an error. At the hidden layer h there may not be a sigmoid function, since the sigmoid is the function applied to h1 and h2 to give result c. Therefore, to correct his assumption, h1 = a*wa1 + b*wb1, and h2 follows suit, leaving the sigmoid function to process h1 and h2: C = sig(h1 + h2). Then solve for the input. This may work, and per linear algebra we know there may be a number of inputs that can result in target c. The only caveat to this whole solution is that it assumes the weight matrix has already been trained. If it has, then it is simply trained for inputs a and b. In other words, the only way I can think this can be valuable is to determine if other inputs can result in the same solution.
42,052
How can heteroskedasticity that is only contingent on omitted variables not effect the validity of standard errors?
I don't have access to the book, but I think there should be an additional caveat. The omitted variable[s] should be uncorrelated with the explanatory variables that appear in the model. If the omitted variable, on which the conditional variance depends, is correlated with the included variables, then the residual variance will vary with the model variables and the homoscedasticity assumption will be violated. On the other hand, if the omitted variable is uncorrelated, then the residual / error variance will be the same along the range of the model variables. Thus, the homoscedasticity assumption obtains in effect for that model. Sometimes it helps to look at an example or try a little simulation. Here is one I worked up in R:

set.seed(9018)                 # this makes the example exactly reproducible
x = runif(500, min=0, max=10)  # x is a uniformly distributed continuous variable
g = rep(c(0,1), each=250)      # g is a grouping variable, which will be omitted
y1 = 5 + .3*x + g + c(rnorm(250, mean=0, sd=1),   # residual SD=1 when g=0
                      rnorm(250, mean=0, sd=2) )  # residual SD=2 when g=1
xs = sort(x)                   # by sorting x, I make it correlated w/ g
y2 = 5 + .3*xs + g + c(rnorm(250, mean=0, sd=1),
                       rnorm(250, mean=0, sd=2) )
uncor.m = lm(y1~x)             # this is the model w/ g omitted, but uncorrelated w/ x
cor.m   = lm(y2~xs)            # in this case, g is correlated w/ xs
library(lmtest)                # we use this package to run the Breusch-Pagan tests
bptest(uncor.m)
#  studentized Breusch-Pagan test
#
#  data:  uncor.m
#  BP = 0.1178, df = 1, p-value = 0.7314
bptest(cor.m)
#  studentized Breusch-Pagan test
#
#  data:  cor.m
#  BP = 38.2682, df = 1, p-value = 6.166e-10

Here are the scale-location plots for the models; you can see that the uncorrelated version is flat, whereas the correlated version has higher residual variance on the right.

The residual distribution for your model is the integral of the errors over the omitted variables. In the simplest case, you could have a mixture of two normals with the same mean ($0$) but different variances / SDs. This will yield what is, in effect, a single distribution with a middling variance (at least marginally). This is similar to the case I illustrate above (in the simulation, there is an effect of g on the mean as well as the variance, so the distribution will be somewhat bimodal). Typically, the error distribution marginalized over the omitted variables will not be very normal at all. The situation is directly analogous to the marginal distribution of $Y$, which integrates over the conditional distribution of $Y$ (the residuals) and the distribution of $X$. For an example it may help to read my answer here: What if residuals are normally distributed, but Y is not?

Note that even if you have homoscedasticity, the normality of the errors / residuals can also affect the validity of the standard errors. With enough data, the residuals don't have to be perfectly normal for the SEs to be valid, but this requires more data the further your residuals are from normality, and the necessary $N$ can be much higher than people suspect (see @Macro's answer here: Regression when the OLS residuals are not normally distributed). In general, if you believe this is a reasonable possibility, you would be better off just using standard errors that are robust to these issues. The Huber-White heteroscedasticity-consistent 'sandwich' errors are quite convenient, and are commonly used for this reason.
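The sandwich estimator mentioned at the end fits in a few lines. A Python sketch (HC0, the simplest Huber-White variant; the heteroscedastic data below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x])          # design matrix with an intercept
y = 5 + 0.3 * x + rng.normal(0, 1 + 0.3 * x)  # error SD grows with x

beta = np.linalg.solve(X.T @ X, X.T @ y)      # OLS coefficients
resid = y - X @ beta

XtX_inv = np.linalg.inv(X.T @ X)
# classical (homoscedastic) covariance: sigma^2 * (X'X)^-1
classical = resid @ resid / (n - 2) * XtX_inv
# HC0 sandwich: (X'X)^-1  X' diag(e_i^2) X  (X'X)^-1
meat = X.T @ (X * resid[:, None] ** 2)
sandwich = XtX_inv @ meat @ XtX_inv

print(np.sqrt(np.diag(classical)))  # classical SEs
print(np.sqrt(np.diag(sandwich)))   # robust SEs; they differ under heteroscedasticity
```

Under homoscedasticity the two covariance estimates agree asymptotically, so using the sandwich version costs little when the assumption happens to hold.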
42,053
Can somebody identify this distribution?
I think it relates to a Gumbel distribution: if I define $S=R^2$, when $R\sim p(r|\lambda)$, the density of $S$ is given by the Jacobian formula: \begin{align*} q(s|\lambda) &= p(\sqrt{s}|\lambda)\times \left|\frac{\text{d}r}{\text{d}s}\right|\\ &= p(\sqrt{s}|\lambda)\times \frac{1}{2\sqrt{s}}\\ &= \frac{2\lambda \sqrt{s}\exp\left(\lambda\exp\left(-s\right)-s\right)}{\exp\left(\lambda\right)-1}\,\frac{1}{2\sqrt{s}}\\ &= \frac{\lambda \exp\left(\lambda\exp\left(-s\right)-s\right)}{\exp\left(\lambda\right)-1}\\ &= \frac{\exp\left(\exp\left(\log\{\lambda\}-s\right)+\log\{\lambda\}-s\right)}{\exp\left(\lambda\right)-1}\\ &= \frac{\exp\left(\exp\left(-z\right)-z\right)}{\exp\left(\lambda\right)-1}\\ \end{align*} where $z=s-\log\{\lambda\}$ So $S$ is almost distributed as a Gumbel distribution with parameter $(\log\{\lambda\},1)$ except that (a) its support is truncated to $(0,+\infty)$ and (b) there is a missing - in front of the exponential inside the exponential... This means that $S-\log\{\lambda\}$ has a fixed distribution with the above density and cdf $F$. From this representation, there exists a transform $G^{-1}\circ F$ (with $G$ being the cdf of the Gumbel distribution) that turns $ S-\log\{\lambda\}$ into a standard Gumbel, but this is not very useful! Note that, in the mixture representation, $S=R^2$ is then distributed as an Exponential $\mathcal{E}(n)$ variate.
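For what it's worth, the change of variables and the fact that $q(\cdot|\lambda)$ integrates to one are easy to confirm numerically (a quick Python sketch of my own; $p$ and $q$ are transcribed from the densities above):

```python
import numpy as np

lam = 2.5  # an arbitrary lambda > 0 for the check

def p(r, lam):
    # density of R, as given above
    return 2 * lam * r * np.exp(lam * np.exp(-r**2) - r**2) / (np.exp(lam) - 1)

def q(s, lam):
    # derived density of S = R^2
    return lam * np.exp(lam * np.exp(-s) - s) / (np.exp(lam) - 1)

s = np.linspace(0.01, 20.0, 1000)
lhs = q(s, lam)
rhs = p(np.sqrt(s), lam) / (2 * np.sqrt(s))   # Jacobian formula

# crude trapezoid check that q integrates to ~1 on (0, infinity)
grid = np.linspace(1e-6, 50.0, 200_001)
f = q(grid, lam)
total = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(grid))
```

The unit integral also follows analytically from the substitution $u = e^{-s}$, which reduces the integral to $\int_0^1 \lambda e^{\lambda u}\,du / (e^\lambda - 1) = 1$.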
42,054
How to transform continuous data with extreme bimodal distribution
1) There's no way to transform a discrete random variable to be continuous. If it takes $k$ distinct values, no transformation will leave you with more than $k$ distinct values. So you can't transform this to be normal. It's always going to have two big spikes (or worse, with non-monotonic transformations you might end up with only one big spike). 2) Since this is a predictor, you don't need it to be normal, so this inability is inconsequential.
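To see point 1) concretely, here is a short check (an illustrative Python sketch, not from the original answer): however you transform a two-valued variable, you still get at most two distinct values.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.choice([0.0, 1.0], size=10_000, p=[0.7, 0.3])   # k = 2 distinct values

# A deterministic transformation maps each distinct input to one output,
# so the number of distinct values can never increase.
transforms = (np.log1p, np.sqrt, np.square, lambda v: (v - v.mean()) / v.std())
for f in transforms:
    assert len(np.unique(f(x))) <= len(np.unique(x))
```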
42,055
Expected value of least squares estimator $\hat{\beta}$
Note that in regression we condition on $X$. Hence expressions like $(X^TX)^{-1}X^T$ will be a matrix of constants. Recall that $Y=X\beta+e$, and just apply linearity of expectation, together with $E[e]=0$. Edit: $E[\hat{\beta}]=E[(X^TX)^{-1}X^TY]=E[(X^TX)^{-1}X^T(X\beta+e)]$ $\hspace{1cm}=(X^TX)^{-1}X^TX \beta+(X^TX)^{-1}X^T E[e]=I\beta+\,0=\beta$ (note that $\beta$ is a fixed vector of parameters, not a random quantity, so it passes straight through the expectation).
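A quick Monte Carlo check of the algebra (my own sketch): holding $X$ fixed and averaging $\hat{\beta}$ over many draws of $e$ recovers $\beta$.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 50, 3
X = rng.normal(size=(n, p))          # fixed design: we condition on X
beta = np.array([1.0, -2.0, 0.5])
H = np.linalg.inv(X.T @ X) @ X.T     # (X'X)^{-1} X', a constant matrix given X

betas = []
for _ in range(5000):
    e = rng.normal(size=n)           # E[e] = 0
    y = X @ beta + e
    betas.append(H @ y)              # the OLS estimate for this draw

beta_bar = np.mean(betas, axis=0)    # should be close to the true beta
```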
42,056
Effective degrees of freedom for regularized regression
Assume $\mathbb A$ is invertible. Let $\mathbb A = \Sigma^\prime \mathbb U^\prime \mathbb U \Sigma$ where $\mathbb U$ is an orthogonal matrix, $\mathbb {U^\prime U} = \mathbb I$, and write $$\mathbf \beta = \mathbb U \Sigma \mathbf b$$ so that $$\mathbf \beta ^\prime \mathbf \beta = \mathbf b^\prime \mathbb A \mathbf b.$$ Then, since $\mathbb U \Sigma$ is invertible, $$\eqalign{ \frac{1}{2}\mathbf b^\prime \mathbb{X^\prime X} \mathbf b - \mathbf b^\prime \mathbb{X^\prime} \mathbf y &=\frac{1}{2}\left(\left(\mathbb U\Sigma\right)^{-1} \beta\right)^\prime \mathbb{X^\prime X} \left(\left(\mathbb U\Sigma\right)^{-1} \beta\right) - \left(\left(\mathbb U\Sigma\right)^{-1} \beta\right)^\prime \mathbb{X^\prime} \mathbf y \\ &=\frac{1}{2}\mathbf \beta^\prime \mathbb{Z^\prime Z} \mathbf\beta - \mathbf \beta^\prime \mathbb{Z^\prime} \mathbf y } $$ where $$\mathbb Z = \mathbb X \Sigma^{-1} \mathbb U^{-1}.$$ The "generalized regularized" objective function can therefore be written $$\frac{1}{2}\mathbf b^\prime \left(\mathbb{X^\prime X} + \lambda \mathbb A\right) \mathbf b - \mathbf b^\prime \mathbb{X^\prime} \mathbf y = \frac{1}{2}\mathbf \beta^\prime \left(\mathbb{Z^\prime Z} + \lambda \mathbb I\right) \mathbf\beta - \mathbf \beta^\prime \mathbb{Z^\prime} \mathbf y, $$ which is back in the usual regularized form. Regardless of what $\mathbb U$ may be, its orthogonality guarantees the eigenvalues of $\mathbb {Z^\prime Z} = \mathbb{U^{-1\prime} \Sigma^{-1\prime} X^\prime X \Sigma ^{-1} U^{-1} }$ will be those of $ \mathbb{\Sigma^{-1\prime} X^\prime X \Sigma ^{-1} }$, whence--writing this common set of eigenvalues as $(t_i)$, the sum $$\sum_i \frac{t_i}{t_i+\lambda}$$ is well-defined and depends only on $\mathbb A$ and $\mathbb {X^\prime X}$.
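For the record, the invariance claim is easy to verify numerically: the effective degrees of freedom computed as $\operatorname{tr}\left[\mathbb Z(\mathbb{Z^\prime Z}+\lambda \mathbb I)^{-1}\mathbb Z^\prime\right]$ agree with $\sum_i t_i/(t_i+\lambda)$. A numpy sketch of my own construction, taking $\mathbb U = \mathbb I$ and $\Sigma$ from a Cholesky factor of $\mathbb A$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 40, 5, 2.0

X = rng.normal(size=(n, p))
M = rng.normal(size=(p, p))
A = M.T @ M + p * np.eye(p)          # a symmetric positive definite penalty matrix

Sigma = np.linalg.cholesky(A).T      # A = Sigma' Sigma (taking U = I)
Z = X @ np.linalg.inv(Sigma)         # Z = X Sigma^{-1}

# Effective degrees of freedom of the generalized ridge fit, via the hat matrix
edf = np.trace(Z @ np.linalg.inv(Z.T @ Z + lam * np.eye(p)) @ Z.T)

# Same quantity from the eigenvalues t_i of Sigma^{-T} X'X Sigma^{-1} = Z'Z
t = np.linalg.eigvalsh(Z.T @ Z)
edf_eig = np.sum(t / (t + lam))
```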
42,057
Are normally distributed residuals not necessarily homoskedastic?
You're absolutely right, normally distributed does not imply homoskedastic, and your example is great.
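To make that concrete in code (a small numpy sketch along the lines of such an example): draws that are exactly normal at every $x$, with a variance that depends on $x$.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(1, 5, 100_000)
e = rng.normal(0, x)            # e | x ~ N(0, x^2): normal at every x, heteroskedastic

# The residual spread clearly differs across the range of x
lo, hi = e[x < 2], e[x > 4]
sd_lo, sd_hi = lo.std(), hi.std()
```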
42,058
In LDA, how to interpret the meaning of topics?
What LDA does, and what it can answer

Consider this snippet from the paper introducing supervised LDA: Most topic models, such as latent Dirichlet allocation (LDA), are unsupervised: only the words in the documents are modeled. The goal is to infer topics that maximize the likelihood (or the posterior probability) of the collection. In other words, for a given corpus and trained LDA model of fixed $k$, that's all you get: The latent topics that maximize the posterior probability of the observed corpus. Now, that's not to say that a domain subject matter expert couldn't make some intuitive guesses in the right direction. Take a look at these topics from an LDA model trained for $k = 16$ on the handwritten digits data that ships with sklearn: Some are entirely recognizable as digits; some we're left to speculate about or further analyze, maybe "half a nine" or "one common way of writing a seven." (See the code below to produce this and a few other plots of varied number of topics.)

How many topics, via hierarchical topic models

Above, our choice of $k$ was taken from a quick look through an arbitrary space of possible parameters. This was straightforward since we rather expect that the number of meaningful topics won't be too far removed from ten, the number of digits. In your case, there's no mention of prior knowledge that justifies either a chosen $k$, or even a subspace to search. Hierarchical topic models can handle this in a principled fashion, by employing Dirichlet processes. (Loosely, DPs can be thought of as an infinite-dimensional generalization of the Dirichlet distribution.) Empirically, it's been shown to choose $k$ similar to the LDA model that minimizes perplexity. 
From the paper: Though hierarchical topic models can handle a single layered hierarchy, they were motivated by more elaborate models of dependency within and between groups, which may interest you: We assume that the data are subdivided into a set of groups, and that within each group we wish to find clusters that capture latent structure in the data assigned to that group. The number of clusters within each group is unknown and is to be inferred. Moreover, in a sense that we make precise, we wish to allow clusters to be shared among the groups. They go on further to detail an example of likely interest: An example of the kind of problem that motivates us can be found in genetics. Consider a set of $k$ binary markers (e.g., single nucleotide polymorphisms or “SNPs”) in a localized region of the human genome. While an individual human could exhibit any of $2^k$ different patterns of markers on a single chromosome, in real populations only a small subset of such patterns—haplotypes—are actually observed (Gabriel et al. 2002). [...] Now consider an extension of this problem in which the population is divided into a set of groups; e.g., African, Asian and European subpopulations. We may not only want to discover the sets of haplotypes within each subpopulation, but we may also wish to discover which haplotypes are shared between subpopulations. The identification of such haplotypes would have significant implications for the understanding of the migration patterns of ancestral populations of humans. So, you can use hierarchical models simply to choose the number of topics, or to model much more elaborate group relationships. (I've not the slightest bioinformatics expertise, so I can't even begin to suggest what would be useful or appropriate, but I hope the details in the paper can help guide you.)

What the topics mean, via sLDA

Finally, if your data includes response variables you'd like to predict, e.g. 
the diseases or genetic disorders you mention, then supervised LDA is probably what you're looking for. From the paper linked above, emphasis mine: In supervised latent Dirichlet allocation (sLDA), we add to LDA a response variable associated with each document. As mentioned, this variable might be the number of stars given to a movie, a count of the users in an on-line community who marked an article interesting, or the category of a document. We jointly model the documents and the responses, in order to find latent topics that will best predict the response variables for future unlabeled documents. A brief aside: Cited in the sLDA paper is this one, which may be of interest: P. Flaherty, G. Giaever, J. Kumm, M. Jordan, and A. Arkin. A latent variable model for chemogenomic profiling. Bioinformatics, 21(15):3286–3293, 2005.

Code

# -*- coding: utf-8 -*-
"""
Created on Mon Apr 18 18:24:41 2016

@author: SeanEaster
"""

from sklearn.decomposition import LatentDirichletAllocation as LDA
from sklearn.datasets import load_digits
import matplotlib.pyplot as plt, numpy as np

def untick(sub):
    sub.tick_params(which='both', bottom='off', top='off', labelbottom='off',
                    labelleft='off', left='off', right='off')

digits = load_digits()
images = digits['images']
images = [image.reshape((1,-1)) for image in images]
images = np.concatenate(tuple(images), axis = 0)

topicsRange = [i + 4 for i in range(22)]
ldaModels = [LDA(n_topics = numTopics) for numTopics in topicsRange]
for lda in ldaModels:
    lda.fit(images)
scores = [lda.score(images) for lda in ldaModels]

plt.plot(topicsRange, scores)
plt.show()

# np.argmax returns an index into scores, so map it back to a topic count
maxLogLikelihoodTopicsNumber = topicsRange[int(np.argmax(scores))]

plotNumbers = [4, 9, 16, 25]
if maxLogLikelihoodTopicsNumber not in plotNumbers:
    plotNumbers.append(maxLogLikelihoodTopicsNumber)

for numberOfTopics in plotNumbers:
    plt.figure()
    modelIdx = topicsRange.index(numberOfTopics)
    lda = ldaModels[modelIdx]
    sideLen = int(np.ceil(np.sqrt(numberOfTopics)))
    for topicIdx, topic in enumerate(lda.components_):
        ax = plt.subplot(sideLen, sideLen, topicIdx + 1)
        ax.imshow(topic.reshape((8,8)), cmap = plt.cm.gray_r)
        untick(ax)
    plt.show()
42,059
In LDA, how to interpret the meaning of topics?
LDA is an unsupervised learning method that maximizes the probability of word assignments to one of K fixed topics. The topic meaning is extracted by interpreting the top N probability words for a given topic, i.e. LDA will not output the meaning of topics, rather it will organize words by topic to be interpreted by the user. In some cases, we have access to the meaning of topics, for example in the 20 newsgroups dataset, we know the newsgroup titles (e.g. sci.med, sci.crypt, comp.graphics). So we expect the learned topics to be closely related to newsgroup titles. In general, however, the topic meaning is interpreted by the user. On the other hand, for a quantitative evaluation of topic models, perplexity is used as a measure of how well the topic model fits the data by computing the average log-likelihood of the test set.
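That interpretation step (pulling the top N probability words for each topic and reading them) can be sketched in a few lines. The toy `components` array below is my stand-in for a fitted model's topic-word matrix, e.g. sklearn's `lda.components_`; the vocabulary and weights are invented for illustration:

```python
import numpy as np

vocab = np.array(["gene", "dna", "protein", "film", "actor", "scene"])
# toy topic-word weight matrix (one row per topic)
components = np.array([
    [10.0, 8.0, 6.0, 0.1, 0.2, 0.1],   # a "biology-ish" topic
    [ 0.1, 0.2, 0.1, 9.0, 7.0, 5.0],   # a "movies-ish" topic
])

def top_words(components, vocab, n=3):
    # for each topic, the n highest-weight words; the user then interprets these
    return [list(vocab[np.argsort(row)[::-1][:n]]) for row in components]

topics = top_words(components, vocab)
```

Nothing in the model names the topics; the lists of top words are all you have to interpret.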
42,060
Survival analysis where P(event) < 1
One set of models that does what you describe is sometimes called a "cure model". The logic behind the name is based on a question like: how long does it take before a cancer patient dies? Some of them are cured and won't die at all (from this occurrence of the disease). For example see here.
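The defining feature is that in a mixture cure model the survival function is $S(t) = \pi + (1-\pi)S_0(t)$, so it plateaus at the cured fraction $\pi$ rather than dropping to zero. A tiny sketch (illustrative parameters and an exponential baseline of my choosing):

```python
import numpy as np

def mixture_cure_survival(t, pi, rate):
    # pi: cured fraction (never experiences the event)
    # S0(t) = exp(-rate * t): survival for the non-cured subpopulation
    return pi + (1 - pi) * np.exp(-rate * t)

t = np.linspace(0, 100, 1001)
S = mixture_cure_survival(t, pi=0.3, rate=0.5)   # plateaus near 0.3
```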
42,061
Propensity Score Analysis with continuous treatment
This has been asked on the Statalist too. The post here mentions the user-written command doseresponse (a multivalued treatment effect evaluation method to assess the effect of a drug that participants could take in different levels of intensity) or the subroutine gpscore. The relevant reference is Bia, M. and Mattei, A. (2008) "A STATA Package for the Estimation of the Dose-Response Function through Adjustment for the Generalized Propensity Score", Stata Journal Vol. 8 Nr. 3 where the authors introduce the algorithm and its computation. So even if it's not for R this will surely help you to program it. For the theoretical background of continuous treatment evaluation methods see Kluve (2007) or Imbens and Hirano (2004).
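If it helps to see the shape of the Hirano-Imbens generalized propensity score algorithm outside Stata, here is a deliberately bare-bones numpy sketch of my own. It is a simplification: a normal treatment model and a linear outcome model, whereas the actual packages use more flexible specifications and add balance diagnostics.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(size=n)                    # a single confounder
t = 0.5 * x + rng.normal(size=n)          # continuous treatment depends on x
y = 1.0 + 2.0 * t + x + rng.normal(size=n)

# Step 1: model the treatment given covariates, here T | X ~ N(a + b*x, s^2)
Xd = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(Xd, t, rcond=None)
resid = t - Xd @ coef
s2 = resid.var()

# Step 2: GPS = estimated conditional density of T at its observed value
gps = np.exp(-resid**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)

# Step 3: outcome model in (t, gps); a simple linear specification for illustration
W = np.column_stack([np.ones(n), t, gps])
gamma, *_ = np.linalg.lstsq(W, y, rcond=None)

# Step 4: dose-response at dose t0, averaging predictions over the sample's GPS at t0
def dose_response(t0):
    mu = Xd @ coef                         # E[T | X_i]
    r0 = np.exp(-(t0 - mu)**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)
    return np.mean(gamma[0] + gamma[1] * t0 + gamma[2] * r0)

dr0, dr1 = dose_response(0.0), dose_response(1.0)
```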
42,062
Impute missing values using aregImpute
You can't really use $R^2$ in the manner you suggest. If $R^2$ is 0 you can still use multiple imputation; it's just not better than a random guess in that case. Force linearity of all the variables if you have a very small sample size relative to the number of variables. aregImpute stores all the multiple imputations, then the fit.mult.impute function calls for them, one imputation at a time, to fill in the original dataset.
42,063
How does one interpret the distribution over parameters in bayesian estimation?
how do I know what is the relative probability of the process of generation being a gaussian MM with this particular parameter combination instead of say a neural network with that parameter configuration. Your $\theta$ is the set of parameters in your model. So for a Gaussian mixture model they are the means, covariances, and mixing parameters. In a Neural Network they are the weights and biases. These are totally different sets of quantities, so there's no reason to think that the $P(\theta)$ in either case will be related, either a priori or after seeing $D$. $P(D \mid \theta)$ is the part of the formula that will be realised as a mixture model or a network, or whatever. But you have to decide, otherwise your prior is for the wrong quantities, which makes no sense. And further it is intuitive to think of one process generating the data, whose parameters we are guessing. But instead here we have multiple processes generating the data in tandem, i.e. a sense of a true model is lost. You already think of the data as being potentially generated by different values of $\theta$ before any Bayesian questions arise. After all, the likelihood tells you how likely the data would have been generated under different sets of values. But your 'in tandem' idea suggests you think they all do it 'all at once' in the Bayesian case, so there is no sense of 'one true model'. That's a mistake. Maybe think of it like this: Call the 'true model parameters' $\theta_0$. Bayesians and everybody else can agree that these are the things we want to know about. Then $D$ is actually a sample from $P(D \mid \theta_0)$. We just don't happen to know what $\theta_0$ is. Our $P(D \mid \theta)$, where $\theta$ is any setting of parameters, just specifies the mechanism by which $D$ is assumed to be generated if we knew what the parameters were - a 'forward model' if you like. Often it's straightforwardly physical: think of the $\theta$ as settings in a control panel. 
Bayesian methods start with $P(\theta)$ - your opinions or knowledge about what $\theta_0$ might be before seeing $D$, and then condition on $D$ to get $P(\theta \mid D)$ - your new opinions or knowledge about what $\theta_0$ is after seeing $D$. The sum you present above is actually mostly useful just as a normalising constant on the way to getting $P(\theta \mid D)$ which actually is useful. It's our updated beliefs about $\theta_0$. It has some other roles, as 'evidence', but for the purposes of your question these aren't relevant.
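The updating described above can be sketched numerically for a discrete parameter. Here the "model" is a coin whose unknown bias $\theta_0$ is assumed to be one of a few candidate values; the candidate grid, prior, and data are invented purely for illustration:

```python
from math import comb

def posterior(prior, thetas, heads, tosses):
    """Bayes' rule for a binomial likelihood over a discrete grid of thetas."""
    # unnormalised posterior: P(theta) * P(D | theta)
    unnorm = [p * comb(tosses, heads) * t**heads * (1 - t)**(tosses - heads)
              for p, t in zip(prior, thetas)]
    z = sum(unnorm)  # P(D), here just a normalising constant
    return [u / z for u in unnorm]

thetas = [0.2, 0.5, 0.8]   # candidate values of theta_0
prior  = [1/3, 1/3, 1/3]   # flat prior belief before seeing D
post = posterior(prior, thetas, heads=8, tosses=10)
# after seeing 8 heads in 10 tosses, belief concentrates near theta = 0.8
```

The likelihood $P(D \mid \theta)$ is evaluated at every candidate $\theta$, but only one $\theta_0$ generated the data; the posterior is our updated belief about which one it was.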
42,064
How does one interpret the distribution over parameters in bayesian estimation?
This was too long for the comments, so posting it here. From what the others have pointed out about thinking about the prior as a belief, I think a road-block in understanding had been combining the prior and the conditional. The prior $P(\theta)$ is understood as a belief in what the true $\theta$ might be. The conditional $P(Data|\theta)$ is better thought of in frequentist terms, i.e. take a model with this $\theta$ and generate many samples from it, and just count the frequencies for each sample. Their combination $\sum_{\theta} {P(\theta)\times P(Data|\theta)}$ doesn't remain a concrete process with a well defined $\theta$. So the problem is to understand that. Suppose, initially I didn't have any concrete data, I just had a belief about what the background generating process could be, i.e. a $P(\theta)$. Also, for each process I could tell what the frequencies $P(Data|\theta)$ would be. Because I wasn't really sure about the process, the $P(Data)$ was a belief: With all my uncertainty about $\theta$, I'd on average expect data, if I ever collected any, to have a distribution like this $P(Data)$. But now I actually collect some samples, call this set $S$, and I calculate the frequencies of the samples. What I have now is $P(Data|S)$. But I could write: $P(Data|S)=\sum_{\theta} P(Data|\theta)P(\theta|S)$. Thinking in this way, my counting probability $P(Data|S)$ has been arrived at by first changing my belief about $\theta$ to $P(\theta|S)$, which becomes more spiked towards a particular $\theta$, and the data distribution now looks more like $P(Data|\theta)$ for that $\theta$. So, was the crux the difference between $P(Data)$ and $P(Data|S)$?
42,065
Multi-class logarithmic loss function per class
As you rightly pointed out, a pure classifier (one assigning probability 1 to the correct class) will have a log loss of 0, which is the preferred case. Consider a classifier that assigns labels in a completely random manner. The probability of assigning the correct class will be 1/M. Therefore, the log loss for each observation will be -log(1/M) = log(M). This is label independent. The log loss for an individual observation can be compared with this value to check how well the classifier is performing with respect to random classification. However, this may not make much sense. Let us take an example. Consider a powerful classifier which misclassified an observation. Let us assume that the observation actually belongs to class 'x' and the predicted probability of belonging to that class is (nearly) 0. Therefore, the individual and overall value of the log loss will be Inf. This is very common and mostly ignored - it reflects a single observation and does not comment on the overall accuracy of the classifier. However, we can make sense of this in 2 ways: Method 1: The observation could be an outlier. Remove it and run the classification again. Method 2: Smooth the probability density function for class membership of all observations (not just the current observation). Note: If you are concerned with the predicted probability of class membership and not just the predicted class, I strongly recommend you look at method 2. It is generally studied in text retrieval (language models); it may be relevant to your case. Addition: e^(-loss) is the geometric mean of the predicted probabilities of the correct classes (since the loss is the average negative log probability). This value can be compared to that of random classification, 1/M.
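A minimal sketch of the per-observation log loss and the random-classifier baseline described above (the class count, the probabilities, and the eps clipping value are all made up for illustration):

```python
from math import log

def log_loss_obs(p_true, eps=1e-15):
    """Log loss for one observation, given the predicted probability of its
    true class. Clipping at eps keeps a (near-)zero prediction finite
    instead of Inf, as discussed above."""
    return -log(max(p_true, eps))

M = 4                       # number of classes
baseline = log(M)           # per-observation loss of uniform random guessing
good = log_loss_obs(0.9)    # confident and correct: small loss
bad  = log_loss_obs(1e-20)  # confident and wrong: huge (but clipped) loss
```

Comparing an observation's loss to log(M) tells you whether the model beat random guessing on that observation.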
42,066
Multi-class logarithmic loss function per class
In my understanding of log loss, if you are evaluating an observation where $y_{ij} = 0$, then $y_{ij} \cdot \ln(p_{ij}) = 0$, so these terms contribute nothing to the log loss for each instance that doesn't contain this label. If $p_{ij}$ is high even when $y_{ij} = 0$ it will not affect the log loss for the current class, but presumably the prediction for the correct class will be correspondingly low, causing the log loss for that class to be higher and increasing $\sum_{j}^{M}F_i$. Therefore I would expect that you should avoid interpreting $F_i$ individually, because it will not be taking into account false positives.
42,067
Why (mathematically) is the parametric bootstrap usually better than the empirical one?
Basically, all nonparametric bootstrap procedures underestimate the variance of the sampling distribution. This issue doesn't have a consistent name in the literature, but I like "narrowness bias" from "What Teachers Should Know About the Bootstrap: Resampling in the Undergraduate Statistics Curriculum" by Tim Hesterberg. The reason the parametric bootstrap doesn't have this issue is because when you are building a parameterized model based on some data, you'll use the unbiased estimator for the variance (which is corrected by $ {n/(n-1)}$). Because the non-parametric bootstrap distribution is essentially the same thing as the plug-in estimate, its estimate of the variance of the sampling distribution (and thus the standard error) is too small. This is also why the issue goes away with very large sample sizes. If you are stuck with small sample sizes, however, there are two ways to get good non-parametric intervals. If you can ask your question in the form of a two-sample hypothesis test, inverting a permutation test will get you intervals with the proper coverage. If you can ask your question in the form of a one-sample hypothesis test, you will get decent intervals with a sign-change permutation test (or the wild bootstrap, which is essentially the same thing).
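The narrowness can be seen without any resampling at all: the plug-in (population-formula) variance of a sample of size n is exactly (n-1)/n times the unbiased estimate, and the nonparametric bootstrap distribution is built on the plug-in quantity. A stdlib-only sketch (sample size and seed are arbitrary):

```python
import random
import statistics

random.seed(0)
n = 10
sample = [random.gauss(0, 1) for _ in range(n)]

unbiased = statistics.variance(sample)   # divides by n - 1
plugin = statistics.pvariance(sample)    # divides by n: the quantity the
                                         # nonparametric bootstrap is built on
ratio = plugin / unbiased                # the "narrowness" factor (n - 1) / n
```

For n = 10 the nonparametric bootstrap standard error is already shrunk by about 5%, which matters little at n = 1000 but a lot at n = 10.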
42,068
Model with non-linear transformation
You haven't used the full form of the cubic representation and are missing two terms (i.e. have unintentionally constrained their parameters to equal zero): $$Leads = \beta_{0} + \beta_{ImpA}ImpA + \beta_{ImpA^{2}}ImpA^{2} + \beta_{ImpA^{3}}ImpA^{3} + \varepsilon$$ Per the currently highest voted answer in Fitting polynomial model to data in R, you can do either lm(Leads ~ ImpA + I(ImpA^2) + I(ImpA^3)) (as you indicate in your comment, but you had a missing parenthesis), or: lm(Leads ~ poly(ImpA, 3, raw=TRUE))
42,069
Two or more time series. What is the best way to test whether one of them is leading and by what time period?
You could try cross-correlation analysis, with R for example. The cross-correlation at lag h measures the temporal dependency of two time series (x{t+h}, y{t}) at lag h. If h < 0 and the cross-correlation is statistically significant, then you could say that the x series leads the y series by h time units. For example, approval of a building permit and completion of the housing project might have a lag of several months. In R the cross-correlation can be calculated as follows: ccf(x series, y series, lags to show) Look for spikes in the graph produced by this function call. EDIT: Raw time series must often be pre-whitened before the ccf analysis is done. Pre-whitening can be done this way: 1) Fit an ARIMA model to the series x{t} and save the residuals. 2) Use that same model to filter the series y{t} so that you get residuals. 3) Do the ccf analysis on the two residual series. Why pre-whiten? Autocorrelation structure and the need for differencing may mean that the ccf analysis should be done on the residual series and not on the raw, unfiltered series.
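A bare-bones lagged cross-correlation in the spirit of R's ccf() can be sketched in Python (stdlib only; the toy series below are constructed so that x leads y by exactly 2 time units, matching the h < 0 convention above):

```python
import random
from math import sqrt

def pearson(a, b):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / sqrt(va * vb)

def cross_corr(x, y, lag):
    """Correlation between x[t + lag] and y[t] over the overlapping range."""
    if lag >= 0:
        xs, ys = x[lag:], y[:len(y) - lag]
    else:
        xs, ys = x[:len(x) + lag], y[-lag:]
    return pearson(xs, ys)

random.seed(1)
x = [random.gauss(0, 1) for _ in range(200)]   # leading series
y = [0.0, 0.0] + x[:-2]                        # y is x delayed by 2 steps

ccf = {lag: cross_corr(x, y, lag) for lag in range(-4, 5)}
lead = max(ccf, key=ccf.get)   # lag of the largest spike: -2, i.e. x leads by 2
```

The spike at lag -2 says that x at time t - 2 lines up with y at time t, which is exactly the "x leads y" reading in the answer.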
42,070
Combinef in R HTS package- constrain to keep forecasts positive?
The positive=TRUE argument for forecast.gts and forecast.hts ensures the starting forecasts to be positive, but not the final reconciled forecasts. Even when the starting forecasts are positive, it is possible for the reconciled forecasts to be negative. When you use combinef, you provide your own starting forecasts, so it is up to you to make them positive. It would be possible to use a non-linear least squares reconciliation procedure to produce positively constrained reconciled forecasts, but that would be much slower.
42,071
Combinef in R HTS package- constrain to keep forecasts positive?
You can't ensure both positivity and sum consistency of hierarchical forecasts if you use hts::combinef(). What I personally found useful was to set up the summation matrix (see the original publication by Hyndman et al., 2011) and then solve the relevant (weighted) least squares problem with additional nonnegativity constraints. This will give you nonnegative and sum consistent forecasts. I have found repeatedly that this approach still results in better forecasts on all levels in the hierarchy. This approach also allows including equality constraints, or constraints that are more general than just "$\geq 0$". I have had applications where some forecasts needed to be larger than a certain number (because of existing orders), or needed to be constrained to be equal to a given value, which you can model using two inequality constraints. One possible tool that solves (weighted) least squares with linear constraints is the pcls() function in the mgcv package for R. (Note that this is slightly more general than your use case: mgcv::pcls() allows for linear constraints, but you and the use cases I outline in the previous paragraph only need box constraints.) However, this is of course also not optimized to leverage the specific structure of potential forecast hierarchy matrices, so your performance may be significantly worse than if you use combinef().
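For intuition, here is a toy version of that constrained least-squares reconciliation for the smallest possible hierarchy (one total with two children; the base forecasts are invented). It solves the unconstrained OLS reconciliation in closed form and, if a child goes negative, clamps it at zero and re-solves the remaining one-dimensional problem. This is a sketch of a single active-set-style step for this 2-parameter case, not the general algorithm:

```python
def reconcile_two_children(y_total, y_a, y_b):
    """OLS reconciliation for the hierarchy total = a + b, with b_a, b_b >= 0.
    Minimises ||yhat - S b||^2 for summation matrix S = [[1,1],[1,0],[0,1]]."""
    # unconstrained normal equations: [[2,1],[1,2]] b = [yT + ya, yT + yb]
    b_a = (y_total + 2 * y_a - y_b) / 3
    b_b = (y_total + 2 * y_b - y_a) / 3
    if b_a < 0:                 # clamp and re-solve 1-D problem for b_b
        b_a = 0.0
        b_b = max(0.0, (y_total + y_b) / 2)
    elif b_b < 0:               # clamp and re-solve 1-D problem for b_a
        b_b = 0.0
        b_a = max(0.0, (y_total + y_a) / 2)
    return b_a, b_b

# base forecasts that disagree: the total says 10, the children say 7 and 1
a, b = reconcile_two_children(10, 7, 1)      # both positive, sum-consistent
a2, b2 = reconcile_two_children(10, 12, -3)  # naive OLS solution goes negative
```

The reconciled bottom-level values a and b are nonnegative by construction, and the reconciled total is simply a + b, so sum consistency holds automatically.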
42,072
Combinef in R HTS package- constrain to keep forecasts positive?
Some code based on Rob's workaround of setting the negatives to zero and reconciling:

# Re-reconcile when zero values present
# Extract groups
groups <- hts.obj %>% aggts() %>% get_groups()
x <- 0  # Counter
# Loop until all positive
while (sum(hts.obj[[1]] < 0) > 0) {
  # Generate all time series
  hts.obj <- aggts(hts.obj)
  # Overwrite negatives by zero
  hts.obj[hts.obj < 0] <- 0
  # Reconcile
  hts.obj <- hts.obj %>% ts() %>% combinef(groups = groups, keep = "gts")
  # Count up
  x <- x + 1
  # Break after 10 loops
  if (x >= 10) break
}
rm("x")
# Overwrite remaining negatives by zero
hts.obj[[1]][hts.obj[[1]] < 0] <- 0

In this example forecasts are indexed by [[1]] - this might change. Also note that overwriting negatives with zero leads to biased forecasts.
42,073
What to Do When a Log-binomial Model's Convergence Fails
The Poisson approximation to the relative risk is a very good approach, with two small limitations: it can easily overpredict the risk, and the mean-variance assumption may be unreasonable at moderately high risks. Together these do not invalidate the estimates (when using robust standard errors), but the estimates and their inference may be biased and/or conservative. The log-binomial GLM is very poorly behaved, for it fails to converge when encountering overprediction. If you inspect the workhorse for GLM, it begins with the 0 vector as starting coefficients. For logistic regression this is a 50% risk assigned to each observation, but for the log-binomial it is a 100% risk, which immediately destroys the iterations almost every single time. I think future versions of R could stand to use more intelligent starting vectors. Using start=c(log(mean(y)), rep(0, np-1)) will usually fix the problem ($n_p$ the number of parameters in the model, including the intercept). I made a little wrapper in the R package epitools called probratio to do this. Another thing it does is marginal standardization. A nice paper on this is by Muller and MacLehose (2005). While the odds ratios are biased estimators of the relative risk, the risk predictions from logistic regression are not biased. Using this, you can predict risk for all observations in the model when the covariate takes its current value, then predict risk for all observations when the covariate is one unit higher. Average the risks and take their ratio; this is an estimate of the relative risk that has (arguably) the correct interpretation, whether or not it is mathematically equivalent to the actual relative risk (they are almost always very, very close). The sandwich does not work here, but bootstrapping works brilliantly. I also implemented this in the probratio function but need to tweak it to implement bias-corrected and accelerated (BCa) bootstrapping.
The third solution is to trick the Cox proportional hazards model into doing this for you. If everyone in the sample is assigned a time of 1 unit and the event indicator is taken to indicate failure or censoring, then the Cox model with the Efron method for ties estimates the relative risk. There is a bepress working paper from Thomas Lumley that brilliantly describes how to do this. A fourth solution is to directly maximize the binomial likelihood for the truncated risk function. An example of R code to do this would be something like:

negLogLik <- function(b) {
  risk <- pmin(1, exp(X %*% b))
  -sum(dbinom(y, 1, risk, log = TRUE))
}
fit <- nlm(negLogLik, p = c(log(mean(y)), 0, 0, 0), hessian = TRUE)
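A minimal sketch of the marginal-standardization idea described above, using simulated data (the variable names and the simulated effect size are illustrative, not from the original answer):

```r
set.seed(1)
n <- 2000
x <- rnorm(n)
y <- rbinom(n, 1, plogis(-1 + 0.5 * x))   # simulated binary outcome
fit <- glm(y ~ x, family = binomial)      # ORs are biased for the RR, but the
                                          # predicted risks themselves are fine
r0 <- mean(predict(fit, newdata = data.frame(x = x),     type = "response"))
r1 <- mean(predict(fit, newdata = data.frame(x = x + 1), type = "response"))
rr <- r1 / r0                             # marginally standardized risk ratio
rr
```

To get confidence intervals, bootstrap the whole procedure (refit and re-predict within each resample), since the sandwich estimator does not apply here.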
42,074
Is there a name for this process/ distribution?
I gather all these r.v.'s are independent. This is a Markov process, because the past is fully encapsulated in $S_{t-1}$, and so current probabilities conditional on its whole past are equivalent to current probabilities conditional only on the previous period. It is also a martingale process because a) it is absolutely integrable (its expected value exists) and b), $$E[S_t \mid \mathcal F_{t-1}]= E(1+\omega_t\delta_t)\cdot S_{t-1} = [1+E(\omega_t)E(\delta_t)]\cdot S_{t-1} = S_{t-1}$$ Finally, it is a mean-stationary process that is not second-order stationary. Writing recursively we obtain (using $t$ for indexing, and taking $E[S_0]=0$ and $E[S_0^2]=1$) $$S_t = \left(\prod_{i=1}^t(1+\omega_i\delta_i)\right)S_0$$ $$E[S_t] = E\left(\prod_{i=1}^t(1+\omega_i\delta_i)\right)\cdot E[S_0]=0$$ while $$E[S_t^2] = \left(\prod_{i=1}^tE(1+\omega_i\delta_i)^2\right)\cdot E[S_0^2]$$ $$E(1+\omega_i\delta_i)^2 = E(1+2\omega_i\delta_i+\omega_i^2\delta_i^2) = 1+0+E(\omega_i^2)E(\delta_i^2) = 1+\frac 12 = \frac 32$$ So $$\text{Var}(S_t) = \left (\frac 32\right)^t$$ If we treat the more general case, writing the term as $(a+\omega_i\delta_i)$, then the autocovariance function $\gamma_k$ is $$\gamma_k = \text{Cov}(S_t,S_{t-k}) = E[S_tS_{t-k}] = E\left[\Big(S_{t-k} \cdot \prod_{i=t-k+1}^t(a+\omega_i\delta_i)\Big)\cdot S_{t-k}\right]$$ $$\Rightarrow \gamma_k = a^k\cdot E(S_{t-k}^2) = a^k\left (a^2+\frac 12\right)^{t-k}$$ We see that if $|a|=1/\sqrt2$ the process will be covariance-stationary with variance equal to unity, while if $|a|<1/\sqrt2$ the process will tend to a constant value, as its unconditional variance will tend to zero. This would be worth simulating.
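As the last line suggests, this is easy to simulate. A sketch under illustrative distributional assumptions of mine (not stated in the original): $\omega_t = \pm 1$ with equal probability, $\delta_t = 1/\sqrt 2$ constant, so that $E(\omega_t)E(\delta_t)=0$ and $E(\omega_t^2)E(\delta_t^2)=1/2$, and $S_0 \sim N(0,1)$:

```r
set.seed(42)
reps <- 200000   # Monte Carlo replications
tmax <- 5
S <- rnorm(reps)                                   # S_0 with E[S_0]=0, Var(S_0)=1
for (t in 1:tmax) {
  omega <- sample(c(-1, 1), reps, replace = TRUE)  # E(omega)=0, E(omega^2)=1
  delta <- 1 / sqrt(2)                             # E(delta^2)=1/2
  S <- (1 + omega * delta) * S
}
c(mean(S), var(S), 1.5^tmax)   # mean near 0, sample variance near (3/2)^t
```

The sample mean stays near zero (the martingale property) while the sample variance tracks $(3/2)^t$, confirming the lack of second-order stationarity.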
42,075
Random Forest - how to know if variables affect positively or negatively
As the answers at the linked question by @Simone show, it is entirely possible to quantify the partial effect of a predictor/independent variable on the target/dependent variable. This is a very useful way to derive understanding from a random forest (or any machine learning model in general). However, this is not exactly what this question asks about, i.e. quantifying 'whether a variable affects positively or negatively the target value'. The reason is simple: random forests behave differently from linear models, which capture nonlinearities only when they are specified in the model formulation. In a simple linear model without quadratic terms - assuming no problems with the model fit - a positive slope estimate indicates that increases in a predictor variable lead to increases in the target variable. In contrast, random forests capture highly nonlinear partial effects without any prior specification. Therefore, to use the example in the question, it's entirely possible for a random forest to capture the reality that income is low when young, high at middle age, and low again when old (i.e. post-retirement). In fact, random forests' structural flexibility means that they can capture far more complex nonlinear shapes than the simple one in this example. As a consequence of this flexibility, it would be unusual (though not impossible) for single predictors to have an effect that can be accurately described as just positive or negative. Plotting out the partial effects will give you a more complete and accurate impression.
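The income-age example can be made concrete with a small simulation (the numbers are illustrative, and a plain linear fit stands in for the "single positive/negative summary"): a relationship that is low when young, peaks at middle age, and falls when old gets an essentially zero slope from one linear coefficient, even though the partial effect is strong.

```r
set.seed(1)
age <- runif(1000, 20, 80)
income <- 50 - 0.05 * (age - 50)^2 + rnorm(1000, sd = 5)  # rises, then falls with age
slope <- coef(lm(income ~ age))["age"]
slope   # near zero: a single sign/slope misses the nonmonotonic shape entirely
```

A flexible learner such as a random forest would recover the hump-shaped partial effect that this single coefficient hides, which is why plotting partial effects beats asking for a sign.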
42,076
Introduction to recurrent neural networks?
The major application of Recurrent Neural Networks (RNNs) is sequence modelling: applications like Machine Translation, Named Entity Recognition, Part-of-Speech Tagging, etc. (since I work in NLP, all the examples are from NLP). Some of the resources which I found helpful for a beginner are: Alex Graves's thesis, Supervised Sequence Labelling with Recurrent Neural Networks; Bengio's Deep Learning draft chapter; and the RNN tutorial with Python code at http://www.nehalemlabs.net/prototype/blog/2013/10/10/implementing-a-recurrent-neural-network-in-python/ In particular, the part on Backpropagation Through Time is well explained in the thesis.
42,077
How do Restricted Boltzmann Machines work?
While it seems that the OP is not interested in this question anymore (and, based on the profile information, in Cross Validated, for that matter), I've decided to add some additional information, which is IMHO relevant and hopefully will be useful for the community. First and foremost, I would like to share an excellent tutorial on deep learning, which contains a whole section dedicated to restricted Boltzmann machines (RBM). This tutorial by the LISA Lab team (guided by Yoshua Bengio) is available online as well as in several document formats. The RBM section can be found here and in Chapter 9 of the corresponding PDF version of the tutorial. Code samples are presented in Python, with some focus on the GPU-enabled Theano deep learning library. Speaking of software, the rbm R package is one of a few deep learning packages in the R ecosystem: https://github.com/zachmayer/rbm. Extensive collections of references to libraries and other types of software for machine learning, including deep learning and RBM, can be found here, here and here. Finally, returning from software to the subject of literature and resources, this page contains an extensive list of resources on deep learning in general, and RBM in particular.
42,078
How do Restricted Boltzmann Machines work?
First, look in depth into this paper: A. Fischer and C. Igel, "An Introduction to Restricted Boltzmann Machines," in Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, ed: Springer, 2012, pp. 14-36. Then you may look into Hinton's Coursera course website; you will understand it properly. In an RBM, neuron states are stochastically decided based on p(v|h) and p(h|v). These two equations are derived from the energy function. For training, maximum likelihood (ML) estimation is applied. To avoid the complexity of maximizing the likelihood directly, gradient descent is applied, which provides the learning rules. Even then, the gradient involves expectations over a huge number of configurations, so Gibbs sampling enters the scene. Gibbs sampling also has convergence problems, so we use approximate methods known as contrastive divergence (CD), persistent contrastive divergence (PCD), etc.
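To make the CD step concrete, here is a tiny hypothetical sketch of CD-1 for a binary RBM in plain R (toy sizes and names are mine, not from the paper or the course): sample h from p(h|v), take one Gibbs half-step back to v, and update the weights with the difference between the positive- and negative-phase statistics.

```r
set.seed(0)
nv <- 6; nh <- 3; lr <- 0.1
sigm <- function(x) 1 / (1 + exp(-x))
W  <- matrix(rnorm(nv * nh, sd = 0.1), nv, nh)  # visible-hidden weights
b  <- rep(0, nv); cc <- rep(0, nh)              # visible / hidden biases
v0 <- matrix(c(1, 1, 1, 0, 0, 0), nrow = 1)     # one toy training vector
for (i in 1:200) {
  ph0 <- sigm(v0 %*% W + cc)                # p(h|v0), from the energy function
  h0  <- (ph0 > runif(nh)) * 1              # sample hidden states
  pv1 <- sigm(h0 %*% t(W) + b)              # p(v|h0): one Gibbs half-step back
  v1  <- (pv1 > runif(nv)) * 1
  ph1 <- sigm(v1 %*% W + cc)
  # CD-1 update: positive phase minus one-step negative phase
  W  <- W + lr * (t(v0) %*% ph0 - t(v1) %*% ph1)
  b  <- b + lr * as.vector(v0 - v1)
  cc <- cc + lr * as.vector(ph0 - ph1)
}
# reconstruction probabilities for the visible units after training
recon <- sigm(((sigm(v0 %*% W + cc) > 0.5) * 1) %*% t(W) + b)
round(recon, 2)
```

After training on the single pattern, the reconstruction probabilities favour the units that were on in the training vector, which is all CD-1 is asked to do here.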
42,079
Treatment Effect Bounds
Let $R_i$ be a dummy which equals one for respondents and zero for non-respondents, $Y_i$ the outcome and $D_i$ the treatment variable from your randomized experiment. You cannot observe the counterfactual quantities that you want to compare, i.e. $E[Y_{i1}|R_i = 0, D_i = 1]$ and $E[Y_{i0}|R_i = 0, D_i = 0]$, due to non-response, but you know their probability weights from the data and you can use the fact that $Y_i$ and $R_i$ are bounded between zero and one. Assuming the worst case scenario in which $E[Y_{i1}|R_i = 0, D_i = 1] = 0$ and $E[Y_{i0}|R_i = 0, D_i = 0] = 1$, the lower Manski bound is given by: $$ \begin{align} B^{L} &= P(R_i = 1|D_i = 1)E(Y_i|D_i = 1, R_i = 1) \newline &- [P(R_i = 1|D_i = 0)E(Y_i|D_i = 0, R_i = 1) + P(R_i = 0|D_i = 0)] \end{align} $$ which is the outcome of the treated given that they responded, minus the sum of the outcome of the non-treated given that they responded and the probability that non-treated individuals did not respond. In the same spirit, assuming the best case scenario in which $E[Y_{i1}|R_i = 0, D_i = 1] = 1$ and $E[Y_{i0}|R_i = 0, D_i = 0] = 0$, the upper Manski bound is given by: $$ \begin{align} B^{U} &= P(R_i = 1|D_i = 1)E(Y_i|D_i = 1, R_i = 1) + P(R_i = 0 | D_i = 1) \newline &- P(R_i = 1|D_i = 0)E(Y_i|D_i = 0, R_i = 1) \end{align} $$ Naturally the average treatment effect will lie in between those extreme cases. The width of the Manski bounds is simply the difference between the upper and lower bound: $$\text{width} = P(R_i = 0|D_i = 1) + P(R_i = 0|D_i = 0)$$ This width is determined by the probabilities of non-response in each treatment arm, i.e. more non-response increases the width. The bounds would be informative if the upper and lower bound could jointly lie on one side of zero. It turns out that they never can, so you cannot sign your treatment effect. At best you can get an idea about the range of your effect, but in general Manski bounds are uninformative.
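A quick numeric illustration of the formulas above, with made-up response rates and respondent means (all numbers hypothetical):

```r
p1 <- 0.8   # P(R=1|D=1), response rate among the treated
p0 <- 0.7   # P(R=1|D=0), response rate among the controls
m1 <- 0.6   # E(Y|D=1, R=1), respondent mean among the treated
m0 <- 0.5   # E(Y|D=0, R=1), respondent mean among the controls
BL <- p1 * m1 - (p0 * m0 + (1 - p0))   # lower Manski bound
BU <- p1 * m1 + (1 - p1) - p0 * m0     # upper Manski bound
c(BL, BU, BU - BL)   # width equals P(R=0|D=1) + P(R=0|D=0); zero lies inside
```

Here the width is 0.2 + 0.3 = 0.5 and the interval [-0.17, 0.33] straddles zero, so the effect cannot be signed.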
You cannot make these bounds any smaller without imposing additional assumptions. Lee's (2009) bounds, on the other hand, are narrower, but they only bound a specific treatment effect. I will not derive them here because they are obtained in much the same way as the Manski bounds. When you subtract the lower bound from the upper bound you get: $$\text{width} = \frac{P(R_i = 1|D_i = 1) - P(R_i = 1|D_i = 0)}{P(R_i = 1|D_i = 0)}$$ If the difference in response rates across treated and non-treated individuals is small, the bounds will be informative. The crucial assumption is that this difference in response rates arises not because of a difference between these two groups but because treatment has an effect on response. In your particular case this assumption does not seem to hold because, as far as I can see, you lost a batch of responses or some administrative issue held you back. An example of when this assumption holds is if people are angry that they didn't get the treatment and therefore do not respond, or if people who got the treatment no longer care about responding because they have gotten what they wanted. Even if you can credibly make this assumption, Lee's bounds cannot distinguish between people who always respond and those who respond because they received the treatment. Lee calls those always-respondents and response-compliers. So what you are bounding is the average treatment effect for these two subpopulations. Whether this is what you want really depends on your application.
42,080
Theil-Sen estimation, more than one independent variable
There have been a number of proposals for extending Theil-Sen estimation to multiple regression contexts. I'll point to a couple: 1) Zhou, W. and R. Serfling (2007), Multivariate Spatial U-Quantiles: a Bahadur-Kiefer Representation, a Theil-Sen Estimator for Multiple Regression, and a Robust Dispersion Estimator, Journal of Statistical Planning and Inference, May (see here). 2) Wang, X., X. Dang, H. Peng, and H. Zhang (2009), The Theil-Sen Estimators in a Multiple Linear Regression Model (see here or here for different versions). The first is based on extending univariate U-quantiles to multivariate U-quantiles, and the second is based on a multivariate median.
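For reference, the univariate Theil-Sen estimator that these papers generalize can be written in a few lines of base R (a sketch; the helper name and the intercept convention are mine):

```r
theil_sen <- function(x, y) {
  ij <- combn(length(x), 2)                        # all index pairs i < j
  slopes <- (y[ij[2, ]] - y[ij[1, ]]) / (x[ij[2, ]] - x[ij[1, ]])
  b <- median(slopes, na.rm = TRUE)                # median of pairwise slopes
  c(intercept = median(y - b * x), slope = b)      # median-residual intercept
}

set.seed(1)
x <- 1:50
y <- 2 * x + 3 + rnorm(50)
theil_sen(x, y)   # slope close to the true value 2
```

The multiple-regression extensions above replace the median of pairwise slopes with multivariate U-quantiles or a multivariate (spatial) median, since "the median slope over all pairs" has no unique analogue with several predictors.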
42,081
PCA: 91% of explained variance on one principal component
I am (very) new to this, but I'll do my best to help. The answers to your questions are: Am I justified in removing the other 8 principal components? I do not think you are "justified". But if you want to make a first coarse assessment of the data you can concentrate on the first PC; just bear in mind that you neglect 9% of the total variability. This leads you to ask many other questions: were the variables expected to be so strongly correlated? Could you simulate or explain this 9% extra variability simply by invoking measurement errors? How do I interpret 91% of explained variance on one component? It indicates a very high degree of correlation between the many variables you included, or at least between two of the variables while the others show a much smaller dispersion. When you look at the PC loadings in terms of the original measurements, how many significant components do you have? If I only kept one component what would be the best way to visualize the data? If you only kept one component your final description of the data would be 1D, so an axis would do the job. I repeat myself, and please do not take my words as patronizing, but I would try to understand whether the PC you calculated makes sense given the data.
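The "one dominant component from strongly correlated variables" situation is easy to reproduce with an illustrative simulation (three variables driven by one common factor plus small noise):

```r
set.seed(1)
z <- rnorm(200)                          # one common underlying factor
X <- cbind( z     + rnorm(200, sd = 0.2),
            2 * z + rnorm(200, sd = 0.2),
           -z     + rnorm(200, sd = 0.2))  # three strongly correlated variables
p <- prcomp(X, scale. = TRUE)
summary(p)$importance["Proportion of Variance", ]  # PC1 carries almost everything
```

Shrink the noise and PC1's share approaches 100%; grow it and the remaining components pick up the slack, which is exactly the "measurement error" question raised above.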
42,082
PCA: 91% of explained variance on one principal component
Going by your plot of the first 2 components, I would definitely say keep the second one, and maybe the third one too. Drawing cumulative information gain (explained variance) graphs also helps a lot in deciding about PCA. If you are using R, there are simple methods to do that. You could look up the R labs in standard data mining books like the ones by Tibshirani.
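The cumulative explained-variance readout suggested above can be produced in a few lines; here is a sketch in Python with scikit-learn (tool choice mine, the answer suggests R) on toy data with one dominant common factor:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# toy data: 10 variables driven by one common latent factor plus noise
latent = rng.normal(size=(100, 1))
X = latent + 0.3 * rng.normal(size=(100, 10))

pca = PCA().fit(X)
cumvar = np.cumsum(pca.explained_variance_ratio_)
for k, c in enumerate(cumvar, start=1):
    print(f"{k} components: {c:.1%} of total variance")
```

A sharp elbow after the first one or two components in this printout is the visual cue the answer refers to.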
42,083
Is there an advantage to squaring dissimilarities when using Ward clustering?
From the Conclusion of Murtagh, F. & Legendre, P. (2011). Ward's Hierarchical Clustering Method: Clustering Criterion and Agglomerative Algorithm, arXiv:1111.6285v2 (pdf): Two algorithms, Ward1 and Ward2...When applied to the same distance matrix D, they produce different results. This article has shown that when they are applied to the same dissimilarity matrix D, only Ward2 minimizes the Ward clustering criterion and produces the Ward method. The Ward1 and Ward2 algorithms can be made to optimize the same criterion and produce the same clustering topology by using Ward1 with D-squared and Ward2 with D. For example, hclust(dist(x)^2,method="ward") is equivalent to hclust(dist(x),method="ward.D2").
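For what it's worth, the Ward2 convention is also what SciPy uses: `linkage(..., method='ward')` accepts either raw observations or the plain (unsquared) condensed Euclidean distances, and both inputs give the same tree. A small cross-check sketch (Python/SciPy, my choice of tool; that your own distance matrix is unsquared Euclidean is an assumption you must verify):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 3))

# Ward on raw observations vs. on the unsquared condensed distances:
Z_obs = linkage(X, method='ward')
Z_dist = linkage(pdist(X), method='ward')
print(np.allclose(Z_obs, Z_dist))  # identical merge order and heights
```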
42,084
Is there an advantage to squaring dissimilarities when using Ward clustering?
Judging from the explanation, ward in R was first implemented incorrectly. Only in recent versions, a corrected version of ward linkage was added, as ward.D2. So if you want to use ward linkage, use ward.D2.
42,085
Bayesian inferencing: how iterative parameter updates work?
The problem of finding the hyperparameters is called evidence approximation. It is nicely explained in Bishop's book (page 166), or else in this paper, in great detail. The idea is that your problem has the canonical form (the predictive distribution for a new sample), $$ p(t|\mathbf{t}) = \int p(t|\mathbf{w},\beta) p(\mathbf{w}|\mathbf{t},\alpha,\beta)p(\alpha,\beta|\mathbf{t}) d\mathbf{w} d\alpha d\beta $$ where $\mathbf{t}$ is your training data, $\alpha,\beta$ are hyperparameters, and $\mathbf{w}$ are your weights. First, computing this integral is expensive or maybe even intractable, and it has an additional difficulty: $p(\alpha,\beta|\mathbf{t})$. This term tells us that we need to integrate over the ensemble of interpolators. In practice this means that you would train your ensemble, that is, each of the $p(\mathbf{t}|\alpha,\beta)$, compute each posterior term using Bayes' theorem, $$ p(\alpha,\beta|\mathbf{t}) \propto p(\mathbf{t}|\alpha,\beta) p(\alpha,\beta) $$ and finally sum over all of them. The evidence framework assumes (the referred paper gives validity conditions for this assumption) that $p(\alpha,\beta|\mathbf{t})$ has a dominant peak at some values $\hat{\alpha},\hat{\beta}$. Under this assumption you substitute your integral by a point estimate at the peak, namely, $$ p(t|\mathbf{t}) \approx \int p(t|\mathbf{w},\hat\beta) p(\mathbf{w}|\mathbf{t},\hat{\alpha},\hat{\beta}) d\mathbf{w} $$ If the prior is relatively flat, then the problem of finding $\hat{\alpha}$ and $\hat{\beta}$ finally reduces to maximizing the likelihood $p(\mathbf{t}|\alpha,\beta)$. In your case the integral term has a closed-form solution (it is also Gaussian). P.S. In statistics this method is known as empirical Bayes. If you google for it, you shall find a few references. I find this one to be really nice, since it works through easier problems in detail, and carefully introduces all the necessary terms.
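For a concrete feel, scikit-learn's `BayesianRidge` implements exactly this evidence (type-II maximum likelihood / empirical Bayes) iteration for linear regression, alternating between the posterior over $\mathbf{w}$ and point updates of the two precisions (library choice mine, not from the answer; toy data my own construction):

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.RandomState(0)
X = rng.randn(80, 3)
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.randn(80)

# Caution: sklearn's alpha_ is the *noise* precision and lambda_ the
# *weight* precision -- the reverse of Bishop's (alpha, beta) naming.
model = BayesianRidge(fit_intercept=False).fit(X, y)
print(model.coef_)    # close to w_true
print(model.alpha_)   # estimated noise precision, roughly 1 / 0.1**2
```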
42,086
Bayesian inferencing: how iterative parameter updates work?
Ok, I finally figured out the intuitive reason for this. Thanks to @juampa for the tip. The thing in Bishop's book that brought home the point for me was figure 3.13. We need to think about what happens to the model evidence in relation to model complexity. In my example, when the regularisation term $\lambda$ is set low, the predictive posterior distribution is going to be really spread out, so it will assign low probability to any particular observation (the prior will have high variance, so the probabilities will be spread out). Similarly, when $\lambda$ is high, we will have low prior variance and the model won't fit the data well. Hence, the best fit will usually be at some intermediate value, which is what $\lambda$ will tend to (unless there is some good reason for $\lambda$ to take extreme values).
42,087
What does "PCA (Principal Component Analysis) spheres the data" mean?
Your understanding is right. Have a look at this figure which represents various possibilities for your data points: http://shapeofdata.files.wordpress.com/2013/02/pca22.png They look ellipsoidal. If you do what you've described above, i.e. compress the points in the direction in which they are spread the most (approximately the 45 degree line in the image), the points will lie in a circle (a sphere in higher dimensions). One reason you spherify the data is for prediction and understanding which coordinates are important. Say you wish to predict $y$ using $x_1$ and $x_2$, and you get coefficient values $\beta_1$ and $\beta_2$, i.e. $y\sim \beta_1 x_1+\beta_2x_2$. Now if $x_1$ and $x_2$ have the same variance, i.e. they are roughly distributed spherically, and you find that $\beta_1=1$ while $\beta_2=10$, you can interpret this as saying that $x_2$ influences $y$ more than $x_1$. If their scales were not the same, however, and $x_1$ was spread 10 times more than $x_2$, then you would get the above values of $\beta_1$ and $\beta_2$ even if they both influenced $y$ roughly the same. To summarize, you "spherify" or "normalize" to make inferences about a variable's importance from its coefficient.
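To make the sphering concrete, here is a minimal numerical sketch (my own construction): rotate the centered data onto its principal axes and divide each axis by the square root of its eigenvalue; the sphered data then has identity sample covariance.

```python
import numpy as np

rng = np.random.default_rng(0)
# correlated 2-D data: an ellipsoidal cloud
X = rng.multivariate_normal([0.0, 0.0], [[3.0, 1.5], [1.5, 1.0]], size=500)
Xc = X - X.mean(axis=0)

# eigendecomposition of the sample covariance matrix
vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
# rotate onto the principal axes, then rescale each axis to unit variance
Z = (Xc @ vecs) / np.sqrt(vals)

print(np.round(np.cov(Z, rowvar=False), 6))  # identity: the data is "sphered"
```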
42,088
Statistics for Area under the ROC curve
An alternative given by [1] is to compute the interval for the logit AUC: $ \log \left( \frac{AUC}{1-AUC} \right) \pm \Phi ^{-1} \left( 1 - \frac{\alpha}{2} \right) \frac{\sqrt{\operatorname{Var}(AUC)}}{AUC(1 - AUC)} $ where $\operatorname{Var}(AUC)$ is the estimated variance of the AUC; back-transforming the endpoints through the inverse logit gives an asymmetric interval on the AUC scale. In your case, you would get a 95% CI $(0.38, 0.81)$. If you are frequently dealing with high AUCs and small sample sizes, you may want to have a look at [2], which shows there is no single method that can optimally compute confidence intervals for all ROC curves. [1] Pepe MS, The Statistical Evaluation of Medical Tests for Classification and Prediction, OUP 2003, p. 107 [2] Obuchowski NA, Lieber ML, Confidence bounds when the estimated ROC area is 1.0, Acad Radiol. 2002, 9 (5) p. 526-30
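As a sketch of the computation (my own code; the AUC of 0.63 and standard error of 0.11 below are hypothetical stand-ins, not the question's actual values):

```python
from math import exp, log
from statistics import NormalDist

def logit_auc_ci(auc, se, alpha=0.05):
    """CI built on the logit scale, back-transformed to the AUC scale."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half = z * se / (auc * (1 - auc))      # delta-method SE of logit(AUC)
    lo = log(auc / (1 - auc)) - half
    hi = log(auc / (1 - auc)) + half
    inv_logit = lambda t: exp(t) / (1 + exp(t))
    return inv_logit(lo), inv_logit(hi)

# hypothetical inputs: AUC = 0.63 with SE = 0.11
print(logit_auc_ci(0.63, 0.11))
```

Note how the resulting interval is wider below the point estimate than above it, and its endpoints always stay inside $(0, 1)$.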
42,089
Zero-inflated two-part models for semi-continuous data
Not sure about Stata, but R can run zero-inflated models with fixed effects. Check out, for example, the gamlss package and zeroinfl() from the pscl package.
42,090
Zero-inflated two-part models for semi-continuous data
Disadvantages of $\ln(0+c)$:

- $c=1$ is arbitrary. Often the value of $c$ changes estimates, so you need to conduct a grid search for the "optimal" result and justify that choice in the end.
- Zero mass may respond differently to covariates (the extensive vs. intensive margin may have different DGPs).
- The retransformation back to the natural scale is worse at the low end if you want to predict $y$.
- Sometimes it works poorly. See Duan, N., W.G. Manning, et al., "A Comparison of Alternative Models for the Demand for Medical Care," Journal of Business & Economic Statistics, 1:115-126, 1983 for some examples (gated JSTOR link, RAND working paper link).

There's no panel version of tpm. I would try using dummies and clustering on the panel id if computationally possible. I might also give xtpoisson, fe robust or xtpqml (a user-written wrapper) a whirl, justifying it as quasi-MLE, which has performed well in CS simulations even when the number of zeros is large.
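A toy simulation of the first point, that the slope estimate is sensitive to the arbitrary choice of $c$ (the data-generating process is my own construction):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
# semi-continuous outcome: ~30% exact zeros, lognormal otherwise
y = np.where(rng.random(n) < 0.3, 0.0,
             np.exp(0.5 * x + rng.normal(size=n)))
X = np.column_stack([np.ones(n), x])

# the OLS slope on log(y + c) moves with the arbitrary constant c
for c in (0.01, 1.0, 100.0):
    slope = np.linalg.lstsq(X, np.log(y + c), rcond=None)[0][1]
    print(f"c = {c:>6}: slope = {slope:.3f}")
```

The slope shrinks dramatically as $c$ grows, even though the data are identical, which is why the choice of $c$ has to be defended rather than defaulted.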
42,091
Regression in $p\gg N$ setting (predicting drug efficiency from gene expression with 30k predictors and ~30 samples)
Most of the 31000 genes are unlikely to differ much in expression among the cell lines (at least when appropriately normalized), so they add no information to the problem. For a practical biological problem like this, it may help to concentrate on genes whose expression levels are relatively high on an absolute basis. That way it will be easier to validate your results on these 29 lines and then apply and test your predictions on cell lines beyond those you are now examining, for example with standard PCR instead of the expensive microarray or RNAseq methods used to examine 31000 genes at once. Start by (a) limiting your analysis to highly expressed genes whose normalized expression levels have the greatest variance among cell lines (typically on a log scale in gene expression work) and closest relation to IC50 values, so that your intractable $p \gg n$ problem becomes a less difficult $p > n$ problem. Then (b) combine information from different genes whose expression levels co-vary among cell lines. The "supervised principal components" method described in section 18.6 of The Elements of Statistical Learning, second edition, provides a documented way to accomplish this. Genes are rank-ordered with respect to univariate relations to IC50 values (accomplishing goal a, if you limit to highly expressed genes) and PCA is performed on a subset of genes with the highest relations to IC50s (accomplishing goal b). The number of genes included in the PCA and the number of principal components retained are chosen by cross-validation.
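A rough sketch of steps (a) and (b) on synthetic data (scikit-learn, my choice of tool; the dimensions and signal structure are made up for illustration, and in practice the screening threshold and component count would be set by cross-validation as the answer says):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, p = 29, 1000                 # toy stand-in: 29 cell lines, many genes
latent = rng.normal(size=n)     # a shared biological factor
X = rng.normal(size=(n, p))
X[:, :20] += 2.0 * latent[:, None]        # 20 "genes" carry the factor
y = latent + 0.3 * rng.normal(size=n)     # IC50-like response

# (a) screen: rank genes by univariate correlation with the response
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
top = np.argsort(corr)[-40:]
# (b) combine: PCA on the screened subset, keep the first component
z = PCA(n_components=1).fit_transform(X[:, top])
r2 = LinearRegression().fit(z, y).score(z, y)
print(f"in-sample R^2 of supervised PC1: {r2:.2f}")
```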
42,092
Regression in $p\gg N$ setting (predicting drug efficiency from gene expression with 30k predictors and ~30 samples)
I believe the reason you are getting varying answers is that you have $p \gg n$, i.e. more variables than samples. In this situation the LASSO can select at most $n$ variables, and I assume it will have problems with convergence. While I have no experience dealing with this, the elastic net supposedly overcomes some of these issues.
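A quick illustration of that saturation point (Python/scikit-learn, my choice of tool; the penalty value is arbitrary): with $n=30$ samples and $p=500$ predictors, the lasso's active set stays at or below $n$.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 30, 500
X = rng.normal(size=(n, p))
y = X[:, :5] @ np.ones(5) + 0.1 * rng.normal(size=n)

fit = Lasso(alpha=0.1, max_iter=50000).fit(X, y)
nnz = np.count_nonzero(fit.coef_)
print(nnz)   # number of selected variables; cannot exceed n = 30
```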
42,093
Regression in $p\gg N$ setting (predicting drug efficiency from gene expression with 30k predictors and ~30 samples)
Can I suggest the papers "Robustness of lasso solutions under cross-validation variability" and "Stability selection" (Meinshausen and Bühlmann, 2009)? They propose a stable version of the lasso estimator.
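In the spirit of stability selection, a bare-bones sketch (my own simplification of the idea, not the authors' exact procedure): refit the lasso on many random half-samples and keep only the variables selected in a large fraction of the fits.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 60, 200
X = rng.normal(size=(n, p))
y = X[:, :3] @ np.array([2.0, -2.0, 1.5]) + 0.5 * rng.normal(size=n)

# selection frequency of each variable over random half-samples
B = 50
freq = np.zeros(p)
for _ in range(B):
    idx = rng.choice(n, size=n // 2, replace=False)
    freq += Lasso(alpha=0.1, max_iter=20000).fit(X[idx], y[idx]).coef_ != 0
freq /= B

stable = np.flatnonzero(freq >= 0.8)   # kept in >= 80% of the refits
print(stable)
```

Spurious variables come and go across subsamples, so their selection frequency stays low, while genuinely informative variables survive the thresholding.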
42,094
Estimated variance using linear factor models
There exists a basic result in least squares algebra that says that, in a linear regression and using OLS estimation, regressing a variable on a set of regressors plus a constant is equivalent (in a mathematically exact way) to estimating using the dependent variable and the regressors as deviations from their respective means. The specification that is presented in the reference you provided, adjusted for univariate time-series regression with a sample of size $T$ (I am sorry but I always use $T$ to denote the time-dimension, unlike the reference), with one dependent variable is (in vector-matrix notation to represent the whole sample) $$\mathbf R = a \cdot\mathbf 1 + \mathbf F\mathbf B + \mathbf u$$ Now write $\mathbf D = I_{T\times T} -\mathbf 1\left(\mathbf 1'\mathbf1\right)^{-1}\mathbf1' $ for the special matrix that de-means a column vector, and define $$\tilde {\mathbf R} = \mathbf D \mathbf R,\;\; \tilde {\mathbf F} = \mathbf D \mathbf F$$ Then you have, equivalently, $$\tilde {\mathbf R} = \tilde {\mathbf F}\mathbf B + \mathbf u$$ From this alternative (but again, equivalent) representation of the model we can see that the sample variance of the dependent variable (spare me the bias correction term) can be expressed as (using estimated magnitudes) $$\operatorname{\hat Var}(R) = \frac 1n \tilde {\mathbf R}'\tilde {\mathbf R} = \frac 1n\left[\tilde {\mathbf F}\hat {\mathbf B} + \hat {\mathbf u}\right]'\left[\tilde {\mathbf F}\hat {\mathbf B} + \hat {\mathbf u}\right]$$ $$=\frac 1n\hat {\mathbf B}'\tilde {\mathbf F}'\tilde {\mathbf F}\hat {\mathbf B} + \frac 1n\hat {\mathbf B}'\tilde {\mathbf F}'\hat {\mathbf u} + \frac 1n\hat {\mathbf u}'\tilde {\mathbf F}\hat {\mathbf B}+\frac 1n\hat {\mathbf u}'\hat {\mathbf u}$$ Note the following: by construction, the regressors are orthogonal to the error term, and hence the 2nd and 3rd terms are exactly zero. 
Moreover, $\frac 1n\tilde {\mathbf F}'\tilde {\mathbf F} = \Sigma$ (the $\Sigma$ in the question) so $$\operatorname{\hat Var}(R) = \hat {\mathbf B}'\Sigma\hat {\mathbf B} + \frac 1n\hat {\mathbf u}'\hat {\mathbf u}\qquad [1]$$ which is the expression for the sample variance of $R$ given in the reference, which we see can be obtained easily through this alternative representation of the model. The LHS of this equation is the sample variance of the dependent variable. This magnitude should not be affected by the choice of regressors, and so taking any set of regressors, expressing them in mean-deviation form, running a regression, obtaining a new set of residuals and then plugging all these into the RHS of the equation should give exactly the same result. And it does, as the OP found numerically, which can also be shown to hold for any set of regressors. Define $\mathbf M_F$ to be the residual-maker matrix of the regression, $\mathbf M_F = I - \mathbf P_F$, where $\mathbf P_F$ is the projection matrix $\mathbf P_F = \tilde {\mathbf F}\Big(\tilde {\mathbf F}'\tilde {\mathbf F}\Big)^{-1}\tilde {\mathbf F}'$. Then, using the expressions for the estimated coefficients and the residuals, we have $$n\cdot RHS =\left[\Big(\tilde {\mathbf F}'\tilde {\mathbf F}\Big)^{-1}\tilde {\mathbf F}'\tilde {\mathbf R}\right]' \tilde {\mathbf F}'\tilde {\mathbf F}\Big(\tilde {\mathbf F}'\tilde {\mathbf F}\Big)^{-1}\tilde {\mathbf F}'\tilde {\mathbf R} + \left(\mathbf M_F\tilde {\mathbf R}\right)'\left(\mathbf M_F\tilde {\mathbf R}\right)$$ $$= \tilde {\mathbf R}'\tilde {\mathbf F}\Big(\tilde {\mathbf F}'\tilde {\mathbf F}\Big)^{-1}\tilde {\mathbf F}'\tilde {\mathbf F}\Big(\tilde {\mathbf F}'\tilde {\mathbf F}\Big)^{-1}\tilde {\mathbf F}'\tilde {\mathbf R} + \tilde {\mathbf R}'\mathbf M_F\mathbf M_F\tilde {\mathbf R}$$ where we have used the fact that $\mathbf M_F$ is always a symmetric matrix. 
Simplifying and using the fact that the residual-maker matrix is also idempotent, we have $$n\cdot RHS =\tilde {\mathbf R}'\tilde {\mathbf F}\Big(\tilde {\mathbf F}'\tilde {\mathbf F}\Big)^{-1}\tilde {\mathbf F}'\tilde {\mathbf R} + \tilde {\mathbf R}'\mathbf M_F\tilde {\mathbf R}$$ and using the relation between the residual-maker and the projection matrix we have $$n\cdot RHS =\tilde {\mathbf R}'\mathbf P_F\tilde {\mathbf R} + \tilde {\mathbf R}'\mathbf M_F\tilde {\mathbf R} = \tilde {\mathbf R}'(I-\mathbf M_F)\tilde {\mathbf R} + \tilde {\mathbf R}'\mathbf M_F\tilde {\mathbf R}$$ $$\Rightarrow n\cdot RHS = \tilde {\mathbf R}'\tilde {\mathbf R} $$ $$\Rightarrow RHS = \frac 1n \tilde {\mathbf R}'\tilde {\mathbf R} $$ So the RHS of eq. $[1]$ is composed in such a way as to be mathematically equal to the result we would obtain if we simply calculated the sample variance of the dependent variable, irrespective of the regressors chosen. What the choice of regressors affects is the allocation of the sample variance into a "common factor component" $\hat {\mathbf B}'\Sigma\hat {\mathbf B}$ and an "asset specific" component $\frac 1n\hat {\mathbf u}'\hat {\mathbf u}$ (which is the "translation", in the context of the specific model, of the traditional statement about the "explained" and "unexplained" portions of the variance of the dependent variable). So, you want to compute something called the "bias statistic", which involves "the estimated variance of the asset and its forward return". If you expect that this statistic should have a different value depending on the regressors chosen, then, in light of the above, this could happen if 1) only one of the two parts of the decomposed sample variance enters the bias statistic (i.e. only one of the two terms of the RHS of eq. $[1]$), and/or 2) the choice of regressors affects the "forward return". 
I am not familiar with the definition of the bias statistic and what it attempts to measure so I cannot help you further than that.
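As a quick numerical sanity check of the decomposition in eq. $[1]$, here is a sketch in plain Python with a single simulated factor (the data-generating numbers are made up purely for illustration):

```python
import random

random.seed(0)
n = 500

# one simulated factor and a return that loads on it (illustrative numbers)
f = [random.gauss(0, 1) for _ in range(n)]
r = [0.5 + 2.0 * fi + random.gauss(0, 0.3) for fi in f]

# express both series in mean-deviation form (equivalent to an intercept)
fbar, rbar = sum(f) / n, sum(r) / n
ft = [fi - fbar for fi in f]
rt = [ri - rbar for ri in r]

# OLS slope on the de-meaned data, and the residuals it leaves behind
beta = sum(fi * ri for fi, ri in zip(ft, rt)) / sum(fi * fi for fi in ft)
u = [ri - beta * fi for fi, ri in zip(ft, rt)]

sigma = sum(fi * fi for fi in ft) / n   # (1/n) F~'F~, the factor variance
var_r = sum(ri * ri for ri in rt) / n   # LHS: sample variance of R
rhs = beta * sigma * beta + sum(ui * ui for ui in u) / n  # B'ΣB + (1/n)u'u

assert abs(var_r - rhs) < 1e-12  # the decomposition holds exactly
```

Swapping in any other regressor changes how `var_r` is split between the two terms, but not their sum, which is the point made above.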
Estimated variance using linear factor models
The values are not exactly the same, but are very close. This appears to be purely accidental when we look at what goes into them:

Model  beta'*Sigma*beta  Deviance      DF    Deviance/DF   "est.var"
    1  0.0001504468      0.0005199823  1256  4.139987e-07  0.0001508608
    2  9.168255e-08      0.1896668     1256  0.0001510086  0.0001511003

In the first model the result is approximately $0.000151$ because $\beta'\Sigma\beta$ is almost this value and the deviance (divided by the DF) adds essentially nothing. In the second model $\beta'\Sigma\beta$ contributes almost nothing but the deviance (divided by the DF) is approximately $0.000151.$ It is a very good idea to look into results that seem like more than coincidence, as you have done here, but accidents do happen.
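The arithmetic behind those figures can be checked directly: in each model the reported "est.var" is (to rounding) just beta'*Sigma*beta plus Deviance/DF, and the two models land on nearly the same number by accident:

```python
# figures copied from the table above
model1 = 0.0001504468 + 0.0005199823 / 1256  # beta'Sigma beta + Deviance/DF
model2 = 9.168255e-08 + 0.1896668 / 1256

# both reproduce the reported "est.var" values up to rounding
assert abs(model1 - 0.0001508608) < 1e-9
assert abs(model2 - 0.0001511003) < 1e-9

# ...and they agree with each other only to about the 4th significant figure
assert abs(model1 - model2) < 1e-6
```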
Prove that distribution of sample median (for even sample) is symmetric
To those comfortable with the mathematics of probability and random variables the following argument is a one-liner, because you will take for granted most of the manipulation (and will find it obvious). To cover all bases, though, I provide the details.

The setting--and a generalization

Let $X = (X_1, X_2, \ldots, X_n)$ be a vector-valued random variable with a distribution $F$ that is "symmetric" about the origin in the sense that for all events $E\subset\mathbb{R}^n,$ $${\Pr}_F(E) = {\Pr}_F(-E).$$ The notation "$-E$" refers to $\{-x\ |\ x \in E\}$. No assumptions are made about the parity of $n$ or about independence of the components $X_i$. Note that the conditions of the problem--viz., $n$ even and $X_i$ iid Normal$(0,1)$--are a special case.

Observe--because this is the crux of the matter--that median$(-X)$ = $-$median$(X)$ no matter what $X$ may be. (For this to be true, it is essential that we define the median of an even number of elements to be the average of their two middle values.)

The one-line proof

The distribution of the median is symmetric because the distribution of $X$ is symmetric and, as we just observed, the median commutes with the symmetry operation $X \to -X,$ QED.

The details

We are asked to show that $f(X)$ = median$(X)$ has a symmetric distribution. To this end, let $D\subset\mathbb{R}$ be measurable. The chance that $f(X)$ lies in $D$ is--by definition--given by $${\Pr}_F(f(X)\in D) = {\Pr}_F(X\in f^{-1}(D)) = {\Pr}_F(f^{-1}(D)).$$ The first equality is valid because $f$ is a measurable function.

To prove symmetry, we need to deduce that ${\Pr}_F(f^{-1}(D)) = {\Pr}_F(f^{-1}(-D)).$ Using the definitions and the key observation (which justifies the third equality below), notice that $$\eqalign{ f^{-1}(D) &= \{X\in\mathbb{R}^n\ |\ f(X)\in D\} \\ &= \{X\in\mathbb{R}^n\ |\ -f(X)\in -D\} \\ &= \{X\in\mathbb{R}^n\ |\ f(-X)\in -D\} \\ &= \{-X\in\mathbb{R}^n\ |\ f(X)\in -D\} \\ &=-f^{-1}(-D). }$$ The symmetry of $F$, applied to the event $E = f^{-1}(-D),$ implies the second equality below: $${\Pr}_F(f^{-1}(D)) = {\Pr}_F(-f^{-1}(-D)) = {\Pr}_F(f^{-1}(-D)).$$ But the latter is exactly the chance that the median lies in $-D$, proving the median has a symmetric distribution.
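The key observation--that the median commutes with the sign flip when the even-$n$ median is the average of the two middle values--is easy to check numerically. Here is a small pure-Python sketch; the sample size and draw counts are arbitrary illustrative choices:

```python
import random
import statistics

random.seed(1)
n = 10  # even sample size, as in the problem

x = [random.gauss(0, 1) for _ in range(n)]

# the median commutes with the symmetry operation X -> -X
assert statistics.median([-xi for xi in x]) == -statistics.median(x)

# consequently, for X symmetric about 0, the sample median's distribution
# is symmetric: over many draws it falls above 0 about half the time
draws = [statistics.median([random.gauss(0, 1) for _ in range(n)])
         for _ in range(20000)]
share_positive = sum(m > 0 for m in draws) / len(draws)
assert abs(share_positive - 0.5) < 0.02  # Monte Carlo check, not a proof
```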
Covariance pattern models versus generalized estimating equation models
Actually you have correctly listed the major differences between the covariance pattern models and GEE models. One thing I would like to add is that, following Section 6.2.5 "Random Effects Structure" of Hedeker and Gibbons (2006), the two models would be characterized as subject-specific (conditional) models and population-average (marginal) models, respectively, though the two coincide in linear cases. See my answer here: What is a difference between random effects-, fixed effects- and marginal model?

I would say the two are numerically equivalent, though they use different estimation methods. See the example in Stata below. Note that the covariance pattern models can be fitted with the command mixed, but the random effects are suppressed by the option noconstant. Of course, we can turn to REML instead of ML to obtain unbiased variance estimates.

. webuse pig
. mixed weight week || id:, noconstant residuals(exchangeable)

Mixed-effects ML regression                     Number of obs      =       432
Group variable: id                              Number of groups   =        48
                                                Obs per group: min =         9
                                                               avg =       9.0
                                                               max =         9
                                                Wald chi2(1)       =  25337.48
Log likelihood = -1014.9268                     Prob > chi2        =    0.0000

------------------------------------------------------------------------------
      weight |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        week |   6.209896   .0390124   159.18   0.000     6.133433    6.286359
       _cons |   19.35561   .5974056    32.40   0.000     18.18472    20.52651
------------------------------------------------------------------------------

. xtset id week
. xtgee weight week, corr(exchangeable)

GEE population-averaged model                   Number of obs      =       432
Group variable: id                              Number of groups   =        48
Link: identity                                  Obs per group: min =         9
Family: Gaussian                                               avg =       9.0
Correlation: exchangeable                                      max =         9
                                                Wald chi2(1)       =  25337.48
Scale parameter: 19.20076                       Prob > chi2        =    0.0000

------------------------------------------------------------------------------
      weight |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        week |   6.209896   .0390124   159.18   0.000     6.133433    6.286359
       _cons |   19.35561   .5974055    32.40   0.000     18.18472    20.52651
------------------------------------------------------------------------------

But in GEE, we often use robust (empirical) standard errors instead of model-based standard errors. When we add the option robust, only the standard errors change.

. xtgee weight week, corr(exchangeable) robust

GEE population-averaged model                   Number of obs      =       432
Group variable: id                              Number of groups   =        48
Link: identity                                  Obs per group: min =         9
Family: Gaussian                                               avg =       9.0
Correlation: exchangeable                                      max =         9
                                                Wald chi2(1)       =   4552.32
Scale parameter: 19.20076                       Prob > chi2        =    0.0000

                               (Std. Err. adjusted for clustering on id)
------------------------------------------------------------------------------
             |               Robust
      weight |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        week |   6.209896   .0920382    67.47   0.000     6.029504    6.390287
       _cons |   19.35561   .4038676    47.93   0.000     18.56405    20.14718
------------------------------------------------------------------------------
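The last point--that switching to robust standard errors leaves the coefficients untouched--can be sketched outside Stata as well. Below is a minimal numpy illustration on simulated clustered data (not the pig dataset; the group sizes and effect sizes are made-up to mimic the example), comparing model-based OLS standard errors with the cluster-robust sandwich estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
groups, per = 48, 9                         # 48 subjects, 9 periods each
g = np.repeat(np.arange(groups), per)
week = np.tile(np.arange(1, per + 1), groups)
subj = rng.normal(0, 2, groups)[g]          # subject effect -> clustering
y = 19 + 6.2 * week + subj + rng.normal(0, 1, g.size)

X = np.column_stack([np.ones(g.size), week])
beta = np.linalg.solve(X.T @ X, X.T @ y)    # same point estimates either way
resid = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)

# model-based (iid) covariance
s2 = resid @ resid / (g.size - X.shape[1])
se_model = np.sqrt(np.diag(s2 * XtX_inv))

# cluster-robust sandwich: sum score outer-products within each cluster
meat = np.zeros((2, 2))
for j in range(groups):
    m = g == j
    s = X[m].T @ resid[m]
    meat += np.outer(s, s)
se_robust = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

print(beta)                  # coefficients are identical under both choices
print(se_model, se_robust)   # only the standard errors differ
```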
Could prior elicitation by actually drawing the density of the prior be sensible? Has it been done/discussed?
The TeachingDemos package for R has the TkBuildDist and TkBuildDist2 functions, which provide interactive ways to build a distribution by drawing a histogram and/or a log-density plot. The first one uses left clicks to add points and right clicks to remove points, and the second allows you to click on the tops of bars and drag them up or down. Both show the distribution as it is being created/modified and then return information about the distribution created (histogram and logspline fit). This is not quite paper and pencil, but it accomplishes the same general idea using a mouse.
Could prior elicitation by actually drawing the density of the prior be sensible? Has it been done/discussed?
So, I found the "trial roulette method," described here, which I guess could be considered as drawing with chips: the expert distributes a fixed number of chips across bins covering the parameter's plausible range, and the chip counts are then read off as a histogram of the prior.
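A sketch of how the elicited chips turn into a usable prior; the bin edges and chip allocation below are made-up illustrative inputs, not from the reference:

```python
# hypothetical elicitation: 20 chips placed over bins of the parameter range
bins = [(0.0, 0.1), (0.1, 0.2), (0.2, 0.3), (0.3, 0.4), (0.4, 0.5)]
chips = [1, 4, 8, 5, 2]                 # expert's allocation, 20 chips total

total = sum(chips)
prior = [c / total for c in chips]      # normalized histogram = prior probs

assert abs(sum(prior) - 1.0) < 1e-12
for (lo, hi), p in zip(bins, prior):
    print(f"P({lo:.1f} <= theta < {hi:.1f}) = {p:.2f}")
```

From here one could fit a smooth density to the histogram (the roulette literature often uses a parametric family matched to the chip proportions), but the raw normalized counts already define a discrete prior.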
Quantile regression power analysis
For general linear quantile models of the form $$F^{-1}_{Y|X}(\tau) = X'\beta_n(\tau)$$ where $\beta_n$ is allowed to depend on the sample size, Chernozhukov and Fernandez-Val (2005) construct a power analysis for conditional quantile regression models as in your case. They consider the general null hypothesis $$R(\tau)\beta_0 (\tau) - r(\tau) = 0$$ for $\tau \in (0,1)$. Their procedure allows you to test for:

- a significant effect for a given predictor
- a constant effect for a given predictor across quantiles
- stochastic dominance (e.g. a unanimously beneficial impact of a treatment)

Victor Chernozhukov makes his R code available on his website under the section "Policy Analysis" for the paper "Subsampling on Quantile Regression Processes".

References

Chernozhukov, V. and Fernandez-Val, I. (2005) "Subsampling on Quantile Regression Processes", The Indian Journal of Statistics, Special Issue on Quantile Regression and Related Methods, Vol. 67 part 2, pp. 253-276