Calculating Probability of x1 > x2
Your problem may sound easy, but it's surprisingly complicated. Assessing the probability that one student's CGPA exceeds the other's, given their hgpa, sat and ltrs scores, amounts to computing $P(y_2 - y_1 > 0 \mid X)$, where $y_1$ is Rachel's CGPA, $y_2$ is Tobias', and $X$ collects their scores. Because we can write $y_i = \hat{y_i} + \epsilon_i$, we can also say \begin{align*} P(y_2 - y_1 > 0 \mid X) &= P( \underbrace{\epsilon_2 - \epsilon_1}_{\sim N(0,\, 2\sigma_y^2)} + \underbrace{\hat{y_2} - \hat{y_1}}_{= 2.8812 - 2.5082} > 0 \mid X) \\ &= P(\epsilon_2 - \epsilon_1 < 0.373). \end{align*} This is where you get stuck, because we don't know $\sigma_y^2$ for sure. The best we can do is estimate it by the variance of your regression residuals; as the sample size grows ($\rightarrow \infty$), that estimate converges to $\sigma_y^2$. If you are willing to ignore the estimation error in $\hat{\sigma}_y^2$, you can implement this in R. Note that the standard deviation of $\epsilon_2 - \epsilon_1$ is $\sqrt{2}\,\sigma_y$, and that pnorm expects a standard deviation, not a variance:

sigma_hat <- summary(fit)$sigma               # fit: your fitted lm object
yhat_diff <- diff(predict(fit, new.df)) * -1  # = 0.373; sign depends on row order of new.df
pnorm(yhat_diff, mean = 0, sd = sqrt(2) * sigma_hat)
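As a quick cross-check of the distributional step above (sketched in Python rather than R): the difference of two independent $N(0, \sigma_y^2)$ errors has standard deviation $\sqrt{2}\,\sigma_y$, so the closed-form probability should match a Monte Carlo estimate. The value of `sigma` below is a made-up stand-in, since the question's estimated residual standard error is not given here.

```python
import math
import random

sigma = 0.5          # hypothetical residual standard error (not from the question)
delta = 0.373        # yhat_2 - yhat_1, taken from the answer above

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# closed form: P(eps2 - eps1 < delta) with eps2 - eps1 ~ N(0, 2*sigma^2)
closed_form = phi(delta / (math.sqrt(2.0) * sigma))

# Monte Carlo: draw the two errors independently and compare
random.seed(0)
n = 200_000
hits = sum((random.gauss(0, sigma) - random.gauss(0, sigma)) < delta for _ in range(n))
mc = hits / n
print(closed_form, mc)   # the two estimates should agree to ~2 decimal places
```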
KL divergence bounds square of L1 norm
I also bumped into this passage recently! I'm not very familiar with probability/information theory, so I hope this makes sense and my notation is understandable; I tried for precision at the expense of brevity, but there's some notation in the book that I just don't know how to use precisely. As far as I can tell, the "data-processing inequality" for KL divergence (aka relative entropy) is proved "in the same way as the data-processing inequality for mutual information" in the sense that they both involve expanding a certain quantity in two ways with a chain rule and then bounding parts of the expansion, even though the chain rule for mutual information (Theorem 2.5.2) and the chain rule for relative entropy (Theorem 2.5.3) don't seem analogous to me, except in some intuitive sense. In its most general form, the data-processing inequality for relative entropy is probably something like this: Theorem. Let $X_1$ and $X_2$ be two random variables with the same set of possible values $\mathcal{X}$ and probability distributions $P_1$ and $P_2$. Let $Y_1$ and $Y_2$ be two random variables with the same set of possible values $\mathcal{Y}$, that depend on $X_1$ and $X_2$, respectively, in the same way; that is, we draw $Y_1$ from a distribution depending on only $X_1$ and we draw $Y_2$ from a distribution depending on only $X_2$, and when $X_1 = X_2$, the distributions of $Y_1$ and $Y_2$ are identical. If the probability distributions of $Y_1$ and $Y_2$ are $\hat{P_1}$ and $\hat{P_2}$, then $$ D(P_1\| P_2) \geq D(\hat{P_1}\| \hat{P_2}). $$ Note: We can in particular pick a function $f : \mathcal{X} \to \mathcal{Y}$ and just take $Y_1$ and $Y_2$ to be $f(X_1)$ and $f(X_2)$. This is how this inequality is applied in the proof of Theorem 11.6; $f$ is taken to be the indicator function of the set $$ A := \{x \in \mathcal{X} : P_1(x) > P_2(x)\}, $$ which reduces the theorem for general random variables to the case of binary random variables. 
Proof: By applying the chain rule for relative entropy (Theorem 2.5.3), we can expand the KL divergence between the joint distributions $D(P_1, \hat{P_1}\| P_2, \hat{P_2})$ (over the values $\mathcal{X} \times \mathcal{Y}$) in two ways: $$ \begin{align*} D(P_1, \hat{P_1}\| P_2, \hat{P_2}) &= D(\hat{P_1}\|\hat{P_2}) + D(P_1(X_1|Y_1)\| P_2(X_2|Y_2))\\ &= D(P_1\| P_2) + D(P_1(Y_1|X_1)\| P_2(Y_2|X_2)). \end{align*} $$ The last term in each line is a conditional relative entropy, as defined in (2.66) as $$ D(p(y|x)\| q(y|x)) := \mathbb{E}_{p(x,y)} \log \frac{p(Y|X)}{q(Y|X)}. $$ Since the distributions of $Y_1$ given $X_1$ and the distribution of $Y_2$ given $X_2$ are identical, the conditional relative entropy between them is 0 (equation 2.91), i.e. $$ D(P_1(Y_1|X_1)\|P_2(Y_2|X_2)) = 0 $$ Since conditional relative entropies are nonnegative (equation 2.91), $$ D(P_1(X_1|Y_1)\| P_2(X_2|Y_2)) \geq 0 $$ Therefore, $$ \begin{align*} D(P_1\| P_2) &= D(\hat{P_1}\|\hat{P_2}) + D(P_1(X_1|Y_1)\| P_2(X_2|Y_2)) \\ &\geq D(\hat{P_1}\|\hat{P_2}), \end{align*} $$ as desired. Note 2: These lecture notes seem to have a proof of a more general version of this theorem, which also applies to more general divergences, although I can't say I understand it.
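For what it's worth, the theorem can also be spot-checked numerically in the discrete case: pushing two distributions through one shared channel should never increase their KL divergence. A small plain-Python sketch (all distributions randomly generated; the alphabet sizes are arbitrary):

```python
import math
import random

def kl(p, q):
    """KL divergence D(p || q) for discrete distributions given as lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

random.seed(1)
nx, ny = 4, 3
for _ in range(100):
    p1 = normalize([random.random() + 1e-9 for _ in range(nx)])
    p2 = normalize([random.random() + 1e-9 for _ in range(nx)])
    # a single channel W[x][y] = P(Y = y | X = x), shared by both variables
    W = [normalize([random.random() + 1e-9 for _ in range(ny)]) for _ in range(nx)]
    # push-forward distributions of Y1 and Y2
    q1 = [sum(p1[x] * W[x][y] for x in range(nx)) for y in range(ny)]
    q2 = [sum(p2[x] * W[x][y] for x in range(nx)) for y in range(ny)]
    assert kl(p1, p2) >= kl(q1, q2) - 1e-12   # data-processing inequality
print("data-processing inequality held in all 100 trials")
```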
Multi categorical Dice loss?
Dice Loss (DL) for multi-class: Dice loss is a popular loss function for medical image segmentation. It measures the overlap between the predicted segmentation and the real one; the Dice score ranges from 0 to 1, where 1 denotes complete overlap. The loss is defined as $Loss_{DL} = 1-2\frac{\sum_{l\in L}\sum_{i\in N}y_i^{(l)}\hat{y}_i^{(l)} + \varepsilon}{\sum_{l\in L}\sum_{i\in N}\left(y_i^{(l)}+\hat{y}_i^{(l)}\right)+\varepsilon}$ where $\varepsilon$ is a small constant that avoids division by 0. Generalized Dice Loss (GDL): Sudre et al. proposed using the class-rebalancing properties of the generalized Dice overlap as a robust and accurate deep-learning loss function for unbalanced tasks. The authors investigate the behavior of the Dice, cross-entropy, and generalized Dice loss functions in the presence of different rates of label imbalance across 2D and 3D segmentation tasks, and the results demonstrate that the GDL is the most robust of the three: $Loss_{GDL} = 1-2\frac{\sum_{l\in L}w_l\sum_{i\in N}y_i^{(l)}\hat{y}_i^{(l)} + \varepsilon}{\sum_{l\in L}w_l\sum_{i\in N}\left(y_i^{(l)}+\hat{y}_i^{(l)}\right)+\varepsilon}$ where $w_l$ is a per-class weight. For more detailed notations and explanation refer to this Link and More Info.
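As a rough illustration, here is a minimal NumPy sketch of the two losses above, assuming `y_true` holds one-hot labels and `y_pred` predicted probabilities, both of shape (N pixels, L classes). The function names, the $\varepsilon$ value, and the choice of per-class weights $w_l = 1/(\sum_i y_i^{(l)})^2$ (the squared inverse class volume used by Sudre et al.) are assumptions of this sketch:

```python
import numpy as np

def dice_loss(y_true, y_pred, eps=1e-7):
    """Soft multi-class Dice loss; inputs have shape (N, L)."""
    intersection = np.sum(y_true * y_pred)   # sums over i in N and l in L
    denom = np.sum(y_true + y_pred)
    return 1.0 - 2.0 * (intersection + eps) / (denom + eps)

def generalized_dice_loss(y_true, y_pred, eps=1e-7):
    """GDL with per-class weights w_l = 1 / (class volume)^2, as in Sudre et al."""
    w = 1.0 / (np.sum(y_true, axis=0) ** 2 + eps)
    intersection = np.sum(w * np.sum(y_true * y_pred, axis=0))
    denom = np.sum(w * np.sum(y_true + y_pred, axis=0))
    return 1.0 - 2.0 * (intersection + eps) / (denom + eps)

# a perfect prediction drives both losses to (essentially) zero
y = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
print(dice_loss(y, y), generalized_dice_loss(y, y))
```

A completely disjoint prediction gives a loss close to 1; the $\varepsilon$ in numerator and denominator keeps empty classes from producing 0/0.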
Rao-Blackwell unbiased estimator binomial distribution
Update based on whuber's comment. First some notation. Let $T_{-1} = \sum_{i=2}^nX_i$ and note that $T \sim Binom(nm, \theta)$ and $T_{-1} \sim Binom((n-1)m, \theta)$. Moreover, note that $X_1$ and $T_{-1}$ are independent. \begin{align*} \phi(T) &= E(X_1/m |T =t) \\ &= \frac{1}{m}E(X_1|T=t) \\ &= \frac{1}{m}\sum_{x=0}^m xP(X_1=x|T=t) \\ &= \frac{1}{m}\sum_{x=0}^m x\frac{P(X_1=x \cap T=t)}{P(T=t)} \\ &= \frac{1}{m}\sum_{x=0}^m x\frac{P(X_1=x \cap T_{-1}=t-x)}{P(T=t)} \\ &= \frac{1}{m}\sum_{x=0}^m x\frac{P(X_1=x)P(T_{-1}=t-x)}{P(T=t)} \\ &= \frac{1}{m}\sum_{x=0}^m x\frac{\binom{m}{x}\theta^x(1-\theta)^{m-x}\binom{(n-1)m}{t-x}\theta^{t-x}(1-\theta)^{(n-1)m-t+x}}{\binom{nm}{t}\theta^t(1-\theta)^{nm-t}} \\ &= \frac{1}{m}\sum_{x=0}^m x\frac{\binom{m}{x}\binom{nm -m}{t-x}}{\binom{nm}{t}} \\ &= \frac{1}{m}\sum_{x=0}^m x f(x;nm, m, t) \quad\text{where $f$ is the pmf of a hypergeometric random variable}\\ &= \frac{1}{m}E(X) \quad \text{where $X$ is a hypergeometric rv} \\ &= \frac{1}{m}\frac{tm}{mn} = \frac{t}{mn} \end{align*} Recalling that $t$ is the value of $T$, we get $\hat\theta_{UMVUE} = \frac{T}{nm}$ as expected.
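The last few steps are easy to verify numerically: $\frac{1}{m}\sum_x x\, f(x; nm, m, t)$ should equal $t/(nm)$, since the hypergeometric mean is $tm/(nm)$. A small Python check (the particular values of $n$, $m$, $t$ are arbitrary):

```python
from math import comb

def hyper_pmf(x, N, K, n):
    """P(X = x) for a hypergeometric rv: n draws from N items, K of them successes."""
    return comb(K, x) * comb(N - K, n - x) / comb(N, n)

n_groups, m, t = 5, 4, 7     # arbitrary: n binomials of size m, observed T = t
N = n_groups * m
phi_t = sum(x * hyper_pmf(x, N, m, t) for x in range(0, m + 1)) / m
print(phi_t, t / N)          # both equal t / (n m)
```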
Score function of bivariate/multivariate normal distribution
The problem is in the matrix differentiation. As the covariance matrix is symmetric, we have $$ \frac{\partial l}{\partial \Sigma}=-\Sigma^{-1}+\frac{\operatorname{diag}(\Sigma^{-1})}{2}+\Sigma^{-1}(x-\mu)(x-\mu)'\Sigma^{-1}-\frac{\operatorname{diag}(\Sigma^{-1}(x-\mu)(x-\mu)'\Sigma^{-1})}{2} $$ where $l$ is the log-likelihood function.
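One way to convince yourself of this formula is a finite-difference check, under the convention that for $i \neq j$ the $(i,j)$ entry is the derivative with $\Sigma_{ji}$ tied to $\Sigma_{ij}$ (which is where the diag correction comes from). A NumPy sketch with made-up numbers, not part of the original answer:

```python
import numpy as np

def loglik(x, mu, S):
    """Log-density of N(mu, S) at x."""
    d = len(mu)
    r = x - mu
    return -0.5 * (d * np.log(2 * np.pi) + np.log(np.linalg.det(S))
                   + r @ np.linalg.inv(S) @ r)

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))
S = A @ A.T + 2 * np.eye(2)      # a random symmetric positive definite Sigma
x = rng.normal(size=2)
mu = rng.normal(size=2)

Si = np.linalg.inv(S)
res = (x - mu).reshape(-1, 1)
M = Si @ res @ res.T @ Si
# the gradient stated in the answer
G = -Si + np.diag(np.diag(Si)) / 2 + M - np.diag(np.diag(M)) / 2

# central differences over the free entries (i <= j), perturbing symmetrically
h = 1e-6
for i in range(2):
    for j in range(i, 2):
        E = np.zeros((2, 2))
        E[i, j] = E[j, i] = h
        num = (loglik(x, mu, S + E) - loglik(x, mu, S - E)) / (2 * h)
        assert abs(num - G[i, j]) < 1e-4
print("symmetric-gradient formula matches finite differences")
```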
How to dissuade laypeople from drawing inaccurate conclusions about their data?
I wish you had provided more details in your question, but I'm going to answer it with the information you have given. I think it's unlikely that they will go back and actually peruse the "resources" you suggest, so you might be better off reading about these yourself and trying to explain them with examples. You could brief them on how they should have designed the experiment in the first place, perhaps with an example of A/B testing (see Wikipedia), and on why it is imperative that the sample be representative of the population. Link: https://en.wikipedia.org/wiki/Design_of_experiments Also, listing some possible pitfalls of drawing conclusions from statistically unsound experiments should help them understand your point. Link: http://www.ch.embnet.org/CoursEMBnet/Arrays06/files/arrays06_ExpDes2.pdf
Application of Machine Learning to the Automated Theorem Proving
Recent papers I know of are Deep Network Guided Proof Search and DeepMath. Both of these reduce the search space of current non-ML-based theorem provers by suggesting the next premise or tactic to use, pruning the search tree. (Quite similar to how AlphaZero still relies on traditional game-tree search, but heavily prunes it!) A more direct approach, taken by End-to-End Differentiable Proving, focuses on the backward-chaining proof search technique. It replaces the hard decision of which variable to substitute into which rule (for example: substitute "John" into the rule "If Human(x) then Alive(x)") with a scoring function that assigns a number to how good each possible substitution is. A beam search is then used to find the highest-scoring set of substitutions. I would contest the idea that finding proofs can't be solved using pattern recognition -- I rely a lot on my intuition when proving things, which pretty much just guesses which direction to try next based on similar problems I've seen before. That seems quite similar to the approach of the above papers; in other words, my intuition performs pattern recognition. On the other hand, current theorem provers -- with or without ML -- are quite weak compared to human provers, so I wouldn't try to prove the Jacobian conjecture with one.
Application of Machine Learning to the Automated Theorem Proving
I'm sorry to add another answer, but I can't undelete my earlier message and I haven't managed to reach the admins. This domain is related to automated theorem proving, but as far as I know there is no automated theorem prover powerful enough to create such a proof. Current automated theorem provers use propositional calculus, first-order logic, or second-order logic to prove or refute theorems. For instance, if you would like to ask an automated theorem prover whether the Jacobian conjecture is true or false, you must pose a question like: is the theorem "commutative algebra and set theory and analysis imply the Jacobian conjecture" true or false?
Paradox of Poisson process with at least one event in the interval
I finally figured it out! Following @combo's advice, I am going to use the term "occurrence". Surprisingly, the first part of the rationale for my intuition was almost correct. If the observed occurrence was at time $t$ from the beginning of the interval, then it is enough to calculate the probability that no occurrence happened in either of the open intervals $(0, t)$ and $(t, T)$: $\Pr(X_T = 1 \mid t) = \Pr(X_t = 0) \Pr(X_{T-t} = 0) = e^{-t} e^{t - T} = e^{-T} = \Pr(X_T = 0)$. The difference is the replacement of $\Pr(X_T = 1 \mid X_T > 0)$ by $\Pr(X_T = 1 \mid t)$, which may be viewed as $\Pr(X_T = 1 \mid X_{dt} = 1)$, where $dt$ is the length of the infinitesimal interval $[t - \frac{dt}{2}, t + \frac{dt}{2}]$. Since $\Pr(X_T = 1, t) = e^{-T} dt$, we may view $\operatorname{pdf}(t) = \Pr(X_T = 1 \mid t)$ as the density, at the point $t$, of the only occurrence within the interval $(0, T)$. Since the value of $t$ is unknown, we integrate: $\int_0^T \operatorname{pdf}(t)\, dt = Te^{-T} = \Pr(X_T = 1)$. So far so good - we know the unconditional probability of having exactly one occurrence in the interval. However, by conditioning on $X_T > 0$, the fraction $e^{-T}$ of the sampling space has been discarded - thus it is necessary to normalize every remaining probability/density by a factor of $\frac{1}{1 - e^{-T}}$. So the flaw in my intuition was mistaking a probability for a density.
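The resulting conditional probability $\Pr(X_T = 1 \mid X_T > 0) = \frac{Te^{-T}}{1-e^{-T}}$ is easy to confirm by simulation; a plain-Python sketch (the value of $T$ is arbitrary):

```python
import math
import random

random.seed(0)
T = 1.5                      # arbitrary interval length, rate lambda = 1
n_trials = 200_000
one = positive = 0
for _ in range(n_trials):
    # generate a unit-rate Poisson process on (0, T) from exponential gaps
    t = random.expovariate(1.0)
    count = 0
    while t < T:
        count += 1
        t += random.expovariate(1.0)
    if count > 0:
        positive += 1
        if count == 1:
            one += 1

empirical = one / positive
theoretical = T * math.exp(-T) / (1 - math.exp(-T))
print(empirical, theoretical)   # both approximately 0.43 for T = 1.5
```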
Paradox of Poisson process with at least one event in the interval
A quick note on terminology to avoid confusing ourselves: I will refer to something happening at a particular time as an "occurrence" rather than an "event", in order to avoid confusion with the more rigorous definition of an event as an element of the sample space. Let's start from the definition of the Poisson counting process. Let $N(t)$ be the number of occurrences that happen by time $t$. It has the following properties: (i) $N(0) = 0$; (ii) $N(T_1)$ and $N(T_2) - N(T_1)$ are independent for $T_1 < T_2$ (the independent increments property); (iii) the number of occurrences in any interval of length $t$ is a Poisson random variable with parameter $\lambda t$ (for our purposes, $\lambda = 1$). The independent increments property is what is tripping you up - specifically the implications of the strict inequality. We are asking the question: given that an occurrence happens at time $t \in [0,T]$, what is the probability that $N(T) = 1$? Following your approach, let's break the process into three segments. For some $\epsilon < \min\{t, T-t\}$, we have: $$N(T) = \left(N(T) - N(t+\epsilon) \right) + \left(N(t+\epsilon) - N(t-\epsilon) \right) + \left(N(t-\epsilon) - N(0)\right) $$ and now we are looking at the probability \begin{align*} &\lim_{\epsilon \to 0} P\left( N(T) = 1 \mid N(t+\epsilon) - N(t-\epsilon) = 1 \right)\\ &=\lim_{\epsilon\to 0} P\left( N(t-\epsilon) - N(0) = 0, N(T) - N(t+\epsilon) = 0 \mid N(t+\epsilon) - N(t-\epsilon) = 1\right) \end{align*} Note that this is not quite the same as what you wrote. There are two key differences: first, the intervals are not disjoint, so while $N(t-\epsilon) - N(0)$ is independent of $N(T) - N(t+\epsilon)$, neither is independent of $N(t+\epsilon) - N(t-\epsilon)$; second, the intervals all have positive measure. In your approach you break the interval $[0,T]$ into three pieces, $[0,t)$, $\{t\}$, $(t,T]$. The problem is that you are then conditioning on the event that an occurrence happens in $\{t\}$, which has measure zero (for more details on why this is an issue, see this post).
Paradox of Poisson process with at least one event in the interval
A quick note on terminology to avoid confusing ourselves: I will refer to something happening at a particular time as an "occurence" rather than an "event" in order to avoid confusion with the more r
Paradox of Poisson process with at least one event in the interval
A quick note on terminology to avoid confusing ourselves: I will refer to something happening at a particular time as an "occurrence" rather than an "event", in order to avoid confusion with the more rigorous definition of an event as an element of the sample space.
Let's start from the definition of the Poisson counting process. Let $N(t)$ be the number of occurrences that happen by time $t$. It has the following properties:
$N(0) = 0$
$N(T_1)$ and $N(T_2) - N(T_1)$ are independent for $T_1 < T_2$ (independent increments property)
The number of occurrences in any interval of length $t$ is a Poisson random variable with parameter $\lambda t$ (for our purposes, $\lambda = 1$).
The independent increments property is what is tripping you up - specifically the implications of the strict inequality. We are asking the question: given that an occurrence happens at time $t \in [0,T]$, what is the probability that $N(T) = 1$? Following your approach, let's break the process into three segments. For some $\epsilon < \min\{t, T-t\}$, we have: $$N(T) = \left(N(T) - N(t+\epsilon) \right) + \left(N(t+\epsilon) - N(t-\epsilon) \right) + \left(N(t-\epsilon) - N(0)\right) $$ and now we are looking at the probability \begin{align*} &\lim_{\epsilon \to 0} P\left( N(T) = 1 \mid N(t+\epsilon) - N(t-\epsilon) = 1 \right)\\ &=\lim_{\epsilon\to 0} P\left( N(t-\epsilon) - N(0) = 0, N(T) - N(t+\epsilon) = 0 \mid N(t+\epsilon) - N(t-\epsilon) = 1\right) \end{align*} Note that this is not quite the same as what you wrote. There are two key differences:
The intervals are not disjoint, so while $N(t-\epsilon) - N(0)$ is independent of $N(T) - N(t+\epsilon)$, neither is independent of $N(t+\epsilon) - N(t-\epsilon)$.
The intervals all have positive measure. In your approach you break the interval $[0,T]$ into three pieces, $[0,t)$, $[t]$, $(t,T]$. The problem is that now you are conditioning on the event that an occurrence happens in the interval $[t]$, which has measure zero (for more details on why this is an issue, see this post).
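That limiting conditional probability can be sanity-checked numerically. By independent increments (with $\lambda = 1$), the count in the small window around $t$ and the count elsewhere in $[0, T]$ are independent Poisson variables, so conditional on exactly one occurrence in the window, the probability of no further occurrences should be near $e^{-(T - 2\epsilon)}$. A rough simulation sketch (the values of $T$, $\epsilon$, and the seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
T, eps, reps = 2.0, 0.01, 200_000

# Counts in the small window around t and in the rest of [0, T]
# are independent Poisson variables (independent increments).
in_window = rng.poisson(2 * eps, size=reps)
elsewhere = rng.poisson(T - 2 * eps, size=reps)

# Condition on exactly one occurrence in the window
mask = in_window == 1
est = np.mean(elsewhere[mask] == 0)
# est should be close to exp(-(T - 2*eps)), i.e. roughly exp(-T)
```

As $\epsilon \to 0$ the estimate tends to $e^{-T}$, which is exactly the measure-positive version of the conditioning the answer describes.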
37,311
Paradox of Poisson process with at least one event in the interval
Of your two ways of approaching the problem, the second one seems to be spot on. It is much more rigorous and makes complete sense to me. However, I had a little more difficulty understanding the first approach, which gives reason to believe that it is the source of your mistake. First of all, out of curiosity, why are you calculating $P(X_{T}=1|X_{T}>0)$? Given the very same approach you used below, this probability (as far as it is relevant) should equal $\frac{Te^{-T}}{1-e^{-T}}$, which is not by definition equal to $e^{-T}$. Hope this helps!
37,312
Why logarithmic scale for hyper-parameter optimization?
... because a logarithmic scale enables us to search a bigger space quickly. In your SVM example, we do not know the range of the hyper-parameter, so a quicker way is to try dramatically different values, say 1, 10, 100, 1000, which come from a logarithmic scale. In addition, I think a log-scale search is just the first step. Suppose we found that C=10 is better than C=1 or C=100; we can then focus on that scale to find a better value. Another reason applies to "regularization" parameters, such as C in an SVM: they are not too sensitive. In other words, we may not find much difference between 10, 15, or 20, but results would be very different from 10 to 1000. That is why we start with a log search.
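The two-stage search described above might be sketched like this (the particular grid values are illustrative, not prescriptive):

```python
import numpy as np

# Stage 1: coarse search over orders of magnitude for C
coarse = np.logspace(0, 3, num=4)    # candidate C values: 1, 10, 100, 1000

# Suppose C = 10 performed best in stage 1.
# Stage 2: refine with a linear grid around that winning scale.
fine = np.linspace(1, 100, num=10)
```

Each `coarse` value would be evaluated by cross-validation; only the surviving order of magnitude gets the finer, linear-scale sweep.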
37,313
Can something be "very statistically significant"? [duplicate]
Significance testing has two different interpretations. The Neyman-Pearson interpretation is that a result either is significant or is not significant; that is all you can say. The Fisher tradition is that a p-value is a representation of the strength of evidence against the null hypothesis - a p-value of 0.10 tells you something, and a p-value of 0.01 also tells you something. Fisher wrote: If $P$ is between .1 and .9 there is certainly no reason to suspect the hypothesis tested. If it is below .02 it is strongly indicated that the hypothesis fails to account for the whole of the facts. We shall not often be astray if we draw a conventional line at .05 [...]
37,314
Can something be "very statistically significant"? [duplicate]
If the p-value is around 0.05, we can say it is marginally significant; if the p-value is very small (e.g., 0.0006), then we can say it is statistically significant. I am not sure you can use the term "very significant".
37,315
What is the confidence interval of a p-value?
The problem is that a p value is not an estimate of a parameter so the idea of a confidence interval does not apply. It also does not make sense to talk about the uncertainty surrounding a p value. The p value is certain; the conclusion you draw from it is not.
37,316
Which variance estimate to use for a Wald test?
Either approach is legitimate, with both leading to the same asymptotic null distribution of the statistic. $\sqrt{n}(\hat{\theta}_n - \theta_0) \rightarrow_d N(0, i(\theta_0)^{-1})$ implies that $\hat{\theta}_n \to_p \theta_0$, so that the continuous mapping theorem (CMT) yields $i(\hat{\theta}_n) \to_p i(\theta_0)$, provided, as is the case in regular problems, that $i$ is continuous. Then, again by the CMT, $$ \sqrt{\dfrac{1}{i(\hat{\theta}_n)}}\to_p\sqrt{\dfrac{1}{i(\theta_0 )}} $$ and Slutsky's theorem yields that $$ \dfrac{\sqrt{n}(\hat{\theta}_n - \theta_0)}{\sqrt{\dfrac{1}{i(\hat{\theta}_n)}}}\to_dN(0,1)$$ under $H_0$ as well. See also Does Null Hypothesis affect Standard Error? for a more specific illustration.
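The equivalence can be checked numerically in a toy Bernoulli model, where $i(p) = 1/(p(1-p))$: under $H_0$, the Wald statistic should be roughly standard normal whether the information is evaluated at $\hat{\theta}_n$ or at $\theta_0$. A sketch (the sample size, null value, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p0, reps = 500, 0.3, 2000

z_mle, z_null = [], []
for _ in range(reps):
    x = rng.binomial(1, p0, size=n)
    p_hat = x.mean()
    # Fisher information for a Bernoulli: i(p) = 1 / (p (1 - p))
    i_mle = 1.0 / (p_hat * (1.0 - p_hat))    # evaluated at the MLE
    i_null = 1.0 / (p0 * (1.0 - p0))         # evaluated at the null value
    z_mle.append(np.sqrt(n * i_mle) * (p_hat - p0))
    z_null.append(np.sqrt(n * i_null) * (p_hat - p0))

# Both versions should reject at roughly the nominal 5% level under H0
rej_mle = np.mean(np.abs(z_mle) > 1.96)
rej_null = np.mean(np.abs(z_null) > 1.96)
```

Both empirical rejection rates come out close to 0.05, illustrating the shared asymptotic null distribution.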
37,317
Regression methods for predicting rank
From what I can tell, the rank-based estimation this paper is referring to is slightly different than what you're interested in. Note that least-squares estimation is based on the idea that $\boldsymbol \beta$ should be chosen to minimize $||\boldsymbol y - \boldsymbol X \boldsymbol \beta||^2$. This isn't suitable in your case because the distribution of $y$ isn't very nice and it's also not really of interest. However, the focus of the paper is still to predict $y$ as a linear function of $X$. The only difference is the way in which it estimates $\boldsymbol \beta$: In their case, they choose $\boldsymbol \beta$ to minimize a rank-based norm which is still applied to $\boldsymbol y - \boldsymbol X \boldsymbol \beta$. Hence, this method is still largely dependent on the distribution of $y$. You mentioned that you only care about the ranks of the response variable. In other words, you'd be just as well off using $X$ to model $R(Y)$ rather than $Y$ itself. The fact that $R(Y)$ is limited to $[0, 1]$ means that the usual linear regression approach may not work. You could end up with predictions outside the unit interval or you might not even have a linear relationship between $X$ and $R(Y)$. But this really isn't a problem. The usual modeling approach in this situation is to employ a Generalized Linear Model. The only additional step in fitting this model is to choose an appropriate link function. For example, suppose $X \sim Normal(0, 1)$ and $Y|X \sim Normal(\beta_0 + \beta_1 X, \sigma^2)$. It would then be appropriate to use $X$ to model $R(Y)$ with a GLM and a logit or probit link.
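A minimal sketch of that last idea — regressing scaled ranks $R(Y) \in (0,1)$ on $X$ with a logit link — using a hand-rolled IRLS loop rather than any particular GLM library (the data-generating values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

# Scaled ranks of y, mapped into the open interval (0, 1)
ranks = np.empty(n)
ranks[np.argsort(y)] = np.arange(1, n + 1)
r = ranks / (n + 1)

# IRLS for a logit-link model of r on x (a quasi-binomial fit)
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    eta = X @ beta
    mu = 1.0 / (1.0 + np.exp(-eta))
    w = mu * (1.0 - mu)                  # working weights
    z = eta + (r - mu) / w               # working response
    beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * z))

fitted = 1.0 / (1.0 + np.exp(-(X @ beta)))
# fitted values stay in (0, 1) and increase with x, as intended
```

The logit link guarantees predictions inside the unit interval, which is exactly the property that plain linear regression on $R(Y)$ lacks.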
37,318
Mathematical justification for using recurrent neural networks over feed-forward networks
I don't think this will be a very satisfying answer, because it's somewhat of a proof by definition, but I believe it is correct nonetheless (albeit not very mathematical). Can anyone provide an example of families of functions which can't be captured by Feed-forward but can be well approximated by RNNs? No. At least not if we accept this definition of a function; ...a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output. [Wikipedia] If we imagine some hypothetical function $\psi(x)$ that operates on some vector of inputs $x$ and cannot yet be expressed by any feed-forward neural network, we could simply use $\psi(x)$ as a transfer function and, voila, we can now construct a simple perceptron which performs a superset of the functionality of $\psi(x)$; $f(x) = \psi(b + wx)$ I will leave it as an exercise for the reader to figure out which values we need for the bias, $b$, and weight vector, $w$, in order to make our perceptron output $f(x)$ mimic that of our mystery function $\psi(x)$! The only thing that an RNN can do that a feed-forward network cannot is retain state. By virtue of the requirement that an input maps to only a single output, functions cannot retain state. So by the above contorted example we can see that a feed-forward network can do anything (but no more) than any function (continuous or otherwise). Note: I think I've answered your question, but I think it's worth pointing out a slight caveat; while there does not exist a function that cannot be mapped by a feed-forward network, there most certainly are functions that are better suited to RNNs than feed-forward networks. Any function that is arranged in such a way that feature-sets within the function are readily expressed as transformations of previous results may be better suited to an RNN. 
An example of this might be finding the nth number of the Fibonacci sequence, iff the inputs are presented sequentially; $F(n) = F(n-1) + F(n-2)$ An RNN might approximate this sequence effectively by using only a set of linear transformation functions, whereas a stateless function, or feed-forward neural net, would need to approximate the closed-form solution to the Fibonacci sequence: $F(n) = \frac{\phi^n - \psi^n}{\sqrt5}$ where $\phi = \frac{1+\sqrt5}{2} \approx 1.618$ is the golden ratio and $\psi = 1 - \phi \approx -0.618$. As you can imagine, the first variant is far easier to approximate given the usual array of transfer functions available to the designer of a neural network.
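The first variant can be made concrete with a toy example: a linear "RNN" whose two-dimensional hidden state carries $[F(t), F(t-1)]$ and whose recurrence is a single fixed weight matrix with identity activation (this is a hand-constructed illustration, not a trained network):

```python
import numpy as np

# A minimal linear "RNN": hidden state h = [F(t), F(t-1)],
# recurrence h_next = W @ h with identity activation.
W = np.array([[1.0, 1.0],
              [1.0, 0.0]])

h = np.array([1.0, 0.0])   # start at F(1) = 1, F(0) = 0
fibs = [0.0, 1.0]
for _ in range(8):
    h = W @ h              # one recurrent step advances the sequence
    fibs.append(h[0])
# fibs -> [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

All the "knowledge" of the sequence lives in the retained state; a stateless feed-forward map from $n$ to $F(n)$ would instead have to approximate the exponential closed form above.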
37,319
RNNs for Sparse Time Series Data
This depends a bit on whether interpolating between points to fill in missing data points makes sense, i.e., is there a high correlation between points, or are they completely random at each minute reading? If they are correlated, then filling them in at even time intervals is probably the way to go. If the values are random and independent of each other, then this would not work. The second method would (probably) not work: one issue is that the time itself does not really impact the values directly, so you would have an input without predictive power in your RNN, which tends to produce poor results. The other large issue is getting your RNN to produce time points in the future; ANNs typically don't produce values outside of the observed range of the input data. If the interpolation method doesn't work for your time series data, I might recommend resampling at a coarser frequency, for example every 5 or 10 seconds, and grabbing the closest point you can at each of those intervals (or averaging if needed). It would discard some information but would avoid interpolating where it doesn't make sense. In short, this is a non-trivial question, and as far as I know there is no standard or universal way of handling the issue of variable time increments, but hopefully these are some helpful thoughts.
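The nearest-point resampling idea might look like this in outline (the timestamps, values, and the 5-second grid below are invented for illustration):

```python
import numpy as np

# Irregular timestamps (seconds) and their observed values
t = np.array([0.0, 0.7, 3.2, 9.9, 14.1, 22.5])
v = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])

grid = np.arange(0.0, 25.0, 5.0)   # coarser grid: every 5 seconds

# For each grid point, take the observation with the closest timestamp
idx = np.abs(t[None, :] - grid[:, None]).argmin(axis=1)
resampled = v[idx]
# resampled -> [1., 3., 4., 5., 6.]
```

Averaging within each grid cell (instead of taking the nearest point) is a one-line variant if several readings fall in the same interval.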
37,320
Do I need to do regularization if I have already used early stopping?
Logically the answer is no: since early stopping is an alternative to L2 regularization, used mainly because it is faster, it is not meant to be combined with a regularized cost function. L1 regularization serves a different purpose, and I do not think early stopping is meant to be equivalent to L1 regularization. Early stopping is possibly not as precise as L2 regularization. For the moment it's hard to say definitively, but I haven't read of a case where early stopping outperforms L2 regularization in accuracy. I understand it as a sort of lower-quality L2 regularization, even if the difference may be very small on big datasets. Supposing you use (appropriate) L2 regularization, early stopping will not provide better accuracy.
37,321
Do I need to do regularization if I have already used early stopping?
You don't specify what you're considering for early stopping vs. regularization, but, in general, I think that early stopping is not necessarily a substitute for l1 regularization. Consider partial least squares, for example, with early stopping. PLS can be viewed as optimizing the gradient out of a finite set of gradients (one for each variable). If you look at The Elements of Statistical Learning, Section 3.8, the effect of early stopping on PLS is similar to ridge (l2) regularization. l1 regularization can lead to sparse models; l2 regularization, on its own, can't. If the true model is sparse, adding l1 regularization (e.g., using the elastic net) can improve prediction performance. For more complex regressors (e.g., neural nets), it is more difficult to describe what the effect of early stopping will be (see Regularization Versus Early Stopping: A Case Study With A Real System). Personally, I'd go with GeoMatt22's suggestion. You have (at least) three options: 1. early stopping, 2. l1 regularization, and 3. l2 regularization (and, of course, combinations). Cross-validation can be used to see what works best for your specific problem.
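The rough kinship between early stopping and ridge can be seen in a toy linear regression (here with plain gradient descent rather than PLS): both pull the coefficients toward zero relative to OLS, while neither produces exact zeros the way l1 would. The sizes, penalty, and step count below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = X @ beta + rng.normal(scale=2.0, size=n)

# OLS solution
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Ridge (l2) solution
lam = 50.0
b_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Gradient descent from zero, stopped early
b_gd = np.zeros(p)
lr = 1e-3
for _ in range(20):        # few iterations = early stopping
    b_gd -= lr * (X.T @ (X @ b_gd - y))

# Both b_gd and b_ridge have smaller norm than b_ols
```

Starting gradient descent at zero and halting it early shrinks each eigen-component of the OLS fit, much as the ridge penalty does, which is the flavor of the ESL 3.8 observation.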
37,322
Reference to learn how to interpret learning curves of deep convolutional neural networks
2 things: You should probably switch your 50/50 train/validation split to something like 80% training and 20% validation. In most cases it will improve the classifier's performance overall (more training data = better performance). If you have never heard of "early stopping", you should look it up; it's an important concept in the neural network domain: https://en.wikipedia.org/wiki/Early_stopping . To summarize, the idea behind early stopping is to stop training once the validation loss starts plateauing. Indeed, when this happens it almost always means you are starting to overfit your classifier. The training loss value in itself is not something you should trust, because it will continue to decrease even when you are overfitting your classifier. I hope I was clear enough; good luck in your work :)
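A bare-bones version of that early-stopping loop, on a made-up linear-model problem with an 80/20 split (the patience value, learning rate, and data are all arbitrary illustration choices):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy linear-model data with an 80/20 train/validation split
X = rng.normal(size=(500, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(size=500)
X_tr, y_tr = X[:400], y[:400]
X_va, y_va = X[400:], y[400:]

w = np.zeros(5)
lr, patience = 0.01, 5
best_loss, best_w, bad_epochs = np.inf, w.copy(), 0

for epoch in range(1000):
    # One full-batch gradient step on the training squared error
    w -= lr * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)
    val_loss = np.mean((X_va @ w - y_va) ** 2)
    if val_loss < best_loss - 1e-6:
        best_loss, best_w, bad_epochs = val_loss, w.copy(), 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:   # validation loss has plateaued
            break
```

The key points are that only the validation loss drives the stopping decision, and that the weights corresponding to the best validation loss (`best_w`) are what you keep, not the final iterate.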
37,323
Reference to learn how to interpret learning curves of deep convolutional neural networks
I know it's an ancient question, but I guess if anyone else bumps across it looking for something similar... I would say the barely decreasing loss value suggests using a lower learning rate? But I guess that's been mentioned in the link you included in your question, so you probably considered that. I would also try to change the filters you use in your model. From most of the stuff I've seen online, they usually use 3x3 filters for all layers; the ones that use varying filter sizes also tend to go from 3x3 to 4x4, etc. I guess intuitively this allows you to extract finer details in the initial layers and build upon them? In terms of resources, this also seems interesting, although also relatively limited: https://machinelearningmastery.com/diagnose-overfitting-underfitting-lstm-models/ I was also hoping someone who comes across this could check out the problem I have, which is sort of similar in nature. I've tried changing a number of parameters, but there is a large gap between training and validation loss which closes in suddenly after a few epochs: How do I interpret my validation and training loss curve if there is a large difference between the two which closes in sharply. I would also find it EXTREMELY useful if someone could put together a reference suggesting practical ways to diagnose and address common issues in models/parameters from loss plots. Maybe even for these plots: http://lossfunctions.tumblr.com/
37,324
Reference to learn how to interpret learning curves of deep convolutional neural networks
Here are some suggestions: Training error and test error are too close (your system is unable to overfit on your training data); this means that your model is too simple. Solution: more layers or more neurons per layer. Decrease gamma, e.g., to 0.01. If the curve still reaches a plateau early, you can try a larger learning rate, e.g., 0.1. Try Adam instead of SGD. The convergence of Adam is usually faster than that of SGD.
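For the last suggestion, here is a minimal, self-contained sketch of the Adam update rule (Kingma & Ba) on a toy one-dimensional problem — purely to illustrate the mechanics, not a replacement for a framework optimizer:

```python
import math

def adam_minimize(grad, w0, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=200):
    """Scalar Adam: keeps running estimates of the gradient mean (m) and
    uncentered variance (v), with bias correction for the zero init."""
    w, m, v = w0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(w)
        m = beta1 * m + (1 - beta1) * g        # first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g    # second-moment estimate
        m_hat = m / (1 - beta1 ** t)           # bias-corrected first moment
        v_hat = v / (1 - beta2 ** t)           # bias-corrected second moment
        w -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return w

# Minimize f(w) = (w - 3)^2; its gradient is 2 * (w - 3).
w_opt = adam_minimize(lambda w: 2.0 * (w - 3.0), w0=0.0)
```

Because the step is normalized by the gradient's running scale, Adam moves at roughly the learning rate per step regardless of the raw gradient magnitude, which is part of why it often converges faster than plain SGD in practice.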
37,325
How to implement generalized hypergeometric function to use in beta-binomial cdf, sf, ppf?
This does not answer your question directly, but if you are thinking of estimating the cumulative distribution function of the beta-binomial more efficiently, then you can use a recursive algorithm that is a little bit more efficient than the naive implementation. Notice that the probability mass function of the beta-binomial distribution $$ f(x) = {n \choose x} \frac{\mathrm{B}(x+\alpha, n-x+\beta)}{\mathrm{B}(\alpha, \beta)} $$ may be re-written if you recall that $\mathrm{B}(x,y)=\tfrac{\Gamma(x)\,\Gamma(y)}{\Gamma(x+y)}$, and $\Gamma(x) = (x-1)!$, and that ${n \choose k} = \prod_{i=1}^k \tfrac{n+1-i}{i}$, so that it becomes $$ f(x) = \left( \prod_{i=1}^x \frac{n+1-i}{i} \right) \frac{\frac{(\alpha+x-1)!\,(\beta+n-x-1)!}{(\alpha+\beta+n-1)!}}{\mathrm{B}(\alpha,\beta)} $$ this makes updating from $x$ to $x+1$ easy $$ f(x\color{red}{+1}) = \left( \prod_{i=1}^x \frac{n+1-i}{i} \right) \color{red}{\frac{n-x}{x+1}} \frac{\frac{(\alpha+x-1)! \,\color{red}{(\alpha+x)}\,(\beta+n-x-1)! \, \color{red}{(\beta+n-x-1)^{-1}}}{(\alpha+\beta+n-1)!}}{\mathrm{B}(\alpha,\beta)} $$ i.e. $f(x+1) = f(x)\,\tfrac{(n-x)(\alpha+x)}{(x+1)(\beta+n-x-1)}$ (note that $(\alpha+\beta+n-1)!$ does not depend on $x$, so it is unchanged), and using this you can calculate the cumulative distribution function as $$ F(x) = \sum_{k=0}^x f(k) $$ using just simple arithmetic operations rather than calculating more computer-intensive functions. Side note: when dealing with large numbers you will run into numeric-precision issues, so more robust code would work with logarithms; even so, you can expect an improvement in efficiency (the code was 2 to 3 times faster than the naive implementation in a few C++ benchmarks I ran).
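A sketch of this recursion in Python (the function name is mine; log-gamma is used only once, for the starting value $f(0)$):

```python
import math

def betabinom_cdf(x, n, a, b):
    """P(X <= x) for Beta-Binomial(n, a, b), built from the recursion
    f(k+1) = f(k) * (n - k)(a + k) / ((k + 1)(b + n - k - 1)),
    starting from f(0) = B(a, n + b) / B(a, b)."""
    # log B(p, q) = lgamma(p) + lgamma(q) - lgamma(p + q)
    log_beta = lambda p, q: math.lgamma(p) + math.lgamma(q) - math.lgamma(p + q)
    f = math.exp(log_beta(a, n + b) - log_beta(a, b))  # f(0)
    total = f
    for k in range(x):
        f *= (n - k) * (a + k) / ((k + 1) * (b + n - k - 1))
        total += f
    return total

# Sanity check: with a = b = 1 the distribution is uniform on {0, ..., n},
# so the CDF at 4 for n = 9 should be 5/10 = 0.5.
p = betabinom_cdf(4, n=9, a=1.0, b=1.0)
```

After the initial call to `lgamma`, each additional mass costs only a handful of multiplications and one division, which is where the speed-up over recomputing beta functions comes from.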
37,326
Dealing with imbalanced/zero-inflated training examples for regression
(a) Assess the performance(s) you are interested in. Thus, if you are mainly interested in getting the expectation of the response E(y) right, then MAE or RMSE are useful. Similarly, you could also assess the conditional expectation E(y | y > 0), i.e., the expected amount of precipitation given that there is precipitation. If you are mostly interested in the probability of any precipitation P(y > 0), you could look at the corresponding misclassification rate or the Brier score, etc. And if you are interested in the entire distribution, then scoring rules like the log-likelihood (or log-score) or the CRPS (continuous ranked probability score) would be natural. (b) Instead of a two-step model with a binary first step and a zero-truncated second step, you could also use a single regression model with a response that is censored at zero. A worked example with precipitation in a weather forecasting context is available in a paper about our crch R package (see https://doi.org/10.32614/RJ-2016-012).
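To make (b) concrete, here is a minimal sketch (not the crch implementation) of the log-likelihood of a Gaussian model left-censored at zero: a zero observation contributes the censoring probability Φ(−μ/σ), a positive one the usual density term.

```python
import math

def censored_normal_loglik(y, mu, sigma):
    """Log-likelihood under a Gaussian latent variable left-censored at 0:
    y == 0 contributes log P(latent <= 0) = log Phi(-mu / sigma);
    y > 0 contributes the usual log normal density."""
    norm_cdf = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    ll = 0.0
    for yi in y:
        if yi <= 0.0:
            ll += math.log(norm_cdf(-mu / sigma))
        else:
            z = (yi - mu) / sigma
            ll += -0.5 * z * z - math.log(sigma * math.sqrt(2.0 * math.pi))
    return ll

# Mostly-dry sample: a small latent mean fits much better than a large one.
sample = [0.0, 0.0, 0.0, 0.0, 1.2]
ll_small = censored_normal_loglik(sample, mu=-0.5, sigma=1.0)
ll_large = censored_normal_loglik(sample, mu=2.0, sigma=1.0)
```

In a full model μ (and possibly σ) would be regression functions of the covariates, and this log-likelihood would be maximized over their coefficients.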
37,327
Reinforcement learning by Sutton, Tic tac toe self play
I am not sure about the first question. Regarding the second, these are my thoughts: If you think about the state space of tic-tac-toe, it can be partitioned into two mutually exclusive subsets, one consisting of states seen by the agent when playing first, the other consisting of states seen while playing second. If one of the sides always plays first, then the other side will only ever experience one of the two subsets of the state space, and it will learn a policy for winning only as a second player. It would be good to have both sides play as both first and second players. Toss a coin before every match - if heads, let the left side play first; otherwise the right side starts. This way we can at least ensure that the agent's policy is independent of which side starts first.
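The coin-toss procedure can be sketched as follows; `play_match` here is a hypothetical stand-in for one self-play game:

```python
import random

def run_self_play(n_matches, play_match, seed=0):
    """Randomize which agent moves first in each match, so both agents
    experience both halves of the state space over the course of training."""
    rng = random.Random(seed)
    first_counts = {"left": 0, "right": 0}
    for _ in range(n_matches):
        first = "left" if rng.random() < 0.5 else "right"
        first_counts[first] += 1
        play_match(first_player=first)
    return first_counts

# With a dummy match function, roughly half the games start with each side.
counts = run_self_play(10_000, play_match=lambda first_player: None)
```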
37,328
What is "Descriptive Discriminant Analysis"?
There is a real difference between Descriptive Discriminant Analysis (DDA) and Predictive Discriminant Analysis (PDA), which is depicted in the diagram below. I found it here: http://as.wiley.com/WileyCDA/WileyTitle/productCd-0471468150.html
37,329
How to interpret daily vs. hourly "% chance of rain" precipitation forecasts?
I was a little bit confused at first when thinking about the matter. However, the main reason why you won't be able to harmonize forecasts on an hourly and a daily basis is the following: the hourly rain events are not independent. Aggregating their probabilities would require knowledge of how the events are related, and this will vary from case to case. On my blog you'll find the article Rain Risk: Too much detail?, which deals with the question and provides some neat images.
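A small numeric illustration of why the dependence structure matters: with 24 hourly probabilities of 10% each, the implied daily probability of any rain ranges from about 92% (if the hours were independent) down to 10% (if it either rains in all of those hours or in none) — the hourly numbers alone cannot tell you which.

```python
hourly = [0.10] * 24  # 10% chance of rain in each hour

# If the hourly events were independent:
p_dry_all_day = 1.0
for p in hourly:
    p_dry_all_day *= (1.0 - p)
p_rain_independent = 1.0 - p_dry_all_day   # 1 - 0.9^24, about 0.92

# If they were perfectly dependent (one event, repeated every hour):
p_rain_comonotone = max(hourly)            # 0.10
```

Any real day sits somewhere between these two extremes, which is why a daily probability cannot be mechanically reconstructed from the hourly ones.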
37,330
A name for operator-dependent cross-product
My understanding of the question is that it asks for a name for any $p\times p$ matrix $\mathbf F$ with elements $F_{ij} = f(\mathbf X_i, \mathbf X_j)$, where $\mathbf X_k$ are columns of a $n\times p$ data matrix $\mathbf X$ and $f(\cdot, \cdot)$ is some arbitrary function. This can be seen as a generalization of covariance matrix (if $f$ is covariance), correlation matrix (if $f$ is correlation), sum-of-squares-and-cross-products matrix (if $f$ is dot product), etc. Note that even if $f$ is symmetric and "sensible", the resulting matrix $\mathbf F$ can fail to be positive-definite. This is the case e.g. when $f$ is mutual information (as stated in the title of this paper). I doubt that there is a generic term for $\mathbf F$. If you really have to have a name for it, I think you should invent one. If your $f$ is supposed to measure some relationship between variables $i$ and $j$, then perhaps $\mathbf F$ can be called "cross-relationship matrix" or simply "relationship matrix"?
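As a sketch, such a matrix can be assembled generically from any pairwise function $f$; with $f$ = dot product it reproduces the sum-of-squares-and-cross-products matrix $\mathbf X^T \mathbf X$:

```python
def relationship_matrix(X, f):
    """Build the p x p matrix F with F[i][j] = f(col_i, col_j),
    where X is an n x p data matrix given as a list of rows."""
    p = len(X[0])
    cols = [[row[j] for row in X] for j in range(p)]
    return [[f(cols[i], cols[j]) for j in range(p)] for i in range(p)]

dot = lambda u, v: sum(a * b for a, b in zip(u, v))
X = [[1.0, 2.0],
     [3.0, 4.0],
     [5.0, 6.0]]
F = relationship_matrix(X, dot)  # equals X^T X
```

Swapping `dot` for a correlation, covariance, or mutual-information function gives the other special cases mentioned above — with no guarantee of positive-definiteness in general.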
37,331
Autocorrelation of concatenated independent AR(1) processes
Executive summary: It seems that you are mistaking noise for true autocorrelation due to a small sample size. You can simply confirm this by increasing the k parameter in your code. See the ACF plots for the following values of k (I have used your same set.seed(987) throughout to maintain replicability; images not reproduced here): k=1000 (your original code), k=2000, k=5000, k=10000, k=50000. This sequence of plots tells us two things: The autocorrelation after the first 10 observations greatly diminishes as the number of iterations increases. Indeed, with a sufficiently large number of iterations, $\hat\rho(l)$ for any $l>10$ will converge to zero. This is the basis for my statement at the beginning - that the autocorrelation you observed was simply noise. Notwithstanding the aforementioned observation that $\hat\rho(l)$ converges to zero for any $l>10$ as the number of simulations increases, $\hat\rho(l)$ for any $l \le 10$ actually remains constant at $\hat\rho(l)=\rho(l)=0.9^l$, just as the construction of your model would suggest. Note that I refer to the observed autocorrelation as $\hat\rho(l)$ and to the true autocorrelation as $\rho(l)$.
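For readers without the original R code, here is a Python sketch of the same experiment (assuming, as the formulas above suggest, AR(1) segments of length 10 with coefficient 0.9): the sample autocorrelation at lags beyond the segment length shrinks toward zero as k grows, while short lags stay large.

```python
import random

def concatenated_ar1(k, seg_len=10, phi=0.9, seed=987):
    """Concatenate k mutually independent AR(1) segments of length seg_len."""
    rng = random.Random(seed)
    series = []
    for _ in range(k):
        x = rng.gauss(0.0, 1.0)          # fresh start: segments are independent
        for _ in range(seg_len):
            series.append(x)
            x = phi * x + rng.gauss(0.0, 1.0)
    return series

def sample_acf(x, lag):
    """Sample autocorrelation of the series at the given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    cov = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag)) / n
    return cov / var

series = concatenated_ar1(k=5000)
acf_short = sample_acf(series, 1)    # mostly within-segment pairs: strongly positive
acf_long = sample_acf(series, 15)    # always spans a segment boundary: near zero
```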
37,332
Decomposition of average squared bias (in Elements of Statistical Learning)
The result is basically due to the property of the best linear estimator. Note that we don't assume $f(X)$ is linear here. Nevertheless we can find the linear predictor that approximates $f$ the best. Recall the definition of $\beta_*$: $\beta_{*} = \arg\min_\beta E{[(f(X) - X^T \beta)^2]}$. We can derive the theoretical estimator for $\beta_*$: \begin{align*} g(\beta) &= E[(f(X) - X^T \beta)^2] = E [f^2(X)] - 2\beta^T E[Xf(X)] + \beta^T E[XX^T]\beta \\ &\implies \frac{\partial{g(\beta)}}{\partial{\beta}} = -2 E{[Xf(X)]} + 2 E[XX^T]\beta = 0 \\ &\implies \beta_{*} = E[X X^T]^{-1}E[X f(X)], \end{align*} where we have assumed $E[X X^T]$ is invertible. I call it a theoretical estimator because we never know (in real-world scenarios anyway) the marginal distribution of $X$, or $P(X)$, so we won't know those expectations. You should still notice the resemblance of this estimator to the ordinary least squares estimator (if you replace $f$ with $y$, then the OLS estimator is the plug-in equivalent; at the end I show they are the same for estimating the value of $\beta_*$), which basically gives us another way of deriving the OLS estimator (by the law of large numbers). The L.H.S of (7.14) can be expanded as: \begin{align*} E_{x_0}[f(x_0) - E{\hat{f}_\alpha (x_0)}]^2 &= E_{x_0}[f(x_0) -x_0^T\beta_{*}+ x_0^T\beta_{*} - E{\hat{f}_\alpha (x_0)}]^2 \\ &= E_{x_0}[f(x_0) - x_0^T\beta_{*}]^2 + E_{x_0}[ x_0^T\beta_{*} - E{\hat{f}_\alpha (x_0)}]^2 \\ &\;\;+ 2 E_{x_0}[(f(x_0) - x_0^T\beta_{*})(x_0^T\beta_{*}-E{\hat{f}_\alpha (x_0)})]. \end{align*} To show (7.14), one only needs to show the third (cross) term is zero, i.e.
$$E_{x_0}[(f(x_0) - x_0^T\beta_{*})(x_0^T\beta_{*}-E{\hat{f}_\alpha (x_0)})] = 0, $$ where the L.H.S equals \begin{align*} LHS = E_{x_0}[(f(x_0) - x_0^T\beta_{*})x_0^T\beta_{*}] - E_{x_0}[(f(x_0) - x_0^T\beta_{*})E{\hat{f}_\alpha (x_0)}] \end{align*} The first term (for convenience, I have omitted $x_0$ and replaced it with $x$): \begin{align} &E{[(f(x) - x^T\beta_{*})x^T\beta_{*}]} = E{[f(x)x^T\beta_*]}- E{[(x^T\beta_*)^2]} \\ &= E[f(x)x^T]\beta_* - \left(Var{[x^T\beta_*]} + (E{[x^T\beta_*]})^2\right) \\ &= E[f(x)x^T]\beta_* - \left( \beta_*^T Var{[x]} \beta_* + (\beta_* ^T E[x])^2\right) \\ &= E[f(x)x^T]\beta_* - \left( \beta_*^T (E[xx^T] - E[x]E[x]^T) \beta_* + (\beta_* ^T E[x])^2\right) \\ &= E[f(x)x^T]\beta_* - E{[f(x)x^T]}E[xx^T]^{-1} E[xx^T]\beta_* + \beta_*^TE[x]E[x]^T \beta_*\\ &\;\;- \beta_*^TE[x]E[x]^T \beta_* \\ &= 0, \end{align} where we have used the variance identity $Var{[z]} = E{[zz^T]} - E{[z]}E{[z]}^T$ twice, in both the second and the fourth step; we have substituted $\beta_*^T$ in the second-to-last line, and all the other steps follow from standard expectation/variance properties. In particular, $\beta_*$ is a constant vector w.r.t. the expectation, as it is independent of where $x$ (or $x_0$) is measured. The second term \begin{align} E{[(f(x) - x^T\beta_{*})E{\hat{f}_\alpha (x)}]} &= E{[(f(x) - x^T\beta_{*}) E{[x^T\hat{\beta}_\alpha]}]} \\ &= E{[E{[\hat{\beta}_\alpha}^T]x (f(x) - x^T\beta_{*})]} \\ &= E{\hat{\beta}_\alpha}^TE{[x f(x) - x x^T\beta_*]} \\ &=E{\hat{\beta}_\alpha}^T\left( E{[x f(x)]} - E[xx^T] E[xx^T]^{-1}E{[xf(x)]}\right)\\ &=0, \end{align} where the second equality holds because $E{\hat{f}_\alpha (x)}$ is a point-wise expectation where the randomness arises from the training data $y$, so $x$ is fixed; the third equality holds as $E{\hat{\beta}_\alpha}$ is independent of where $x$ ($x_0$) is predicted, so it is a constant w.r.t. the outside expectation.
Combining the above results, the sum of these two terms is zero, which shows eq. (7.14). Although not related to the question, it is worth noting that $ f(X) = E[Y|X]$, i.e. $f(X)$ is the optimal regression function, as $$ f(X) = E{[f(X) +\varepsilon |X]} = E[Y|X].$$ Therefore, \begin{align} \beta_{*} &= E[XX^T]^{-1}E{[Xf(X)]} = E[XX^T]^{-1}E{[XE[Y|X]]} \\ &= E[XX^T]^{-1}E[E[XY|X]] \\ &= E[XX^T]^{-1}E[XY], \end{align} and if we recall that the last estimator is the best linear estimator, the above equation basically tells us that using the optimal regression function $f(x)$ or the noisy version $y$ makes no difference as far as the point estimate is concerned. Of course, the estimator based on $f$ will have better properties/efficiency, as it leads to smaller variance, which can easily be seen from the fact that $y$ introduces extra error, or variance.
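The crucial orthogonality $E[(f(x) - x^T\beta_*)\,x^T\beta_*] = 0$ can be verified numerically for a scalar $x$ without intercept, where $\beta_* = E[xf(x)]/E[x^2]$; a quick check with a three-point distribution and a deliberately nonlinear $f$:

```python
# x uniform on {1, 2, 3}; f(x) = x^2 is deliberately nonlinear.
xs = [1.0, 2.0, 3.0]
f = lambda x: x * x
E = lambda g: sum(g(x) for x in xs) / len(xs)   # expectation under uniform weights

beta_star = E(lambda x: x * f(x)) / E(lambda x: x * x)  # best linear coefficient
cross_term = E(lambda x: (f(x) - x * beta_star) * (x * beta_star))
```

Here $\beta_* = E[x^3]/E[x^2] = 36/14$, and the cross term collapses to $\beta_*(E[x^3] - E[x^3]) = 0$ exactly, as the derivation above predicts.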
37,333
Decomposition of average squared bias (in Elements of Statistical Learning)
Yes, this isn't the usual bias-variance decomposition; rather, it's a further decomposition of the bias. Someone else has already answered this.
37,334
Are all balls in the urn the same color (when they can't be seen clearly)
I admit I didn't fully read the other answer, but a crude approach would just be to note that $a$ follows a binomial$(n, p = e_r)$ distribution when all the balls are robin's egg blue, so you can reject when $a$ is "too large" based on the binomial model. If this won't do then maybe a likelihood ratio test would be better, which seems to be what Zachary Blumenfeld is getting at.
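This crude binomial test is easy to carry out directly; a sketch (the numbers plugged in at the end are made up, since the question's actual counts aren't repeated here):

```python
import math

def binom_pvalue_upper(a, n, p):
    """One-sided p-value P(A >= a) for A ~ Binomial(n, p): the chance of
    seeing at least a azure classifications if every ball is really
    robin's egg blue and each is misclassified with probability p."""
    return sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(a, n + 1))

# e.g. n = 100 draws, a 5% misclassification rate, 12 balls classified azure:
pval = binom_pvalue_upper(12, n=100, p=0.05)
```

Reject the "all one color" null when this tail probability is below your chosen significance level.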
37,335
Are all balls in the urn the same color (when they can't be seen clearly)
I think I have the likelihood function (full disclosure: I am not 100% sure). Once you derive a likelihood, though, the rest of the hypothesis test should be easier. Suppose you drew a sample of size $n$ denoted as $(X_1,...,X_n)$. For simplicity, let's say: $$ X_i = \begin{cases} 1\;\;\mathrm{if \; classified \; as\; color}\;\;\mathbf{a}\\ 0\;\;\mathrm{if \; classified \; as\; color}\;\;\mathbf{r} \end{cases} $$ Further denote the "true" color indicator of observation $i$ as $X_i^*$ such that $$ X^*_i = \begin{cases} 1\;\;\mathrm{if \; observation \; is\; color}\;\;\mathbf{a}\\ 0\;\;\mathrm{if \; observation \; is\; color}\;\;\mathbf{r} \end{cases} $$ Suppose also that the error rate is known, $e_r \in (0,1)$. The probability of $X_i$ conditional on $X^*_i$ is then a Bernoulli distribution: $$ P(X_i=1|X^*_i,e_r)=\begin{cases} 1-e_r\;\;\mathrm{if}\;\;X^*_i=1\\ e_r\;\;\mathrm{if}\;\;X^*_i=0 \end{cases} $$ We can also express this as $$ P(X_i|X^*_i,e_r)=X^*_i\bigg[(1-e_r)^{X_i} e_r^{1-X_i}\bigg]+(1-X^*_i)\bigg[e_r^{X_i} (1-e_r)^{1-X_i}\bigg] $$ We also know the probability of $X^*_i$, $$ P(X^*_i|p) = p^{X^*_i}(1-p)^{1-X^*_i}, $$ and that $$ P(X_i|e_r,p)=P(X_i|X^*_i=1,e_r)P(X^*_i=1|p) + P(X_i|X^*_i=0,e_r)P(X^*_i=0|p), $$ and then after some algebra $$ P(X_i|e_r,p) = p\bigg[(1-e_r)^{X_i} e_r^{1-X_i}\bigg]+(1-p)\bigg[e_r^{X_i} (1-e_r)^{1-X_i}\bigg] $$ So your likelihood is $$ \mathcal{L}(p\mid X_1,..,X_n,e_r)=\prod_{i=1}^n P(X_i|e_r,p) $$ $$ = \prod_{i=1}^n p\bigg[(1-e_r)^{X_i} e_r^{1-X_i}\bigg]+(1-p)\bigg[e_r^{X_i} (1-e_r)^{1-X_i}\bigg] $$ Your hypothesis test reduces to $H_0: p=1$ vs $H_1: p\neq 1$. You can do that with a Bayes factor, with a standard error derived from the likelihood, or even via a parametric bootstrap, however you wish. Now that you have the likelihood, the rest should be easy.
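As a sketch of that last step, the mixture likelihood derived above can be maximized numerically over a grid of $p$ values. The sample and error rate below are hypothetical, chosen only to illustrate the mechanics.

```python
from math import prod

def likelihood(p, xs, e_r):
    """Likelihood from the answer: P(X_i | e_r, p) multiplied over the sample."""
    return prod(
        p * ((1 - e_r) if x == 1 else e_r) + (1 - p) * (e_r if x == 1 else 1 - e_r)
        for x in xs
    )

# Hypothetical sample: 8 of 10 draws classified as color a; known error rate 0.2.
xs = [1] * 8 + [0] * 2
e_r = 0.2

grid = [i / 1000 for i in range(1001)]  # candidate values for p
p_hat = max(grid, key=lambda p: likelihood(p, xs, e_r))
print(p_hat)  # 1.0: the MLE sits at the boundary, consistent with H0: p = 1
```

With these numbers the observed fraction of "a" classifications (0.8) equals $e_r + p(1-2e_r)$ at $p = 1$, so the grid maximum lands exactly on the boundary.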
37,336
Mathematical foundation of using MCMC in global optimization
Consider the Metropolis-Hastings algorithm with a target distribution $\pi(x) = e^{-\beta E(x)}$ where $E$ is a real valued function (historically the energy of the physical system); note that $\pi(x) = \pi(E(x))$. Let's see what the consequences of this choice are for the expected relative number of samples with a given energy $E$, $H(E)$: $$H(E) = \int \delta(E - E(x)) \pi(E(x)) P(x) dx = e^{-\beta E} P(E)/Z$$ where $P(E)$ is the marginal of $E$, given by $$P(E) = \int P(x,E) dx = \int \delta(E - E(x)) P(x)dx$$ and $Z(\beta) = \int e^{-\beta E} P(E) dE$ is the normalization constant. It is customary to write $P(E) = \exp(S(E))$ where $S$ is the entropy. This leads to $$H(E) \propto e^{-\beta E + S(E)}$$ and the maximum of $H(E)$ occurs at the $E^*$ that solves $\beta = dS/dE(E^*)$.(*) Under the assumption that $dS/dE(E^*)$ is monotonically decreasing, increasing $\beta$ decreases $E^*$. In particular, as $\beta$ approaches $\infty$ ($-\infty$), the distribution $H(E)$ approaches a Dirac delta at $E^*=E_\min$ ($E^*=E_\max$). When 1) $dS/dE(E^*)$ is not monotonically decreasing, this typically leads to an ergodicity breaking of the algorithm and other approaches are required. It can also happen that, even though $dS/dE(E^*)$ is monotonically decreasing, 2) the function $E(x)$ is very "rough" and the algorithm gets stuck in a particular local minimum for an arbitrarily long time. To avoid 2), simulated annealing is typically employed. To avoid 1) I'm not sure what is typically done; I often use flat-histogram methods for both 1) and 2). Two notes: Simulated annealing is not an MCMC method, because changing $\beta$ during the simulation makes the transition probability depend on $t$ and therefore makes the algorithm non-Markovian. MH with $\beta \rightarrow \infty$ does not converge to the target distribution because it violates ergodicity: once a given $E^*$ is achieved, any state with $E > E^*$ is unvisitable.
The tricky part, whether the global minimum is actually reached, requires analysing the convergence rate of the algorithm (e.g. polynomial, exponential), but I'm not familiar with these results. Maybe others can help on this. (*) It is not a coincidence that this is also a definition of the microscopic thermodynamic temperature.
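For illustration only, here is a minimal Python sketch of the Metropolis step with a slowly increasing $\beta$ (i.e. simulated annealing) on a toy double-well energy. The energy function, cooling schedule, and proposal width are all arbitrary choices made for this sketch.

```python
import math
import random

random.seed(1)

def E(x):
    # Toy double-well energy: global minimum near x = -1.26, local one near x = 1.18
    return x ** 4 - 3 * x ** 2 + x

x = 5.0                                 # start far from both wells
best_x, best_E = x, E(x)
for t in range(20_000):
    beta = 0.01 * (1 + t / 100)         # ad-hoc cooling schedule: beta grows over time
    x_new = x + random.gauss(0.0, 0.5)  # symmetric random-walk proposal
    dE = E(x_new) - E(x)
    if dE <= 0 or random.random() < math.exp(-beta * dE):
        x = x_new                       # Metropolis accept/reject
    if E(x) < best_E:
        best_x, best_E = x, E(x)

print(best_x, best_E)  # best_x lands near the global minimum at x = -1.26
```

Early on, $\beta$ is small and the chain roams across both wells; as $\beta$ grows, uphill moves are increasingly rejected and the chain concentrates in the basin it happens to occupy, which is exactly the ergodicity-breaking risk described above.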
37,337
Expressing conditional covariance matrix in terms of covariance matrix
This rule holds when the random variables are jointly normally distributed, but it does not apply more generally; i.e., for other joint distributions it might not hold. ​In a related answer here it is shown that the Mahalanobis distance can be decomposed as follows: $$\begin{equation} \begin{aligned} D^2 (\boldsymbol{x}, \boldsymbol{y}) &= \begin{bmatrix} \boldsymbol{x} - \boldsymbol{\mu}_X \\ \boldsymbol{y} - \boldsymbol{\mu}_Y \end{bmatrix}^\text{T} \begin{bmatrix} \boldsymbol{\Sigma}_{XX} & \boldsymbol{\Sigma}_{XY} \\ \boldsymbol{\Sigma}_{YX} & \boldsymbol{\Sigma}_{YY} \end{bmatrix}^{-1} \begin{bmatrix} \boldsymbol{x} - \boldsymbol{\mu}_X \\ \boldsymbol{y} - \boldsymbol{\mu}_Y \end{bmatrix} \\[6pt] &= \underbrace{(\boldsymbol{y} - \boldsymbol{\mu}_{Y|X})^\text{T} \boldsymbol{\Sigma}_{Y|X}^{-1} (\boldsymbol{y} - \boldsymbol{\mu}_{Y|X})}_\text{Conditional Part} + \underbrace{(\boldsymbol{x} - \boldsymbol{\mu}_X)^\text{T} \boldsymbol{\Sigma}_{XX}^{-1} (\boldsymbol{x} - \boldsymbol{\mu}_X)}_\text{Marginal Part}, \\[6pt] \end{aligned} \end{equation}$$ where we use the conditional mean vector and conditional variance matrix: $$\begin{align} \boldsymbol{\mu}_{Y|X} &\equiv \boldsymbol{\mu}_Y + \boldsymbol{\Sigma}_{YX} \boldsymbol{\Sigma}_{XX}^{-1} (\boldsymbol{x} - \boldsymbol{\mu}_X), \\[6pt] \boldsymbol{\Sigma}_{Y|X} \ &\equiv \boldsymbol{\Sigma}_{YY} - \boldsymbol{\Sigma}_{YX} \boldsymbol{\Sigma}_{XX}^{-1} \boldsymbol{\Sigma}_{XY}. 
\\[6pt] \end{align}$$ If the random vectors $\mathbf{X}$ and $\mathbf{Y}$ are jointly normally distributed, it follows that the conditional distribution of interest can be written as: $$\begin{equation} \begin{aligned} p(\boldsymbol{y} | \boldsymbol{x}, \boldsymbol{\mu}, \boldsymbol{\Sigma}) &\overset{\boldsymbol{y}}{\propto} p(\boldsymbol{x} , \boldsymbol{y} | \boldsymbol{\mu}, \boldsymbol{\Sigma}) \\[12pt] &= \text{N}(\boldsymbol{x}, \boldsymbol{y} | \boldsymbol{\mu}, \boldsymbol{\Sigma}) \\[10pt] &\overset{\boldsymbol{y}}{\propto} \exp \Big( - \frac{1}{2} D^2 (\boldsymbol{x}, \boldsymbol{y}) \Big) \\[6pt] &\overset{\boldsymbol{y}}{\propto} \exp \Big( - \frac{1}{2} (\boldsymbol{y} - \boldsymbol{\mu}_{Y|X})^\text{T} \boldsymbol{\Sigma}_{Y|X}^{-1} (\boldsymbol{y} - \boldsymbol{\mu}_{Y|X}) \Big) \\[6pt] &\overset{\boldsymbol{y}}{\propto}\text{N}(\boldsymbol{y} | \boldsymbol{\mu}_{Y|X}, \boldsymbol{\Sigma}_{Y|X}), \\[6pt] \end{aligned} \end{equation}$$ which establishes that $\boldsymbol{\Sigma}_{Y|X}$ is the conditional covariance matrix. Note again that this result depends on the assumption that the random vectors are jointly normally distributed. It can be regarded as a "first-order" approximation to the conditional covariance in other cases.
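As a small numerical illustration of the conditional covariance formula in the scalar case (the covariance entries below are made up):

```python
# Partitioned covariance of a scalar pair (X, Y):
#   Sigma = [[s_xx, s_xy], [s_yx, s_yy]]  with s_yx = s_xy.
# Made-up entries:
s_xx, s_xy, s_yy = 2.0, 1.2, 3.0

# Sigma_{Y|X} = s_yy - s_yx * s_xx^{-1} * s_xy
sigma_y_given_x = s_yy - s_xy * (1.0 / s_xx) * s_xy
print(sigma_y_given_x)  # 3.0 - 1.44/2.0 = 2.28

# Conditional mean at an observed x (taking mu_X = mu_Y = 0):
x = 1.0
mu_y_given_x = (s_xy / s_xx) * x
print(mu_y_given_x)  # 0.6
```

Note that the conditional variance 2.28 is smaller than the marginal variance 3.0, as conditioning on a correlated variable always removes some uncertainty (under joint normality).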
37,338
ML covariance estimation from Expectation-Maximization with missing data
It turns out the algorithm is rather simple. Starting out with initial estimates of $\mathbf{\mu}$ and $\boldsymbol{\Sigma}$:

1. Create a bias matrix $\mathbf{B}$ of dimension $nvar\times nvar$, initialized with zeros.
2. For each row (observation) with missing data, denote available indices $a$ and missing indices $m$. Given the current estimates of $\boldsymbol{\Sigma}$ and $\mathbf{\mu}$, impute missing values $\hat{\mathbf{y}}_m=\mathbf{\mu}_m+\boldsymbol{\Sigma}_{m,a}\boldsymbol{\Sigma}_{a,a}^{-1}(\mathbf{y}_a-\mathbf{\mu}_a)$, and update $\mathbf{B}_{m,m}$ with $\mathbf{B}_{m,m}+\boldsymbol{\Sigma}_{m,m}-\boldsymbol{\Sigma}_{m,a}\boldsymbol{\Sigma}_{a,a}^{-1}\boldsymbol{\Sigma}_{a,m}$.
3. Calculate $\mathbf{\mu}^{(i+1)}$ and $\boldsymbol{\Sigma}_{biased}$ from the newly imputed data, then adjust for bias: $\boldsymbol{\Sigma}^{(i+1)}=\boldsymbol{\Sigma}_{biased}+\mathbf{B}n^{-1}$.
4. Repeat 1-3 until convergence.

For restricted maximum likelihood, replace $n^{-1}$ with $(n-1)^{-1}$ in the covariance calculations.
require(norm)

dat <- matrix(rnorm(1000), ncol = 5)  # original data
nvar <- ncol(dat)
n <- nrow(dat)
nmissing <- 50
dat_missing <- dat
dat_missing[sample(length(dat_missing), nmissing)] <- NA
is_na <- apply(dat_missing, 2, is.na)  # index of NAs
dat_impute <- dat_missing              # data matrix for imputation

# set initial estimates to means from available data
for(i in 1:ncol(dat_impute)) dat_impute[is_na[,i], i] <- colMeans(dat_missing, na.rm = TRUE)[i]

# starting values for EM
means <- colMeans(dat_impute)
# NOTE: multiplying by (nrow-1)/nrow to get ML estimate,
# for comparability with norm package output
sigma <- cov(dat_impute) * (nrow(dat_impute) - 1) / nrow(dat_impute)

# get estimates from norm package for comparison
s <- prelim.norm(dat_missing)
e <- em.norm(s, criterion = 1e-32, showits = FALSE)

# carry out EM over 100 iterations
for(j in 1:100) {
  bias <- matrix(0, nvar, nvar)
  for(i in 1:n) {
    row_dat <- dat_missing[i,]
    avail <- which(!is.na(row_dat))
    if(length(avail) < nvar) {
      bias[-avail,-avail] <- bias[-avail,-avail] + sigma[-avail,-avail] -
        sigma[-avail,avail] %*% solve(sigma[avail,avail]) %*% sigma[avail,-avail]
      dat_impute[i,-avail] <- means[-avail] +
        (sigma[-avail,avail] %*% solve(sigma[avail,avail])) %*% (row_dat[avail] - means[avail])
    }
  }
  # get updated means and covariance matrix
  means <- colMeans(dat_impute)
  biased_sigma <- cov(dat_impute) * (n - 1) / n
  # correct for bias in covariance matrix
  sigma <- biased_sigma + bias / n
}

# compare results to norm package output
# compare means
max(abs(getparam.norm(s, e)[[1]] - means))
# compare covariance matrix
max(abs(getparam.norm(s, e)[[2]] - sigma))
37,339
Is a pair of null hypothesis and alternative hypothesis always complementary?
I have been having the same question, too. The part that confuses me is that in many places I saw that $H_0$ and $H_a$ are not complementary, e.g., $H_0$: $\mu$ = 0, $H_a$: $\mu$ < 0, rather than $H_0$: $\mu$ >= 0, $H_a$: $\mu$ < 0. After going through some searches, my current understanding is that $H_0$ and $H_a$ should be complementary for hypothesis testing to work (otherwise, $H_a$ is not the only alternative to $H_0$, and rejecting $H_0$ doesn't mean we can accept $H_a$), but we only need to calculate the test statistic under the assumption that the null hypothesis takes the equal sign (in the previous example, that is when $\mu$ = 0). You can refer to this lecture note. Below is my take at clarifying the point with an example. Clarification with an example: Suppose we want to test whether a coin is biased towards heads; then $H_a$ in this case should be "the probability of heads, $\theta$, is greater than 0.5". Since this is a one-sided test, $H_0$ should be "the probability of heads, $\theta$, is less than or equal to 0.5". In math expressions: $$ H_0: \theta <= 0.5\\ H_a: \theta > 0.5 $$ The next step is to: (1) suppose $H_0$ is true; (2) calculate the test statistic under that assumption, here the number of heads in 100 throws; (3) calculate the probability that the test statistic is at least as extreme as the observed value. In this case "extreme" means only values that are too large, since under the null hypothesis the coin is not biased towards heads. When $H_0$ is true, $\theta$ can be 0.5, 0.49, 0.3, or any value less than or equal to 0.5. If that's the case, doesn't that mean we should consider not one but many null distributions? My understanding is yes. However, we don't really need to compute all of them to determine what is extreme. We can illustrate this by plotting the density curves for a few different parameters.
library(ggplot2)
library(tidyr)  # for gather()
library(dplyr)  # for the %>% pipe

df.l1.e1 = data.frame(x1 = 1:100,
                      d1 = dbinom(1:100, 100, prob = 0.5),
                      d2 = dbinom(1:100, 100, prob = 0.45),
                      d3 = dbinom(1:100, 100, prob = 0.4),
                      d4 = dbinom(1:100, 100, prob = 0.35),
                      d5 = dbinom(1:100, 100, prob = 0.3)) %>%
  gather(key = "scenario", value = "density", -x1)

p1 = ggplot(df.l1.e1, aes(x1, density, color = scenario)) +
  geom_line() +
  geom_point() +
  geom_vline(xintercept = qbinom(1 - 0.05, 100, 0.5))
p1

The 5 curves represent 5 different density functions of a binomial distribution. They have the same sample size (100) but different probabilities of success. The vertical line shows the critical value (in this case 58) for the d1 scenario: under d1, the probability of getting 58 or more heads (i.e., a result at least as extreme as 58) is about 5%. As $\theta$ gets smaller, the density function moves more and more to the left. This means that the critical values for scenarios d2, d3, etc. will be smaller than 58; in other words, 58 is considered an extreme event under all these scenarios. Now we can see that if the test statistic is 58 or more, then no matter what value $\theta$ takes, as long as it is less than or equal to 0.5, the result is an extreme event. If the test statistic is, for example, 57, then we cannot reject $H_0$: although under the d2, d3, ... scenarios the result is extreme, it is not extreme under the d1 scenario. So, based on my current understanding, $H_0$ and $H_a$ should be complementary for a one-sided test, but we only need to calculate the test statistic under the assumption that the null hypothesis takes the equal sign.
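For reference, the critical value produced by R's qbinom(1 - 0.05, 100, 0.5) can be reproduced from the exact binomial CDF; here is a stdlib-only Python sketch of the same computation:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

# Smallest k with P(X <= k) >= 0.95 under Binomial(100, 0.5), matching the
# discrete-quantile convention that qbinom uses.
crit = next(k for k in range(101) if binom_cdf(k, 100, 0.5) >= 0.95)
print(crit)  # 58
```
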
37,340
Complexity Parameter in Decision Tree
This is answered in this rpart resource. From p. 25: For regression models (see next section) the scaled cp has a very direct interpretation: if any split does not increase the overall $R^2$ of the model by at least cp (where $R^2$ is the usual linear-models definition) then that split is decreed to be, a priori, not worth pursuing. The program does not split said branch any further, and saves considerable computational effort. That same page gives this formula for how the cp parameter affects calculation of a tree's risk: $$R_{cp}(T) \equiv R(T) + cp \times |T| \times R(T_1)$$ ($T_1$ here is a tree with no splits, and $|T|$ is the number of splits in the tree. The full formal definition of risk is outside the scope of your question, but for reference the definition is on p. 4.)
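As a toy illustration of the penalized-risk formula (all numbers below are made up):

```python
def penalized_risk(risk, n_splits, cp, risk_root):
    """R_cp(T) = R(T) + cp * |T| * R(T1): rpart's cost-complexity criterion."""
    return risk + cp * n_splits * risk_root

# Made-up numbers: root risk R(T1) = 100; a tree with 3 splits has risk R(T) = 70.
# With cp = 0.01 the complexity penalty is 0.01 * 3 * 100 = 3.
print(penalized_risk(70.0, 3, 0.01, 100.0))  # 73.0
```

Adding a fourth split would raise the penalty by cp * R(T1) = 1, so it is only worthwhile if it reduces R(T) by more than 1, which is the "increase $R^2$ by at least cp" reading quoted above.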
37,341
Distribution of the exponential of an exponential random variable
\begin{align} F(y) &= P(Y<y) \\ &= P(e^{aX}<y) \\ &= P(aX<\ln y) \\ &= P\left(X<\frac{\ln y}{a}\right) \quad \text{(assuming } a>0\text{)} \\ &=\int_0^{\frac{\ln y}{a}}\lambda e^{-\lambda x}\,dx \\ &= \left.-e^{-\lambda x}\right\vert_{0}^{\frac{\ln y}{a}}\\ &=1-y^{-\lambda/a} \end{align} Taking derivatives of both sides: $$f(y)=\frac{\lambda}{a}y^{-\lambda/a -1}, \qquad y \ge 1.$$ For $a>0$ this is a Pareto distribution with scale $1$ and shape $\lambda/a$. A Beta distribution on $0<y<1$ with $\beta=1$ arises instead when $a<0$: there the inequality flips, giving $F(y)=y^{-\lambda/a}$ and $f(y)=-\frac{\lambda}{a}\,y^{-\lambda/a-1}$, i.e. a Beta$(-\lambda/a,\,1)$ density.
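A quick Monte Carlo sanity check of the derived CDF (a sketch with arbitrary example values $\lambda=2$, $a=1.5$; for $a>0$ the support is $y \ge 1$):

```python
import math
import random

random.seed(0)
lam, a = 2.0, 1.5  # example values (assumed, not from the question)
n = 200_000

# Y = exp(a * X) with X ~ Exponential(lam)
samples = [math.exp(a * random.expovariate(lam)) for _ in range(n)]

y = 3.0
empirical = sum(s < y for s in samples) / n
theoretical = 1 - y ** (-lam / a)  # F(y) = 1 - y^(-lambda/a)
print(round(empirical, 3), round(theoretical, 3))  # the two should nearly agree
```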
37,342
Comprehensive list of combination techniques for word-level embeddings with pros/cons
The team around Bengio published an article a few years ago. Maybe this is good enough? Turian, J., Ratinov, L., & Bengio, Y. (2010). Word representations: a simple and general method for semi-supervised learning (pp. 384–394). Association for Computational Linguistics.
37,343
Estimating the parameter of a geometric distribution from a single sample
By definition, an estimator is a function $t$ mapping the possible outcomes $\mathbb{N}^{+} = \{1,2,3,\ldots\}$ to the reals. If this is to be unbiased, then--writing $q=1-p$--the expectation must equal $p = 1-q$ for all $q$ in the interval $[0,1]$. Applying the definition of expectation to the formula for the probabilities of a geometric distribution gives $$1-q = \mathbb{E}(t(X)) = \sum_{k=1}^\infty t(k) \Pr(X=k) = (1-q)\sum_{k=1}^\infty t(k) q^{k-1}.$$ For $q\ne 1$ we may divide both sides by $1-q$, revealing that the $t(k)$ are the coefficients of a convergent power series representation of the constant function $1$ on the interval $[0,1)$. Two such power series can be equal in that interval if and only if they agree term by term, whence $$t(k) = \begin{cases} 1 & k=1 \\ 0 & k \gt 1 \end{cases}$$ is the unique unbiased estimator of $p$.
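A simulation confirms the estimator's unbiasedness, since its expectation is $\Pr(X=1)=p$. A small Python sketch (arbitrary example value $p=0.3$):

```python
import math
import random

random.seed(1)
p = 0.3
n = 100_000

# geometric draws on {1, 2, 3, ...} via inverse transform:
# X = floor(ln(U) / ln(1 - p)) + 1 with U uniform on (0, 1]
draws = [math.floor(math.log(1.0 - random.random()) / math.log(1.0 - p)) + 1
         for _ in range(n)]

# the unique unbiased estimator: t(1) = 1, t(k) = 0 for k > 1
mean_estimate = sum(1.0 if k == 1 else 0.0 for k in draws) / n
print(round(mean_estimate, 3))  # should be close to p
```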
37,344
Estimating the parameter of a geometric distribution from a single sample
UPDATE. Re-writing my previous sloppy answer. There is no unbiased estimator for $p$; here is the proof. The estimator $\hat{p}=\frac{1}{k}$ is biased, but it's the best you can get in the sense of MLE or the method of moments. Here's the derivation in Math SE.
37,345
Mechanics behind deviation from random distribution
The problem is that you assumed a certain random distribution of IPD and it's not fitting the empirical distribution. So, the formulation of your question is a little confusing given the explanation you've given so far. The "deviation" is not from randomness, but of the empirical distribution from the assumed theoretical one. You generate locations $x_i\sim U(0,1000)$, where 0 and 1000 are bounds. Hence, the IPD is $\Delta x_i=|x_i-x_{i-1}|$. We can find the unconditional probability of a small IPD $$P(\Delta x_i<\varepsilon)$$ for any given small $\varepsilon>0$ as follows: $$P(\Delta x_i<\varepsilon)=\frac{\varepsilon}{500}-\frac{\varepsilon^2}{1{,}000{,}000}$$ This is a peculiar distribution. Here's its cumulative and density functions: The x-axis is IPD, and the y-axis is the cumulative (left) and density (right) probability function. As you can see, your choice of model (i.e. the randi function) implies that the probability of a small distance is quite high, much higher than of a large IPD. Your biological phenomenon is probably not fitting into this model. You've got to try some other model.
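The formula is easy to verify by simulation. A Python sketch (continuous uniforms standing in for MATLAB's randi; the threshold $\varepsilon=50$ is just an example):

```python
import random

random.seed(42)
n = 200_000
eps = 50.0  # example threshold

# distance between two independent positions drawn uniformly on [0, 1000]
deltas = [abs(random.uniform(0, 1000) - random.uniform(0, 1000)) for _ in range(n)]

empirical = sum(d < eps for d in deltas) / n
theoretical = eps / 500 - eps ** 2 / 1_000_000  # the formula above
print(round(empirical, 4), round(theoretical, 4))  # the two should nearly agree
```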
37,346
Machine learning for pattern recognition in realtime sensor data
I worked on a similar problem a few weeks ago to detect the right peaks in a noisy sensor. The most important task here is to select the right features. Slope seems like a good idea, maybe not using neighboring points but data points separated by a fixed distance (say 10 or 100, depending on your sample rate). If it's crucial to you to use machine learning tools, I would suggest to build a training set which you can use to train supervised models like Multilayer Perceptrons (neural network), SVM and so on. The training data needs to be labelled to be able to train supervised models. Here is what I did: Building the data set I found the approximate starting position of the signal I wanted to detect (in your case the first few data points inside the peak). Let's say they are at x_10, x_30 and x_53. I would then build a dataset where I select a few of the neighbor points, which would result in the dataset looking like this: x_{8}, x_{9}, x_{10}, x_{11}, x_{12} x_{28}, x_{29}, x_{30}, x_{31}, x_{32} x_{51}, x_{52}, x_{53}, x_{54}, x_{55} furthermore I would select a few random points in the data where I know there is no desired signal and would add those data to the dataset, let's say those observations are at x_14 and x_41. The final dataset would then look like this: x_{8}, x_{9}, x_{10}, x_{11}, x_{12} x_{28}, x_{29}, x_{30}, x_{31}, x_{32} x_{51}, x_{52}, x_{53}, x_{54}, x_{55} x_{12}, x_{13}, x_{14}, x_{15}, x_{16} x_{39}, x_{40}, x_{41}, x_{42}, x_{43} The corresponding label vector, telling the model the category of my data would then look like this: [ 1, 1, 1, 0, 0 ] which means that the first 3 rows belong to class 1 (desired) and the last two rows to class 0 (undesired). I then put my data into a Multilayer Perceptron with 3 layers (1 input, 1 hidden, 1 output) (with sigmoid-functions for non-linearity). That worked surprisingly well for me. 
Abandoning machine learning: Though the MLP worked fine, I ended up solving my problem with a few if-else rules using more powerful features like local variance, absolute value of the signal and a few more. The thresholds were manually tuned (because it was easier than building a large training dataset). I would hence suggest trying simple rules and different features. From the graphics you posted it seems that the signal is very clear, and hence tuning the thresholds is not super crucial if you have good features. A few ideas: - Slope (e.g. x_{i} - x_{i-T} where T > 0) - Large deviation from the local mean (test for x_{i} > Average(x_{i-T},...,x_{i-1}) + a*StandardDeviation(x_{i-T},...,x_{i-1}), where a could be 3 or higher, depending on how clear/extreme the peaks are) - Value of x_{i} (if the value in the peaks is the same in every peak. Doesn't seem to be the case in your example)
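The local-mean rule from the list above can be sketched in a few lines of Python (toy noise-free signal with two artificial peaks; the window size and threshold multiplier are illustrative):

```python
import math

# toy signal: flat baseline with sharp peaks at indices 30 and 70 (hypothetical data)
signal = [0.0] * 100
signal[30], signal[31], signal[70] = 5.0, 4.0, 6.0

def detect_peaks(x, window=10, a=3.0):
    """Flag x[i] when it exceeds the local mean by a * local standard deviation."""
    hits = []
    for i in range(window, len(x)):
        local = x[i - window:i]
        mean = sum(local) / window
        std = math.sqrt(sum((v - mean) ** 2 for v in local) / window)
        if x[i] > mean + a * std:
            hits.append(i)
    return hits

print(detect_peaks(signal))  # → [30, 70]: the leading edge of each peak
```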
37,347
How to build separate time series forecasts model for each of 3k customers?
You certainly don't want to build 3,000 separate models! Not only would that be computationally and administratively cumbersome, but it would also mean that each model only has data from one customer, and so you are ignoring data from other customers in each model. This would effectively mean that your predictions for a customer are based solely on their individual (small) dataset, and you are not using the (large) dataset from other customers to help with any aspect of your prediction. A much better approach than that monstrosity would be to formulate some kind of general time-series model that has an effect term for each customer. There are many ways this could be done, and the ultimate test is to see what model fits your data well, and makes good out-of-sample predictions. Here is an example of a simple model to get you started thinking about the possibilities. An example model: If you let $X_{i,t}$ be the log-revenue for customer $i$ at time $t$ you could formulate a simple Gaussian ARIMA model including customer-level mean and variance effects as: $$\phi(B) \Delta^d (X_{i,t} - \mu_i) = \theta(B) \sigma_i \varepsilon_t \quad \quad \quad \varepsilon_t \sim \text{IID N}(0,1),$$ where the AR and MA characteristic polynomials are: $$\phi(B) = 1 - \phi_1 B - ... - \phi_p B^p \quad \quad \quad \theta(B) = 1 + \theta_1 B + ... + \theta_q B^q.$$ As you can see, this is a standard Gaussian ARIMA model, but with each series varying with a different mean and variance parameter, for each customer. Once you have used the data to estimate the parameters you could then make predictions for an individual customer based on their estimated mean and variance in the series. Some customers give you more revenue, so they will have a higher mean. Some customers vary their revenue more, so they will have a higher variance. Nevertheless, other aspects of the model are estimated using the data from all the customers. 
It is important to note that there are many variations you could make to this model, such as using customer-level random effects, or a hidden-state process with another time-series process for the underlying mean for each customer. Really, there are all sorts of variations you could make, and you will need to see what fits your data. However, this kind of model has the advantage that all the data is used simultaneously to estimate the parameters, so the prediction for an individual customer still depends on all the data.
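As a toy illustration of the pooling idea (not the full ARIMA model above), here is a pure-Python sketch: each simulated customer gets its own mean $\mu_i$, the series are demeaned per customer, and a single shared AR(1) coefficient is estimated from all customers at once:

```python
import random

random.seed(0)
true_phi = 0.6  # shared AR(1) coefficient (assumed for the simulation)

# simulate 30 hypothetical customers, each an AR(1) around its own mean mu_i
customers = []
for _ in range(30):
    mu = random.uniform(5, 15)
    x, series = 0.0, []
    for _ in range(50):
        x = true_phi * x + random.gauss(0, 1)
        series.append(mu + x)
    customers.append(series)

# demean each customer, then pool all (lagged value, value) pairs into one fit
num = den = 0.0
for series in customers:
    m = sum(series) / len(series)
    z = [v - m for v in series]
    for t in range(1, len(z)):
        num += z[t - 1] * z[t]
        den += z[t - 1] ** 2
phi_hat = num / den
print(round(phi_hat, 2))  # a single pooled estimate, near true_phi
```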
37,348
How to build separate time series forecasts model for each of 3k customers?
I faced a very similar task at work. First I used an automated ARIMA function applied to each timeseries separately. It worked sufficiently fast for my purposes. Then I studied the properties of the timeseries by doing a big comparative analysis, using both ARIMA and a number of simpler techniques like linear models with preprocessed inputs, random walk, and random walk with a drift. I found that the simpler models work tens of times faster, but how well they work depends entirely on your data. In my case, most timeseries were not distinguishable from a random walk, so using the last value (or mean) as a forecast looked sane. Try linear models, they can be surprisingly fast and accurate.
37,349
How to build separate time series forecasts model for each of 3k customers?
I had a similar project where forecasting was required for 3000 Distributors for one of my clients. I used Automated ARIMA and looped it through clusters of Distributors by their type. I finally had to set up parallel processing to handle multiple Distributor types in parallel in order to increase efficiency, as the models needed to be trained once a month. It can be easily done using parallel packages in R and Rob Hyndman's forecast package.
37,350
R lmerTest and Tests of Multiple Random Effects
The documentation on the lmerTest::rand() function is definitely terse. From what I've been able to gather I think it tests the hypothesis that the random effect variation (i.e., a varying intercept (1 | Consumer) ) is significant versus the null that there is no between group-level variation, $H_0 : \sigma_{\alpha}^2 = 0$, where $\alpha_{j[i]} \sim N(\mu_\alpha, \sigma_\alpha^2)$ for $j = 1, \ldots, J$ is the group indicator. (I'm following Gelman & Hill's (2007) notation, see ch. 12). I'm no expert so the code confused me a bit. Specifically, I'm not clear on what the elimRandEffs function does but I'd guess it's converting $\alpha_{j[i]}$ to a fixed (that is pooled) term $\alpha$ and then comparing this to the original model. Hopefully someone with greater knowledge can clarify this. On the theoretical side, rand must be performing something like the test proposed in Lee and Braun 2012. However, unlike their extension to testing $0<r \leq q$ random effects at a time (sec 3.2), the rand output appears to test only one random effect at a time. A simpler summary of the same idea is in slides 10-12 found here. So your intuition that the "lmerTest is evaluating the random slope for 'sens2' [and] might also be the covariance between the slope and intercept" is correct in that rand tests if the random effect variances are significantly different from zero. But it's incorrect to say that "the test for the random intercept is not included". The RE in your first specification: (1 + sens2 | Consumer) assumes non-zero correlation between the intercept and slope, meaning they vary together and so rand() tests that specification against a no RE model (i.e., reducing to the classical regression). The second specification (1 | Consumer) + (0 + sens2 | Consumer) gives two lines of output because the random effects are additively separable. Here rand tests (in the 1st output row) a model with a pooled/fixed intercept with a random slope against your specification.
In the 2nd row, the test is against a pooled slope with a random intercept. So, like the step function, rand tests independent RE's one at a time. I'm still puzzled by inside-R.org note that Note If the effect has random slopes, then first the correlations between itercept [sic] and slopes are checked for significance That's not in the package documentation, so I don't know where it came from nor where such a test would be found in the output. EDIT I think I'm wrong about the null model in a correlated slope/intercept model as in the first specification. The step documentation says: in the random part if correlation is present between slope and intercept, the simplified model will contain just an intercept. That is if the random part of the initial model is (1+c|f), then this model is compared to (1|f) by using LRT. I imagine the same principle is at work for rand.
37,351
R lmerTest and Tests of Multiple Random Effects
The lmerTest documentation describes rand() as yielding "...a vector of Chi square statistics and corresponding p-values of likelihood ratio tests." Thus, I believe it is a likelihood ratio test. That is, simply, the comparison of a model with a given random effect to that same model without the random effect. In regards to the example, rand() compares the model with a random slope of sens2 within Consumer to a model with the random intercept of Consumer. Compute the two models: m <- lmer(Preference ~ sens2+Homesize+(1+sens2|Consumer), data=carrots, REML = FALSE) m2 <- lmer(Preference ~ sens2+Homesize+(1|Consumer), data=carrots, REML = FALSE) Examine the output of rand(m): rand(m) Analysis of Random effects Table: Chi.sq Chi.DF p.value sens2:Consumer 6.99 2 0.03 * Perform the likelihood ratio test comparing model m to model m2: anova(m, m2, test = "Chi") Df AIC BIC logLik deviance Chisq Chi Df Pr(>Chisq) m2 5 3751.4 3777.0 -1870.7 3741.4 m 7 3748.7 3784.5 -1867.4 3734.7 6.6989 2 0.0351 * (anova orders the models by increasing degrees of freedom, so the simpler m2 appears first.) Indeed, the anova() Chisq is slightly less than that of rand(m), but otherwise, the output is essentially identical. Perhaps my interpretation is inaccurate but I always assumed that this was how the rand() function generated its output.
37,352
Question on how to use EM to estimate parameters of this model
Let me recall the basics of the EM algorithm first. When looking for the maximum likelihood estimate of a likelihood of the form$$\int f(x,z|\beta)\text{d}z,$$ the algorithm proceeds by iteratively maximising (M) expected (E) complete log-likelihoods, which results in maximising (in $\beta$) at iteration $t$ the function $$Q(\beta|\beta_t)=\int \log f(x,z|\beta) f(z|x,\beta_t)\text{d}z$$ The algorithm must therefore start by identifying the latent variable $z$ and its conditional distribution. In your case it seems that the latent variable is $\varpi$, made of the $w_i$'s, while the parameter of interest is $\beta$. If you treat both $\beta$ and $\varpi$ as latent variables there is no parameter left to optimise. However, this also means that the prior on $\beta$ is not used. If we look more precisely at the case of $w_i$, its conditional distribution is given by$$f(w_i|x_i,y_i,\beta)\propto\sqrt{w_i}\exp\left\{-w_i(y_i-\beta^Tx_i)^2/2\sigma^2\right\}\times w_i^{a-1}\exp\{-bw_i\}$$ which qualifies as a $$\mathcal{G}\left(a+1/2,b+(y_i-\beta^Tx_i)^2/2\sigma^2\right)$$distribution. The completed log-likelihood being$$\sum_i \frac{1}{2}\left\{\log(w_i)- w_i(y_i-\beta^Tx_i)^2/\sigma^2\right\}$$ the part that depends on $\beta$ simplifies as$$-\sum_iw_i(y_i-\beta^Tx_i)^2/2\sigma^2$$and the function $-Q(\beta|\beta_t)$ is proportional to \begin{align*}\mathbb{E}\left[\sum_iw_i(y_i-\beta^Tx_i)^2\Big|X,Y,\beta_t\right]&=\sum_i\mathbb{E}[w_i|X,Y,\beta_t](y_i-\beta^Tx_i)^2\\&=\sum_i\frac{a+1/2}{b+(y_i-\beta_t^Tx_i)^2/2\sigma^2}(y_i-\beta^Tx_i)^2\end{align*} Maximising this function in $\beta$ amounts to a weighted linear regression, with weights $$\frac{a+1/2}{b+(y_i-\beta_t^Tx_i)^2/2\sigma^2}$$
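To make the E-step/M-step alternation concrete, here is a minimal numerical sketch in Python (hypothetical data and hyperparameters, one covariate plus intercept, and $\sigma^2$ taken as known). The E-step plugs in the posterior mean of each $w_i$; the M-step is the weighted least squares fit in closed form:

```python
import random, math

random.seed(0)

# Hypothetical setup: one covariate with intercept, sigma^2 known, w_i ~ Gamma(a, rate=b)
a, b, sigma2 = 2.0, 2.0, 1.0
n = 400
beta0_true, beta1_true = 1.0, 2.0

x = [random.gauss(0, 1) for _ in range(n)]
w_true = [random.gammavariate(a, 1.0 / b) for _ in range(n)]
y = [beta0_true + beta1_true * xi + random.gauss(0, math.sqrt(sigma2 / wi))
     for xi, wi in zip(x, w_true)]

beta0, beta1 = 0.0, 0.0
for _ in range(50):
    # E-step: posterior mean of each w_i given the current beta
    resid = [yi - beta0 - beta1 * xi for xi, yi in zip(x, y)]
    wts = [(a + 0.5) / (b + ri ** 2 / (2 * sigma2)) for ri in resid]
    # M-step: weighted least squares (closed form for a single covariate)
    sw = sum(wts)
    xbar = sum(wi * xi for wi, xi in zip(wts, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(wts, y)) / sw
    beta1 = (sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(wts, x, y))
             / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(wts, x)))
    beta0 = ybar - beta1 * xbar

print(beta0, beta1)   # should land near the true values 1.0 and 2.0
```

This is just the iteration described above; with a vector of covariates the M-step becomes a full weighted regression solve.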
37,353
How to obtain the quantile function when an analytical form of the distribution is not known
$\DeclareMathOperator*{\med}{med}$The median is the point that minimizes the expected $L^1$ distance: $$\med_Z f(Z) = \arg\min_m E_z|f(Z) - m|$$ Hence we can simplify your expression: $$\begin{equation}\med_{z_1 \sim F} \med_{z_2 \sim F} |z_1 - z_2| \\ = \arg\min_{m_1}E_{z_1 \sim F}\left| m_1 - \arg\min_{m_2} E_{z_2 \sim F}\left| m_2 - \left|z_1 - z_2\right|\right|\right| \end{equation}$$ I think this is a bilevel optimization problem, which I don't know too much about but perhaps there are standard techniques you can apply. Then again, it might not be any faster than just calculating the sample median of medians for larger samples until convergence.
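That last fallback can be sketched directly. Here is a Python simulation (illustrative; $F$ taken as the standard normal purely as an example) of the sample median of medians, estimating $\operatorname{med}_{z_1}\operatorname{med}_{z_2}|z_1 - z_2|$ by nesting two Monte Carlo medians:

```python
import random, statistics

random.seed(1)

def inner_median(z1, m=2001):
    # median over z2 ~ F of |z1 - z2|, estimated by simulation
    return statistics.median(abs(z1 - random.gauss(0, 1)) for _ in range(m))

# outer median over z1 ~ F of the inner medians
outer = statistics.median(inner_median(random.gauss(0, 1)) for _ in range(501))
print(outer)   # for the standard normal this lands around 0.84
```

The nested loops make the cost quadratic in the sample sizes, which is exactly the motivation for seeking something cleverer than brute force.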
37,354
How to obtain the quantile function when an analytical form of the distribution is not known
A straightforward data-driven approach to estimating the quantile function consists in:
bootstrapping your observations to generate many more values than are in your original sample (especially values beyond the range of the initial limited sample). A good strategy is to use a smoothed bootstrap simulation scheme to avoid the main limitations of the basic nonparametric bootstrap. This is equivalent to simulating from a Kernel Density Estimate.
from this, you can get the empirical Cumulative Distribution Function (CDF) of the simulated values (ecdf function in R). The inverse of the CDF is nothing else than the quantile function (quantile function in R).
See here to get the values and plot your quantile function. You can even get confidence bands. A prerequisite, though, is that your sample features enough observations to at least get a good idea of the shape of your underlying PDF.
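The smoothed bootstrap amounts to resampling with replacement and adding Gaussian kernel noise, then reading off empirical quantiles. A Python sketch of the idea (the data here are simulated as a stand-in for an observed sample; the bandwidth uses Silverman's rule of thumb):

```python
import random, statistics

random.seed(42)
data = [random.gauss(10, 2) for _ in range(100)]   # stand-in for the observed sample

# Smoothed bootstrap: resample with replacement, then jitter with a Gaussian kernel
h = 1.06 * statistics.stdev(data) * len(data) ** (-1 / 5)   # Silverman's rule of thumb
sim = [random.choice(data) + random.gauss(0, h) for _ in range(20000)]

def quantile(values, p):
    # empirical quantile: invert the ECDF of the simulated values
    s = sorted(values)
    return s[int(p * (len(s) - 1))]

print(quantile(sim, 0.5), quantile(sim, 0.95))
```

Bootstrapping the whole procedure (repeating it on resampled data) is what yields the confidence bands mentioned above.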
37,355
How to obtain the quantile function when an analytical form of the distribution is not known
So, I think that the best way to obtain $$\text{med}_{Z\sim F} H(Z)$$ is to:
1. compute the entries of the $n$-vector $\{H(z_i)\}_{i=1}^n$ of values of $H(z_i)$ corresponding to a grid of $n$ values $\{z_i\}_{i=1}^n$ placed uniformly on $(F_Z^{-1}(\epsilon),F_Z^{-1}(1-\epsilon))$;
2. compute the weighted median of $\{H(z_i)\}_{i=1}^n$ with weights $F_Z^\prime(z_i)$.
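The two steps can be sketched in Python (illustrative choices: $F$ standard normal, $H(z)=z^2$, so the answer should approach the chi-square(1) median; on a uniform grid the constant spacing cancels out of the weighted median):

```python
from statistics import NormalDist

F = NormalDist()              # F = standard normal, purely as an illustration
H = lambda z: z ** 2          # hypothetical transformation H

eps, n = 1e-4, 10001
lo, hi = F.inv_cdf(eps), F.inv_cdf(1 - eps)
grid = [lo + (hi - lo) * i / (n - 1) for i in range(n)]

# step 1: values H(z_i) paired with weights F'(z_i)
pairs = sorted((H(z), F.pdf(z)) for z in grid)

# step 2: weighted median = first value where cumulative weight crosses half the total
total = sum(w for _, w in pairs)
acc, weighted_median = 0.0, pairs[-1][0]
for v, w in pairs:
    acc += w
    if acc >= total / 2:
        weighted_median = v
        break

print(weighted_median)   # median of Z^2, i.e. the chi-square(1) median, about 0.455
```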
37,356
Resources for the "ah ha" moment when learning Bayes' theorem
I think it's hard to beat Cohen's classic 1994 paper in the American Psychologist, "The earth is round (p < .05)". You can always find it somewhere on Google, e.g. http://ist-socrates.berkeley.edu/~maccoun/PP279_Cohen1.pdf. He includes a brilliant example based on a medical screening test, and discusses the implications for interpreting p-values.
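The screening-test flavour of "ah ha" is easy to reproduce numerically. This Python sketch uses illustrative numbers (not Cohen's exact figures): even with a sensitive test, a low base rate makes most positives false positives.

```python
# Illustrative numbers (not Cohen's exact figures): a rare condition, a decent test
prevalence = 0.02
sensitivity = 0.95        # P(positive | disease)
false_positive = 0.05     # P(positive | no disease)

# Bayes' theorem: P(disease | positive)
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(round(p_disease_given_positive, 3))   # 0.279
```

Despite the 95% sensitivity, fewer than 3 in 10 positives actually have the condition here; that reversal is usually the "ah ha" moment.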
37,357
Review of Box-Jenkins methodology
"Data preparation (identification and Difference data to obtain stationary series)". Non-stationarity may be the symptom while the cause may be a simple change in the mean, a simple change in trend, a simple change in parameters, or a simple change in error variance. Alternatively/conversely, an unusual value (pulse) will inflate the variance more than the covariance; thus the acf will be downward biased, possibly yielding false conclusions about non-existent ARIMA structure. Either way, your design does not follow the flow charts presented in your reference.
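The pulse-induced bias in the acf is easy to see in a quick simulation. This Python sketch (illustrative only) simulates an AR(1) series and then injects a single large pulse; the lag-1 sample autocorrelation collapses toward zero because the pulse inflates the variance in the denominator:

```python
import random

random.seed(3)

# Simulate an AR(1) series with phi = 0.8
n, phi = 400, 0.8
x = [0.0]
for _ in range(n - 1):
    x.append(phi * x[-1] + random.gauss(0, 1))

def acf1(series):
    # lag-1 sample autocorrelation
    m = sum(series) / len(series)
    num = sum((series[t] - m) * (series[t + 1] - m) for t in range(len(series) - 1))
    den = sum((v - m) ** 2 for v in series)
    return num / den

clean = acf1(x)
pulsed = x[:]
pulsed[n // 2] += 50          # one unmodelled pulse (outlier)
contaminated = acf1(pulsed)
print(round(clean, 2), round(contaminated, 2))
```

With the pulse left untreated, the deflated acf could easily be misread as weak or absent AR structure.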
37,358
What are hot research topics for PhD dissertation in Biostatistics?
Leah Welty, Emerging Trends, 2013 Davidian, Cutting Edge: Emerging trends in biostatistics, 2012 Modern Issues and Methods in Biostatistics, Springer, 2011
37,359
What are the various basic kernels available?
Neural networks are not a kernel; they are a learning algorithm. Plenty of kernel functions exist, such as:
sigmoid, popular in the early days of kernel methods due to their connection with neural networks; not really used heavily now
Tanimoto/Jaccard/diffusion, popular for binary features
tree/graph kernels, popular in natural language processing
histogram kernel, popular in image processing -- essentially it's a very fast approximation to the RBF kernel
The right kernel depends very much on the nature of the data. Often the best kernel is a custom-made one, particularly in bioinformatics. The Gaussian/RBF and linear kernels are by far the most popular ones, followed by the polynomial one.
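Several of these kernels are one-liners on feature vectors. A Python sketch of four of them (illustrative definitions; the histogram kernel here is the common intersection form for nonnegative histograms):

```python
import math

def rbf(x, y, gamma=1.0):
    # Gaussian/RBF kernel: exp(-gamma * ||x - y||^2)
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def sigmoid(x, y, alpha=1.0, c=0.0):
    # sigmoid kernel: tanh(alpha * <x, y> + c); not positive definite in general
    return math.tanh(alpha * sum(a * b for a, b in zip(x, y)) + c)

def tanimoto(x, y):
    # Tanimoto/Jaccard kernel for binary feature vectors: |intersection| / |union|
    both = sum(1 for a, b in zip(x, y) if a and b)
    either = sum(1 for a, b in zip(x, y) if a or b)
    return both / either if either else 1.0

def hist_intersection(x, y):
    # histogram (intersection) kernel for nonnegative histograms
    return sum(min(a, b) for a, b in zip(x, y))

print(rbf([0, 0], [0, 0]), tanimoto([1, 0, 1], [1, 1, 0]))   # 1.0 0.3333...
```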
37,360
Cox discrete time regression model question
The answer lies in the $dt$ terms in Cox's equation 21, which your question omitted. Cox says that in the continuous case equation 21, $$\frac{\lambda(t; z)\,dt}{1 - \lambda(t; z)\,dt} = e^{z\beta} \frac{\lambda_0(t)\,dt}{1 - \lambda_0(t)\,dt},$$ reduces to equation 9, $$\lambda(t; z) = e^{z\beta} \lambda_0(t).$$ Cox notes that "in discrete time $\lambda_0(t; z) dt$ is a non-zero probability"--in particular, it's the probability of an event occurring between $t$ and $t + dt$, given survival up to time $t$. The continuous-time case corresponds to the limit in which you consider infinitesimally small discrete time slices, that is, $dt \to 0$. In that case, the $\lambda(t; z)\,dt$ and $\lambda_0(t)\,dt$ terms in the denominators go to zero, so the denominators become 1, and then canceling the $dt$ in the numerators yields equation 9.
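The limit can be checked numerically. This Python sketch (illustrative parameter values) solves the discrete odds relation for the hazard at several values of $dt$ and watches it converge to the proportional hazards form $e^{z\beta}\lambda_0$:

```python
import math

lam0, beta, z = 0.5, 0.7, 1.0   # baseline hazard, coefficient, covariate (illustrative)

def discrete_hazard(dt):
    # Solve equation 21 for lambda(t; z) given lambda_0(t) dt
    odds0 = lam0 * dt / (1 - lam0 * dt)
    odds = math.exp(z * beta) * odds0          # e^{z beta} times the baseline odds
    return odds / (1 + odds) / dt              # back from a probability to a rate

for dt in (0.1, 0.01, 0.001):
    print(dt, discrete_hazard(dt))
# as dt -> 0 this approaches exp(z * beta) * lam0
```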
37,361
Using random forest for survival analysis with time varying covariates
Although there is structure to your data, it may be that the variation in baseline risk does not vary substantially enough among your subjects to cause a model without a frailty term to form poor predictions. Of course, it's perfectly possible that a model with a frailty term would perform better than the pooled random forest model. Even if you did run a pooled and hierarchical model and the pooled model did as well or slightly better, you may still want to use the hierarchical model because the variance in baseline risk is very likely NOT zero among your subjects, and the hierarchical model would probably perform better in the long term on data that was in neither your test nor your training sets. As an aside, consider whether the cross validation score you are using aligns with the goals of your prediction task in the first place before comparing pooled and hierarchical models. If your goal is to make predictions on the same group of individuals as in your test/training data, then simple k-fold or leave-one-out cross validation on the response is sufficient. But if you want to make predictions about new individuals, you should instead do k-fold cross validation that samples at the individual level. In the first case you are scoring your predictions without regard for the structure of the data. In the second case you are estimating your ability to predict risk within individuals that are not in your sample. Lastly, remember always that CV is itself data dependent, and only an estimate of your model's predictive capabilities.
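Sampling folds "at the individual level" just means assigning whole subjects, not rows, to folds, so every record from a subject lands in the same fold. A Python sketch with hypothetical long-format records (one tuple per subject-period):

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical long-format records: (subject_id, feature, outcome), 5 rows per subject
records = [(sid, random.random(), random.random()) for sid in range(30) for _ in range(5)]

def subject_level_folds(records, k=5):
    # Assign whole subjects to folds so no subject straddles train and test
    subjects = sorted({r[0] for r in records})
    random.shuffle(subjects)
    fold_of = {s: i % k for i, s in enumerate(subjects)}
    folds = defaultdict(list)
    for r in records:
        folds[fold_of[r[0]]].append(r)
    return [folds[i] for i in range(k)]

folds = subject_level_folds(records)
seen = [set(r[0] for r in f) for f in folds]
print(all(seen[i].isdisjoint(seen[j]) for i in range(5) for j in range(i + 1, 5)))  # True
```

Holding out fold i and training on the rest then estimates performance on genuinely new individuals.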
37,362
Using random forest for survival analysis with time varying covariates
I am just starting to work with Random Forest but I believe it is mainly the bagging that needs to be iid for each tree and the subset of features selection at each node. I am unaware of any formal constraints on the data itself. Why it works so well on your data I can not say until I have investigated your data. But the non-iid'ness of your features will not influence performance too much.
37,363
High mutual information = (almost) deterministic relationship?
You’re on the right track, but I am hesitant to go that far. Consider variables forming a perfect “X” scatterplot. In that case, given one of the variable values, you can narrow it down to two possibilities for the other value. That is, knowing the value of one variable gives you tremendous information about the value of the second variable, thus, high mutual information between the two variables. However, those two possible values could be quite far apart, so I am hesitant to go as far as to consider the relationship (nearly) deterministic, even if there is technically a way to write $\epsilon$ to accommodate the uncertainty about which of the two possible values the other variable would take. The way I think about mutual information is that knowing the value of one of the marginal variables tightens up your sense of where the value of the other marginal variable would be. In my example, instead of just knowing the value to be somewhere on the infinite number of values on the “X”, you know that it is one of two values.
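A discrete rendition of the “X” example makes this concrete. This Python sketch (illustrative construction) takes $X$ uniform on a few values and $Y = \pm X$ with equal probability; the mutual information is large, yet knowing $X$ still leaves one full bit of uncertainty (the sign), so $Y$ is not determined by $X$:

```python
import math

# Discrete "X" scatterplot: X uniform on {1,2,3,4}, Y = +/- X equally likely
xs = [1, 2, 3, 4]
joint = {(x, s * x): 1 / 8 for x in xs for s in (1, -1)}

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

px, py = {}, {}
for (x, y), p in joint.items():
    px[x] = px.get(x, 0) + p
    py[y] = py.get(y, 0) + p

mi = entropy(px) + entropy(py) - entropy(joint)
print(mi)   # 2.0 bits: H(Y) = 3 bits, but knowing X still leaves 1 bit (the sign)
```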
37,364
High mutual information = (almost) deterministic relationship?
In addition to the answer by @Dave, I will give an example showing that a deterministic (or one-to-one) relationship is not always possible. I will use a categorical data example, so the joint distribution can be given by a two-way table. But first, one formula for the mutual information is $$ I(X, Y) = H(X) + H(Y) - H(X, Y) $$ where $H$ is the Shannon entropy, with one argument the marginal, with two arguments the joint. If $X, Y$ are independent, then $H(X,Y)= H(X)+H(Y)$, so then the mutual information is zero, the minimum possible value. But you ask for maximizing mutual information, so intuitively, to go as far away from independence as possible. And we cannot get further away from independence than a one-to-one relationship. So, to maximize mutual information (say, with the two marginals given) we must minimize $H(X, Y)$. First, one example where a 1-1 relationship is possible: row marginals 0.5, 0.5 and column marginals 0.5, 0.5. This shows only the two marginals. There are multiple joints that fulfill the conditions; just fill in two cells with 0.5 and two with 0, with one 0 in each row and each column. Then an example where it is impossible to fill in a solution which is 1-1: row marginals 0.2, 0.2, 0.2, 0.2, 0.2 and column marginals 0.4, 0.6 (five row categories cannot map one-to-one onto two column categories).
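The first example can be checked numerically from the formula above. A Python sketch comparing a one-to-one filling of the 0.5/0.5 marginals with the independent filling:

```python
import math

def entropy(ps):
    return -sum(p * math.log2(p) for p in ps if p > 0)

def mutual_info(joint):
    # joint is a matrix of cell probabilities; I = H(X) + H(Y) - H(X, Y)
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    cells = [p for row in joint for p in row]
    return entropy(px) + entropy(py) - entropy(cells)

one_to_one = [[0.5, 0.0], [0.0, 0.5]]      # a 1-1 filling of the 0.5/0.5 marginals
independent = [[0.25, 0.25], [0.25, 0.25]]  # the independent filling, same marginals
print(mutual_info(one_to_one), mutual_info(independent))   # 1.0 0.0
```

The one-to-one table achieves the maximum $I = H(X) = H(Y) = 1$ bit, while independence gives 0, matching the argument above.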
37,365
Joint Models vs the 'usual' time-dependent Cox regression for time-varying predictors
The major breakthrough of the joint models relative to the time-dependent Cox model is that they allow one to deal with measurement error in the time-dependent variables (the longitudinal variable in this case). In a Cox model with time-dependent covariates we assume that the variables are measured without error. Some references: Tsiatis, A. A. and M. Davidian (2004). Joint modeling of longitudinal and time-to-event data: An overview. Statistica Sinica 14 (3), 809-834. Rizopoulos, D. (2012). Joint Models for Longitudinal and Time-to-Event Data With Applications in R. Chapman and Hall/CRC. Henderson, R., P. Diggle, and A. Dobson (2000, Dec). Joint modelling of longitudinal measurements and event time data. Biostatistics 1 (4), 465-480.
37,366
Joint Models vs the 'usual' time-dependent Cox regression for time-varying predictors
The major problem is that they are time consuming (computationally speaking) and the software is not yet very friendly. There are several approaches, and you can let all covariates be time-dependent, but in that case you should consider fitting a multivariate longitudinal model first. In addition, for each situation you have to think about what is the best thing to share between the longitudinal and survival models - sometimes it is the random effects (if you use a mixed-effects model for the longitudinal data) and sometimes the expected value of the longitudinal variable at the event time. Sometimes it is some other characteristic of the longitudinal trajectory, like the variance, the cumulative effect of the longitudinal data, and so on. Joint models should be adapted to each situation and to each type of data. It is not mandatory that the longitudinal model for HIV data be the same as for PSA, for example.
37,367
How does explaining away cause problems for learning?
The only problem I can see from such a "V structure" is that you have two "causes" - this means that there may be difficulty untangling the effect of each "parent". In terms of your example, it could be the case that you observe "earthquake" and "burglary" together (both "on" or both "off"). So you would get a good estimate for the value $P(alarm|earthquake \cup burglary)$ but not the marginal effects. This is the equivalent of multicollinearity in linear regression. If you are using the vector $(e_i, b_i, a_i)$ ($e$ for earthquake, etc.) and assuming a binomial model, your probability for the $i$th data point is $p_i = e_i\theta_e + b_i\theta_b$. The probabilities in question are $Pr(alarm|earthquake) = \theta_e + Pr(burglary|earthquake)\,\theta_b$ and $Pr(alarm|burglary) = \theta_b + Pr(earthquake|burglary)\,\theta_e$. If you only ever observe them together ($e_i = 1, b_i = 1$) or ($e_i = 0, b_i = 0$), then both probabilities equal $\theta_b + \theta_e$. This means that the data are unhelpful for deciding which variable to keep.
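A quick numerical sketch of that last point, assuming the linear-in-parameters Bernoulli model above (the tiny dataset and the parameter values are invented for illustration): when the earthquake and burglary indicators always coincide, the log-likelihood depends on $\theta_e$ and $\theta_b$ only through their sum, so any two parameter pairs with the same sum are indistinguishable.

```python
from math import log

def log_lik(theta_e, theta_b, data):
    """Bernoulli log-likelihood under the linear model p_i = e_i*theta_e + b_i*theta_b.
    data is a list of (e, b, a) triples."""
    ll = 0.0
    for e, b, a in data:
        p = e * theta_e + b * theta_b
        ll += log(p) if a == 1 else log(1 - p)
    return ll

# Earthquake and burglary always observed together (both on or both off);
# with (e, b) = (0, 0) the model gives p = 0, so those rows must have a = 0.
data = [(1, 1, 1), (1, 1, 1), (1, 1, 0), (0, 0, 0), (0, 0, 0)]

# Any split of the same total theta_e + theta_b = 0.7 gives the same likelihood:
print(log_lik(0.5, 0.2, data))
print(log_lik(0.1, 0.6, data))
print(log_lik(0.35, 0.35, data))
```

All three calls print (up to floating-point noise) the same value, so maximizing the likelihood cannot pick one split over another - the analogue of perfect collinearity.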
37,368
MLE vs MAP vs conditional MLE with regards to logistic regression
For MLE and MAP you are right. "Conditional MLE" is another way of saying "MLE in a conditional model". Logistic regression is a conditional model in the sense that $\theta$ only controls $P(Y|X)$ (it has no effect on $P(X)$). Therefore the MLE for logistic regression is a conditional MLE. If you have a model in which $\theta$ affects $P(X)$, then it is no longer a conditional model. The MLE in such a model cannot be regarded as a conditional MLE. A discriminative model is the same thing as a conditional model. The Wikipedia pages for generative model and discriminative model do a reasonably good job of explaining this; however, the definitions there do not correctly handle the case where $P(X)$ exists but does not depend on $\theta$. Wikipedia would say that such a model is generative, even though it would behave in every way like a discriminative model. I would regard such a model as discriminative. For more explanation, see Discriminative models, not discriminative training.
37,369
How exactly is the sum (or mean) centering constraint for splines (also w.r.t. gam from mgcv) done?
Here is a simpler example using the link from Nemo. The question I answer is

How exactly is the sum (or mean) centering constraint for splines (also w.r.t. gam from mgcv) done?

I answer this as this is the title and as

My question is: Even though the fit is very similar, why do my constrained B-Spline-columns differ from what gam provides? What did I miss?

is rather unclear, for reasons I give at the end. Here is the answer to the above question:

# simulate data
library(splines)
set.seed(100)
n <- 1000
x <- seq(-4, 4, length.out = n)
df <- expand.grid(d = factor(c(0, 1)), x = x)
df <- cbind(y = sin(x) + rnorm(length(df), 0, 1), df)
x <- df$x

# we start the other way around and find the knots `mgcv` uses,
# to make sure we have the same knots...
library(mgcv)
mod_gam <- gam(y ~ s(x, bs = "ps", k = 7), data = df)
knots <- mod_gam$smooth[[1]]$knots

# find the constrained basis as the OP describes
X <- splineDesign(knots = knots, x)
C <- rep(1, nrow(X)) %*% X
qrc <- qr(t(C))
Z <- qr.Q(qrc, complete = TRUE)[, (nrow(C) + 1):ncol(C)]
XZ <- X %*% Z
rep(1, nrow(X)) %*% XZ # all ~ zero, as they should be
#R              [,1]          [,2]          [,3]          [,4]          [,5]          [,6]
#R [1,] 2.239042e-13 -2.112754e-13 -3.225198e-13 -6.993017e-14 -2.011724e-13 -3.674838e-14

# now we get roughly the same basis
all.equal(model.matrix(mod_gam)[, -1], XZ, check.attributes = FALSE)
#R [1] TRUE

# if you want to use a binary by variable
mod_gam <- gam(y ~ s(x, bs = "ps", k = 7, by = d), data = df)
all.equal(
  model.matrix(mod_gam)[, -1],
  cbind(XZ * (df$d == 0), XZ * (df$d == 1)),
  check.attributes = FALSE)
#R [1] TRUE

You can do better in terms of computation speed than explicitly computing

Z <- qr.Q(qrc, complete = TRUE)[, (nrow(C) + 1):ncol(C)]
XZ <- X %*% Z

as described on page 211 of Wood, Simon N., Generalized Additive Models: An Introduction with R, Second Edition (Chapman & Hall/CRC Texts in Statistical Science). CRC Press.
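The QR construction above is not specific to R. For the single sum-to-zero constraint $C\beta = 0$ with $C = \mathbf{1}^\top X$, one Householder reflection suffices: columns $2..k$ of the reflector that maps $C^\top$ onto the first coordinate axis span the null space of the constraint. Here is a hedged pure-Python sketch of that idea (the small matrix X is made up for illustration; in practice it would be a spline design matrix):

```python
from math import sqrt

def householder_nullspace(c):
    """For a single constraint c' * beta = 0, return a k x (k-1) matrix Z whose
    columns span the null space of c: columns 2..k of the Householder reflector
    H = I - 2 v v' / (v'v) that maps c to a multiple of e_1."""
    k = len(c)
    norm = sqrt(sum(ci * ci for ci in c))
    v = list(c)
    v[0] += norm if c[0] >= 0 else -norm  # sign choice avoids cancellation
    vtv = sum(vi * vi for vi in v)
    return [[(1.0 if i == j else 0.0) - 2.0 * v[i] * v[j] / vtv
             for j in range(1, k)] for i in range(k)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# Toy "basis" matrix X (4 observations, 3 columns) standing in for a spline design
X = [[1.0, 0.2, 0.1],
     [0.5, 0.9, 0.3],
     [0.2, 0.5, 1.1],
     [0.1, 0.1, 0.8]]

c = [sum(col) for col in zip(*X)]   # C = 1'X, the sum-to-zero constraint
Z = householder_nullspace(c)
XZ = matmul(X, Z)                   # constrained basis
col_sums = [sum(col) for col in zip(*XZ)]
print(col_sums)                     # all ~ 0, as in the R output above
```

Since $\mathbf{1}^\top X Z = C Z = 0$ by construction, each column of the constrained basis sums to (numerically) zero, which is exactly what the R check `rep(1, nrow(X)) %*% XZ` verifies.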
There are some issues in the OP's code:

# drawing the sequence
n <- 100
x <- seq(-4, 4, length.out = n)
z <- seq(-4, 4, length.out = n)
d <- as.factor(0:1)
library(data.table) # OP did not load the library
data <- CJ(x = x, z = z, d = d)
set.seed(100)
# setting up the model
data[, y := # OP only simulates n random terms -- there are 20000 rows
  sin(x + I(d == 0)) + sin(x + 4 * I(d == 1)) +
  I(d == 0) * z^2 + 3 * I(d == 1) * z^2 + rnorm(n, 0, 1)]

# creating the uncentered B-spline basis for x and z
X <- data[, spline(x, min(x), max(x), 5, 2, by = d, intercept = FALSE)] # gets an error
#R Error in spline(x, min(x), max(x), 5, 2, by = d, intercept = FALSE) :
#R   unused arguments (by = d, intercept = FALSE)

str(formals(spline)) # here are the formals for `stats::spline`
#R Dotted pair list of 8
#R  $ x     : symbol
#R  $ y     : NULL
#R  $ n     : language 3 * length(x)
#R  $ method: chr "fmm"
#R  $ xmin  : language min(x)
#R  $ xmax  : language max(x)
#R  $ xout  : symbol
#R  $ ties  : symbol mean

As to

My question is: Even though the fit is very similar, why do my constrained B-Spline-columns differ from what gam provides? What did I miss?

I do not see how you would expect to get the same result. You may have used different knots, and I do not see how the spline function would yield the correct results here.

The dotted line corresponds to my fit, the straight line to the gam-version

If the latter is fitted with lm then it is un-penalized, so the results should differ?
37,370
The distribution of a sufficient statistic
The functions are identical (same carrier measure, same log-normalizer).
37,371
Most efficient way to check answers to exercises when learning from a statistics textbook?
Statistical simulation works nicely, though it can take some time to implement. This skill will pay off in the long run, however. Solutions manuals can often be found on-line. For instance, H & A have some solutions here.
37,372
normalization in max-sum algorithm (loopy belief propagation)
Yes. You need to normalize the messages between each iteration in order to avoid underflow. You can normalize them so that $\log \sum_i \exp(m(i)) = 0$.
37,373
normalization in max-sum algorithm (loopy belief propagation)
I was thinking through @user7814's answer and decided to figure out where it came from -- I had a hard time finding information about normalizing in log space, so I decided to work it out myself.

Probability Space vs Log Space Normalization

We can normalize a message $\mathbf{u} = [u_1,\ u_2,\ \ldots,\ u_n]$ in probability space by dividing each entry by the sum $N = \sum u_i$, resulting in the normalized message $\mathbf{\hat{u}} = \frac{1}{N}[u_1,\ u_2,\ \ldots,\ u_n].$ Let the corresponding message in log probability space be $\mathbf{v} = [v_1,\ v_2,\ \ldots,\ v_n]$ where $v_i = \log{u_i}$ or $u_i = e^{v_i}.$ The corresponding normalized message would be $$ \mathbf{\hat{v}} = [\log{\tfrac{u_1}{N}},\ \log{\tfrac{u_2}{N}},\ \ldots,\ \log{\tfrac{u_n}{N}}] \\ \mathbf{\hat{v}} = [\log{u_1} - \log{N},\ \log{u_2} - \log{N},\ \ldots,\ \log{u_n} - \log{N}] \\ \mathbf{\hat{v}} = [v_1 - \log{N},\ v_2 - \log{N},\ \ldots,\ v_n - \log{N}] $$ where $$ \log{N} = \log \left( \sum u_i \right) = \log \left( \sum e^{v_i} \right). $$ So letting $M = \log \sum e^{v_i}$ we get the corresponding normalized message $$ \mathbf{\hat{v}} = [v_1 - M,\ v_2 - M,\ \ldots,\ v_n - M]. $$

Practical Considerations

In probability space, if you are getting small $u$'s where $\sum u_i = N < 1$ (because you are multiplying lots of floating point numbers less than 1), then scaling by $1/N > 1$ will scale up the $u$'s, thus avoiding underflow. In log space you will have $M = \log N < \log 1 = 0$, so you will be adding a small positive quantity $-M > 0$ to each $v$, preventing the values from becoming too negative (and keeping the log-sum-exp of the normalized message at zero). Keep in mind that as $u \rightarrow 0$, $v \rightarrow -\infty$, so messages should avoid these conditions. Note that normalizing in log space is expensive! It might be worth approximating $e^x$ and $\ln{x}$ to avoid the cost.
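The derivation above can be checked with a few lines of plain Python (the message values below are arbitrary illustrative numbers, chosen very negative to mimic a near-underflow situation):

```python
from math import exp, log

def logsumexp(v):
    """Numerically stable log(sum(exp(v_i))): shift by the maximum first."""
    m = max(v)
    return m + log(sum(exp(vi - m) for vi in v))

def normalize_log(v):
    """Normalize a log-space message by subtracting M = logsumexp(v)."""
    M = logsumexp(v)
    return [vi - M for vi in v]

# An unnormalized log-space message with very negative entries
v = [-700.0, -701.0, -703.0]
v_hat = normalize_log(v)

# After normalization the implied probabilities sum to 1 ...
print(sum(exp(vi) for vi in v_hat))   # -> 1.0 (up to rounding)

# ... and logsumexp of the normalized message is 0, as in the short answer above
print(logsumexp(v_hat))
```

Exponentiating the raw message would be near the edge of floating-point range; subtracting $M$ first keeps everything in a safe range, which is the whole point of working in log space.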
37,374
Inferring parameters for a regression with features of both multivariate probit and ordinal regression?
Have you looked into the stats literature on reduced-rank vector generalized linear models? It seems you might be trying to identify the model by introducing constraints that in effect make it a hybrid of the probit and ordinal regressions, plus with a multilevel structure on the right-hand side of the equation. This document (pdf alert) on the VGAM package in R offers a relatively straightforward introduction to reduced-rank generalized models.
37,375
Formal justification of Bayesian inference as a model for belief
So far I've seen two threads along these lines. One of the earlier attempts is Cox's theorem (Cox, R. T. (1946). "Probability, Frequency and Reasonable Expectation". American Journal of Physics 14: 1-10), which essentially assumes Bayes' theorem and then derives the features of the resulting belief functions, finding them to be the laws of probability. Later, this approach was more fully explicated in E. T. Jaynes' Probability Theory: The Logic of Science (the first few chapters are online), and it is summarized on Wikipedia. Another thread comes out of Savage's formulation of decision theory (Savage, L. J. (1954). The Foundations of Statistics, 2nd edn, Dover, New York). Here the key assumption is that one can rank-order linear combinations of different outcomes/decisions. This allows one to impose an additive structure on the utility function, which is then conceptually factored into "value" and "belief" parts, with the belief part behaving as probabilities do. One problem is that the factoring is not unique; however, for the purposes of constructing a model for belief, the utility function is essentially just a 0-1 loss function. Thus it drops out of the representation and you are left with probabilities as the representation of belief. (I'm basing this discussion on Edi Karni, Savage's Subjective Expected Utility Model, JHU tech report(?), 2005.)
37,376
How to define train and test sets in financial time series for estimating machine learning parameters
If you're still looking for insight regarding financial time series & machine learning, you might want to check out this article from the Journal of Economic Perspectives, which gives a great overview of various ML methods pertaining to Economics/Finance. Essentially, the main problem you have is that most traditional machine learning techniques deal with cross-sectional data "where independently distributed data is a plausible assumption" (quoted from said article). However, since with Financial Time Series you, by and large, can't make that assumption, you're better off taking a totally different approach than the 'ole Training/Test set split-'em-up. Your best bet--as mentioned in that article (Seriously, it's really good)--may be to read up on Bayesian Structural Time Series (BSTS) (briefly mentioned in that article that you should be reading by now and described in more detail here and, well, I don't have the reps for a third link...). Now, if you're just looking to do some run-of-the-mill Time Series estimation you can settle for the choose-the-model-with-the-lowest-out-of-sample-RMSE approach. However, that may cause you to forfeit all your "Machine Learning" name-dropping privileges. Just a warning... Good luck!
37,377
How to define train and test sets in financial time series for estimating machine learning parameters
Generally, cross-validation is one of the methods to evaluate a model by splitting the data into train and test sets. Leave-one-out cross-validation splits a dataset of, say, n datapoints into (n-1) points for training and tests on the nth datapoint. This process is repeated until each data point has served as a test datapoint. This ensures fairness in splitting the training data and a rigorous evaluation of the model.
How to define train and test sets in financial time series for estimating machine learning parameter
Generally cross-validation is one of the methods to evaluate a model by splitting data into train and test data sets. Leave-one-out cross-validation splits the dataset, say n datapoints as (n-1) for t
How to define train and test sets in financial time series for estimating machine learning parameters Generally, cross-validation is one of the methods to evaluate a model by splitting the data into train and test sets. Leave-one-out cross-validation splits a dataset of, say, n datapoints into (n-1) points for training and tests on the nth datapoint. This process is repeated until each data point has served as a test datapoint. This ensures fairness in splitting the training data and a rigorous evaluation of the model.
How to define train and test sets in financial time series for estimating machine learning parameter Generally cross-validation is one of the methods to evaluate a model by splitting data into train and test data sets. Leave-one-out cross-validation splits the dataset, say n datapoints as (n-1) for t
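The leave-one-out scheme described in the answer above can be sketched in a few lines. The Python below is purely illustrative (the function name is my own, and a trivial mean predictor stands in for whatever model is being evaluated); it holds out each datapoint once and averages the squared test errors:

```python
import numpy as np

def loocv_mse(y):
    """Leave-one-out CV: each of the n points serves once as the test set,
    with the remaining (n-1) points used for training."""
    y = np.asarray(y, dtype=float)
    errors = []
    for i in range(len(y)):
        train = np.delete(y, i)                  # the (n-1) training points
        prediction = train.mean()                # "fit" the trivial mean predictor
        errors.append((y[i] - prediction) ** 2)  # test on the held-out nth point
    return float(np.mean(errors))

print(loocv_mse([1.0, 2.0, 3.0, 4.0]))
```

With these four inputs the function returns the mean of the four squared hold-out errors, 20/9 ≈ 2.22.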
37,378
How to describe and present the issue of perfect separation?
While performing my excavation activities on no-answer questions, I found this very sensible one, to which, I guess, by now the OP has found an answer. But I realized that I had various questions of my own regarding the issue of perfect separation in logistic regression, and a (quick) search in the literature did not seem to answer them. So I decided to start a little research project of my own (probably re-inventing the wheel), and with this answer I want to share some of its preliminary results. I believe these results contribute towards an understanding of whether the issue of perfect separation is a purely "technical" one, or whether it can be given a more intuitive description/explanation. My first concern was to understand the phenomenon in algorithmic terms, rather than the general theory behind it: under which conditions will the maximum likelihood estimation approach "break down" if fed with a data sample that contains a regressor for which the phenomenon of perfect separation exists? Preliminary results (theoretical and simulated) indicate that: 1) It matters whether a constant term is included in the logit specification. 2) It matters whether the regressor in question is dichotomous (in the sample), or not. 3) If dichotomous, it may matter whether it takes the value $0$ or not. 4) It matters whether other regressors are present in the specification or not. 5) It matters how the above 4 issues are combined. I will now present a set of sufficient conditions for perfect separation to make the MLE break down. This is unrelated to whether the various statistical software packages give a warning of the phenomenon - they may do so by scanning the data sample prior to attempting to execute maximum likelihood estimation. I am concerned with the cases where the maximum likelihood estimation will begin - and when it will break down in the process.
Assume a "usual" binary-choice logistic regression model $$P(Y_i \mid \beta_0, X_i, \mathbf z_i) = \Lambda (g(\beta_0,x_i, \mathbf z_i)), \;\; g(\beta_0,x_i, \mathbf z_i) = \beta_0 +\beta_1x_i + \mathbf z_i'\mathbf \gamma$$ $X$ is the regressor with perfect separation, while $\mathbf Z$ is a collection of other regressors that are not characterized by perfect separation. Also $$\Lambda (g(\beta_0,x_i, \mathbf z_i)) = \frac 1{1+e^{-g(\beta_0,x_i, \mathbf z_i)}}\equiv \Lambda_i$$ The log-likelihood for a sample of size $n$ is $$\ln L=\sum_{i=1}^{n}\left[y_i\ln(\Lambda_i)+(1-y_i)\ln(1-\Lambda_i)\right]$$ The MLE will be found by setting the derivatives equal to zero. In particular we want $$ \sum_{i=1}^{n}(y_i-\Lambda_i) = 0 \tag{1}$$ $$\sum_{i=1}^{n}(y_i-\Lambda_i)x_i = 0 \tag{2}$$ The first equation comes from taking the derivative with respect to the constant term, the 2nd from taking the derivative with respect to $X$. Assume now that in all cases where $y_i =1$ we have $x_i = a_k$, and that $x_i$ never takes the value $a_k$ when $y_i=0$. This is the phenomenon of complete separation, or "perfect prediction": if we observe $x_i = a_k$ we know that $y_i=1$. If we observe $x_i \neq a_k$ we know that $y_i=0$. This holds irrespective of whether, in theory or in the sample, $X$ is discrete or continuous, dichotomous or not. But also, this is a sample-specific phenomenon - we do not argue that it will hold over the population. But the specific sample is what we have in our hands to feed the MLE. Now denote the absolute frequency of $y_i =1$ by $n_y$ $$n_y \equiv \sum_{i=1}^ny_i = \sum_{y_i=1}y_i \tag{3}$$ We can then re-write eq $(1)$ as $$n_y = \sum_{i=1}^n\Lambda_i = \sum_{y_i=1}\Lambda_i+\sum_{y_i=0}\Lambda_i \Rightarrow n_y - \sum_{y_i=1}\Lambda_i = \sum_{y_i=0}\Lambda_i \tag{4}$$ Turning to eq. 
$(2)$ we have $$\sum_{i=1}^{n}y_ix_i -\sum_{i=1}^{n}\Lambda_ix_i = 0 \Rightarrow \sum_{y_i=1}y_ia_k+\sum_{y_i=0}y_ix_i - \sum_{y_i=1}\Lambda_ia_k-\sum_{y_i=0}\Lambda_ix_i =0$$ using $(3)$ we have $$n_ya_k + 0 - a_k\sum_{y_i=1}\Lambda_i-\sum_{y_i=0}\Lambda_ix_i =0$$ $$\Rightarrow a_k\left(n_y-\sum_{y_i=1}\Lambda_i\right) -\sum_{y_i=0}\Lambda_ix_i =0$$ and using $(4)$ we obtain $$a_k\sum_{y_i=0}\Lambda_i -\sum_{y_i=0}\Lambda_ix_i =0 \Rightarrow \sum_{y_i=0}(a_k-x_i)\Lambda_i=0 \tag{5}$$ So: if the specification contains a constant term and there is perfect separation with respect to regressor $X$, the MLE will attempt to satisfy, among others, eq $(5)$ also. But note that the summation is over the sub-sample where $y_i=0$, in which $x_i\neq a_k$ by assumption. This implies the following: 1) if $X$ is dichotomous in the sample, then $(a_k-x_i)$ is the same non-zero number for all $i$ in the summation in $(5)$. 2) If $X$ is not dichotomous in the sample, but $a_k$ is either its minimum or its maximum value in the sample, then again $(a_k-x_i)$ is non-zero with the same sign for all $i$ in the summation in $(5)$. In these two cases, since all the terms $(a_k-x_i)$ share the same sign and moreover $\Lambda_i$ is non-negative by construction, the only way that eq. $(5)$ can be satisfied is when $\Lambda_i=0$ for all $i$ in the summation. But $$\Lambda_i = \frac 1{1+e^{-g(\beta_0,x_i, \mathbf z_i)}}$$ and so the only way that $\Lambda_i$ can become equal to $0$, is if the parameter estimates are such that $g(\beta_0,x_i, \mathbf z_i) \rightarrow -\infty$. And since $g()$ is linear in the parameters, this implies that at least one of the parameter estimates should be "infinity": this is what it means for the MLE to "break down": to not produce finite valued estimates. So cases 1) and 2) are sufficient conditions for a break-down of the MLE procedure. But consider now the case where $X$ is not dichotomous, and $a_k$ is neither its minimum nor its maximum value in the sample. We still have complete separation, "perfect prediction", but now, in eq. 
$(5)$ some of the terms $(a_k-x_i)$ will be positive and some will be negative. This means that it is possible that the MLE will be able to satisfy eq. $(5)$ producing finite estimates for all parameters. And simulation results confirm that this is so. I am not saying that such a sample does not create undesirable consequences for the properties of the estimator etc: I just note that in such a case, the estimation algorithm will run as usual. Moreover, simulation results show that if there is no constant term in the specification, $X$ is not dichotomous but $a_k$ is an extreme value, and there are other regressors present, again the MLE will run -indicating that the presence of the constant term (whose theoretical consequences we used in the previous results, namely the requirement for the MLE to satisfy eq. $(1)$), is important.
How to describe and present the issue of perfect separation?
While performing my excavation activities on no-answer questions, I found this very sensible one, to which, I guess, by now the OP has found an answer. But I realized that I had various questions of m
How to describe and present the issue of perfect separation? While performing my excavation activities on no-answer questions, I found this very sensible one, to which, I guess, by now the OP has found an answer. But I realized that I had various questions of my own regarding the issue of perfect separation in logistic regression, and a (quick) search in the literature did not seem to answer them. So I decided to start a little research project of my own (probably re-inventing the wheel), and with this answer I want to share some of its preliminary results. I believe these results contribute towards an understanding of whether the issue of perfect separation is a purely "technical" one, or whether it can be given a more intuitive description/explanation. My first concern was to understand the phenomenon in algorithmic terms, rather than the general theory behind it: under which conditions will the maximum likelihood estimation approach "break down" if fed with a data sample that contains a regressor for which the phenomenon of perfect separation exists? Preliminary results (theoretical and simulated) indicate that: 1) It matters whether a constant term is included in the logit specification. 2) It matters whether the regressor in question is dichotomous (in the sample), or not. 3) If dichotomous, it may matter whether it takes the value $0$ or not. 4) It matters whether other regressors are present in the specification or not. 5) It matters how the above 4 issues are combined. I will now present a set of sufficient conditions for perfect separation to make the MLE break down. This is unrelated to whether the various statistical software packages give a warning of the phenomenon - they may do so by scanning the data sample prior to attempting to execute maximum likelihood estimation. I am concerned with the cases where the maximum likelihood estimation will begin - and when it will break down in the process. 
Assume a "usual" binary-choice logistic regression model $$P(Y_i \mid \beta_0, X_i, \mathbf z_i) = \Lambda (g(\beta_0,x_i, \mathbf z_i)), \;\; g(\beta_0,x_i, \mathbf z_i) = \beta_0 +\beta_1x_i + \mathbf z_i'\mathbf \gamma$$ $X$ is the regressor with perfect separation, while $\mathbf Z$ is a collection of other regressors that are not characterized by perfect separation. Also $$\Lambda (g(\beta_0,x_i, \mathbf z_i)) = \frac 1{1+e^{-g(\beta_0,x_i, \mathbf z_i)}}\equiv \Lambda_i$$ The log-likelihood for a sample of size $n$ is $$\ln L=\sum_{i=1}^{n}\left[y_i\ln(\Lambda_i)+(1-y_i)\ln(1-\Lambda_i)\right]$$ The MLE will be found by setting the derivatives equal to zero. In particular we want $$ \sum_{i=1}^{n}(y_i-\Lambda_i) = 0 \tag{1}$$ $$\sum_{i=1}^{n}(y_i-\Lambda_i)x_i = 0 \tag{2}$$ The first equation comes from taking the derivative with respect to the constant term, the 2nd from taking the derivative with respect to $X$. Assume now that in all cases where $y_i =1$ we have $x_i = a_k$, and that $x_i$ never takes the value $a_k$ when $y_i=0$. This is the phenomenon of complete separation, or "perfect prediction": if we observe $x_i = a_k$ we know that $y_i=1$. If we observe $x_i \neq a_k$ we know that $y_i=0$. This holds irrespective of whether, in theory or in the sample, $X$ is discrete or continuous, dichotomous or not. But also, this is a sample-specific phenomenon - we do not argue that it will hold over the population. But the specific sample is what we have in our hands to feed the MLE. Now denote the absolute frequency of $y_i =1$ by $n_y$ $$n_y \equiv \sum_{i=1}^ny_i = \sum_{y_i=1}y_i \tag{3}$$ We can then re-write eq $(1)$ as $$n_y = \sum_{i=1}^n\Lambda_i = \sum_{y_i=1}\Lambda_i+\sum_{y_i=0}\Lambda_i \Rightarrow n_y - \sum_{y_i=1}\Lambda_i = \sum_{y_i=0}\Lambda_i \tag{4}$$ Turning to eq. 
$(2)$ we have $$\sum_{i=1}^{n}y_ix_i -\sum_{i=1}^{n}\Lambda_ix_i = 0 \Rightarrow \sum_{y_i=1}y_ia_k+\sum_{y_i=0}y_ix_i - \sum_{y_i=1}\Lambda_ia_k-\sum_{y_i=0}\Lambda_ix_i =0$$ using $(3)$ we have $$n_ya_k + 0 - a_k\sum_{y_i=1}\Lambda_i-\sum_{y_i=0}\Lambda_ix_i =0$$ $$\Rightarrow a_k\left(n_y-\sum_{y_i=1}\Lambda_i\right) -\sum_{y_i=0}\Lambda_ix_i =0$$ and using $(4)$ we obtain $$a_k\sum_{y_i=0}\Lambda_i -\sum_{y_i=0}\Lambda_ix_i =0 \Rightarrow \sum_{y_i=0}(a_k-x_i)\Lambda_i=0 \tag{5}$$ So: if the specification contains a constant term and there is perfect separation with respect to regressor $X$, the MLE will attempt to satisfy, among others, eq $(5)$ also. But note that the summation is over the sub-sample where $y_i=0$, in which $x_i\neq a_k$ by assumption. This implies the following: 1) if $X$ is dichotomous in the sample, then $(a_k-x_i)$ is the same non-zero number for all $i$ in the summation in $(5)$. 2) If $X$ is not dichotomous in the sample, but $a_k$ is either its minimum or its maximum value in the sample, then again $(a_k-x_i)$ is non-zero with the same sign for all $i$ in the summation in $(5)$. In these two cases, since all the terms $(a_k-x_i)$ share the same sign and moreover $\Lambda_i$ is non-negative by construction, the only way that eq. $(5)$ can be satisfied is when $\Lambda_i=0$ for all $i$ in the summation. But $$\Lambda_i = \frac 1{1+e^{-g(\beta_0,x_i, \mathbf z_i)}}$$ and so the only way that $\Lambda_i$ can become equal to $0$, is if the parameter estimates are such that $g(\beta_0,x_i, \mathbf z_i) \rightarrow -\infty$. And since $g()$ is linear in the parameters, this implies that at least one of the parameter estimates should be "infinity": this is what it means for the MLE to "break down": to not produce finite valued estimates. So cases 1) and 2) are sufficient conditions for a break-down of the MLE procedure. But consider now the case where $X$ is not dichotomous, and $a_k$ is neither its minimum nor its maximum value in the sample. We still have complete separation, "perfect prediction", but now, in eq. 
$(5)$ some of the terms $(a_k-x_i)$ will be positive and some will be negative. This means that it is possible that the MLE will be able to satisfy eq. $(5)$ producing finite estimates for all parameters. And simulation results confirm that this is so. I am not saying that such a sample does not create undesirable consequences for the properties of the estimator etc: I just note that in such a case, the estimation algorithm will run as usual. Moreover, simulation results show that if there is no constant term in the specification, $X$ is not dichotomous but $a_k$ is an extreme value, and there are other regressors present, again the MLE will run -indicating that the presence of the constant term (whose theoretical consequences we used in the previous results, namely the requirement for the MLE to satisfy eq. $(1)$), is important.
How to describe and present the issue of perfect separation? While performing my excavation activities on no-answer questions, I found this very sensible one, to which, I guess, by now the OP has found an answer. But I realized that I had various questions of m
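The break-down derived above is easy to reproduce numerically. The sketch below (plain NumPy, with toy data of my own) runs gradient ascent on the logit log-likelihood for a dichotomous regressor with perfect separation and a constant term; instead of converging, the slope estimate keeps growing with the number of iterations, which is exactly the "infinite estimate" behaviour described in the answer:

```python
import numpy as np

# Perfect separation: y_i = 1 exactly when x_i = 1
x = np.array([0., 0., 0., 1., 1., 1.])
y = np.array([0., 0., 0., 1., 1., 1.])
X = np.column_stack([np.ones_like(x), x])   # constant term plus the regressor

def fit(n_iter, lr=0.5):
    """Gradient ascent on the logistic log-likelihood (score eqs. (1) and (2))."""
    beta = np.zeros(2)
    for _ in range(n_iter):
        Lam = 1.0 / (1.0 + np.exp(-X @ beta))   # Lambda_i
        beta += lr * (X.T @ (y - Lam))          # gradient of ln L
    return beta

b_1k, b_10k = fit(1_000), fit(10_000)
print(b_1k, b_10k)   # the slope (2nd entry) keeps increasing - no finite MLE exists
```

Running the fit ten times longer produces a visibly larger slope, confirming that the estimates diverge rather than settle.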
37,379
Simultaneous Z-test for the equality of two proportions (binomial distribution)
There is no simple test for what you are looking for, because you have to account for the unknown correlation of the 10 responses within a subject. Some options, many of which are usually not quite adequate, include: Collapse the 10 answers to "at least one yes", and use a chi-square test - this might be reasonable for rare events Collapse the 10 answers to "the number of yeses" and use Wilcoxon's rank-sum test - this might be reasonable for events with similar probabilities Test each question separately with 10 different chi-square tests, and then adjust for multiple testing. The simplest option is Bonferroni adjustment - multiply all your p-values by 10 (the number of tests) in this case. A better approach is a resampling based test for the maximum test statistic (or minimum p-value). Use an appropriate generalized estimating equation model (GEE). There are multiple ways you could set it up depending what you are willing to assume. I think you could use some clustered tests for surveys, but it is not straightforward to set up either.
Simultaneous Z-test for the equality of two proportions (binomial distribution)
There is no simple test for what you are looking for, because you have to account for the unknown correlation of the 10 responses within a subject. Some options, many of which are usually not quite ad
Simultaneous Z-test for the equality of two proportions (binomial distribution) There is no simple test for what you are looking for, because you have to account for the unknown correlation of the 10 responses within a subject. Some options, many of which are usually not quite adequate, include: Collapse the 10 answers to "at least one yes", and use a chi-square test - this might be reasonable for rare events Collapse the 10 answers to "the number of yeses" and use Wilcoxon's rank-sum test - this might be reasonable for events with similar probabilities Test each question separately with 10 different chi-square tests, and then adjust for multiple testing. The simplest option is Bonferroni adjustment - multiply all your p-values by 10 (the number of tests) in this case. A better approach is a resampling based test for the maximum test statistic (or minimum p-value). Use an appropriate generalized estimating equation model (GEE). There are multiple ways you could set it up depending what you are willing to assume. I think you could use some clustered tests for surveys, but it is not straightforward to set up either.
Simultaneous Z-test for the equality of two proportions (binomial distribution) There is no simple test for what you are looking for, because you have to account for the unknown correlation of the 10 responses within a subject. Some options, many of which are usually not quite ad
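The per-question chi-square option with a Bonferroni adjustment can be sketched as follows; the SciPy call is standard, but the counts are randomly made up for illustration:

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

# Hypothetical 2x2 tables, one per question: rows = the two groups, cols = agree/disagree counts
tables = [rng.integers(5, 50, size=(2, 2)) for _ in range(10)]

raw_p = [chi2_contingency(t)[1] for t in tables]    # one chi-square test per question
adj_p = [min(1.0, p * len(raw_p)) for p in raw_p]   # Bonferroni: multiply by the number of tests, cap at 1

for i, (p, pa) in enumerate(zip(raw_p, adj_p), start=1):
    print(f"question {i}: raw p = {p:.3f}, Bonferroni-adjusted p = {pa:.3f}")
```

Rejecting only where the adjusted p-value falls below the usual threshold keeps the family-wise error rate controlled across the 10 questions, at the cost of conservatism.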
37,380
Simultaneous Z-test for the equality of two proportions (binomial distribution)
Perhaps a hierarchical log-linear model would work. Picture your data as a 2 x 10 x 2 array: the first dimension is agree/disagree; the second dimension contains the questions and the third dimension indexes the two groups. You fit a series of models until you get one that fits. First, condition on the margins only. This assumes independence throughout. In R, the call would be loglin(x, list(1,2,3)). Condition on the margins and the relationship between answer and questions. loglin(x, list(c(1,2), 3)). This allows relationships between answers and questions and allows for different numbers in the two groups. Then fit the full model, loglin(x, list(c(1,2,3))). If the two groups behave the same way, then model 2 would be non-significant. If not, then not -- and you would need model 3, which implies that the groups differ. This does not account for the fact that the questions are nested in subjects, as noted by @Aniko, but the log-linear model accepts the pattern of responses it gets and attempts to model it across both groups. If there are clear differences between the groups with respect to how they respond, this test should pick them up. Another possibility would be to write down the likelihood under the null hypothesis of no difference between groups and the alternative. Then do a likelihood ratio test. If you are prepared to ignore correlations between questions, the likelihood would be that of a multinomial distribution. If you are concerned about correlations between the questions, you might want to go for a latent variable model with categorical predictors and look at the difference between the groups using a SEM analysis. You would want a lot of data for this and a plausible reason for a latent variable model.
Simultaneous Z-test for the equality of two proportions (binomial distribution)
Perhaps a hierarchical log-linear model would work. Picture your data as a 2 x 10 x 2 array: the first dimension is agree/disagree; the second dimension contains the questions and the third dimension
Simultaneous Z-test for the equality of two proportions (binomial distribution) Perhaps a hierarchical log-linear model would work. Picture your data as a 2 x 10 x 2 array: the first dimension is agree/disagree; the second dimension contains the questions and the third dimension indexes the two groups. You fit a series of models until you get one that fits. First, condition on the margins only. This assumes independence throughout. In R, the call would be loglin(x, list(1,2,3)). Condition on the margins and the relationship between answer and questions. loglin(x, list(c(1,2), 3)). This allows relationships between answers and questions and allows for different numbers in the two groups. Then fit the full model, loglin(x, list(c(1,2,3))). If the two groups behave the same way, then model 2 would be non-significant. If not, then not -- and you would need model 3, which implies that the groups differ. This does not account for the fact that the questions are nested in subjects, as noted by @Aniko, but the log-linear model accepts the pattern of responses it gets and attempts to model it across both groups. If there are clear differences between the groups with respect to how they respond, this test should pick them up. Another possibility would be to write down the likelihood under the null hypothesis of no difference between groups and the alternative. Then do a likelihood ratio test. If you are prepared to ignore correlations between questions, the likelihood would be that of a multinomial distribution. If you are concerned about correlations between the questions, you might want to go for a latent variable model with categorical predictors and look at the difference between the groups using a SEM analysis. You would want a lot of data for this and a plausible reason for a latent variable model.
Simultaneous Z-test for the equality of two proportions (binomial distribution) Perhaps a hierarchical log-linear model would work. Picture your data as a 2 x 10 x 2 array: the first dimension is agree/disagree; the second dimension contains the questions and the third dimension
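The likelihood-ratio route mentioned at the end of the answer can be sketched directly. The snippet below uses my own toy counts and, as the answer cautions, ignores correlations between questions; it compares one common multinomial over response patterns (the null) against separate multinomials per group (the alternative):

```python
import numpy as np
from scipy.stats import chi2

def multinomial_loglik(counts):
    """Multinomial log-likelihood at its MLE (the observed cell proportions)."""
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()
    mask = counts > 0
    return float((counts[mask] * np.log(p[mask])).sum())

# Hypothetical counts over 4 response patterns for each group
group1 = np.array([30, 20, 25, 25])
group2 = np.array([10, 40, 25, 25])

# Alternative: each group has its own multinomial; null: one common multinomial
ll_alt = multinomial_loglik(group1) + multinomial_loglik(group2)
ll_null = multinomial_loglik(group1 + group2)   # pooled proportions fit both groups
lr_stat = 2.0 * (ll_alt - ll_null)
p_value = chi2.sf(lr_stat, df=len(group1) - 1)  # df = (K-1) extra free parameters
print(f"LR statistic = {lr_stat:.2f}, p = {p_value:.4f}")
```

The combinatorial coefficients cancel in the ratio, so only the counts-times-log-proportions terms are needed; with these made-up counts the groups clearly differ.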
37,381
What is the relationship between graphical models and hierarchical Bayesian models?
Firstly, I would like you to see this example of modelling cancer rates: https://stats.stackexchange.com/a/86231/29568 Graphical models are graphs which encode independencies between the random variables in the model. A graphical model together with the assumptions on the random variables can give us the joint distribution given the parameters. We may or may not know the parameters of the graphical model. We may or may not put priors over the parameters, and may or may not put priors on those priors. Hierarchical Bayes is more about sharing the common things at a higher level while having the variations at a more granular level. Another way to see this is as a data generating process, since we sample variables in a hierarchy of multiple levels of unknown quantities (usually seen in plate notation). We can also see this as aiming to compute the posterior $p(\theta|D)$, but for that we need to specify a prior $p(\theta|\eta)$, where $\eta$ are the hyperparameters. Most probably we don't know what $\eta$ is. A more Bayesian approach is to put priors on $\eta$. So the simple hierarchy in this case could be $\eta \rightarrow \theta \rightarrow D$. Hierarchical Bayes is about modelling; inference is a different issue here. JTA (the junction tree algorithm) or message passing can be done on a DAG or an MN (a DAG can be converted to a UGM by, for example, moralization). Also, in terms of learning probability tables, the parameters themselves are tables and are fixed (this could be done by MLE). By being more Bayesian I mean that I want to model the uncertainty in the estimation of the table, and I would like to model a distribution over tables, i.e. a distribution over discrete distributions in this case. By a similar argument we want to model the uncertainty in the hyperparameters and set priors over them.
What is the relationship between graphical models and hierarchical Bayesian models?
Firstly I would like you to see this example of modelling cancer rates. https://stats.stackexchange.com/a/86231/29568 Graphical models are graphs which encode independencies between the random varia
What is the relationship between graphical models and hierarchical Bayesian models? Firstly, I would like you to see this example of modelling cancer rates: https://stats.stackexchange.com/a/86231/29568 Graphical models are graphs which encode independencies between the random variables in the model. A graphical model together with the assumptions on the random variables can give us the joint distribution given the parameters. We may or may not know the parameters of the graphical model. We may or may not put priors over the parameters, and may or may not put priors on those priors. Hierarchical Bayes is more about sharing the common things at a higher level while having the variations at a more granular level. Another way to see this is as a data generating process, since we sample variables in a hierarchy of multiple levels of unknown quantities (usually seen in plate notation). We can also see this as aiming to compute the posterior $p(\theta|D)$, but for that we need to specify a prior $p(\theta|\eta)$, where $\eta$ are the hyperparameters. Most probably we don't know what $\eta$ is. A more Bayesian approach is to put priors on $\eta$. So the simple hierarchy in this case could be $\eta \rightarrow \theta \rightarrow D$. Hierarchical Bayes is about modelling; inference is a different issue here. JTA (the junction tree algorithm) or message passing can be done on a DAG or an MN (a DAG can be converted to a UGM by, for example, moralization). Also, in terms of learning probability tables, the parameters themselves are tables and are fixed (this could be done by MLE). By being more Bayesian I mean that I want to model the uncertainty in the estimation of the table, and I would like to model a distribution over tables, i.e. a distribution over discrete distributions in this case. By a similar argument we want to model the uncertainty in the hyperparameters and set priors over them.
What is the relationship between graphical models and hierarchical Bayesian models? Firstly I would like you to see this example of modelling cancer rates. https://stats.stackexchange.com/a/86231/29568 Graphical models are graphs which encode independencies between the random varia
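The $\eta \rightarrow \theta \rightarrow D$ hierarchy described above is also a generative recipe: sample top-down, one level at a time. A minimal NumPy sketch (a made-up normal-normal example; all names and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

# eta -> theta -> D: sample top-down through the hierarchy
eta = {"mu0": 0.0, "tau": 2.0}                       # hyperparameters (fixed here; could get a hyperprior)
theta = rng.normal(eta["mu0"], eta["tau"], size=5)   # group-level parameters all share eta
data = [rng.normal(t, 1.0, size=20) for t in theta]  # observations within a group share its theta

group_means = np.array([d.mean() for d in data])
print(np.round(theta, 2))
print(np.round(group_means, 2))  # each close to its group's theta, varying around the common mu0
```

This is the "sharing at a higher level, variation at a granular level" idea in miniature: the groups differ through their $\theta$'s, but those $\theta$'s are tied together by the common $\eta$.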
37,382
What is the relationship between graphical models and hierarchical Bayesian models?
There's a brief reference to this subject at section 2.4.1 in Graphical Models, Exponential Families, and Variational Inference
What is the relationship between graphical models and hierarchical Bayesian models?
There's a brief reference to this subject at section 2.4.1 in Graphical Models, Exponential Families, and Variational Inference
What is the relationship between graphical models and hierarchical Bayesian models? There's a brief reference to this subject at section 2.4.1 in Graphical Models, Exponential Families, and Variational Inference
What is the relationship between graphical models and hierarchical Bayesian models? There's a brief reference to this subject at section 2.4.1 in Graphical Models, Exponential Families, and Variational Inference
37,383
What is the relationship between graphical models and hierarchical Bayesian models?
My understanding is that a hierarchical model can be represented by a DAG. Hence all our knowledge about probabilistic graphical models applies. One difference (which is not really a difference) is that in hierarchical models, usually hyperpriors are included. Or at least hyperpriors are discussed in the context of hierarchical models more often.
What is the relationship between graphical models and hierarchical Bayesian models?
My understanding is that a hierarchical model can be represented by a DAG. Hence all our knowledge about probabilistic graphical models applies. One difference (which is not really a difference) is th
What is the relationship between graphical models and hierarchical Bayesian models? My understanding is that a hierarchical model can be represented by a DAG. Hence all our knowledge about probabilistic graphical models applies. One difference (which is not really a difference) is that in hierarchical models, usually hyperpriors are included. Or at least hyperpriors are discussed in the context of hierarchical models more often.
What is the relationship between graphical models and hierarchical Bayesian models? My understanding is that a hierarchical model can be represented by a DAG. Hence all our knowledge about probabilistic graphical models applies. One difference (which is not really a difference) is th
37,384
Correct number of parameters of AR models for AIC / BIC ?
For AIC/BIC selection it doesn't really matter whether you choose to count the variance parameter, as long as you are consistent across models, because inference based on information-theoretic criteria only depends on the differences between the values for different models. Thus, adding 2 (for an additional parameter) across the board doesn't change any of the delta-*IC values, which is all you are using to choose models (or do model averaging, compute model weights, etc. etc.). (However, you do have to be careful if you're going to compare models fitted with different procedures or different software packages, because they may count parameters in different ways.) It does matter if you are going to use AICc or some other finite-size-corrected criterion, because then the residual information in the data set is used (the denominator of the correction term is $n-k-1$). Then the question you have to ask is whether a nuisance parameter such as the residual variance, which can be computed from the residuals without modifying the estimation procedure, should be included. I wrote in this r-sig-mixed-models post that I'm not sure about the right procedure here. However, looking quickly at Hurvich and Tsai's original paper (Hurvich, Clifford M., and Chih-Ling Tsai. 1989. “Regression and Time Series Model Selection in Small Samples.” Biometrika 76 (2) (June 1): 297–307, doi:10.1093/biomet/76.2.297, http://biomet.oxfordjournals.org/content/76/2/297.abstract), it does appear that they include the variance parameter, i.e. they use $k=m+1$ for a linear model with $m$ linear coefficients ... I would further quote Press et al. (Numerical Recipes in C) that: We might also comment that if the difference between $N$ and $N-1$ ever matters to you, then you are probably up to no good anyway - e.g. trying to substantiate a questionable hypothesis with marginal data. (They are discussing the bias correction term in the sample variance calculation, but the principle applies here as well.)
Correct number of parameters of AR models for AIC / BIC ?
For AIC/BIC selection it doesn't really matter whether you choose to count the variance parameter, as long as you are consistent across models, because inference based on information-theoretic criteri
Correct number of parameters of AR models for AIC / BIC ? For AIC/BIC selection it doesn't really matter whether you choose to count the variance parameter, as long as you are consistent across models, because inference based on information-theoretic criteria only depends on the differences between the values for different models. Thus, adding 2 (for an additional parameter) across the board doesn't change any of the delta-*IC values, which is all you are using to choose models (or do model averaging, compute model weights, etc. etc.). (However, you do have to be careful if you're going to compare models fitted with different procedures or different software packages, because they may count parameters in different ways.) It does matter if you are going to use AICc or some other finite-size-corrected criterion, because then the residual information in the data set is used (the denominator of the correction term is $n-k-1$). Then the question you have to ask is whether a nuisance parameter such as the residual variance, which can be computed from the residuals without modifying the estimation procedure, should be included. I wrote in this r-sig-mixed-models post that I'm not sure about the right procedure here. However, looking quickly at Hurvich and Tsai's original paper (Hurvich, Clifford M., and Chih-Ling Tsai. 1989. “Regression and Time Series Model Selection in Small Samples.” Biometrika 76 (2) (June 1): 297–307, doi:10.1093/biomet/76.2.297, http://biomet.oxfordjournals.org/content/76/2/297.abstract), it does appear that they include the variance parameter, i.e. they use $k=m+1$ for a linear model with $m$ linear coefficients ... I would further quote Press et al. (Numerical Recipes in C) that: We might also comment that if the difference between $N$ and $N-1$ ever matters to you, then you are probably up to no good anyway - e.g. trying to substantiate a questionable hypothesis with marginal data. 
(They are discussing the bias correction term in the sample variance calculation, but the principle applies here as well.)
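For concreteness, the finite-size correction can be sketched as follows (a Python sketch with hypothetical numbers; `aicc` is an illustrative helper, not a library function):

```python
def aicc(log_lik, k, n):
    """Finite-sample corrected AIC (Hurvich & Tsai 1989)."""
    aic = -2.0 * log_lik + 2.0 * k
    # The correction term's denominator is n - k - 1, so here the
    # convention for counting k (variance included or not) does matter:
    # a constant shift in k no longer cancels in the delta values.
    return aic + (2.0 * k * (k + 1)) / (n - k - 1)

print(aicc(-100.0, 3, 50))  # 206 + 24/46
print(aicc(-100.0, 4, 50) - aicc(-100.0, 3, 50))
```

Note the deltas between successive k are not constant, unlike for plain AIC, which is exactly why the counting convention matters for AICc.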
37,385
Correct number of parameters of AR models for AIC / BIC ?
To obtain the AIC or BIC criteria in ARMA models, you need to find the number of estimated parameters in the model (except for the residual variance) including the constant term if it is estimated as mentioned in "Time Series Analysis and Forecasting by Example", by Søren Bisgaard, Murat Kulahci, page 164. So the number of estimated parameters in ARMA(p,q) that contains an intercept or constant term is $p+q+1$ and $p+q$ when you don't have an intercept. See for example Section 6.5 in "Time Series Analysis: With Applications in R" by Jonathan D. Cryer, Kung-Sik Chan.
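A trivial sketch of that counting convention (illustrative helper, not a library function):

```python
def arma_n_params(p, q, intercept=True):
    # p AR coefficients + q MA coefficients, plus 1 for the constant
    # if it is estimated; the residual variance is excluded, following
    # the convention cited above.
    return p + q + (1 if intercept else 0)

print(arma_n_params(2, 1))                   # 4
print(arma_n_params(2, 1, intercept=False))  # 3
```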
37,386
Is Bonferroni correction too anti-conservative/liberal for some dependent hypotheses?
Bonferroni can't be liberal, regardless of dependence, if your p-values are computed correctly. Let A be the event of Type I error in one test and let B be the event of Type I error in another test. The probability that A or B (or both) will occur is: P(A or B) = P(A) + P(B) - P(A and B) Because P(A and B) is a probability and thus can't be negative, there’s no possible way for that equation to produce a value higher than P(A) + P(B). The highest value the equation can produce is when P(A and B) = 0, i.e. when A and B are perfectly negatively dependent. In that case, you can fill in the equation as follows, assuming both nulls true and a Bonferroni-adjusted alpha level of .025: P(A or B) = P(A) + P(B) - P(A and B) = .025 + .025 - 0 = .05 Under any other dependence structure, P(A and B) > 0, so the equation produces a value even smaller than .05. For example, under perfect positive dependence, P(A and B) = P(A), in which case you can fill in the equation as follows: P(A or B) = P(A) + P(B) - P(A and B) = .025 + .025 - .025 = .025 Another example: under independence, P(A and B) = P(A)P(B). Hence: P(A or B) = P(A) + P(B) - P(A and B) = .025 + .025 - .025*.025 = .0494 As you can see, if one event has a probability of .025 and another event also has a probability of .025, it’s impossible for the probability of “one or both” events to be greater than .05, because it’s impossible for P(A or B) to be greater than P(A) + P(B). Any claim to the contrary is logically nonsensical. "But that's assuming both nulls are true," you might say. "What if the first null is true and the second is false?" In that case, B is impossible because you can't have a Type I error where the null hypothesis is false. Thus, P(B) = 0 and P(A and B) = 0. So let's fill in our general formula for the FWER of two tests: P(A or B) = P(A) + P(B) - P(A and B) = .025 + 0 - 0 = .025 So once again the FWER is < .05. Note that dependence is irrelevant here because P(A and B) is always 0. 
Another possible scenario is that both nulls are false, but it should be obvious that the FWER would then be 0, and thus < .05.
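A quick simulation confirms the algebra (a Python/NumPy sketch; the correlation value 0.8 is arbitrary):

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
n_sim, alpha, rho = 100_000, 0.05, 0.8

# Two correlated standard-normal test statistics, both nulls true
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=n_sim)

# Two-sided p-values: p = 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2))
p = np.vectorize(lambda v: erfc(abs(v) / sqrt(2)))(z)

# Bonferroni: reject a test if its p-value is below alpha / 2
fwer = np.mean((p < alpha / 2).any(axis=1))
print(fwer)  # stays below alpha = 0.05, as the argument above shows
```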
37,387
Is Bonferroni correction too anti-conservative/liberal for some dependent hypotheses?
I think I finally have the answer. I need an additional requirement on the distribution of $P(p_1,p_2|H_1=0, H_2=0)$. Before, I only required that $P(p_1|H_1=0)$ is uniform between 0 and 1. In this case my example is correct and Bonferroni would be too liberal. However, if I additionally require the uniformity of $P(p_1|H_1=0, H_2=0)$ then it is easy to derive that Bonferroni can never be too liberal. My example violates this assumption. In more general terms, the assumption is that the distribution of all p-values given that all null hypotheses are true must have the form of a copula: Jointly they don't need to be uniform, but marginally they do. Comment: If anyone can point me to a source where this assumption is clearly stated (textbook, paper), I'll accept this answer.
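For intuition, a small Python sketch of the copula point: a probability integral transform of correlated normals gives exactly this structure, with uniform marginals despite joint dependence (the correlation value is arbitrary):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
rho = 0.6
z = rng.multivariate_normal([0.0, 0.0], [[1, rho], [rho, 1]], size=100_000)

# Probability integral transform: u = Phi(z) has Uniform(0,1)
# marginals even though the pair is jointly dependent -- a Gaussian
# copula, the structure required of the p-values under the global null.
u = np.vectorize(lambda v: 0.5 * (1 + erf(v / sqrt(2))))(z)
print(u.mean(axis=0))          # both close to 0.5 (uniform marginals)
print(np.corrcoef(u.T)[0, 1])  # clearly positive (joint dependence)
```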
37,388
Matrix Factorization Model for recommender systems how to determine number of latent features?
In response to your first question, cross validation is a widely used approach. One possible scheme is the following. For each K value within a pre-selected range, use cross validation to estimate model performance (e.g. prediction accuracy). This will provide one estimated model performance metric per k-value. Then, select the k that corresponds to the highest performance. In response to your second question, I would look at examples of a 'hybrid approach' e.g. in http://www.stanford.edu/~abhijeet/papers/cs345areport.pdf
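As a rough illustration of that scheme, here is a toy Python sketch (not a full recommender: it mean-imputes held-out cells and uses a truncated SVD as the rank-k factorization, on synthetic data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic low-rank ratings matrix (true rank 3) plus noise
U, V = rng.normal(size=(30, 3)), rng.normal(size=(3, 20))
R = U @ V + 0.1 * rng.normal(size=(30, 20))

# Hold out about 20% of the entries as a validation fold
mask = rng.random(R.shape) < 0.8

def mf_rmse(R, mask, k):
    """Impute held-out cells with the train mean, take a rank-k
    truncated SVD, and score RMSE on the held-out cells."""
    train = np.where(mask, R, R[mask].mean())
    u, s, vt = np.linalg.svd(train, full_matrices=False)
    R_hat = (u[:, :k] * s[:k]) @ vt[:k]
    return np.sqrt(np.mean((R_hat[~mask] - R[~mask]) ** 2))

scores = {k: mf_rmse(R, mask, k) for k in range(1, 8)}
best_k = min(scores, key=scores.get)
print(best_k, scores[best_k])
```

The true rank wins (or nearly so) because smaller k underfits the structure and larger k starts fitting the imputation noise.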
37,389
Matrix Factorization Model for recommender systems how to determine number of latent features?
To answer your first question, I would do cross-validation, and for the second question, I would say you should look into tensor factorization. If you have a multi-dimensional data representation, you may definitely consider tensor factorization, which allows you to include additional data as extra dimensions. You can check the following link for it. https://github.com/kuleshov/tensor-factorization
37,390
How do I combine multiple prior components and a likelihood?
I don't think there is anything wrong with your approach, but there are some technical details which make your implementation incorrect. Now when you have a mixture prior, you will also get a mixture posterior. The easiest way to proceed, I think, is to calculate the component posteriors and the component weights separately for each component. So the prior is given by: $$p(\theta|I)\propto\sum_{c}w_cf_c(\theta)$$ Where you need to ensure that each $f_c(.)$ is a properly normalised density. The proportionality sign accounts for the sum of the weights not necessarily summing to one. Multiply by the likelihood and you have a posterior proportional to: $$p(\theta|DI)\propto\sum_{c}w_c\left[f_c(\theta)p(D|\theta)\right]$$ Now we can turn this into a new mixture distribution by normalising the term in brackets, so we have: $$p(\theta|DI)\propto\sum_{c}w_cf_c(D)\left[\frac{f_c(\theta)p(D|\theta)}{f_c(D)}\right]$$ Where $f_c(D)=\int f_c(\theta)p(D|\theta) d\theta$. As with the prior, the proportionality constant is just the sum of the "new" weights, for a properly normalised posterior of: $$p(\theta|DI)=\left(\sum_{c}w_cf_c(D)\right)^{-1}\sum_{c}w_cf_c(D)\left[\frac{f_c(\theta)p(D|\theta)}{f_c(D)}\right]$$ This expression means that you can proceed in two steps. Calculate the posterior based on each component individually Update the weights by multiplying them by the marginal likelihood for the data This is very simple for your data, as you have two conjugate "normal-normal" components, and one "uniform-normal" component. I would go further but I don't know if your data-based variance is assumed known or estimated from the data.
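The two-step recipe can be sketched in Python for normal-normal components (hypothetical numbers, and assuming the data variance is known):

```python
import numpy as np

def posterior_mixture(components, weights, ybar, n, sigma2):
    """Step 1: conjugate posterior per component.
    Step 2: reweight each component by its marginal likelihood f_c(D).
    Data: sample mean ybar from n observations with known variance sigma2."""
    post, new_w = [], []
    for (m0, v0), w in zip(components, weights):
        v_data = sigma2 / n
        v_post = 1.0 / (1.0 / v0 + 1.0 / v_data)
        m_post = v_post * (m0 / v0 + ybar / v_data)
        post.append((m_post, v_post))
        # Marginal likelihood of ybar under this component: N(m0, v0 + v_data)
        var_m = v0 + v_data
        f_c = np.exp(-0.5 * (ybar - m0) ** 2 / var_m) / np.sqrt(2 * np.pi * var_m)
        new_w.append(w * f_c)
    new_w = np.array(new_w)
    return post, new_w / new_w.sum()

post, w = posterior_mixture([(0.0, 1.0), (5.0, 1.0)], [0.5, 0.5],
                            ybar=4.8, n=10, sigma2=1.0)
print(post, w)
```

With the data centred near 5, nearly all posterior weight shifts to the second component, as the reweighting step predicts.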
37,391
Why does the log likelihood need to go to minus infinity when the parameter approaches the boundary of the parameter space?
in order for the maximum likelihood estimate to be valid, the log likelihood needs to go to minus infinity as the parameter goes to the boundary This is equivalent to saying that the likelihood of a parameter needs to become 0 at the boundary of the parameter space in order for the result to be valid. Well first of all, you can restrict the parameter space to values that all have a positive likelihood and still obtain a valid estimate. Secondly, even if you use, say $(-\infty,\infty)$, you don't come close to the boundary, since any off-the-shelf optimisation package performs some sort of random initialisation and then approaches the minimum using a method such as gradient descent, conjugate gradient or similar. In either case, you almost never end up approaching the boundary of the parameter space, so I don't quite understand why the boundaries matter in the first place. And even if you do that on purpose, at some point you will hit the floating-point precision of your operating system. I can guarantee you that at that point, you haven't really approached the boundary $-\infty$ by much. :) Personally I find the underflow problem arising when calculating sums and products of very small likelihoods, and the log-sum-exp trick, a much more interesting and noteworthy issue that actually matters a lot in practice, unlike reaching the boundaries of the parameter space.
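For reference, the log-sum-exp trick mentioned at the end looks like this (Python sketch):

```python
import numpy as np

def log_sum_exp(log_vals):
    """Compute log(sum(exp(v))) without underflow by factoring out
    the maximum term first."""
    m = np.max(log_vals)
    return m + np.log(np.sum(np.exp(log_vals - m)))

# Naive summation underflows: exp(-1000) == 0.0 in float64
log_liks = np.array([-1000.0, -1001.0, -1002.0])
print(np.log(np.sum(np.exp(log_liks))))  # -inf
print(log_sum_exp(log_liks))             # about -999.59
```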
37,392
Hidden Markov Model to predict the next state
You use the forward algorithm to predict $P(X_{t+1})$. $P(X_{t+1} \mid Y_{1:t}) = \sum_{X_t} P(X_{t+1} \mid X_t) \cdot P(X_t \mid Y_{1:t})$ So, you use the same principle as when filtering for $P(X_t \mid Y_{1:t})$, but without being able to incorporate $Y_{t+1}$, since it is not observed yet.
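A minimal numerical sketch of that prediction step (hypothetical transition matrix and filtered distribution):

```python
import numpy as np

# Transition matrix A[i, j] = P(X_{t+1} = j | X_t = i)
A = np.array([[0.9, 0.1],
              [0.3, 0.7]])

# Filtered distribution P(X_t | Y_{1:t}) from the forward algorithm
filtered = np.array([0.6, 0.4])

# One-step prediction: sum over the current state
predicted = filtered @ A   # P(X_{t+1} | Y_{1:t})
print(predicted)  # [0.66 0.34]
```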
37,393
Hidden Markov Model to predict the next state
To get the probability over hidden states at $t_2$, just multiply your posterior over $t_1$ by your transition matrix. https://stackoverflow.com/questions/15554923/how-to-perform-a-prediction-with-matlabs-hidden-markov-model-statistics-toolbo
37,394
Power of a hypothesis test
I have not checked out your text, but your understanding is correct. The alternative hypothesis (Ha) is usually stated vaguely as something like the difference between the two population means is not zero. But for the purpose of computing and interpreting power you need a definitive Ha, say that the difference between population means equals 10 (or some value). If that Ha is true, and if you accept all the assumptions of the test, power is the probability that random sampling of data from the two populations with the specified sample size will result in a P value less than alpha. So yes, it is the power against the null hypothesis and for the alternative.
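To make that concrete, here is a Python sketch of the power of a two-sided two-sample z-test, with the definitive Ha set to a mean difference of 10 (all numbers hypothetical):

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def power_two_sample(delta, sigma, n, alpha=0.05):
    """Power of a two-sided two-sample z-test when the true mean
    difference is delta, with common sd sigma and n per group."""
    se = sigma * sqrt(2.0 / n)
    z_crit = 1.959963984540054  # Phi^{-1}(1 - alpha/2) for alpha = .05
    # P(reject) = P(Z > z_crit - delta/se) + P(Z < -z_crit - delta/se)
    return (1 - phi(z_crit - delta / se)) + phi(-z_crit - delta / se)

print(power_two_sample(delta=10, sigma=15, n=30))  # about 0.733
```

As expected, power rises with the sample size for a fixed Ha.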
37,395
Power of a hypothesis test
It seems to me that it is the power of the test itself, rather than the power against either of the hypotheses. If there is very little data, the test will be unable to reject $H_0$ whether it is false or not, so the test itself has little power to reveal anything about the problem. If we have lots of data, we will be able to confidently expect to reject a false $H_0$ even if the likely effect size under $H_a$ is rather small, so the test is powerful. Perhaps we should think of the power of a test being somewhat analogous to the (optical) power of a microscope or telescope, in that it gives an idea of how fine a distinction we can reasonably expect to resolve. I suppose you could however argue that if we are unable to reject the null hypothesis using a test with high power, then this is "powerful" support for the idea that $H_a$ is probably false (as if $H_a$ were true, the test would be highly unlikely to fail to reject $H_0$). See statistical power for info on the statistical meaning of "power".
37,396
Meaning of Likelihood in layman's terms [duplicate]
Let's say the data comes from a distribution with a parameter $\theta$ whose value we don't know. You can think of the likelihood function ${L(\theta)}$, very roughly, as "the probability of observing the data given this value of $\theta$". From this comes the maximum likelihood principle: when we want to estimate $\theta$ from some data, we solve for the value that maximises $L(\theta)$. This value is the one that is "most likely" to have generated the data we've observed. This principle is the basis of a huge variety of statistical techniques, including many of the most-used ones such as linear regression. This description is technically incorrect, since the likelihood is not actually a probability. That's why it's called a likelihood function, not a probability (density) function. But as long as you're not trying to prove theorems or derive new methods, this should suffice.
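A toy Python sketch of the maximum likelihood principle for a normal mean (the grid search stands in for a proper optimiser; the closed-form answer is the sample mean):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.0, size=500)

def log_lik(theta, x, sigma=1.0):
    # log L(theta): log-density of the observed data given theta
    return np.sum(-0.5 * ((x - theta) / sigma) ** 2
                  - 0.5 * np.log(2 * np.pi * sigma ** 2))

grid = np.linspace(0, 6, 601)
theta_hat = grid[np.argmax([log_lik(t, data) for t in grid])]
print(theta_hat, data.mean())  # grid maximiser is close to the sample mean
```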
37,397
Survey design chi square
If you are going along the path of stacking the data sets together, then you should define super-strata corresponding to the two data sets/waves, so that svydesign() knows that they are independent. Thus your new svydesign will have strata = cross of year and strata, the PSUs from the original designs, and the weights from the original designs. As I suggested in the comment, other ways of combining estimates and tests have been proposed in the literature. Wu (2004) uses empirical likelihood based on common variables between the two data sets. For continuous variables, ideally, you would want to use Kolmogorov-Smirnov test with "flat" data, but I don't know whether extensions for it work for survey data; I doubt it. So you may have to convert your continuous variables to ordinal ones into say $[\log_2(n)]$ percentile groups or equal width bins of the variable range (where the above function of the sample size is a commonly used number of bins for a histogram), and apply the Rao-Scott $\chi^2$ to them.
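The binning step at the end can be sketched as follows (a Python/NumPy stand-in for the R survey workflow; the data are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(size=500)

# Number of groups: [log2(n)], the histogram rule mentioned above
n_bins = int(np.log2(len(x)))  # 8 for n = 500

# Percentile (equal-count) groups via quantile cut points
edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
groups = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
print(np.bincount(groups))  # roughly equal counts per group
```

The resulting ordinal variable can then be fed into the Rao-Scott $\chi^2$ test.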
37,398
When to use sample median as an estimator for the median of a lognormal distribution?
Apparently the concept of unbiasedness was already being debated a long time ago. I feel it is a topic worth discussing: mean-unbiasedness is a standard requirement for a good estimator, but in small samples it does not carry as much weight as it does in large-sample estimation. I post these two references as an answer to the second question in my post. Brown, George W. "On Small-Sample Estimation." The Annals of Mathematical Statistics, vol. 18, no. 4 (Dec., 1947), pp. 582–585. JSTOR 2236236. Lehmann, E. L. "A General Concept of Unbiasedness." The Annals of Mathematical Statistics, vol. 22, no. 4 (Dec., 1951), pp. 587–592. JSTOR 2236928
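To make the small-sample point concrete, here is a base-R simulation sketch (my own illustration, not from the cited papers) comparing the sample median with the lognormal MLE of the median, $\exp(\hat\mu)$, at a small sample size:

```r
set.seed(42)
n <- 5; reps <- 10000
true_med <- 1                      # median of LN(0, 1) is exp(0) = 1
sims <- replicate(reps, {
  x <- rlnorm(n)
  # MLE of the lognormal median is exp(mean of the logs), i.e. the geometric mean
  c(med = median(x), mle = exp(mean(log(x))))
})
rowMeans(sims) - true_med          # Monte Carlo bias of each estimator
rowMeans((sims - true_med)^2)      # Monte Carlo MSE of each estimator
```

At n = 5 both estimators are biased upward for the median; which one wins on MSE is exactly the kind of question the simulation (rather than an unbiasedness requirement) settles.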
37,399
Force-directed methods to draw graphs
If the subgraphs are not connected to each other, as in your picture, then their relative placement is arbitrary. Fruchterman-Reingold layout is only based on connections.
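A minimal sketch of this behaviour, assuming the R igraph package (function names as in recent igraph versions):

```r
library(igraph)
# Two components with no edges between them
g  <- disjoint_union(make_ring(6), make_full_graph(4))
xy <- layout_with_fr(g)   # attractive forces act only along edges, so the
                          # relative placement of the two components is
                          # arbitrary and carries no meaning
plot(g, layout = xy)
# To lay out components separately and then tile them, igraph's merge_coords()
# can combine per-component layouts.
```

Running this repeatedly gives the two subgraphs in different relative positions each time, which is the point of the answer.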
37,400
Choosing non-informative priors
Jeffreys priors are indeed unmanageable outside standard families and not even necessarily recommended in high dimensions. If the model is complex enough, priors should take advantage of the hierarchical structures underlying this model... Using the actual data to produce or select a "prior" is a contradiction in terms! However, you can use the sampling distribution to simulate pseudo-datasets and check the impact of various priors on those datasets, for instance by looking at distances between priors and posteriors. In particular, you can use simulated data associated with a parameter $\theta$ to derive an approximate asymptotic variance $\hat{I}(\theta)$ for the associated MLE or MAP $\hat{\theta}$, and then use $|\hat{I}(\theta)|^{-1/2}$ as your substitute Jeffreys prior. In a regression setting like the one presented here, Zellner's g-prior would handle the difference of scale between $X$ and $\exp(-X)$ rather naturally.
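As a toy sketch of the simulation idea (my own example, chosen so the answer can be checked): for an exponential rate $\lambda$, the true Jeffreys prior is $\propto 1/\lambda$, and the simulated substitute should recover that shape.

```r
set.seed(7)
n <- 200; m <- 2000
lam_grid <- seq(0.5, 5, by = 0.5)
prior_w <- sapply(lam_grid, function(lam) {
  # simulate pseudo-datasets at lambda and measure the spread of the MLE 1/mean(x)
  mles <- replicate(m, 1 / mean(rexp(n, rate = lam)))
  1 / sd(mles)          # substitute Jeffreys weight: (asymptotic sd of MLE)^{-1}
})
# For Exp(lambda), Var(MLE) is approximately lambda^2 / n, so prior_w should be
# roughly sqrt(n) / lambda, i.e. prior_w * lam_grid is roughly constant:
round(prior_w * lam_grid, 1)
```

Up to Monte Carlo noise, `prior_w` is proportional to $1/\lambda$, matching the analytic Jeffreys prior for this model.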