Marginalizing out discrete response variables in Stan
There is confusion and also some wrong math. The gist is that in your example marginalizing unobserved responses doesn't make sense: the missing observations don't contribute to the likelihood, so they don't contribute to the posterior.

Note: This is not necessarily always the case. In the comments you point to a different example from the Stan manual, where observations are censored below a threshold U, so the proportion of missing observations contains information about how close the threshold U is to the population location parameter μ.

So let's assume that observations are missing completely at random or missing at random (that is, the missingness can be explained by what we know, e.g. the predictors x, but doesn't depend on what we don't know, e.g. the parameters θ). In your example we can integrate out $p(y^{miss} | x^{obs},\theta)$ because the support of the probability density function / probability mass function is not constrained. $$ \int p(y^{miss} | x^{obs},\theta)\pi(\theta)~dy^{miss} \propto \pi(\theta|x^{obs}) = \pi(\theta) $$

Contrast this with why it is useful to marginalize out unobserved predictors for observed responses: these extra observations do contain information about the model parameters. $$ \int p(y^{obs} | x^{miss},\theta)\pi(\theta)~dx^{miss} \propto \pi(\theta | y^{obs}) \color{white}{= \pi(\theta)} $$

Note: The missing observations do tell us something about the data collection process. For example, in your simulation you use them to estimate the missingness rate $\rho$.

Now let's look at the two analyses in more detail.

Option 1

There is an error right at the start of your derivation. By the law of total probability: $$ p(y_i^{miss} = 1 | \eta_i)p(\eta_i) + p(y_i^{miss} = 0 | \eta_i)p(\eta_i) = p(\eta_i) $$ Since we didn't observe $y_i^{miss}$, the posterior for $\eta_i$ is the same as the prior. So don't add the log_mix component to the target. You will get valid posterior estimates for $\alpha$, $\beta$ and $\rho$.
Option 2

You introduce additional parameters $\theta_i$ for the missing data points. Essentially the model for a missing observation becomes: $$ \left[\theta_i\eta_i\right]^{y_i^{miss}}\left[(1-\theta_i)(1-\eta_i)\right]^{1-y_i^{miss}} $$ As you point out, this doesn't make sense. The success probability in this over-parameterized model is $\theta_i\eta_i$ and the failure probability is $(1-\theta_i)(1-\eta_i)$, so the probabilities don't sum to 1 (are not normalized). And only the $\eta_i$s depend on $x_i$. Still it seems to "work" in the sense that you get correct inferences for $\alpha$, $\beta$ and $\rho$. Why? Again, the missing observations are irrelevant as they bring no information. It doesn't really matter how you choose to parameterize their probabilities of success and failure. And it's easy to check that the posterior of the extra parameters $\theta_i$ is the same as their prior.
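The law-of-total-probability point can be checked numerically: each marginalized missing Bernoulli response contributes a factor of exactly 1 to the likelihood, so on a grid over the success probability the posterior is unchanged whether or not the missing points are "included". A minimal Python sketch with made-up observed data (not the original Stan model):

```python
import numpy as np

# Grid over a single success probability eta (a toy stand-in for the
# logistic-regression setup), with hypothetical observed responses.
eta_grid = np.linspace(0.01, 0.99, 99)
y_obs = np.array([1, 0, 1, 1])
lik_obs = np.prod(
    eta_grid[:, None] ** y_obs * (1 - eta_grid[:, None]) ** (1 - y_obs),
    axis=1,
)

# Marginal contribution of one missing response: sum over y in {0, 1}.
lik_miss = eta_grid * 1 + (1 - eta_grid) * 1   # identically 1

# Posterior with and without, say, 3 marginalized missing points.
post_without = lik_obs / lik_obs.sum()
post_with = (lik_obs * lik_miss**3) / (lik_obs * lik_miss**3).sum()
# post_with and post_without are identical: the missing points carry no
# information about eta.
```

The same grid trick makes it easy to verify the Option 2 claim that the posterior of each extra $\theta_i$ equals its prior.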
In reinforcement learning, what is the correct definition of "value function"?
Which one is correct and why is there such a seemingly large amount of definitions?

Let me answer the second question first: Why is there such a (seemingly) large number of definitions? I would guess that there are two different reasons:

1. (probably prominently) Lack of knowledge of many mathematical things: What is a random variable, actually? What is a Markov process, mathematically? What are sigma algebras and measures? Further, people often do not know that we must assume or show that random variables have densities, that the word 'density' by itself does not make sense but always has to be a density with respect to a 'natural', alternative measure on the target space, etc. (I could continue here for a while because I have seen all sorts of wild stuff out there; Sutton's book is an example of this. My attempt to really understand the 'proof' of the Bellman equation in that book amounted to my answer here: Deriving Bellman's Equation in Reinforcement Learning.)

2. Different setups in which one can do Markov processes / reinforcement learning. For example: do we only allow deterministic policies? Given a current state $s$, do we regard the policy as a function $\pi(s)$ that returns one single action, or do we rather regard the policy as something non-deterministic that gives rise to a probability distribution over the actions?

Now let's look at the 'one true' definition of the value function. Let me first comment on why (in my personal perspective) all of the definitions you name are 'flawed' or make special assumptions:

1. does not make sense mathematically. A conditional expectation looks either like $E[X|Y]$, where $X, Y$ are both random variables and it itself is then a new random variable, or like $E[X|Y=\cdot]$, which is actually a function in $y=\cdot$. Hence, 1. is not defined.

2. Ng is introducing a new notion here which is (mathematically speaking) simply not defined (also not by him, I guess). Apart from that: what exactly is $R(s_t)$? In the Markov process there are random variables $R_t, S_t, A_t$, but the symbol $R(s_t)$ is just not yet defined (and probably never will be).

3. is mathematically meaningful but makes certain assumptions on the Markov process: stationarity (i.e. that the whole setup does not depend on the time). However, one can also view the whole process when policies depend on the time (i.e. you do not have one $\pi$ but potentially infinitely many $\pi_t$). But in general, this seems to be the 'correct one' (see below).

4. is a perfect example of the 'law of equal amount of work'. Let's say you have a complicated theorem that states that $A=B$, you are a lecturer of a course, and you do not want to go into the details of the proof of $A=B$. Since you are a really clever guy, you simply define $A:=B$; then the proof that $A=B$ is easy, right? It is true by definition. Unfortunately, this universe is a mean one. In 100% of the observed cases (again, just personal observation) you will need some property of $A$ that is only valid when it is defined as the original $A$ and not as $B$. No matter which way you go, you cannot get around the work of understanding the proof of $A=B$! For example: Why is this thing in 4. even called a value function? It is supposed to give the value of a certain policy given that we start at a certain state... how are these things related? Why is it nevertheless the same as the other things (up to all the things that make no mathematical sense)? Because of the Bellman equation. But this Bellman equation has a complicated proof that you cannot avoid :-)

5. seems to assume that the policy is a deterministic function. However, in many mathematical constructions (for example, finding the best policy using value iteration) one needs the policies to be probabilistic and not deterministic.

6. is just very weird. The left-hand side depends on $s_t$ (it is also unclear what influence $t$ has on the whole thing...) but the right-hand side does not...?
What is the true, most general definition? With regard to all the questions in the area of the foundations of RL, Markov processes, etc., I can only recommend one single source of mathematical truth: Puterman, Markov Decision Processes. First some basics: A Markov process is a tuple of random variables $(R_t, S_t, A_t)$ that satisfy certain properties. Let us assume that we look only at stationary processes (an additional condition that assures that the time $t$ is irrelevant). How does the policy come in here? Well, that is related to the difference between Markov automata and Markov processes. What we fix are the transitions $p(s_{t+1}|s_t, a_t)$ and the rewards $p(r_t|s_{t+1}, a_t, s_t)$ as functions independent of time. However, a Markov process is only uniquely defined if we also specify $p(s_0)$ and $p(a_t|s_t)$. On the other hand, given those two further ingredients, we can actually give rise to this unique Markov process using a somewhat involved but straightforward construction similar to here. Summarized: Fix some $\gamma \in (0,1)$. The transition probabilities and the rewards alone only define a Markov automaton (which is something completely different from random variables). For each choice of probability distribution for the initial state $s_0$ and each policy, we obtain a Markov process (i.e. now, secretly, all the random variables $R_t, A_t, S_t$ implicitly depend on $\pi$ and the distribution of $s_0$; but if we referred to this all the time our notation would become really clumsy, so we leave it out and only remind ourselves that the variables themselves depend on $\pi$ when we write $V^\pi$ instead of just $V$). Then $$R_t + \gamma R_{t+1} + \gamma^2 R_{t+2} + \dotsb$$ converges almost surely and $$V^{t, \pi}(s) = E[R_t + \gamma R_{t+1} + \gamma^2 R_{t+2} + \dotsb \mid S_t=s]$$ is actually independent of $t$. With this value function one can show many nice things.
For example, if certain conditions on the state and action set are met then there is a provably best, deterministic policy (see Thm. 6.2.9 in the book of Puterman). In particular, these conditions are met when the state and action space are finite. That means that all common board games are actually boring because somewhere out in this universe, there is a best strategy to play this game and both players should actually just follow this strategy to maximize their (discounted) reward.
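The definition above can be made concrete on a tiny example: for a stationary finite MDP and a stochastic policy, $V^\pi$ solves the linear Bellman system, and a Monte Carlo estimate of $E[\sum_t \gamma^t R_t \mid S_0=s]$ agrees with it. A sketch with entirely hypothetical transition/reward numbers:

```python
import numpy as np

gamma = 0.9
# P[a, s, s']: transition probabilities; R[a, s]: expected reward for (s, a).
P = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
pi = np.array([[0.6, 0.4],
               [0.2, 0.8]])                 # pi[s, a]: a stochastic policy

# Policy evaluation: solve V = R_pi + gamma * P_pi @ V.
P_pi = np.einsum('sa,ast->st', pi, P)       # state-to-state kernel under pi
R_pi = np.einsum('sa,as->s', pi, R)         # expected reward under pi
V = np.linalg.solve(np.eye(2) - gamma * P_pi, R_pi)

# Monte Carlo estimate of E[sum_t gamma^t R_t | S_0 = 0], truncated at T.
rng = np.random.default_rng(1)
n_ep, T = 20_000, 100
s = np.zeros(n_ep, dtype=int)
g = np.zeros(n_ep)
for t in range(T):
    a = (rng.random(n_ep) < pi[s, 1]).astype(int)   # sample actions
    g += gamma**t * R[a, s]                          # accumulate reward
    s = (rng.random(n_ep) < P[a, s, 1]).astype(int)  # sample next states
# g.mean() agrees with V[0] up to Monte Carlo and truncation error.
```

The truncation at $T$ steps is harmless here because the tail is bounded by $\gamma^T \max|V|$.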
How to best respond to investigator who wants to study secondary outcome
You are right that the specified analysis is among a non-randomized set. You are wrong, however, that the intervention is meaningless because we restrict analyses to this non-randomized set. One does, however, need to adjust for possible confounders. In fact, this analysis does not "depend" on the significance of the first secondary hypothesis as one might suppose. Given statins are the most widely prescribed medication (period), I'm assuming the issue doesn't concern whether any people start statins, but rather has to do with the failure of the intervention to delay the time at which one starts statin medications. The failure to declare statistical significance for the first secondary endpoint simply means you did not have enough power to detect a change if there was one. So the second secondary hypothesis needs to consider that the intervention might affect compliance regardless. Identifying the right confounders to adjust for in an analysis is a serious statistical hurdle that needs careful deliberation in the protocol (via an amendment) or the SAP (if database lock has not been achieved yet). You're absolutely right that the intervention may have opposite effects on the risk of starting a medication versus the risk of adhering to that medication. I can think of numerous examples. Consider fluoridation of water: it delays time to dental caries, but it does nothing to improve dental hygiene, so gum recession, gingivitis, etc. persist, and the major surgical dental adverse events may not be affected at all. In other words: this is why we need a complete set of secondary endpoints; otherwise we are doing salami science.
Modified sleeping beauty paradox
What this line of thinking fails to account for is the fact that, if it's tails and she cancelled the bet on Monday, then cancelling the bet again on Tuesday does nothing. The benefit of the action of cancelling the bet on Tuesday depends on what she did on Monday. From Sleeping Beauty's perspective, when she's awakened, she should be thinking "what is my expected gain, from the baseline scenario of the bet, if I cancel the bet?" In other words, what is the expected difference between cancelling the bet and not cancelling the bet? (This, of course, is just the negative of the EV of the bet, because the value of no bet is simply zero.) Well, let's examine the three cases:

Case 1 - Monday Heads: If she cancels the bet, she loses 3.

Case 2 - Monday Tails: If she cancels the bet, she gains 2.

Case 3 - Tuesday Tails: If she cancels the bet, and she didn't cancel the bet on Monday, she gains 2. If she cancels the bet, but she already cancelled the bet on Monday, then nothing changes.

To ascertain the expected gain of the bet-cancelling action, we have to know the probability that she cancelled the bet on Monday, given that she cancels on Tuesday, because her gain from the action of "cancel bet" on Tuesday depends on that. However, the problem isn't specified well enough to tell us that. We can make a simplifying assumption. Let's assume she does the same thing every time, that is, she picks a strategy "do this if I'm awakened". So if she cancels the bet on Tuesday, then she necessarily cancelled the bet on Monday. Thus, her gain from the baseline, for cancelling the bet on Tuesday, is zero.
Then her expected gain from cancelling the bet is $-3 \cdot \frac{1}{3} + 2 \cdot \frac{1}{3} + 0 \cdot \frac{1}{3} = -\frac{1}{3}.$ This is in contrast to the naive approach, where the expected gain from cancelling the bet on Tuesday is equal to that of Monday on a tails roll, in which case, the calculation is (falsely) $-3 \cdot \frac{1}{3} + 2 \cdot \frac{1}{3} + 2 \cdot \frac{1}{3} = \frac{1}{3}.$ Notice that this is simply the negative of what you wrote as the EV of the bet in the thirder's perspective. The expected gain from cancelling a bet is the negative of the expected value of the bet. But that expected gain naively ignores the fact that cancelling on Tuesday does nothing if you cancelled on Monday. When you don't ignore that, as I showed above, the expected gain of cancelling the bet is negative. Thus, even from the Thirder's perspective, she should not cancel the bet.
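A quick simulation makes the same point. The payoffs below are inferred from the cases above (the bet pays +3 on heads and −2 on tails; a consistent "always cancel" strategy yields 0): per experiment the gain from cancelling averages −0.5, and dividing the same total by the number of awakenings gives the −1/3 figure, because tails experiments contribute two awakenings but only one cancellation that matters.

```python
import numpy as np

rng = np.random.default_rng(0)
heads = rng.random(100_000) < 0.5          # fair coin, one flip per experiment

keep = np.where(heads, 3.0, -2.0)          # bet payoff if she never cancels
gain = 0.0 - keep                          # gain from a consistent "always cancel"
awakenings = np.where(heads, 1, 2)         # heads: Monday only; tails: Mon + Tue

gain_per_experiment = gain.mean()                  # ≈ -0.5
gain_per_awakening = gain.sum() / awakenings.sum() # ≈ -1/3
```

Either way the expected gain from cancelling is negative, matching the conclusion that she should not cancel.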
Moments of $Y=X_1 + X_2 X_3 + X_4 X_5 X_6 +\cdots$
Let $t_1=X_1$ and $t_2=X_2X_3$, etc., so $Y=\sum_i t_i$. Then with some help from Mathematica (like this): \begin{align} E[Y^3] &= E\left[6\sum_{i<j<k}t_it_jt_k + 3\sum_{i\neq j}t_i^2 t_j + \sum_{i}t_i^3\right] \\ &= 6\sum_{i<j<k}E[X]^{i+j+k} + 3\sum_{i\neq j}E[X^2]^iE[X]^j + \sum_{i}E[X^3]^i \\ &= \frac{6E[X]^6}{(1-E[X])(1-E[X]^2)(1-E[X]^3)}\\ \\[-10pt] &\ \ + \frac{3E[X]E[X^2]}{(1-E[X])(1-E[X^2])} - \frac{3E[X]E[X^2]}{1-E[X]E[X^2]} + \frac{E[X^3]}{1-E[X^3]} \\ \end{align} For $Z$, we can similarly let $u_1=X_1$ and $u_2=X_1X_2$, etc., so $Z=\sum_i u_i$. Then: \begin{align} E[Z^3] &= E\left[6\sum_{i<j<k}u_iu_ju_k + 3\sum_{i\neq j}u_i^2 u_j + \sum_{i}u_i^3\right] \\ &= 6\sum_{i<j<k}E[X^3]^iE[X^2]^{j-i}E[X]^{k-j} + 3\sum_{i< j}E[X^3]^iE[X]^{j-i} \\ &\ \ + 3\sum_{i>j}E[X^3]^jE[X^2]^{i-j}+ \sum_{i}E[X^3]^i \\ &= s_3(6s_2s_1+ 3s_1 + 3s_2+ 1) \end{align} where $s_n=\sum_i E[X^n]^i=E[X^n]/(1-E[X^n])$.
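Both closed forms are easy to sanity-check by Monte Carlo. Taking $X_i \sim \mathrm{Uniform}(0,1)$ as a hypothetical choice ($E[X]=1/2$, $E[X^2]=1/3$, $E[X^3]=1/4$), the $k$-th term of $Y$ is a product of $k$ fresh uniforms, which equals $e^{-G_k}$ with $G_k \sim \mathrm{Gamma}(k,1)$, while $Z$ uses nested products of the same draws:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 200_000, 40            # samples; truncation depth of the infinite series
m1, m2, m3 = 1/2, 1/3, 1/4    # E[X], E[X^2], E[X^3] for X ~ Uniform(0,1)

# Y = X_1 + X_2 X_3 + X_4 X_5 X_6 + ...: independent terms; the k-th term,
# a product of k iid uniforms, is exp(-Gamma(k, 1)).
Y = sum(np.exp(-rng.gamma(k, size=N)) for k in range(1, K + 1))
EY3 = (6 * m1**6 / ((1 - m1) * (1 - m1**2) * (1 - m1**3))
       + 3 * m1 * m2 / ((1 - m1) * (1 - m2))
       - 3 * m1 * m2 / (1 - m1 * m2)
       + m3 / (1 - m3))

# Z = X_1 + X_1 X_2 + X_1 X_2 X_3 + ...: cumulative products of one sequence.
Z = np.cumprod(rng.random((N, K)), axis=1).sum(axis=1)
s1, s2, s3 = m1 / (1 - m1), m2 / (1 - m2), m3 / (1 - m3)
EZ3 = s3 * (6 * s2 * s1 + 3 * s1 + 3 * s2 + 1)

# np.mean(Y**3) matches EY3 and np.mean(Z**3) matches EZ3 up to MC error.
```

For uniforms the $Z$ formula also agrees with the recursion $Z \stackrel{d}{=} X(1+Z)$, which gives $E[Z^3]=17/6$.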
How does the marginal distribution become the prior distribution?
The answer is that, to set up a problem that makes statistical sense, z and x are not the same kinds of quantities, despite the notational similarity. z is a parameter that cannot be directly observed; x is observable data. With those constraints in place, the likelihood piece is truly trivial: P(x|z), the probability of a set of observations given a parameter, is the definition of likelihood. Statisticians have been arguing about the second part for centuries, but without using the "prior" terminology. P(z) is the probability that the parameter z has a certain value, as determined in the absence of the set of observations x. One situation where x definitely has not been incorporated into the determination of z is before the observations x are known, so the term "prior" is appropriate.
32,907
How does the marginal distribution become the prior distribution?
"...what are the arguments that assert that the marginal density is indeed the proper object to represent what we want the "prior density" to represent?" At an essential level, I think the argument is that this distribution is "prior" to the data because it does not condition on the data, i.e., the marginal distribution of the parameters is inherently "prior" to the data because if it were not, it would need to condition on the data. This seems to me to be the essence of the prior/posterior difference.
32,908
Intuitive explanation/motivation of stationary distribution of a process
There are various motivations for interest in stationary distributions in this context, but probably the most important aspect is that they are closely related to limiting distributions. For most time-series processes, there is a close connection between the stationary distribution and limiting distribution of the process. Under very broad conditions, time-series processes based on IID error terms have a stationary distribution, and they converge to this stationary distribution as a limiting distribution for any starting distribution you specify. That means that if you let the process run for a long time, its distribution will be close to the stationary distribution regardless of how it started off. Thus, if you have reason to believe that the process has been running for a long time, you can reasonably assume it follows its stationary distribution. In your question you use the example of an AR($1$) time-series process with IID error terms with an arbitrary marginal distribution. If $|\alpha|<1$ then this model is a recurrent time-homogeneous Markov chain and its stationary distribution can be found by inverting it to an MA($\infty$) process: $$X_t = \sum_{k=0}^\infty \alpha^{k} e_{t-k} \quad \quad \quad e_t \sim \text{IID }f.$$ We can see that the process is a weighted sum of an infinite chain of IID error terms, where the weightings are exponentially decaying. The limiting distribution can be obtained from the error distribution $f$ by an appropriate convolution for this weighted sum. In general, this will depend on the form of $f$ and it may be a complicated distribution. However, it is worth noting that if the error distribution is not heavy-tailed, and if $\alpha \approx 1$ so that the decay is slow, then the limiting distribution will be close to a normal distribution, owing to approximation by the central limit theorem. 
Practical applications: In most applications of the AR($1$) time-series process we assume a normal error distribution $e_t \sim \text{IID N}(0, \sigma^2)$, which means that the stationary distribution of the process is: $$X_t \sim \text{N} \Big( 0, \frac{\sigma^2}{1-\alpha^2} \Big).$$ Regardless of the starting distribution for the process, this stationary distribution is the limiting distribution of the process. If we have reason to believe that the process has been running for a reasonable amount of time then we know that the process will have converged close to this limiting distribution, so it makes sense to assume that the process follows this distribution. Of course, as with any application of statistical modelling, we look at diagnostic plots/tests to see if the data falsify our assumed model form. Nevertheless, this form fits a broad class of cases where the AR($1$) model is used. What if a stationary distribution does not exist: There are certain time-series processes where the stationary distribution does not exist. This is most common when there is some fixed periodic aspect to the series, or some absorbing state (or other non-communicating classes of states). In this case there may not be a limiting distribution, or the limiting distribution might be a marginal distribution that is aggregated across multiple non-communicating classes, which is not all that useful. This is not inherently a problem - it just means you need a different kind of model that correctly represents the non-stationary nature of the process. This is more complicated, but statistical theory has ways and means of dealing with this.
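The convergence to this stationary distribution can be illustrated with a short simulation (the parameter values and starting point below are arbitrary choices): start many AR($1$) paths far from equilibrium and check the cross-sectional moments after a burn-in.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, sigma = 0.8, 1.0
n_paths, T = 50_000, 200

# start every path at X_0 = 10, far from the stationary mean of zero
X = np.full(n_paths, 10.0)
for _ in range(T):
    X = alpha * X + rng.normal(0.0, sigma, size=n_paths)

# after T steps the starting value has decayed by alpha^T, which is negligible
print(X.mean(), X.var())           # close to 0 and sigma^2/(1-alpha^2)
print(sigma**2 / (1 - alpha**2))   # 2.7777...
```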
32,909
Consistency of M-estimator based on plug-in estimator?
Background: For consistency when $\eta_0$ is known, we typically need a function $S(\theta, \eta)$ such that $$\sup_{\theta \in \Theta} \frac{| S_n(\theta,\eta_0) - S(\theta,\eta_0) |}{1 + | S_n(\theta,\eta_0)| + |S(\theta,\eta_0) |} \xrightarrow{p}\ 0$$ and, for every $\delta > 0$, $$\inf_{|\theta - \theta_0| > \delta}| S(\theta,\eta_0) | > 0 = |S(\theta_0, \eta_0)|,$$ with $S_n(\tilde{\theta},\eta_0) = o_p(1)$. Note that a more restrictive version of the first assumption is $$\sup_{\theta \in \Theta} | S_n(\theta,\eta_0) - S(\theta,\eta_0) | \xrightarrow{p}\ 0$$ From the infimum condition, for any $\delta >0 $ we have an $\epsilon > 0$ such that $$ P\left( \left| \tilde{\theta} - \theta_0 \right| > \delta \right) \le P\left( \left| S(\tilde{\theta},\eta_0) \right| \ge \epsilon \right) $$ Consistency can then be proved through $$ \begin{align}| S(\tilde{\theta},\eta_0) | &\le | S_n(\tilde{\theta}, \eta_0) | + |S(\tilde{\theta},\eta_0) - S_n(\tilde{\theta}, \eta_0) | \\ &\le o_p(1) + o_p(1+|S_n(\tilde{\theta},\eta_0)| + |S(\tilde{\theta}, \eta_0)|) \\ &= o_p(1 + |S(\tilde{\theta}, \eta_0)|) = o_p(1) \end{align} $$ Hence $P\left( | S(\tilde{\theta},\eta_0) | \ge \epsilon \right) \to 0$, which proves consistency. Solution: Suppose that in addition to the previous assumptions, either (1) $S_n(\theta,\eta)$ is stochastically continuous uniformly in $\theta$ with respect to $\eta$ at $\eta_0$ or (2) $S(\theta,\eta)$ is continuous uniformly in $\theta$ with respect to $\eta$ at $\eta_0$ with $S_n(\hat{\theta},\hat{\eta}) = o_p(1)$. If (1) is true the proof is straightforward, with $$ \begin{align} |S_n(\hat{\theta},\eta_0)| &\le |S_n(\hat{\theta},\hat{\eta})| + |S_n(\hat{\theta},\eta_0) - S_n(\hat{\theta},\hat{\eta})| \\ &\le |S_n(\hat{\theta},\hat{\eta})| + \sup_{\theta \in \Theta}|S_n(\theta,\hat{\eta}) - S_n(\theta,\eta_0)| \\ &= |S_n(\hat{\theta},\hat{\eta})| + o_p(1) \end{align}$$ with the last line true because of (1). 
We conclude that $\hat{\theta}$ also satisfies $S_n(\hat{\theta},\eta_0) = o_p(1)$, and the theory in the background section applies directly. If (2) is true, from the infimum condition we get that for any $\delta >0 $ there are $\epsilon_1 > 0$ and $\epsilon_2 > 0$ such that $$\inf_{\theta :|\theta-\theta_0| > \delta}\inf_{|\eta -\eta_0| \le \epsilon_2 }| S(\theta,\eta) | > \epsilon_1 $$ Therefore, we have $$ P\left( \left| \hat{\theta} - \theta_0 \right| > \delta \right) \le P\left( \left| S(\hat{\theta},\hat{\eta}) - S(\theta_0,\hat{\eta}) \right| > \epsilon_1 \right) + P(|\hat{\eta} - \eta_0| > \epsilon_2)$$ The last term goes to zero as $n \to \infty$. Then, we have $$ \begin{align} | S(\hat{\theta},\hat{\eta}) - S(\theta_0,\hat{\eta}) | &\le |S(\hat{\theta},\eta_0) - S(\theta_0,\eta_0)| \\ &+ |S(\hat{\theta},\hat{\eta}) - S(\hat{\theta},\eta_0)| + |S( \theta_0,\hat{\eta}) - S(\theta_0,\eta_0)| \\ &\le o_p(1) + 2\sup_{\theta \in \Theta}|S( \theta,\hat{\eta}) - S(\theta,\eta_0)| \\ &= o_p(1) \end{align} $$ where the last line is true because of (2).
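To make the abstract conditions concrete, here is a hypothetical numerical sketch: a Huber-type location estimate $\hat{\theta}$ solving $S_n(\hat{\theta},\hat{\eta}) \approx 0$, with the nuisance scale $\hat{\eta}$ plugged in via the MAD. The Huber $\psi$, the MAD plug-in, and the bisection solver are all illustrative choices, not part of the derivation above; the point is simply that the plug-in estimate tightens around $\theta_0$ as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(2)

def S_n(theta, eta, x, c=1.345):
    """Sample estimating equation: mean of the Huber psi of scaled residuals."""
    return np.clip((x - theta) / eta, -c, c).mean()

def m_estimate(x):
    # plug-in nuisance: MAD-based scale (consistent for sigma at the normal)
    eta_hat = np.median(np.abs(x - np.median(x))) / 0.6745
    # S_n is decreasing in theta, so bisection finds S_n(theta_hat, eta_hat) ~ 0
    lo, hi = x.min(), x.max()
    for _ in range(60):
        mid = (lo + hi) / 2
        if S_n(mid, eta_hat, x) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

theta0 = 3.0
for n in (100, 10_000, 1_000_000):
    est = m_estimate(rng.normal(theta0, 2.0, size=n))
    print(n, est)  # estimates concentrate around theta0 = 3 as n grows
```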
32,910
Should I normalize featurewise or samplewise
In business, data is mostly normalized feature-wise, as the aim is to study relationships across samples and to predict well on new samples. However, if your question aims at understanding relationships across features (which I haven't come across yet), it would be a different scenario. To classify people by their height-to-arm-length ratio, I would suggest introducing a new feature, 'height to arm length ratio', before normalization or standardization (you can find mathematical formulas at https://stats.stackexchange.com/a/10298) and then proceeding. Hope this helps!
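As a sketch of this suggestion (the numbers are made up): build the ratio column first, then standardize every feature column-wise.

```python
import numpy as np

# toy data: one row per subject; columns are height (cm) and arm length (cm)
X = np.array([[170.0, 75.0],
              [185.0, 82.0],
              [160.0, 68.0]])

# engineer the ratio as its own feature *before* any scaling
ratio = X[:, 0] / X[:, 1]
X_aug = np.column_stack([X, ratio])

# feature-wise standardization: each column ends up with mean 0 and sd 1
Z = (X_aug - X_aug.mean(axis=0)) / X_aug.std(axis=0)
print(Z.mean(axis=0))  # all approximately 0
print(Z.std(axis=0))   # all approximately 1
```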
32,911
Should I normalize featurewise or samplewise
You always normalize feature-wise. (Typically you would subtract the feature mean, then divide by the feature standard deviation, rather than the proportion-of-total you consider.) In your example, that Arm_length is higher in Subject_2 than Height after normalizing is not a problem, because ML/statistics algorithms don't compare feature values to one another within subjects. They only compare between subjects, within features. Comparing features would be comparing apples to oranges. Relationships can be modeled using interactions, which work fine with standardization.
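A minimal sketch of the feature-wise scheme (made-up numbers): statistics are computed down each column, and the ordering of subjects within a feature is unchanged by the scaling.

```python
import numpy as np

# rows = subjects, columns = features (height in cm, arm length in cm)
X = np.array([[170.0, 75.0],
              [185.0, 82.0],
              [160.0, 68.0]])

# subtract each feature's mean and divide by its standard deviation (axis=0)
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# between-subject comparisons within a feature survive the transformation
print(np.argsort(X[:, 0]))  # ordering of subjects by raw height
print(np.argsort(Z[:, 0]))  # same ordering after standardization
```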
32,912
What are the best methods for reducing false positives in TensorFlow Mask-RCNN object detection framework using transfer learning?
A lot of people I see online have been running into the same issue using the Tensorflow API. I think there are some inherent problems with the idea/process of using the pretrained models with custom classifier(s) at home. For example, people want to use SSD Mobile or Faster RCNN Inception to detect objects like "person with helmet," "pistol," or "tool box," etc. The general process is to feed in images of that object, but most of the time, no matter how many images (200 to 2000), you still end up with false positives when you actually run it at your desk. The object classifier works great when you show it the object in its own context, but you end up getting 99% matches on everyday items like your bedroom window, your desk, your computer monitor, keyboard, etc. People have mentioned the strategy of introducing negative images or soft images. I think the problem has to do with the limited context in the images that most people use. The pretrained models were trained with over a dozen classifiers in a wide variety of environments; one example could be a car on the street. The CNN sees the car, and then everything in that image that is not a car is a negative image, which includes the street, buildings, sky, etc. In another image, it can see a bottle and everything in that image, which includes desks, tables, windows, etc. I think the problem with training custom classifiers is that it is a negative-image problem. Even if you have enough images of the object itself, there isn't enough data of that same object in different contexts and backgrounds. So in a sense, there are not enough negative images, even if conceptually you shouldn't need negative images. When you run the algorithm at home you get false positives all over the place, identifying objects around your own room. I think the idea of transfer learning in this way is flawed. 
We just end up seeing a lot of great tutorials online of people identifying playing cards, Millennium Falcons, etc., but none of those models are deployable in the real world, as they all would generate a bunch of false positives when they see anything outside of their image pool. The best strategy would be to retrain the CNN from scratch with multiple classifiers and add the desired ones in there as well. I suggest re-introducing a previous dataset from ImageNet or Pascal with 10-20 pre-existing classifiers, adding your own, and retraining it.
32,913
Termination Condition(s) for Expectation Maximization
The two options mentioned in the answer by @YairDaon (likelihood difference and relative likelihood difference) are the most straightforward and robust options. However, I would caution against using the relative likelihood difference: $l(\theta)$ is often not the full log-likelihood, but only a part of it with constant (and some other less important) terms omitted. Thus, the magnitude of $|\left[l(\theta_{n+1})-l(\theta_n)\right]/l(\theta_n)|$ would depend on what is omitted, and will be hard to interpret. Moreover, the absolute value of the likelihood is likely to be sensitive to the sample size (e.g., the number of terms in $l(\theta)=\sum_{i=1}^N\log p(x_i|\theta)$), which means that the stopping criterion would produce results of different quality depending on the sample size. On the other hand, the absolute difference criterion $|l(\theta_{n+1})-l(\theta_n)|<\epsilon$ has a very transparent interpretation: the likelihood changes by a factor of $e^\epsilon \approx 1+\epsilon$. $\epsilon=0.001$ is indeed the value usually taken as a first try, as indicated in other answers.
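For concreteness, here is a minimal EM sketch for a two-component 1-D Gaussian mixture that uses the absolute log-likelihood difference as its termination condition (the simulated data and the quantile-based initialization are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])

def em_gmm2(x, tol=1e-3, max_iter=500):
    # crude initialization: quartiles for the means, pooled sd, equal weights
    mu = np.array([np.quantile(x, 0.25), np.quantile(x, 0.75)])
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    prev_ll = -np.inf
    for _ in range(max_iter):
        # E-step: component densities and responsibilities
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        ll = np.log(dens.sum(axis=1)).sum()
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted updates
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        w = nk / x.size
        # absolute log-likelihood difference as the termination condition
        if abs(ll - prev_ll) < tol:
            break
        prev_ll = ll
    return mu, ll

mu, ll = em_gmm2(x)
print(np.sort(mu))  # close to the true means (-2, 3)
```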
32,914
Termination Condition(s) for Expectation Maximization
You can use an absolute error criterion (e.g. difference between successive iterations $|\ell(\theta_{n+1}) - \ell(\theta_n)|< 10^{-4}$). You can also choose a relative error criterion, e.g. $\frac{|\ell(\theta_{n+1}) - \ell(\theta_n)|}{|\ell(\theta_n)|}< 10^{-4}$. This is somewhat more reasonable because you do not really know how large or small your log-likelihood is and normalizing makes your stopping condition independent of scale. Just make sure you do not set the desired tolerance too small or you will never achieve it.
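Both rules are one-liners; a hypothetical sketch (`ll_new` and `ll_old` are successive log-likelihood values) that also shows how much looser the relative rule is when the log-likelihood is large in magnitude:

```python
def abs_converged(ll_new, ll_old, tol=1e-4):
    # absolute criterion: |l(theta_{n+1}) - l(theta_n)| < tol
    return abs(ll_new - ll_old) < tol

def rel_converged(ll_new, ll_old, tol=1e-4):
    # relative criterion: scale-free, but ill-behaved if ll_old is near 0
    return abs(ll_new - ll_old) < tol * abs(ll_old)

# with log-likelihoods of magnitude ~1e4, the relative rule fires much earlier
print(abs_converged(-10000.5, -10001.0))  # False (difference is 0.5 > 1e-4)
print(rel_converged(-10000.5, -10001.0))  # True  (0.5 < 1e-4 * 10001)
```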
32,915
Termination Condition(s) for Expectation Maximization
Practically speaking, you set a threshold on the change in likelihood (for instance, the 1e-3 default in sklearn) and/or a maximum number of iterations, say 100. You also try several different initializations, to reduce the risk of converging to a local optimum, and choose the best model. If time and computation permit, you can raise the iteration cap and/or make the likelihood-change threshold smaller, e.g. 1e-4 or the like.
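A sketch of that recipe in pure Python (hill_climb is a hypothetical stand-in for one EM run, not a real EM implementation): each run stops on a likelihood-change tolerance or an iteration cap, restarts come from several spread-out initializations, and the run with the best final likelihood wins.

```python
# Toy objective with a local optimum near -2 and the global one near +3.
def ll(t):
    return max(-(t + 2) ** 2, 1.0 - (t - 3) ** 2)

def hill_climb(t, step=0.01, tol=1e-3, max_iter=2000):
    """Crude ascent standing in for one EM run: stop when the likelihood
    change drops below tol, or after max_iter sweeps."""
    for _ in range(max_iter):
        best = max([t - step, t, t + step], key=ll)
        if abs(ll(best) - ll(t)) < tol:
            break
        t = best
    return t

# Several initializations; keep the run with the best final likelihood.
runs = [hill_climb(s) for s in (-5, -3, -1, 1, 3, 5)]
best = max(runs, key=ll)
print(best)  # near the global optimum at 3
```

Note how some starts settle at the inferior local optimum, which is exactly why multiple initializations are worth the cost.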
32,916
Termination Condition(s) for Expectation Maximization
TLDR: try a default threshold like $10^{-3}$ first. If that is slow, switch to a faster-than-linear algorithm that uses/approximates the Hessian. It greatly depends. If your log likelihood has a low curvature, then you could be stuck waiting a long time for the objective function increments to fall below a fixed threshold like $10^{-3}$. This is because EM is only a first-order optimization method. So, my advice would be to first run your EM algorithm using one of the default recommendations like $10^{-3}$. By the way, this default number should be with respect to the average objective (e.g., divide the ELBO sum by the sample size $N$). Ideally, it would terminate relatively quickly. However, if you get stuck in a long loop where each step is always increasing the objective function by only a small amount (but larger than your threshold), then you should look into an algorithm with a better convergence rate. For example, if you can compute the EM steps, you can probably calculate the gradient instead (eq. 8). With the gradient you can run a faster-than-linear algorithm like BFGS that is widely implemented. Personally, this trick has saved me countless hours.
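A toy illustration of the convergence-rate point (pure Python, not scipy's BFGS; Newton's method serves as the simplest Hessian-based stand-in): first-order updates with a small step crawl toward the optimum, while second-order updates get there in a handful of iterations.

```python
import math

# Toy MLE: Poisson rate on the log scale, lambda = exp(theta).
# Up to constants, l(theta) = S*theta - n*exp(theta),
# with gradient S - n*exp(theta) and Hessian -n*exp(theta).
data = [3, 5, 4, 6, 2, 4, 5, 3]
n, S = len(data), sum(data)

def loglik(t):
    return S * t - n * math.exp(t)

def ascend(update):
    """Iterate an update rule until the log-likelihood change is tiny."""
    t, steps, ll_prev = 0.0, 0, loglik(0.0)
    while True:
        t = update(t)
        steps += 1
        ll_new = loglik(t)
        if abs(ll_new - ll_prev) < 1e-10:
            return t, steps
        ll_prev = ll_new

# First-order (linear convergence, like EM): small fixed gradient step.
t1, steps_grad = ascend(lambda t: t + 0.01 * (S - n * math.exp(t)))
# Second-order (Newton): scale the gradient by the inverse (negative) Hessian.
t2, steps_newton = ascend(lambda t: t + (S - n * math.exp(t)) / (n * math.exp(t)))
print(steps_grad, steps_newton)  # Newton needs far fewer iterations
```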
32,917
Cross-correlation of two non-stationary time series?
why don't you post your data and I will try and help you. Box and Jenkins suggested pre-filtering, where the differencing operator identified as part of the ARIMA process was used as part of the filter to identify model form via the resultant cross-correlation. See https://web.archive.org/web/20160216193539/https://onlinecourses.science.psu.edu/stat510/node/75/ AND http://www.math.cts.nthu.edu.tw/download.php?filename=569_fe0ff1a2.pdf&dir=publish&title=Ruey+S.+Tsay-Lec1 AND http://autobox.com/cms/images/dllupdate/TFFLOW.png . However, if both X and Y are I(1), it is still possible that Y and X can be related without any differencing at all, which suggests that alternative approaches (incorporating differencing in the TF model or not) should be on the table.
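A small numpy sketch of why the pre-filtering matters (in the simplest case, where the identified filter is plain first differencing): two independent random walks typically show a sizeable spurious correlation in levels, which vanishes once both series are differenced to stationarity.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Two independent I(1) series: any correlation in levels is spurious.
x = np.cumsum(rng.normal(size=n))
y = np.cumsum(rng.normal(size=n))
r_raw = np.corrcoef(x, y)[0, 1]

# Pre-filter by differencing, then cross-correlate the stationary series.
r_diff = np.corrcoef(np.diff(x), np.diff(y))[0, 1]
print(f"levels r = {r_raw:.3f}, differenced r = {r_diff:.3f}")
```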
32,918
High dimensional, correlated data and top features/ covariates discovered; multiple hypothesis testing?
There are no problems with the accuracy of the predictions. The uncertainty in your predictions is estimated well by cross-validation. Maybe one caveat there is that if you test a lot of parameter settings, then you overestimate the accuracy, so you should use a validation set to estimate the accuracy of your final model. Also, your data should be representative of the data that you are going to do predictions on. It is clear to you, and it should be clear to the reader, that your predictors are not causes of the effect, they are just predictors that make a good prediction, and work well empirically. While I completely agree with your caution, inferring any causation from observational data is problematic in any case. Things like significance and such are "valid" concepts in well-designed, controlled studies, and outside of that they are merely tools that you, and others, should interpret wisely and with caution. There can be common causes, spurious effects, masking and other things going on in a normal linear regression with reported confidence intervals, as well as in a lasso model, as well as in a gradient boosted tree model.
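The caveat about testing many parameter settings can be made concrete with a small numpy simulation (illustrative only): on pure-noise labels, picking the best of many useless "models" by selection-set accuracy looks impressive, but the winner falls back to chance on held-out data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sel, n_hold, K = 200, 2000, 50

# Labels are pure noise: no model can truly beat 50% accuracy.
y_sel = rng.integers(0, 2, n_sel)
y_hold = rng.integers(0, 2, n_hold)

# K useless candidate "models": random predictions on both data sets.
preds_sel = rng.integers(0, 2, (K, n_sel))
preds_hold = rng.integers(0, 2, (K, n_hold))

# "Tune" on the selection data, then evaluate the winner honestly.
sel_acc = (preds_sel == y_sel).mean(axis=1)
best = int(np.argmax(sel_acc))
hold_acc = (preds_hold[best] == y_hold).mean()
print(sel_acc[best], hold_acc)  # optimistic winner vs. honest estimate near 0.5
```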
32,919
Are there realistic/relevant use-cases for one way ANOVA?
You pose the following question: Are there really research questions where all you want to know is whether at least any one of those groups is different from any other group, but where you don't care which ones they are? Yes, here is one such example. Research Question: Do students randomly assigned to different teaching assistants' recitation sections do comparably well on key course assessment indicators (say, the final exam)? I think the issue with the way you've presented your query is that it seems to suggest only ANOVA with statistically significant results are possible for RQs. However, the RQ here is still reasonable (something someone might want to know), and it so happens that the hope is most likely NOT to find a statistically significant finding. That said, if your query is specifically, are there other methods than 1-way ANOVA when you are expecting a difference? then I would agree...it might be harder to find an authentic RQ example. To address the second query posed: Yet in this frequent scenario, I often see the ANOVA done anyway. Is this just a historic relic? I would argue that starting with the planned comparisons without first confirming that a difference is actually present (i.e., an omnibus test) is a quasi-failure to confirm assumptions of the test. I would argue, thus, that it is not just an historic relic, but a process that should be encouraged (even in the more pedantic examples like an ANOVA). As a reviewer that has encountered more than one manuscript where subsequent significant findings (even with MCP adjustments) were reported when the MANOVA failed to detect a difference...I think there is something to be said in maintaining the omnibus protocol for 1-way ANOVA and subsequent MCPs.
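The TA-sections example can be sketched with a one-way ANOVA F statistic plus a permutation omnibus test (pure numpy, simulated data): shuffle section labels under the null of no section effect and compare the observed F to the permutation distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

def f_stat(groups):
    allx = np.concatenate(groups)
    grand = allx.mean()
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)
    k, n = len(groups), len(allx)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Three "sections" of 50 students; the third genuinely differs.
groups = [rng.normal(0, 1, 50), rng.normal(0, 1, 50), rng.normal(1.5, 1, 50)]
f_obs = f_stat(groups)

# Omnibus permutation test: reshuffle section labels under H0.
pooled = np.concatenate(groups)
cuts = np.cumsum([len(g) for g in groups])[:-1]
perm_f = np.array([f_stat(np.split(rng.permutation(pooled), cuts))
                   for _ in range(2000)])
p_value = (perm_f >= f_obs).mean()
print(f_obs, p_value)
```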
32,920
Are there realistic/relevant use-cases for one way ANOVA?
An interesting historical use-case is RA Fisher's explanation of ANOVA in the chapter 'Intraclass Correlations and the Analysis of Variance' in 'Statistical Methods for Research Workers' (there are several online versions which can be found for instance via the Wikipedia article). There he introduces ANOVA with intraclass correlation as a special case (which is an example of one-way ANOVA). (An earlier example of ANOVA occurred in 1923, which compared yield from different type potatoes and manure, and is an example of two-way ANOVA, but the one-way ANOVA relating to intraclass correlation is just as much a realistic research question / use case. I actually do not see why two-way versus one-way makes a difference in suitability of the application of ANOVA) The question answered by ANOVA is just this: Where does variation come from? Is it due to intra-class variation or between class variation? One-way, two-way, multi-way. The dimension does not matter for such question. The question answered by ANOVA, in whatever dimension, is just whether samples of the same class correlate or not, whether the class plays a role in the variation. Another way to put my question is to state that the heavy lifting is not done by the ANOVA but by the planned contrasts or exhaustive post-hoc tests you do along with it. But do those really rely on the ANOVA? Many research questions do not care about specific contrasts. For instance, in Fisher's example the research question could be: do the heights of siblings correlate? And you do not care about a specific group of siblings to be different, and look at the overall variance between the groups (is the variance between groups just random variance of individuals, or is there some principle that causes variation between groups?) Yet in this frequent scenario, I often see the ANOVA done anyway. Is this just a historic relic? ANOVA can sometimes be done before individual tests of contrasts as a control in the multiple comparisons problem. E.g. 
see here: https://en.m.wikipedia.org/wiki/Multiple_comparisons_problem#Controlling_procedures Methods which rely on an omnibus test before proceeding to multiple comparisons. Typically these methods require a significant ANOVA, MANOVA, or Tukey's range test. These methods generally provide only "weak" control of Type I error, except for certain numbers of hypotheses.
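Fisher's sibling-heights use case can be sketched numerically (numpy, simulated data): the one-way ANOVA mean squares for groups of siblings give a moment estimate of the intraclass correlation, answering "does the class play a role in the variation?" without naming any specific contrast.

```python
import numpy as np

rng = np.random.default_rng(7)
k, n = 200, 4                    # 200 families, 4 siblings each
sigma_b, sigma_w = 1.0, 1.0      # between/within SDs -> true ICC = 0.5

heights = (rng.normal(0, sigma_b, (k, 1))     # shared family effect
           + rng.normal(0, sigma_w, (k, n)))  # individual variation

# Balanced one-way ANOVA mean squares.
fam_means = heights.mean(axis=1)
msb = n * ((fam_means - heights.mean()) ** 2).sum() / (k - 1)
msw = ((heights - fam_means[:, None]) ** 2).sum() / (k * (n - 1))

# Moment estimator of the intraclass correlation from the mean squares.
icc = (msb - msw) / (msb + (n - 1) * msw)
print(icc)  # close to the true value 0.5
```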
32,921
Can we conclude from $\DeclareMathOperator{\E}{\mathbb{E}}\E g(X)h(Y)=\E g(X) \E h(Y)$ that $X,Y$ are independent?
Let $(\Omega, \mathscr F, P)$ be a probability space. By definition two random variables $X, Y :\Omega \to \mathbb R$ are independent if their $\sigma$-algebras $S_X := \sigma(X)$ and $S_Y := \sigma(Y)$ are independent, i.e. $\forall A \in S_X, B \in S_Y$ we have $P(A \cap B) = P(A)P(B)$. Let $g_a(x) = I(x \leq a)$ and take $G = \{g_a : a \in \mathbb Q\}$ (thanks to @grand_chat for pointing out that $\mathbb Q$ suffices). Then we have $$ E\left(g_a(X)g_b(Y)\right) = E(I(X \leq a)I(Y \leq b)) = E(I(X \leq a, Y \leq b)) = P(X \leq a \cap Y \leq b) $$ and $$ E(g_a(X))E(g_b(Y)) = P(X \leq a)P(Y \leq b). $$ If we assume that $\forall a, b \in \mathbb Q$ $$ P(X \leq a \cap Y \leq b) = P(X \leq a)P(Y \leq b) $$ then we can appeal to the $\pi-\lambda$ theorem to show that $$ P(A \cap B) = P(A)P(B) \hspace{5mm} \forall A \in S_X, B \in S_Y $$ i.e. $X \perp Y$. So unless I've made a mistake, we've at least got a countable collection of such functions and this applies to any pair of random variables defined over a common probability space.
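A concrete companion to the proof (exact arithmetic via fractions): the classic uncorrelated-but-dependent pair shows why a single product moment is not enough, while the indicator moments $g_a(X)g_b(Y)$ used above do detect the dependence.

```python
from fractions import Fraction

# X uniform on {-1, 0, 1}, Y = 1 if X == 0 else 0.
pmf = {(-1, 0): Fraction(1, 3), (0, 1): Fraction(1, 3), (1, 0): Fraction(1, 3)}

def E(f):
    return sum(p * f(x, y) for (x, y), p in pmf.items())

# The single product moment factorises (X and Y are uncorrelated)...
exy = E(lambda x, y: x * y)
ex_ey = E(lambda x, y: x) * E(lambda x, y: y)

# ...but the indicator moments with a = -1, b = 0 do not factorise:
lhs = E(lambda x, y: (x <= -1) * (y <= 0))             # P(X <= -1, Y <= 0)
rhs = E(lambda x, y: x <= -1) * E(lambda x, y: y <= 0)
print(exy, ex_ey, lhs, rhs)  # 0 0 1/3 2/9
```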
32,922
Using pseudo-priors properly in Bayesian model selection
What is going on here? This is a very generic question with the obvious answer to study in detail Carlin & Chib (1995). The essential idea is to consider the joint parameter $(m,\theta_1,\theta_2)$ where $m$ denotes the model index ($m=1,2$) and $\theta_1,\theta_2$ the parameters of both models, in the sense that the data comes from the density $$f(x|m,\theta_1,\theta_2)=f_m(x|\theta_m)$$ i.e. one of the two parameters $\theta_{3-m}$ is superfluous once the model index $m$ is set. Once this completion is done, a prior has to be chosen on the triplet $(m,\theta_1,\theta_2)$, which is $$\pi(m,\theta_1,\theta_2)=\pi(m)\pi_m(\theta_m)\tilde\pi_m(\theta_{3-m})$$ where I denote by $\pi(m)$ and $\pi_m(\theta_m)$ the true priors on the model index and on the parameter of each model. The additional $\tilde\pi_m(\theta_{3-m})$ is free because the posterior on $\theta_{3-m}$ is equal to the prior: $$\pi(m,\theta_1,\theta_2|x)=\pi(m|x)\pi_m(\theta_m|x)\tilde\pi_m(\theta_{3-m})$$ The data does not impact the parameter it does not depend on. And thus inference about $\theta_m$ is not impacted by the choice of $\tilde\pi_m(.)$. In practice, this means that the algorithm of simulating from the augmented model produces (i) a frequency for each model, approximating the posterior probability of this model; (ii) a sequence of parameters $\theta_m$ when $m$ is the model index, to be used for inference on this parameter; (iii) a sequence of parameters $\theta_{3-m}$ when $m$ is the model index, to be ignored. how do we square this with my intuition and Plummer's comment? What Martyn Plummer means in his comment is that the pseudo-prior does not matter for the parameter with the other index $3-m$ but must be the true prior on the parameter with the current index $m$. This is 100% coherent with the Carlin & Chib (1995) paper. does this mean it's impermissible to use pseudo-priors corresponding exactly to the true priors? Pseudo-priors can be taken as the true priors, provided these are proper. 
But as Carlin & Chib (1995) indicate, it is much more efficient to take an approximation of the true posterior, $\pi_{3-m}(\theta_{3-m}|x)$, approximation that can be obtained by a preliminary MCMC run for each model. What if the indicator variable turns on and off a part of the model with several parameters The resolution to this conundrum is to consider different sets of parameters for all different models, i.e., to have no common parameters between any two models. If you are in a variable selection problem, this means using a different parameter and a different notation for the coefficient of variable $X_1$ when $X_2$ is part of the regression and when $X_2$ is not part of the regression. From this point, use any pseudo-prior you want on the superfluous parameters. What does @Xi'an mean in the first comment I mean that if the probabilities of visits to the two models are not the probabilities in the prior, the posterior probability of one model estimated by the simulated frequency must be corrected.
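One step worth spelling out: since the pseudo-prior is a proper density, it integrates out of the model posterior entirely, so it can only affect the mixing of the sampler, never its target: $$\pi(m|x)\propto\pi(m)\int f_m(x|\theta_m)\pi_m(\theta_m)\,\text{d}\theta_m\int \tilde\pi_m(\theta_{3-m})\,\text{d}\theta_{3-m}=\pi(m)\int f_m(x|\theta_m)\pi_m(\theta_m)\,\text{d}\theta_m$$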
32,923
Cumulative hazard in the setting of Cox regression of repeated events
Therneau's main R survival vignette explains this pretty clearly in Chapter 2, "Survival curves," for the situation without covariates. Section 2.2, "Repeated events," covers "the case of a single event type, with the possibility of multiple events per subject." That's the situation posited in this question. Quoting (emphasis in the original): In multi-event data, the cumulative hazard is an estimate of the expected number of events for a unit that has been observed for the given amount of time, whereas the survival $S$ estimates the probability that a unit has had 0 repairs. The cumulative hazard is the more natural quantity to plot in such studies; in reliability analysis it is also known as the mean cumulative function. That is, the mean cumulative function is the cumulative hazard function in this situation. Section 3.2 of the vignette extends this non-parametric display of repeated-events data to semi-parametric Cox proportional-hazards models. That allows the cumulative hazard function, the expected number of events as a function of time, to be estimated for any specified set of covariate values. The identity of the "cumulative hazard function" and "mean cumulative count" is clear here, too: Perhaps more interesting in this situation is the expected number of infections, rather than the probability of having at least 1. The former is estimated by the cumulative hazard... as illustrated in the final figure of that section, a plot of the expected number of infections in the chronic granulomatous disease data set over time for 4 different combinations of covariate values.
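The "expected number of events" reading of the cumulative hazard is easy to check by simulation (pure Python, illustrative data): with repeated events and no censoring, the number at risk Y(s) stays at n, so the Nelson-Aalen sum of d(s)/Y(s) is exactly the mean number of events per unit, and it tracks the true event rate.

```python
import random

random.seed(3)
n, rate, T = 400, 0.7, 5.0

# Each unit experiences repeated events as a Poisson process on [0, T].
event_times = []
for _ in range(n):
    t = 0.0
    while True:
        t += random.expovariate(rate)
        if t > T:
            break
        event_times.append(t)

# Nelson-Aalen cumulative hazard at T: sum of d(s)/Y(s) over event times.
# With no censoring, Y(s) = n throughout, so this equals the mean number
# of events per unit -- the mean cumulative function.
H_T = sum(1.0 / n for _ in event_times)
print(H_T)  # close to the expected count rate * T = 3.5
```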
32,924
Understanding Median Frequency Balancing?
My interpretation is as follows:

"number of pixels of class c": the total number of pixels of class c across all images of the dataset.

"The total number of pixels in images where c is present": the total number of pixels across all images (where there is at least one pixel of class c) of the dataset.

"median frequency is the median of these frequencies": sort the frequencies calculated above and pick the median.

A possible technique for calculating the frequency of each class:

    classPixelCount = [array of class.size() zeros]
    classTotalCount = [array of class.size() zeros]
    for each image in dataset:
        perImageFrequencies = bincount(image)
        classPixelCount = element_wise_sum(classPixelCount, perImageFrequencies)
        nPixelsInImage = image.total_pixel_count()
        for each class c with perImageFrequencies[c] > 0:
            classTotalCount[c] = classTotalCount[c] + nPixelsInImage
    return elementwiseDivision(classPixelCount, classTotalCount)

If you assume that every image must contain every class and every image is of the same size, this simplifies to:

    classPixelCount = [array of class.size() zeros]
    for each image in dataset:
        perImageFrequencies = bincount(image)
        classPixelCount = element_wise_sum(classPixelCount, perImageFrequencies)
    totalPixels = sumElementsOf(classPixelCount)
    return elementwiseDivision(classPixelCount, totalPixels)

Finally, to calculate the class weights (note there is no need to sort; dividing the median by each class's own frequency keeps the class correspondence):

    frequencies = [the per-class frequencies computed above]
    medianFreq = median(frequencies)
    return elementwiseDivision(medianFreq, frequencies)
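The pseudocode above can be sketched as runnable Python. The function name `median_frequency_weights` and the list-based input format are my own choices for illustration, not from the paper:

```python
from statistics import median

def median_frequency_weights(images, n_classes):
    """Median frequency balancing weights.

    images: list of flattened label maps (lists of int class ids).
    freq(c) = (pixels of class c) / (total pixels of images where c appears);
    weight(c) = median of the freqs divided by freq(c).
    Returns None for classes absent from the dataset.
    """
    class_pixels = [0] * n_classes    # pixels of class c over the dataset
    present_pixels = [0] * n_classes  # pixels of images containing class c

    for img in images:
        counts = [0] * n_classes
        for label in img:
            counts[label] += 1
        for c in range(n_classes):
            class_pixels[c] += counts[c]
            if counts[c] > 0:               # class c present in this image
                present_pixels[c] += len(img)

    freqs = [class_pixels[c] / present_pixels[c]
             for c in range(n_classes) if present_pixels[c] > 0]
    med = median(freqs)
    return [med * present_pixels[c] / class_pixels[c] if class_pixels[c] else None
            for c in range(n_classes)]
```

For example, with two 4-pixel "images" `[0, 0, 0, 1]` and `[0, 0, 1, 1]`, class 0 has frequency 5/8 and class 1 has 3/8, so the majority class gets a weight below 1 and the minority class a weight above 1.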
32,925
Understanding Median Frequency Balancing?
My implementation is like this, and the weird thing is that I can get weights smaller than 1 for a small class, which would reduce that class's effect on the loss:

    from glob import glob
    from tqdm import tqdm
    from PIL import Image
    from collections import defaultdict
    import numpy as np

    path = '*.png'
    nb_class = 4
    total_freq = [[0, 0] for _ in range(nb_class)]
    freq_list = defaultdict(list)
    for f in tqdm(glob(path)):
        image = Image.open(f)
        image = np.asarray(image)
        # total pixels in one image
        total_pixel = len(image.flatten())
        for i in range(nb_class):
            # number of pixels of class c
            freq_c_num = len(image[image == i].flatten())
            # frequency of pixels of class c
            freq_c = freq_c_num * 1.0 / total_pixel
            # where c is present
            if freq_c_num > 0:
                freq_list[i].append(freq_c)
            # number of pixels of class c
            total_freq[i][0] += freq_c_num
            # total pixels
            total_freq[i][1] += total_pixel

    for i in range(nb_class):
        # median_freq
        median_freq = np.median(freq_list[i])
        # freq(c)
        tmp = total_freq[i][0] * 1.0 / total_freq[i][1]
        # a_c
        print(i, median_freq / tmp)
32,926
ATT vs ATE in propensity score matching when using DiD estimates
The article is blocked behind a paywall. Nonetheless, I think the major terms and components can be addressed based on your description.

Propensity score weighting does not weight by the "odds" or weight by the "inverse". Propensity score weighting weights observations by the inverse of the probability of receipt of the treatment.

A difference-in-differences is an estimand, not a response variable. The advantages of ANCOVA, modeling the outcome adjusting for baseline values as a covariate, over a change-score approach have been discussed several times on this site. See here for a lively and thorough discussion. Even so, the difference between the two approaches is a fixed effect vs. an offset; thus the outcome is always just the response variable, and hence the formatting of the response variable and the interpretation of the treatment-receipt coefficient as a difference-in-differences are the same in both approaches.

The average treatment effect on the treated and the average treatment effect (on the sample) are not designations I've heard before. By definition we estimate the ATE by subtracting a comparable set of differences that would be found in an untreated group. In a clinical study this would be called a Hawthorne effect; in observational studies it is usually a type of prevalent case bias. Together, they are types of pre/post differences that do not arise as a form of confounding, so this is not addressable by propensity score weighting. Conversely, regardless of the presence of these effects, confounding by indication is capable of exaggerating (or attenuating) treatment effects. Propensity score methods (matching or weighting) are still needed to control for confounding effects.
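As a toy illustration of the first point (weighting by the inverse of the probability of the treatment actually received), here is a hedged sketch of a Horvitz-Thompson-style ATE estimate. The function name, the data layout, and the two-stratum example are invented for illustration only:

```python
def ipw_ate(data):
    """Inverse-probability-weighted (Horvitz-Thompson) ATE estimate.

    data: list of (t, y, e) with treatment t in {0, 1}, outcome y, and
    e = estimated propensity Pr(T = 1 | covariates).  Each treated unit
    is weighted by 1/e, each control unit by 1/(1 - e).
    """
    n = len(data)
    treated = sum(t * y / e for t, y, e in data) / n
    control = sum((1 - t) * y / (1 - e) for t, y, e in data) / n
    return treated - control

# toy two-stratum example: the treatment effect is 2 in both strata
toy = ([(1, 3, 0.8)] * 4 + [(0, 1, 0.8)]      # stratum with e = 0.8
       + [(1, 2, 0.2)] + [(0, 0, 0.2)] * 4)   # stratum with e = 0.2
```

Here `ipw_ate(toy)` returns 2.0 (up to floating-point rounding): the weighting recovers the true effect exactly because the empirical treated share in each stratum matches its propensity score.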
32,927
how to help the tree-based model extrapolate? [duplicate]
It seems to me that tree-based models are very bad at extrapolation; please look at this discussion: https://www.kaggle.com/c/web-traffic-time-series-forecasting/discussion/38352. Some people have also pointed out that XGBoost has some weak "potential" for extrapolation (https://github.com/dmlc/xgboost/issues/1581), but in general, and in my own applications, I do not see that "potential" really help. Here is one workaround using a stacked model: https://www.kaggle.com/serigne/stacked-regressions-top-4-on-leaderboard. I am hitting the wall of xgboost myself and have turned to deep learning algorithms for cases requiring extrapolation.
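To make the failure mode concrete, here is a minimal sketch (a hand-rolled one-split regression tree, not XGBoost) showing that a tree's prediction flat-lines outside the training range, since every leaf can only predict a mean of training targets:

```python
def fit_stump(xs, ys):
    """One-split regression tree: pick the threshold minimizing squared
    error; each leaf predicts the mean of its training targets."""
    best = None
    for t in xs:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    _, t, ml, mr = best
    return lambda x: ml if x <= t else mr

xs = list(range(11))        # training inputs 0..10
ys = [2 * x for x in xs]    # a perfectly linear target, y = 2x
predict = fit_stump(xs, ys)
```

However far you move outside the training range, the prediction stays at a leaf mean learned on [0, 10]: `predict(100)` equals `predict(11)` and is nowhere near the linearly extrapolated value 200. Deeper trees and boosted ensembles refine the leaves but share the same constraint.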
32,928
Probability of an independent Poisson process overtaking another
Let the collective times of the processes be $T = (0=t_0 \lt t_1 \lt t_2 \lt \cdots).$ Because these are independent Poisson processes, almost surely exactly one of them is observed at each of these times. For $i\gt 0,$ define $$B(i) = \left\{\matrix{+1 & \text{if } R(t_i)=1 \\ -1 & \text{if }M(t_i)=1}\right.$$ and accumulate the $B(i)$ into the process $W:$ that is, $W(0)=0$ and $W(i+1)=W(i)+B(i)$ for all $i \gt 0.$ $W(i)$ counts how many more times $R$ has appeared than $M$ just after time $t_i.$

This figure shows realizations of $R$ (in red) and $M$ (in medium blue) as "rug plots" at the top. The points plot the values of $(t_i, W(i))$. Each red point represents an increase in the excess $R(t_i)-M(t_i)$ while each blue point shows a decrease in the excess.

For $b=0, 1, 2, \ldots, $ let $\mathcal{E}_b$ be the event that at least one of the $W_i$ is less than or equal to $-b$ and let $f(b)$ be its probability. The question asks for $f(D+1).$

Let $\lambda=\lambda_R+\lambda_M.$ This is the rate of the combined processes. $W$ is a Binomial random walk, because $$\Pr(B(i) = 1) = \frac{\lambda_R}{\lambda}\text{ and } \Pr(B(i)=-1) = \frac{\lambda_M}{\lambda}.$$ Thus,

The answer equals the chance that this Binomial random walk $W$ encounters an absorbing barrier at $-D-1.$

The most elementary way to find this chance observes that $$f(0)=1$$ because $W(0)=0;$ and, for all $b \gt 0,$ the two possible next steps of $\pm 1$ recursively yield $$f(b) = \frac{\lambda_R}{\lambda} f(b+1) + \frac{\lambda_M}{\lambda} f(b-1).$$ Assuming $\lambda_R \ge \lambda_M,$ the unique bounded solution for $b\ge 0$ is $$f(b) = \left(\frac{\lambda_M}{\lambda_R}\right)^b,$$ as you can check by plugging this into the foregoing defining equations. Thus,

The answer is $$\Pr(\mathcal{E}_{D+1})=f(D+1)=\left(\frac{\lambda_M}{\lambda_R}\right)^{D+1}.$$
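A quick Monte Carlo check of the closed form, using the merged-process reduction above. The function name, the truncation at `max_steps`, and the trial count are my own choices; truncation only slightly underestimates the true hitting probability when $\lambda_R > \lambda_M$:

```python
import random

def hit_probability(lam_r, lam_m, D, trials=20000, max_steps=500, seed=1):
    """Monte Carlo estimate of Pr(M ever gets D+1 events ahead of R).

    Each event of the merged Poisson process is +1 (an R event) with
    probability lam_r / (lam_r + lam_m), else -1 (an M event); we count
    the walks that ever reach -(D + 1)."""
    rng = random.Random(seed)
    p_up = lam_r / (lam_r + lam_m)
    barrier = -(D + 1)
    hits = 0
    for _ in range(trials):
        w = 0
        for _ in range(max_steps):
            w += 1 if rng.random() < p_up else -1
            if w <= barrier:
                hits += 1
                break
    return hits / trials
```

With $\lambda_R = 2$, $\lambda_M = 1$ and $D = 1$ the formula gives $(1/2)^2 = 0.25$, and the simulated frequency lands within Monte Carlo error of that value.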
32,929
Lower than expected coverage for importance sampling with simulation
Importance sampling is quite sensitive to the choice of importance distribution. Since you chose $\lambda = 20$, the samples that you draw using rexp will have a mean of $1/20$ with variance $1/400$. This is the distribution you get. However, the integral you want to evaluate goes from 0 to $\pi = 3.14$. So you want to use a $\lambda$ that gives you such a range; I use $\lambda = 1$. With $\lambda = 1$ I am able to explore the full integration range of 0 to $\pi$, and it seems only a few draws beyond $\pi$ will be wasted. Now I rerun your code, changing only $\lambda = 1$:

    # clear the environment and set the seed for reproducibility
    rm(list=ls())
    gc()
    graphics.off()
    set.seed(1)

    # function to be integrated
    f <- function(x){
      1 / (cos(x)^2 + x^2)
    }

    # importance sampling
    importance.sampling <- function(lambda, f, B){
      x <- rexp(B, lambda)
      f(x) / dexp(x, lambda) * dunif(x, 0, pi)
    }

    # mean value of f
    mu.num <- integrate(f, 0, pi)$value / pi

    # initialize code
    means <- 0
    sigmas <- 0
    error <- 0
    CI.min <- 0
    CI.max <- 0
    CI.covers.parameter <- FALSE

    # set a value for lambda: we will repeat importance sampling N times to
    # verify coverage
    N <- 100
    lambda <- rep(1, N)

    # set the sample size for importance sampling
    B <- 10^4

    # - estimate the mean value of f using importance sampling, N times
    # - compute a confidence interval for the mean each time
    # - CI.covers.parameter is set to TRUE if the estimated confidence
    #   interval contains the mean value computed by integrate, otherwise
    #   it is set to FALSE
    j <- 0
    for(i in lambda){
      I <- importance.sampling(i, f, B)
      j <- j + 1
      mu <- mean(I)
      std <- sd(I)
      lower.CB <- mu - 1.96*std/sqrt(B)
      upper.CB <- mu + 1.96*std/sqrt(B)
      means[j] <- mu
      sigmas[j] <- std
      error[j] <- abs(mu - mu.num)
      CI.min[j] <- lower.CB
      CI.max[j] <- upper.CB
      CI.covers.parameter[j] <- lower.CB < mu.num & mu.num < upper.CB
    }

    # build a dataframe in case you want to have a look at the results for each run
    df <- data.frame(lambda, means, sigmas, error, CI.min, CI.max, CI.covers.parameter)

    # so, what's the coverage?
    mean(CI.covers.parameter)
    #[1] .95

If you play around with $\lambda$, you will see that if you make it really small (.00001) or large, the coverage probabilities will be bad.

EDIT: Regarding the coverage probability decreasing once you go from $B = 10^4$ to $B = 10^6$: that is just a random occurrence, arising because you use only $N = 100$ replications. The confidence interval for the coverage probability at $B = 10^4$ is $$.19 \pm 1.96\sqrt{\dfrac{.19(1-.19)}{100}} = .19 \pm .0769 = (.1131, .2669)\,.$$ So you can't really say that increasing to $B = 10^6$ significantly lowers the coverage probability. In fact, in your code with the same seed, change $N = 100$ to $N = 1000$; then with $B = 10^4$ the coverage probability is .123 and with $B = 10^6$ it is .158. Now, the confidence interval around .123 is $$.123 \pm 1.96\sqrt{\dfrac{.123(1 - .123)}{1000}} = .123 \pm .0203 = (.102, .143)\,.$$ Thus, with $N = 1000$ replications, you find that increasing $B$ significantly increases the coverage probability.
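For readers without R, here is a rough pure-Python analogue of the $\lambda = 1$ estimator. The function names and the Simpson's-rule reference value are my own additions, not part of the original code:

```python
import math
import random

def f(x):
    return 1.0 / (math.cos(x) ** 2 + x * x)

def is_estimate(lam, B, seed=0):
    """Importance-sampling estimate of (1/pi) * integral_0^pi f(x) dx
    with an Exp(lam) proposal, mirroring the R code."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(B):
        x = rng.expovariate(lam)
        if x < math.pi:   # the Uniform(0, pi) density is zero elsewhere
            total += f(x) * (1.0 / math.pi) / (lam * math.exp(-lam * x))
    return total / B

# deterministic reference value via composite Simpson's rule
n = 10_000
h = math.pi / n
s = f(0.0) + f(math.pi) + sum((4 if i % 2 else 2) * f(i * h)
                              for i in range(1, n))
ref = s * h / 3 / math.pi
```

With `lam=1` the estimate agrees with the quadrature value closely; with `lam=20` the importance weights $e^{20x}/(20\pi)$ explode for the rare draws near $\pi$, which is exactly the variance problem described above.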
32,930
Why is it rarely reported in papers which type of sums of squares is used in Anova results?
This is not an easy question to answer (at least for me), but my guess is that the great majority of people go with the default settings of the program that they are using (irrespective of whether this is the correct approach or not). And I am fairly confident that those who know which type of sums of squares to use (and deviate from the default settings) will make sure to mention this in the methods.

I personally find it more concerning that many papers fail to mention carefully enough which program/package/functions were used to run the analyses. Knowing this may help narrow down which settings were used.

I don't have a paper at hand at the moment that reports the type of sums of squares used, but if I come across one, I will add it to my answer.

EDIT: Searching on Google Scholar using a keyword from the field you are interested in together with "sums of squares" will lead you to the papers you are asking for. For example, in my case I searched xylem water sums of squares, which turned up this paper: http://onlinelibrary.wiley.com/doi/10.1046/j.1469-8137.2003.00816.x/full (see Table 1 and 2 footnotes).
32,931
Will I miss anomalies/outliers due to PCA?
Long story short: it depends. If the sample in question is an outlier in a direction covered by your extracted PCs (i.e. where most variance occurs), you'll keep the information; if it's an outlier in a direction orthogonal to all extracted PCs, you'll lose it.

To be honest, reduction to one single PC is an extreme case; when applying PCA in a high-dimensional space one would rarely use only one PC. If in contrast you keep lots of final PCs, wouldn't it be a sensible assumption that PCs which explain only a very small amount of the total variance, and thus get dropped, are also not relevant for outlier detection?

All that said: generally, the motivation to apply PCA is driven by the assumption that all dimensions in your space are of approximately equal importance (with the final weight defined by the selected scaling of the features), and that there is no way to preselect a small subset of relevant dimensions based on domain knowledge. Is that the case here? Otherwise, selecting features based on domain knowledge would be far better.
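A tiny numeric sketch of the orthogonal case (closed-form 2-D PCA; the data and function are invented for illustration): points spread along the x-axis plus one outlier sticking out in the orthogonal y-direction. After projecting onto the single top PC, the outlier's score is unremarkable, while its residual distance to the PC line is huge:

```python
import math

def top_pc_2d(points):
    """Mean and unit top principal direction of 2-D points, via the
    closed-form top eigenvector of the 2x2 covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    a = sum((p[0] - mx) ** 2 for p in points) / n
    b = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    c = sum((p[1] - my) ** 2 for p in points) / n
    lam = ((a + c) + math.sqrt((a - c) ** 2 + 4 * b * b)) / 2
    if abs(b) > 1e-12:
        vx, vy = lam - c, b
    else:                      # axis-aligned covariance matrix
        vx, vy = (1.0, 0.0) if a >= c else (0.0, 1.0)
    norm = math.hypot(vx, vy)
    return (mx, my), (vx / norm, vy / norm)

# points spread along the x-axis, plus one outlier in the y-direction
data = [(float(x), 0.0) for x in range(-5, 6)] + [(0.0, 10.0)]
(mx, my), (ux, uy) = top_pc_2d(data)
scores = [(p[0] - mx) * ux + (p[1] - my) * uy for p in data]  # 1-PC projection
resid = [math.hypot(p[0] - mx - s * ux, p[1] - my - s * uy)
         for p, s in zip(data, scores)]                       # discarded part
```

The outlier's PC1 score (`scores[-1]`) sits in the middle of the projected data, so it disappears after reduction to one PC, while its residual (`resid[-1]`, roughly 9.2) is exactly the information the dropped direction was carrying; this is why residual-based outlier scores are often used alongside PC scores.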
32,932
How to "highlight" an input feature of an artificial neural network?
Try the wide & deep network architecture? Directly link the "important" features with the output neuron. [1]. https://arxiv.org/abs/1606.07792
How to "highlight" an input feature of an artificial neural network?
Try the wide & deep network architecture? Directly link the "important" features with the output neuron. [1]. https://arxiv.org/abs/1606.07792
How to "highlight" an input feature of an artificial neural network? Try the wide & deep network architecture? Directly link the "important" features with the output neuron. [1]. https://arxiv.org/abs/1606.07792
How to "highlight" an input feature of an artificial neural network? Try the wide & deep network architecture? Directly link the "important" features with the output neuron. [1]. https://arxiv.org/abs/1606.07792
32,933
How to "highlight" an input feature of an artificial neural network?
{1} explored one way to take prior knowledge on features into account when training a neural network. Abstract: Different features have different relevance to a particular learning problem. Some features are less relevant, while some are very important. Instead of selecting the most relevant features using feature selection, an algorithm can be given this knowledge of feature importance based on expert opinion or prior learning. Learning can be faster and more accurate if learners take feature importance into account. Correlation aided Neural Networks (CANN) is presented, which is such an algorithm. CANN treats feature importance as the correlation coefficient between the target attribute and the features. CANN modifies a normal feedforward neural network to fit both correlation values and training data. Empirical evaluation shows that CANN is faster and more accurate than applying the two-step approach of feature selection and then using normal learning algorithms. I didn't read the paper carefully, I am unsure how sound it is, and I'd be quite cautious. The same author published a few other papers on the same topic, e.g. {2}. Personally I rely on backpropagation to do the job. Perhaps another way could be to change the weight update rule and/or weight initialization rule for this feature, so as to bias the weights connected to your important feature to have an absolute value larger than the other weights connected to the other features. A last idea would be to connect your most important feature to layers other than the first layer. {1} Iqbal, Ridwan Al. "Using Feature Weights to Improve Performance of Neural Networks." arXiv preprint arXiv:1101.4918 (2011). https://scholar.google.com/scholar?cluster=15075021269543299652&hl=en&as_sdt=0,22 ; http://arxiv.org/abs/1101.4918 {2} Al Iqbal, Ridwan. "Empirical learning aided by weak domain knowledge in the form of feature importance." In Multimedia and Signal Processing (CMSP), 2011 International Conference on, vol. 1, pp. 126-130. IEEE, 2011. https://scholar.google.com/scholar?cluster=13856845400679996300&hl=en&as_sdt=0,22 ; http://arxiv.org/abs/1005.5556
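As a rough numeric sketch of the general flavour of the CANN idea (my own illustration, not the paper's actual algorithm; the helper name is made up), one could pre-scale each input feature by its absolute correlation with the target before feeding it to the network:

```python
import numpy as np

def correlation_weights(X, y):
    """Absolute Pearson correlation of each feature column with the target."""
    return np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)  # only feature 0 really matters

w = correlation_weights(X, y)
X_weighted = X * w  # important features keep a larger scale at the input layer
print(w)            # feature 0 should get by far the largest weight
```

Whether scaling the inputs this way actually helps will depend on the learner; the paper instead bakes the correlation values into the fitting objective itself.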
How to "highlight" an input feature of an artificial neural network?
In neural networks, the "importance" of each signal is established during the learning phase. It comes hard-coded in the model, rather than expressed by a nice numeric parameter, so I'm afraid you may not be able to manually alter the importance of a feature. One way of forcing it, if you are using dropout, is to avoid applying dropout to the signal the user judges "important". Other than this, I really can't see how to force it. Please notice that I'm using "forcing" because what you want to do is ... counter-intuitive for almost any machine learning classifier.
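To make the dropout suggestion concrete, here is a small numpy sketch (my own illustration, not library code) of input-layer dropout that never drops the column the user flags as important:

```python
import numpy as np

def dropout_keep_feature(X, p_drop, keep_idx, rng):
    """Inverted input dropout that never zeroes the column at keep_idx."""
    mask = (rng.random(X.shape) >= p_drop).astype(float)
    mask[:, keep_idx] = 1.0           # the "important" feature is always kept
    return X * mask / (1.0 - p_drop)  # rescale so the expected activation is unchanged

rng = np.random.default_rng(1)
X = np.ones((1000, 4))
Xd = dropout_keep_feature(X, p_drop=0.5, keep_idx=2, rng=rng)

print((Xd[:, 2] != 0).all())  # True: column 2 was never dropped
```

The network then co-adapts less around the randomly dropped features while always being able to rely on the protected one.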
How to "highlight" an input feature of an artificial neural network?
You might consider interpreting your neural network as a probabilistic graphical model. From "An Introduction to Variational Methods for Graphical Models", Jordan et al: Neural networks are layered graphs endowed with a nonlinear "activation" function at each node (see figure 5). Let us consider activation functions that are bounded between zero and one, such as those obtained from the logistic function $f(z) = 1/(1 + e^{−z})$. We can treat such a neural network as a graphical model by associating a binary variable $S_i$ with each node and interpreting the activation of the node as the probability that the associated binary variable takes one of its two values. [...] The advantages of treating a neural network in this manner include the ability to perform diagnostic calculations, to handle missing data, and to treat unsupervised learning on the same footing as supervised learning. Realizing these benefits, however, requires that the inference problem be solved in an efficient way. Later portions of the paper discuss how to do this efficiently. It would seem you could "highlight" a feature by changing the prior placed on its associated parameters.
Are there any ways to deal with the vanishing gradient for saturating non-linearities that doesn't involve Batch Normalization or ReLu units?
Have you looked into RMSProp? Take a look at this set of slides from Geoff Hinton: Overview of mini-batch gradient descent. Specifically page 29, entitled 'rmsprop: A mini-batch version of rprop', although it's probably worth reading through the full set to get a fuller idea of some of the related ideas. Also related are Yann LeCun's No More Pesky Learning Rates and Brandyn Webb's SMORMS3. The main idea is to look at the sign of the gradient and whether it's flip-flopping or not; if it's consistent then you want to move in that direction, and if the sign isn't flipping then whatever step you just took must be OK, provided it isn't vanishingly small, so there are ways of controlling the step size to keep it sensible that are somewhat independent of the actual gradient. So the short answer to how to handle vanishing or exploding gradients is simply: don't use the gradient's magnitude!
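To make this concrete, here is a minimal numpy sketch of the rmsprop update from those slides, applied to the toy objective $f(w) = w^2$ (an illustration, not production code):

```python
import numpy as np

def rmsprop_step(w, grad, cache, lr=0.01, decay=0.9, eps=1e-8):
    """One rmsprop update: divide the step by the running RMS of the gradient."""
    cache = decay * cache + (1 - decay) * grad**2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

# Minimize f(w) = w^2, whose gradient is 2w.
w, cache = 5.0, 0.0
for _ in range(1000):
    grad = 2 * w
    w, cache = rmsprop_step(w, grad, cache)

print(w)  # ends up oscillating very close to the minimum at 0
```

The step is roughly lr * sign(grad) once the cache tracks the gradient, so its size is nearly independent of how large or small the raw gradient is, which is exactly the "don't use the gradient's magnitude" point.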
Are there any ways to deal with the vanishing gradient for saturating non-linearities that doesn't involve Batch Normalization or ReLu units?
Some of my understandings, which may not be correct: The cause of the vanishing gradient problem is that sigmoid, tanh (and RBF) saturate on both sides (towards -inf and inf), so it's very likely for the input of such a non-linearity to fall in the saturated regions. The effect of BN is that it "pulls" the input of the non-linearity towards a small range around 0, $N(0,1)$ as a starting point, where such non-linearities don't saturate. So I guess it will work with RBF as well. To smooth out the kink in ReLU, we can use the softplus function $\log(1+e^x)$, which is very close to ReLU and was used in Geoffrey Hinton's paper to explain why ReLU would work. Also, residual networks and highway networks provide another way of addressing vanishing gradients (via shortcuts). In my experience such architectures train much faster than ones where only the last layer is connected to the loss. Moreover, the difficulty of training deep networks is not solely due to the vanishing gradient, but to other factors as well (e.g. the internal covariate shift). There's a recent paper, layer normalization, about another way of doing normalization; it doesn't address vanishing gradients directly, but maybe you'll be interested.
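A quick numeric check (my own, in numpy) of how closely softplus tracks ReLU while staying smooth at zero:

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))  # log(1 + e^x); log1p for accuracy near 0

def relu(x):
    return np.maximum(x, 0.0)

x = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
print(softplus(x))                    # smooth everywhere; softplus(0) = log(2) ~ 0.693
print(np.abs(softplus(x) - relu(x)))  # difference shrinks rapidly as |x| grows
```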
Why does the implementation of t-SNE in R default to the removal of duplicates?
The t-SNE method does not require the removal of duplicates. The fact that it is a default feature in Rtsne does not imply it is required. It is useful for some short-term event monitoring. For characterising long-term trends and/or patterns with big data sets, I see little use. The Rtsne default setup can be more inclined towards characterising events in the time domain, without any study in the Fourier domain. Assume you have points in the time domain. The duplicate check causes a significant amount of false positives because it is mostly applied to the time-domain signal. Fourier space can show that events which the algorithm considers duplicates are not necessarily so. So my observation is that the algorithm is greedy about duplicate points in the time domain, which is not useful for me when considering long-term signals, long-term trends and long-term patterns. The fact that a point is a duplicate in the time domain does not actually mean that it is also a duplicate in the Fourier domain. I think it will be more of a coincidence if it is a duplicate in the time domain in real-life applications. So turning off the feature should be ok. How many of the points are really duplicates in both domains is specific to the case study. I get significantly better descriptors of events and/or phenomena by considering long-term data sets without the duplicate check in many real-life applications. I think the Rtsne documentation is not clear about the case in saying [turn off check_duplicates and] "don't waste processing power". There are really other reasons, as described above, why check_duplicates can be turned off, as realised also by some other implementations of the method. check_duplicates=TRUE is a personal choice of the Rtsne developer by default at the moment. I would love to hear if there are any implementation reasons for the decision.
Why does the implementation of t-SNE in R default to the removal of duplicates?
The algorithm is designed to handle datasets without duplicate information, so the package makes a check before applying the technique. They suggest you remove duplicates and set check_duplicates = FALSE for a performance improvement. The implementation in R is this:

    if (check_duplicates & !is_distance) {
      if (any(duplicated(X))) {
        stop("Remove duplicates before running TSNE.")
      }
    }

with default values check_duplicates = TRUE and is_distance = FALSE. The paper, for those who want to understand more about the method, is here.
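Outside R, the same exact-duplicate removal can be done by hand before embedding; a small numpy sketch (my own illustration, not Rtsne code):

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [1.0, 2.0],   # exact duplicate row
              [3.0, 4.0]])

# Drop exact duplicate rows before embedding, as Rtsne asks you to do:
X_unique, idx = np.unique(X, axis=0, return_index=True)
print(X_unique.shape)  # (2, 2): one of the duplicate rows removed
```

Note that np.unique returns the rows sorted; keep idx if you need to map results back to the original rows.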
Why does the implementation of t-SNE in R default to the removal of duplicates?
In addition to Léo's answer, it can even be 'dangerous' to remove duplicates. Imagine you want to explore a dataset describing a machine learning problem to see if there is any correlation between the features and the labels. If you remove the duplicates in the feature space, t-SNE has a higher chance of creating clusters with specific labels. But if there is no correlation between features and labels (i.e. the same feature combination can result in different labels), this may trick you into thinking there is a correlation, because these duplicate points were removed and your t-SNE plot will look 'cleaner'.
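A tiny numpy illustration of this danger (my own toy example): two identical feature rows carry conflicting labels, and deduplicating the features silently discards the conflict:

```python
import numpy as np

X = np.array([[0.0, 1.0],
              [0.0, 1.0],   # same features...
              [5.0, 5.0]])
y = np.array([0, 1, 0])     # ...but conflicting labels

X_unique, idx = np.unique(X, axis=0, return_index=True)
y_unique = y[idx]           # only one of the two conflicting labels survives

print(len(X_unique), len(y_unique))  # 2 2: the label conflict is now invisible
```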
95% confidence intervals on prediction of censored binomial model estimated using mle2 / maximum-likelihood
Based on Ben Bolker's chapter and comment above, I in the end figured that the 95% population prediction intervals are given by

    # predicted EPP rate p as a function of n (nr of generations ago)
    # plus 95% population prediction intervals (cf chapter B. Bolker)
    pfun = function(a, b, n) exp(a+b*n)/(1+exp(a+b*n))  # model prediction
    xvals = 1:max(n)
    set.seed(1001)
    library(MASS)
    nresamp = 100000
    # pick new parameter values by sampling from a multivariate normal
    # distribution based on the fit
    pars.picked = mvrnorm(nresamp, mu = coef(estfit), Sigma = vcov(estfit))
    yvals = matrix(0, nrow = nresamp, ncol = length(xvals))
    for (i in 1:nresamp) {
      yvals[i,] = sapply(xvals, function(n) pfun(pars.picked[i,1], pars.picked[i,2], n))
    }
    quant = function(col) quantile(col, c(0.025, 0.975))  # 95% percentiles
    conflims = apply(yvals, 2, quant)  # 95% confidence intervals
    par(mfrow = c(1,1))
    plot(xvals, sapply(xvals, function(n) pfun(coef(estfit)[1], coef(estfit)[2], n)),
         type = "l", xlab = "Generations ago (n)", ylab = "EPP rate (p)", ylim = c(0, 0.05))
    lines(xvals, conflims[1,], col = "red", lty = 2)
    lines(xvals, conflims[2,], col = "red", lty = 2)
RNN learning sine waves of different frequencies
Your data basically cannot be learned with an RNN trained that way. Your input $\sin(t)$ is $2\pi$-periodic, i.e. $\sin(t) = \sin(t+2\pi)$, but your target $\sin(t/2)$ is $4\pi$-periodic, and $\sin((t+2\pi)/2) = -\sin(t/2)$. Therefore, your dataset contains pairs of identical inputs with opposite outputs. In terms of mean squared error, this means that the optimal solution is the null function. These are two slices of your plot where you can see identical inputs but opposite targets.
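A quick numeric check of these two identities (numpy):

```python
import numpy as np

t = np.linspace(0, 4 * np.pi, 1000)

# Identical inputs one input-period apart...
print(np.allclose(np.sin(t), np.sin(t + 2 * np.pi)))             # True

# ...but opposite targets: sin((t + 2*pi)/2) = -sin(t/2)
print(np.allclose(np.sin((t + 2 * np.pi) / 2), -np.sin(t / 2)))  # True
```

So any function of the instantaneous input alone must map each such pair to the same value, and the MSE-optimal compromise between y and -y is 0.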
Scalable Random Forest For Massive Data
There was a PhD thesis on random forests which included RF on large datasets - it is available at https://github.com/glouppe/phd-thesis I no longer remember the solution proposed, but I remember that there were some comparative experiments on different alternatives.
choosing prior parameters for variational mixture of Gaussians
Good priors depend on your actual problem - in particular, I don't believe there are any truly universal defaults. One good way is to try to formulate (possibly weak and vague) domain-specific knowledge about the process that generated your data, e.g.: "It's highly unlikely to have more than 12 components"; "It's highly unlikely to observe values larger than 80". Note that those should generally not be informed by the actual data you collected but by what you would be able to say before gathering the data (e.g. the data represent outdoor temperatures in Celsius, therefore they will very likely lie in $[-50,80]$ even before looking at the data). It is also OK to motivate your priors by the computational machinery you use (e.g. I will collect 100 datapoints, hence I can safely assume it is unlikely to have more than 10 components, since I won't have enough data to locate more components anyway). Some of those statements can be translated directly into priors - e.g. you can set $m_0$ and $W_0^{-1}$ so that 95% of the prior mass is over the expected range of values. For the less intuitive parameters (or just as another robustness check), you can follow the Visualization in Bayesian workflow paper and do prior predictive checks: this means that you simulate a large number of new datasets starting from your prior. You can then visualize them to see that they don't violate your expectations too often (it is good to leave some room for surprises, hence aiming for something like 90% or 95% of simulations within your constraints) while otherwise covering the whole spectrum of values reasonably well.
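As an illustration of such a prior predictive check, here is a numpy sketch with a toy normal model and made-up prior values (not the mixture model from the question): simulate datasets from the prior and count how often they stay within the expected range.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims, n_obs = 1000, 50

# Hypothetical prior for a location parameter, chosen so that simulated
# "temperatures" rarely leave the plausible [-50, 80] range.
mu = rng.normal(10.0, 15.0, size=n_sims)                    # prior draws for the mean
sims = rng.normal(mu[:, None], 5.0, size=(n_sims, n_obs))   # prior predictive datasets

frac_ok = np.mean([(s.min() > -50) and (s.max() < 80) for s in sims])
print(frac_ok)  # should be high, while still leaving some room for surprises
```

If frac_ok came out too low, you would tighten the prior; if the simulations all huddled in a narrow band, you would widen it.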
choosing prior parameters for variational mixture of Gaussians
If you're interested in performance over elegance, you could define some empirical goodness-of-fit measure and run a hyperparameter search to maximize it.
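As a minimal sketch of that approach (with a made-up score function standing in for the real goodness-of-fit measure), a random search over two hyperparameters could look like:

```python
import numpy as np

def fit_score(alpha, beta):
    """Stand-in for an empirical goodness-of-fit measure (hypothetical)."""
    return -((alpha - 2.0) ** 2 + (beta - 0.5) ** 2)  # peaks at alpha=2, beta=0.5

rng = np.random.default_rng(7)
best = max(
    ((fit_score(a, b), a, b)
     for a, b in zip(rng.uniform(0, 5, 200), rng.uniform(0, 1, 200))),
    key=lambda t: t[0],
)
print(best)  # best (score, alpha, beta) found by the random search
```

In practice the score would be something like held-out log-likelihood of the fitted mixture, evaluated once per sampled hyperparameter setting.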
Additive bias in xgboost (and its correction?)
I will answer myself and let you know my findings in case anybody is interested. First the bias: I took the time to collect all the recent data and format it correctly and so on. I should have done this long before. The picture is the following: you see the data from the end of 2015 and then April 2016. The price level is totally different. A model trained on 2015 data can in no way capture this change. Second: the fit of xgboost. I really liked the following set-up; train and test error are much closer now and still good:

xgb_grid_1 <- expand.grid(
  nrounds = c(12000),
  eta = c(0.01),
  max_depth = c(3),
  gamma = 1,
  colsample_bytree = c(0.7),
  min_child_weight = c(5)
)
xgb_train_1 <- train(
  x = training,
  y = model.data$Price[inTrain],
  trControl = ctrl,
  tuneGrid = xgb_grid_1,
  method = "xgbTree",
  subsample = 0.8
)

Thus I use a lot of trees, all of them at most 3 splits deep (as recommended here). Doing this the calculation is quick (the tree size grows by a factor of 2 with each split) and the overfitting seems to be reduced. My summary: use trees with a small number of leaves but a lot of them, and look for recent data. For the competition this was bad luck for me...
32,947
Algebraic classifiers, more information?
I read a bit into the article you mentioned; to me it seems like a construction using the approach from algebraic statistics. You may want to have a look at:
Cencov, Nikolai Nikolaevich. Statistical Decision Rules and Optimal Inference. No. 53. American Mathematical Soc., 2000.
This book is a bit out-of-date - one reason being that not many people are interested in "categorical applications" nowadays; its original print dates to around the 1980s. But almost all modern algebraic studies in statistics can be traced back to this title. Another very readable introduction used in the paper you mentioned is:
Drton, Mathias, Bernd Sturmfels, and Seth Sullivant. Lectures on Algebraic Statistics. Vol. 39. Springer Science & Business Media, 2008.
The paper you mentioned in your question is an application of a monoid-theoretic construction to the classification problem, which looks interesting. I hope these references help.
32,948
Can you give an intuition behind the FTRL update step?
Following McMahan's Follow-the-Regularized-Leader and Mirror Descent: Equivalence Theorems. The paper shows that the simple gradient descent update rule can be written in a very similar way to the above rule. The intuitive update rule of FOBOS (a gradient descent variant) is:
$$x_{t+1} = \arg\min_x[g_t x + \frac{1}{2\mu_t}\|x-x_t\|^2]$$
where $g_t$ is the gradient for the previous sample $t$ - we want to move in that direction, as it decreases the loss of our hypothesis on that sample. However, we don't want to change our hypothesis $x_t$ too much (for fear of predicting badly on examples we have already seen). $\mu_t$ is a step size for this sample, and it should make each step more conservative. We can find where the derivative is 0 and get an explicit update rule:
$$x_{t+1}=x_t-\mu_t g_t$$
The paper goes on to show that the same intuitive update rule above can also be written as:
$$x_{t+1} = \arg\min_x[ g_{1:t}x + \phi_{1:t-1}x+\psi(x) + \frac{1}{2}\sum_{s=1}^t{\|x-x_s\|^2}]$$
which is pretty similar to the FTRL-proximal formulation. In fact, the gradient part (1st term) and the proximal strong convexity (last term) are the same, and these were the interesting parts for me.
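As a quick numerical sanity check (not part of the paper), one can brute-force the FOBOS objective on a grid and compare the minimizer against the explicit update $x_{t+1}=x_t-\mu_t g_t$. The particular numbers below are arbitrary:

```python
import numpy as np

def fobos_objective(x, g, x_t, mu):
    # g*x + (1/(2*mu)) * (x - x_t)^2, the per-step FOBOS objective (scalar case)
    return g * x + (x - x_t) ** 2 / (2 * mu)

g, x_t, mu = 0.8, 2.0, 0.5  # arbitrary gradient, current iterate, step size

# brute-force minimization over a fine grid
grid = np.linspace(-5, 5, 200001)
x_num = grid[np.argmin(fobos_objective(grid, g, x_t, mu))]

# the explicit update rule obtained by setting the derivative to zero
x_closed = x_t - mu * g
print(x_num, x_closed)  # the two minimizers should agree
```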
32,949
Can you give an intuition behind the FTRL update step?
For FOBOS, the original formulation is basically an extension of SGD: http://stanford.edu/~jduchi/projects/DuchiSi09c_slides.pdf
The FTRL paper tries to give a unified view by formulating the Duchi closed-form update in a similar fashion to FTRL. The term $g \cdot x$ (also mentioned in ihadanny's answer) is a bit weird, but if you work from the above pdf, it's pretty clear: on page 8 of the above pdf, if we ignore the regularization term R for now,
$$ \begin{eqnarray} \mathbf{w}_{t+1} &=& \arg\min_{\mathbf{w}} \{\tfrac{1}{2} \| \mathbf{w} - \mathbf{w}_{t+1/2} \|^2 \} \\ &=& \arg\min_{\mathbf{w}} \{\tfrac{1}{2} \| \mathbf{w} - (\mathbf{w}_{t} - \eta \mathbf{g}_t) \|^2 \} \quad \mbox{(using page 7 of the Duchi pdf)}\\ &=& \arg\min_{\mathbf{w}} \{ (\mathbf{w} - \mathbf{w}_t)^t(\mathbf{w} - \mathbf{w}_t) + 2\eta (\mathbf{w} - \mathbf{w}_t)^t\mathbf{g}_t + \eta^2 \mathbf{g}_t^t\mathbf{g}_t \} \end{eqnarray} $$
(the constant factor $\frac{1}{2}$ is dropped in the last line, since it does not affect the argmin). The terms involving only $\mathbf{w}_t$ and $\mathbf{g}_t$ above are constants for the argmin, so they are ignored, and then you have the form given by ihadanny. The $\mathbf{w}^t \mathbf{g}_t$ form makes sense (after the above equivalence derivation from the Duchi form), but on its own it is very unintuitive, and even more so is the $\mathbf{g}_{1:t}\mathbf{w}$ form in the FTRL paper.
To understand the FTRL formula in the more intuitive Duchi form, note that the major difference between FOBOS and FTRL is simply $\mathbf{g}_{t} \rightarrow \mathbf{g}_{1:t}$ (see https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37013.pdf - note there is actually a typo for FOBOS in the table on page 2; you should look at the equations in the paragraphs). Then just change $\mathbf{g}_{t}$ to $\mathbf{g}_{1:t}$ in the above equivalence derivation, and you will find that FTRL is basically the closed-form FOBOS update with a more "conservative" value in place of $\mathbf{g}_{t}$, namely the average of $\mathbf{g}_{1:t}$.
32,950
What are the pros and cons of employing LASSO for causal analysis?
I don't know all of them, I'm sure, so I hope no one will mind if we do this wiki-style. One important one though is that the LASSO is biased (source, Wasserman in lecture, sorry), which while acceptable in prediction, is a problem in causal inference. If you want causality, you probably want it for Science, so you're not just trying to estimate the most useful parameters (which happen strangely to predict well), you're trying to estimate the TRUE(!) parameters.
32,951
What are the pros and cons of employing LASSO for causal analysis?
I guess 6 years later, I can perfectly understand the answer to my own question. Suppose a causal model with $Y$ being an effect of $X$ and other covariates $L$. LASSO will help someone come up with an estimate for $E[Y|X, L_1, L_2, \ldots]$. If one proves that $E[Y|do(X)] = E[Y|X, L_1, \ldots]$, then it's alright. That is not always the case. A common case is that $E[Y|do(X)] = E_{L_1,\ldots}[E[Y|X,L_1, \ldots]]$ if $L_1, L_2,\ldots$ are observed confounders and backdoor adjustment is sufficient for identification. In this case, one could use something like the g-formula on top of LASSO, and then estimate the causal effect. Of course, this will always depend on the proof of identifiability. Estimation $\neq$ identifiability. LASSO is a model for estimation.
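A minimal sketch of the point above, with simulated data: the naive contrast $E[Y|X=1]-E[Y|X=0]$ is confounded, while averaging $E[Y|X,L]$ over the marginal of $L$ (backdoor adjustment / g-formula) recovers the true effect. All numbers and distributions are made up for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
L = rng.binomial(1, 0.5, n)                   # observed binary confounder
X = rng.binomial(1, 0.2 + 0.6 * L, n)         # treatment depends on L
Y = 2.0 * X + 3.0 * L + rng.normal(0, 1, n)   # true causal effect of X is 2

# naive (confounded) contrast: E[Y|X=1] - E[Y|X=0]
naive = Y[X == 1].mean() - Y[X == 0].mean()

# g-formula / backdoor adjustment: average E[Y|X=x, L=l] over the marginal of L
def standardized_mean(x):
    return sum(Y[(X == x) & (L == l)].mean() * (L == l).mean() for l in (0, 1))

adjusted = standardized_mean(1) - standardized_mean(0)
print(naive, adjusted)  # naive is biased upward; adjusted is close to 2
```

In practice $E[Y|X,L]$ would be estimated by a model (e.g. LASSO) rather than by cell means, but the standardization step over $L$ is the same.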
32,952
Cox's Theorem: controversy surrounding the proposition domain size
This doesn't exactly answer your question, but there has been an effort by Maurice J. Dupre and Frank J. Tipler of Tulane University to circumvent the issues that arise in Cox's and De Finetti's particular perspectives on Bayesian Probability by conjoining the two. Here is their paper that was published in 2009. New Axioms for Rigorous Bayesian Probability http://projecteuclid.org/euclid.ba/1340369856
32,953
Using a black box MCMC algorithm as a proposal distribution
A nearly equivalent way to do this is via importance sampling. In other words:
1) Draw $M$ samples from $P(X)$. Let's call these samples $X'$.
2) Get importance weights by computing $w = G(X')$.
3) Calculate whatever statistic you want (i.e. mean, sd, etc.) incorporating the normalized weights $w' = \frac{w}{\sum w}$.
4) If you want an unweighted sample from $P(X)G(X)$, draw from $X'$ with probability $w'$.
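A minimal sketch of the four steps, assuming for concreteness that $P$ is a standard normal and $G(x)=e^{-(x-1)^2}$ (both choices are arbitrary, just for the demo; for these choices $P(X)G(X)$ is a normal with mean $2/3$):

```python
import numpy as np

rng = np.random.default_rng(42)

# 1) draw M samples from P(X) -- here P is a standard normal (demo assumption)
M = 500_000
x = rng.normal(0.0, 1.0, M)

# 2) importance weights w = G(x), with G(x) = exp(-(x-1)^2) as a toy tilting function
w = np.exp(-(x - 1.0) ** 2)

# 3) self-normalize the weights and compute a weighted statistic
w_norm = w / w.sum()
mean_pg = np.sum(w_norm * x)   # estimate of the mean under P(X)G(X)

# 4) unweighted sample from P(X)G(X) by resampling with probability w'
resampled = rng.choice(x, size=100_000, p=w_norm)
print(mean_pg, resampled.mean())  # both close to 2/3 for this choice of P and G
```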
32,954
Asymptotic normality of order statistic of heavy tailed distributions
I'm assuming your derivation there comes from something like the one on this page.
I have a distribution with only positive outcomes, and the confidence intervals include negative values.
Well, given the normal approximation, that makes sense. There is nothing stopping a normal approximation from giving you negative values, which is why it is a bad approximation for a bounded value when the sample size is small and/or the variance is large. If you crank up the sample size, then the intervals will shrink because the sample size is in the denominator of the expression for the width of the interval. The variance enters the problem through the density: for the same mean, a higher variance will have a different density, higher at the margins and lower near the center. A lower density means a wider confidence interval because the density is in the denominator of the expression. How the effects of changing sample size and variance together affect the width of the confidence interval and the quality of the approximation will depend on the distribution generating the data as well as the particular quantile.
A bit of googling found this page, among others, which uses the normal approximation to the binomial distribution to construct the confidence limits. The basic idea is that each observation falls below the quantile with probability q, so that the distribution is binomial. When the sample size is sufficiently large (that's important), the binomial distribution is well approximated by a normal distribution with mean $nq$ and variance $nq(1-q)$. So the lower confidence limit will have index $j = nq - 1.96 \sqrt{nq(1-q)}$, and the upper confidence limit will have index $k = nq + 1.96 \sqrt{nq(1-q)}$. There's a possibility that either $k > n$ or $j < 1$ when working with quantiles near the edge, and the reference I found is silent on that. I chose to just treat the maximum or minimum as the relevant value.
In the following re-write of your code I constructed the confidence limit on the empirical data and tested whether the theoretical quantile falls inside of it. That makes more sense to me, because the quantile of the observed data set is the random variable. The coverage for n > 1000 is ~0.95. For n = 100 it is worse, at 0.85, but that's to be expected for quantiles near the tails with small sample sizes.

# find the 0.975 quantile
q <- 0.975
q_norm <- qnorm(q, mean=1, sd=1)
# confidence band half-width in index units (note it depends on sample size)
n <- 10000
band <- 1.96 * sqrt(n * q * (1 - q))
hit <- 1:10000
for (i in 1:10000) {
  d <- sort(rnorm(n, mean=1, sd=1))
  u <- ceiling(n * q + band)
  l <- ceiling(n * q - band)
  if (u > n) u <- n
  if (l < 1) l <- 1
  if (q_norm >= d[l] & q_norm <= d[u]) {hit[i] <- 1} else {hit[i] <- 0}
}
sum(hit)/10000

As far as determining what sample size is "big enough", well, bigger is better. Whether any particular sample is "big enough" depends strongly on the problem at hand, and how fussy you are about things like the coverage of your confidence limits.
32,955
Theoretical link between the graph diffusion/heat kernel and spectral clustering
Yes there is indeed, and this link is the basis of the very related dimensionality reduction technique known as Diffusion Maps. Effectively it is a generalization of Laplacian Eigenmaps/Spectral Clustering using the random-walk normalized Laplacian (as opposed to the standard graph Laplacian or symmetric Laplacian). The generalization comes from first normalizing the Laplacian according to $\alpha$ (i.e. generating a reversible Markov Chain on the data): $$L^{(\alpha)} = D^{-\alpha}LD^{-\alpha}$$ before constructing the random walk normalized Laplacian: $$M = (D^{(\alpha)})^{-1}L^{(\alpha)}$$ Where $D^{(\alpha)}$ is the degree matrix of $L^{(\alpha)}$. The normalization step with $\alpha$ is introduced in order to "tune the influence of the data point density on the infinitesimal transition of the diffusion". Setting $\alpha = 1$ approximates the Neumann heat kernel (see [1] for full derivation). Setting $\alpha = 0$ recovers the Laplacian eigenmaps method. 1. Diffusion Maps (2006) - the original paper to propose the method. 2. Diffusion Maps, Spectral Clustering and Eigenfunctions of Fokker-Planck Operators
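The two normalization steps above can be sketched directly on a kernel/affinity matrix (here written K, playing the role of $L$ in the formulas above); the toy data points and bandwidth are made up:

```python
import numpy as np

def diffusion_operator(K, alpha=1.0):
    """Anisotropic normalization from the Diffusion Maps construction.

    K     : symmetric non-negative affinity/kernel matrix (n x n)
    alpha : 0 recovers the Laplacian-eigenmaps / random-walk normalization,
            1 approximates the Neumann heat kernel.
    """
    d = K.sum(axis=1)
    K_alpha = K / np.outer(d ** alpha, d ** alpha)  # D^{-alpha} K D^{-alpha}
    d_alpha = K_alpha.sum(axis=1)
    M = K_alpha / d_alpha[:, None]                  # (D^{(alpha)})^{-1} K^{(alpha)}
    return M

# toy example: Gaussian kernel on a few 1-D points
pts = np.array([0.0, 0.1, 0.2, 2.0, 2.1])
K = np.exp(-(pts[:, None] - pts[None, :]) ** 2 / 0.5)
M = diffusion_operator(K, alpha=1.0)
print(np.allclose(M.sum(axis=1), 1.0))  # rows of a Markov operator sum to 1
```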
32,956
Graph clustering algorithms which consider negative weights
Have you tried mapping the values to [0;2]? Then many algorithms may work. Consider e.g. Dijkstra: it requires non-negative edge weights, but if you know the minimum a of the edges, you can run it on x-a and get the shortest cycle-free path. Update: for correlation values, you may either be interested in the absolute values abs(x) (which is the strength of the correlation!) or you may want to break the graph into two temporarily: first cluster on the positive correlations only, then on the negative correlations only if the sign is that important for clustering & the previous approaches don't work.
32,957
Graph clustering algorithms which consider negative weights
Yes, there is an algorithm called 'Affinity Propagation' that works with negative weights; I believe this is implemented in sklearn (see the documentation here). A reference for what is going on behind the scenes can be found here. Hope that's what you're looking for!
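A minimal sketch with scikit-learn's implementation, using a hand-made precomputed similarity matrix that contains negative entries (all matrix values below are arbitrary):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# toy precomputed similarity matrix: two groups of three items,
# similarity 0.8 within a group and -0.2 (negative!) across groups
S = -0.2 * np.ones((6, 6))
S[:3, :3] = 0.8
S[3:, 3:] = 0.8

ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(S)
print(ap.labels_)  # items 0-2 should share one label, items 3-5 another
```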
32,958
Graph clustering algorithms which consider negative weights
It seems to me the problem you describe is known as the Correlation Clustering Problem. This information should help you find some implementations, such as: this Matlab code by Shai Bagon, or this Python code by GitHub user filkry. Note that some community detection algorithms have also been modified to process signed networks, e.g. Amelio'13, Sharma'12, Anchuri'12, etc. However, I couldn't find any publicly available implementation of those.
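The objective these implementations optimize can be illustrated with a tiny scorer (the function name and toy edges are mine, not from any of the linked codes): the cost of a clustering is the weight of positive edges cut plus the absolute weight of negative edges kept inside clusters.

```python
def cc_disagreements(edges, labels):
    """Correlation-clustering cost: total weight of positive edges
    between clusters plus |weight| of negative edges within clusters."""
    cost = 0.0
    for u, v, w in edges:
        same = labels[u] == labels[v]
        if w > 0 and not same:
            cost += w       # a positive edge was cut
        elif w < 0 and same:
            cost += -w      # a negative edge was kept inside a cluster
    return cost

edges = [(0, 1, 1.0), (2, 3, 1.0), (0, 2, -1.0), (1, 3, -1.0)]
print(cc_disagreements(edges, {0: 0, 1: 0, 2: 1, 3: 1}))  # 0.0, perfect split
print(cc_disagreements(edges, {0: 0, 1: 0, 2: 0, 3: 0}))  # 2.0, one big cluster
```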
32,959
Graph clustering algorithms which consider negative weights
Take a look at this code; it is quite scalable, works with positive and negative edges, and solves Correlation Clustering (CC) as a special case (r = 0). However, for the CC objective (maximizing positive links and minimizing negative links inside clusters), I would suggest other methods that are specialized in solving it. To illustrate: Correlation Clustering (unlike what the Community Detection literature pursues) does not take the positive density of clusters into account, so when a network has no or few negative ties (most real-world cases), the whole network is put into one big cluster.
32,960
Graph clustering algorithms which consider negative weights
We sometimes call these graphs 'signed graphs' or 'signed networks', as links can be positive or negative (or even weighted, as in your case), where a negative link means the nodes are anti-correlated and thus should be repulsed instead of attracted. In a social-network setting, for example, a negative link could model two users who are enemies. Take a look at Traag's leidenalg, which integrates repulsion between negative ties into various graph clustering (community detection) algorithms; all code is available through the Python leidenalg module and is easy to use.
https://leidenalg.readthedocs.io/en/stable/multiplex.html#negative-links
https://www.nature.com/articles/s41598-019-41695-z
Some other useful refs (using the algorithm):
https://journals.aps.org/pre/abstract/10.1103/PhysRevE.80.036115
https://journals.sagepub.com/doi/full/10.1177/0003122412463574
https://www.sciencedirect.com/science/article/pii/S0378873316300405
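The underlying trick (attraction on positive ties, repulsion on negative ties, handled as two separate layers) starts by splitting the signed edge list. A minimal sketch of that first step, independent of any library (the helper name and toy edges are mine):

```python
def split_signed_edges(edges):
    """Split a signed edge list into a positive layer and a negative
    layer (with weights flipped to be positive), as done when feeding
    signed networks to layer-aware community detection."""
    positive = [(u, v, w) for u, v, w in edges if w > 0]
    negative = [(u, v, -w) for u, v, w in edges if w < 0]
    return positive, negative

edges = [("a", "b", 0.8), ("b", "c", -0.5), ("a", "c", 0.2)]
pos, neg = split_signed_edges(edges)
print(pos)  # [('a', 'b', 0.8), ('a', 'c', 0.2)]
print(neg)  # [('b', 'c', 0.5)]
```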
32,961
Can we always rewrite a right skewed distribution in terms of composition of an arbitrary and a symmetric distribution?
No! A simple counter-example is provided by the Tukey $g$ distribution (the special case for $h=0$ of the Tukey $g$-and-$h$ distribution). For example, let $\mathcal{F}_X$ be the Tukey $g$ with parameter $g_X=0$, $\mathcal{F}_Z$ the Tukey $g$ with parameter $g_Z>0$, and $\mathcal{F}_Y$ a Tukey $g$ distribution for which $g_Y\leq g_Z$. Since $h=0$, these three distributions satisfy: $$\mathcal{F}_{-X}=\mathcal{F}_X\preceq_c\mathcal{F}_Y\preceq_c\mathcal{F}_Z$$ (the first relation comes from the definition of the Tukey $g$, which is symmetric if $g=0$; the next ones from [1], Theorem 2.1(i)). For example, for $g_Z=0.5$, we have that: $$\min_{g_Y\leq g_Z}\max_z|F_Z(z)-F_YF^{-1}_XF_Y(z)|\approx0.005>0$$ (for some reason, the minimum seems to always be near $g_Y\approx g_Z/2$). Edit: In the case of the Weibull, the claim is true. Let $\mathcal{F}_Z$ be the Weibull distribution with shape parameter $w_Z$ (the scale parameter doesn't affect convex ordering, so we can set it to 1 without loss of generality); likewise $\mathcal{F}_Y$ and $\mathcal{F}_X$ with shapes $w_Y$ and $w_X$. First note that any three Weibull distributions can always be ordered in the sense of [2]. Next, note that: $$\mathcal{F}_X=\mathcal{F}_{-X}\implies w_X=3.602349.$$ Now, for the Weibull: $$F_Y(y)=1-\exp(-y^{w_Y}),\;F_Y^{-1}(q)=(-\ln(1-q))^{1/w_Y},$$ so that $$F_YF_X^{-1}F_Y(z)=1-\exp(-z^{w_Y^2/w_X}).$$ Since $$F_Z(z)=1-\exp(-z^{w_Z}),$$ the claim can always be satisfied by solving $w_Y^2/w_X=w_Z$, that is, by setting $w_Y=\sqrt{w_X w_Z}$. [1] H.L. MacGillivray (1992). Shape properties of the g-and-h and Johnson families. Comm. Statist. Theory Methods, 21(5), 1233–1250. [2] van Zwet, W.R. (1979). Mean, median, mode II. Statistica Neerlandica, 33(1), 1–5. [3] Groeneveld, R.A. (1985). Skewness for the Weibull family. Statistica Neerlandica, 40(3), 135–140.
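As a quick numerical check of the Weibull argument (this check is mine, not part of the original answer): taking $w_Y^2/w_X=w_Z$, i.e. $w_Y=\sqrt{w_X w_Z}$, makes the composition $F_YF_X^{-1}F_Y$ coincide with $F_Z$.

```python
import math

w_X = 3.602349              # shape for which the answer takes the Weibull as symmetric
w_Z = 2.0                   # an arbitrary target shape
w_Y = math.sqrt(w_X * w_Z)  # solves w_Y**2 / w_X == w_Z

F = lambda z, w: 1.0 - math.exp(-z**w)             # Weibull CDF, scale 1
Finv = lambda q, w: (-math.log(1.0 - q))**(1.0 / w)

for z in (0.3, 0.8, 1.5, 2.4):
    lhs = F(Finv(F(z, w_Y), w_X), w_Y)  # F_Y(F_X^{-1}(F_Y(z)))
    rhs = F(z, w_Z)
    assert abs(lhs - rhs) < 1e-12
print("composition matches F_Z")
```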
32,962
Why is the variable importance metric suggested by Breiman specific only to random forests?
Any bagged learner can produce an analogue of the Random Forest importance metric. You can't get this kind of feature importance from a common cross-validation scheme, where all the features are used all the time.
32,963
Why is the variable importance metric suggested by Breiman specific only to random forests?
Random Forest and other techniques that incorporate bagging exploit the fact that the bootstrap sample drawn for the current tree excludes some data points, the so-called Out-Of-Bag (OOB) samples. Since these samples are not used to build the current tree, they can be used to evaluate it without the risk of overfitting. With other supervised learning techniques that usually do not suffer from instability as much as decision trees do (e.g. SVM), you usually do not draw bootstrap samples, and thus you cannot estimate variable importance in this way. However, the approach of training a model with different subsets of variables and evaluating their performance using k-fold cross-validation is also perfectly valid; it is called the wrapper approach in the literature. For instance, a popular feature selection technique for SVMs is recursive feature elimination (see https://pdfs.semanticscholar.org/fb6b/4b57f431a0cfbb83bb2af8beab4ee694e94c.pdf)
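The OOB mechanism rests on a simple fact worth showing: a bootstrap sample of size n leaves each observation out with probability $(1-1/n)^n \approx e^{-1} \approx 0.368$. A plain-Python illustration (not tied to any particular learner; the sizes are arbitrary):

```python
import random

random.seed(0)
n, reps = 500, 200
oob_fracs = []
for _ in range(reps):
    boot = {random.randrange(n) for _ in range(n)}  # indices drawn with replacement
    oob_fracs.append(1 - len(boot) / n)             # fraction never drawn = OOB

avg = sum(oob_fracs) / reps
print(round(avg, 3))  # close to exp(-1) ~ 0.368
```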
32,964
similarity measure between two different ordered sequences
As mentioned in @ttnphns' comment, there exist plenty of dissimilarity measures. Have a look at the review by Studer & Ritschard (2015), who examine the sensitivity of the measures to ordering, position (timing) and duration (how many times a state is repeated). The measures addressed in that paper are all provided by the seqdist function of the TraMineR R package. If you are primarily interested in the uncommon part between your two sequences, an edit distance such as optimal matching may be the solution. Optimal matching measures the minimal cost of transforming one sequence into the other by means of indels (inserts or deletes) and substitutions, and can account for indel and substitution costs. If, say, the difference between ranks 1 and 3 is twice the difference between ranks 1 and 2, you could set the substitution costs as the rank differences. Such a measure works for sequences of different lengths: it simply accounts for the cost of the indels necessary to make the sequences of equal length. If you prefer to put more focus on similarity in the ordering of the elements of the sequences, other measures, such as optimal matching of transitions, could be a better choice. Hope this helps.
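The cost-sensitive edit distance behind optimal matching is a short dynamic program. A generic sketch (the rank-difference substitution cost is just one possible choice, as noted above; the function name is mine):

```python
def om_distance(a, b, indel=1.0, sub=lambda x, y: abs(x - y)):
    """Edit distance with a constant indel cost and a user-supplied
    substitution cost, as in optimal matching of sequences."""
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * indel
    for j in range(1, n + 1):
        d[0][j] = j * indel
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j - 1] + sub(a[i - 1], b[j - 1]),  # substitute
                d[i - 1][j] + indel,                        # delete
                d[i][j - 1] + indel,                        # insert
            )
    return d[m][n]

print(om_distance([1, 2, 3], [1, 3, 2]))  # 2.0: two substitutions of cost 1
print(om_distance([1, 2, 3], [1, 2]))     # 1.0: one deletion
```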
32,965
similarity measure between two different ordered sequences
It sounds to me that you are seeking something like a sub-sequence similarity, am I right? If so, imagine both sequences A and B as strings; then you could apply the longest common substring, or the longest common subsequence. The length of the resulting string could then be divided by the maximum of the lengths of A and B. Would this be an option for you?
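A minimal sketch of the subsequence variant, with the length normalized by the longer sequence as suggested (the helper names are mine):

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of a and b."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                d[i][j] = d[i - 1][j - 1] + 1
            else:
                d[i][j] = max(d[i - 1][j], d[i][j - 1])
    return d[m][n]

def lcs_similarity(a, b):
    """Normalize by the longer length so the score lies in [0, 1]."""
    return lcs_length(a, b) / max(len(a), len(b))

print(lcs_length("ABCD", "ACBD"))      # 3, e.g. the subsequence "ABD"
print(lcs_similarity("ABCD", "ACBD"))  # 0.75
```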
32,966
How to generate uncorrelated white noise sequence in R without using arima.sim?
You will have to specify some distribution, but if you are happy to go with the default choice of a normal distribution (as, in fact, does arima.sim, unless you override the default with some other choice of its rand.gen argument), then rnorm(200) will do the trick: it yields a series of uncorrelated (in fact, even independent) and identically distributed r.v.s.
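For non-R readers, the same idea in Python using only the standard library, with a quick check that the lag-1 sample autocorrelation is near zero (the seed and length are arbitrary choices):

```python
import random

random.seed(42)
x = [random.gauss(0.0, 1.0) for _ in range(200)]  # i.i.d. N(0, 1) draws

# For white noise the lag-1 sample autocorrelation should be near zero
# (its sampling sd is about 1/sqrt(200) ~ 0.07).
mean = sum(x) / len(x)
num = sum((x[t] - mean) * (x[t - 1] - mean) for t in range(1, len(x)))
den = sum((v - mean) ** 2 for v in x)
print(round(num / den, 3))  # small in absolute value
```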
32,967
How to generate uncorrelated white noise sequence in R without using arima.sim?
White noise is a sequence of uncorrelated, zero-mean random variables with constant variance; an i.i.d. Gaussian sequence is the standard choice. Because of that, you could just use: rnorm(n, mean = 0, sd = 1). To give you a little more insight, here's how I would use it to generate a random walk:

set.seed(15)
x = NULL
x[1] = 0
for (i in 2:100) {
  x[i] = x[i-1] + rnorm(1, 0, 1)
}
ts.plot(x, main = 'Random walk 1 (Xt)', xlab = 'Time', ylab = '', col = 'blue', lwd = 2)
32,968
What are good examples to show to undergraduate students?
One good way can be to install R (http://www.r-project.org/) and use its examples for teaching. You can access the help in R with commands like "?t.test". At the end of each help file are examples. For t.test, for example:

> t.test(extra ~ group, data = sleep)

        Welch Two Sample t-test

data:  extra by group
t = -1.8608, df = 17.776, p-value = 0.07939
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -3.3654832  0.2054832
sample estimates:
mean in group 1 mean in group 2
           0.75            2.33

> plot(extra ~ group, data = sleep)
32,969
What are good examples to show to undergraduate students?
I suggest an application of the central limit theorem for pre-determination of a sample size, answering questions like "did I send out enough questionnaires?". http://web.as.uky.edu/statistics/users/pbreheny/580-F10/notes/9.pdf provides a fine real-world example of how to apply the central limit theorem. A didactic strategy might be:

A) theory
* make clear the difference between a sampling distribution and the distribution of an estimate, e.g. by the "flat" distribution of rolling a die versus the distribution of the mean of N dice (use R, or let the students play themselves with Excel, drawing single-value distributions versus distributions of means)
* show the formula-based calculation of percentiles for the distribution of the mean (as you are deep into maths, you might want to derive the formula) -- this point corresponds to slides 10-17 in the presentation linked above

and then (as in slide 20 of the presentation linked above):

B) application
* show how the central limit theorem helps to determine sample sizes for a desired exactness in estimates of the mean

This application B) is what, in my experience, non-statisticians expect from a statistician: answering questions of the type "do I have enough data?"
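The dice demonstration in A) takes only a few lines of simulation (plain Python here; the sample sizes are illustrative choices):

```python
import random

random.seed(1)
reps, n = 5000, 30  # 5000 replications of the mean of n dice

# Each die is "flat" on 1..6; the mean of n dice is approximately normal.
means = [sum(random.randint(1, 6) for _ in range(n)) / n for _ in range(reps)]

grand = sum(means) / reps
var = sum((m - grand) ** 2 for m in means) / (reps - 1)
print(round(grand, 2))  # near the die mean of 3.5
print(round(var, 3))    # near (35/12)/n ~ 0.097, as the CLT predicts
```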
32,970
What are good examples to show to undergraduate students?
Since you are teaching CS students, a nice application of the Central Limit Theorem may be to estimate the mean of a massive dataset (i.e. > 100 million records). It might be instructive to show that it's not necessary to calculate the mean over the entire dataset, but instead to sample from it and use the sample mean to estimate the mean of the entire dataset/database. You could take this a step further if you wanted and simulate a dataset that has drastically different values for different subgroups. You could then have the students explore stratified sampling to obtain more accurate estimates. Again, since these are CS students, you may want to do some bootstrapping as well, to obtain confidence intervals or to estimate the variances of more complex statistics. This is a nice intersection of statistics and computer science, in my opinion, and might lead to greater interest in the subject matter.
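A sketch of the idea on synthetic data (standard library only; the population size and parameters are invented for illustration):

```python
import math
import random

random.seed(7)
# Stand-in for a massive table: one million records.
population = [random.gauss(50.0, 10.0) for _ in range(1_000_000)]
true_mean = sum(population) / len(population)

# Estimate the mean from a 0.1% simple random sample.
sample = random.sample(population, 1_000)
est = sum(sample) / len(sample)
sd = math.sqrt(sum((v - est) ** 2 for v in sample) / (len(sample) - 1))
se = sd / math.sqrt(len(sample))

print(round(true_mean, 2), round(est, 2), round(se, 2))
# the subsample estimate lands within a few standard errors of the truth
```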
32,971
What are good examples to show to undergraduate students?
I started by typing a comment but it became too lengthy... Keep in mind that they are CS students. You won't please them the way you please mathematicians (with $\sigma$-algebras) or biologists and physicians (with biological or medical data, and classical recipes for testing good old null hypotheses). If you have enough freedom to decide the orientation of the lecture, and if the point is that they learn basic concepts, my advice is to make a radical change of orientation. Of course, if other teachers want them to be able to perform some predefined tasks, you are a bit stuck. So, in my opinion, they will like it if you present inference from a "learning" point of view, and if you present tests from a "decision theory" or "classification" point of view -- in short, they're supposed to like algorithms. To grok algorithms! Also, try to find CS-related datasets; e.g. the duration of connections and the number of requests per unit of time to an HTTP server can help to illustrate many concepts. They will love to learn simulation techniques. Lehmer generators are easy to implement. Show them how to simulate other distributions by inverting the cdf. If you're into this, show them Marsaglia's Ziggurat algorithm. Oh, and the MWC256 generator by Marsaglia is a little gem. The Diehard tests by Marsaglia (tests for fairness of uniform generators) can help to illustrate many concepts of probability and statistics. You can even choose to present probability theory based on "(independent) streams of random doubles, oops, I mean reals" -- this is a bit cheeky, but it can be grand. Also, remember that PageRank is based on a Markov chain. This is not easy matter, but following the presentation of Arthur Engel (I think the reference is the probabilistic abacus -- if you read French, this book is absolutely a must-read), you can easily present a few toy examples that they'll like.
I think that CS students will like discrete Markov chains much more than $t$-tests, even if it seems more difficult material (Engel's presentation makes it very easy). If you master your subject enough, don't hesitate to be original. "Classical" lectures are ok when you teach something you are not fully familiar with. Good luck, and if you release some lecture notes please let me know!
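Inverting the cdf, mentioned above, fits in a few lines; the exponential distribution is the textbook case (the rate and sample size below are arbitrary):

```python
import math
import random

random.seed(3)

def rexp(rate):
    """Exponential draw via the inverse cdf: F^{-1}(u) = -ln(1 - u) / rate."""
    u = random.random()
    return -math.log(1.0 - u) / rate

draws = [rexp(2.0) for _ in range(10_000)]
print(round(sum(draws) / len(draws), 3))  # close to the theoretical mean 1/2
```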
What are good examples to show to undergraduate students?
I started by typing a comment but it became too lengthy... Keep in mind that they are CS student. You won't please them the way you please mathematicians (with $\sigma$ algebras) or biologist, physici
What are good examples to show to undergraduate students? I started by typing a comment but it became too lengthy... Keep in mind that they are CS student. You won't please them the way you please mathematicians (with $\sigma$ algebras) or biologist, physicians (with biological or medical data, and classical recipes for testing good old null hypotheses). If you have enough freedom to decide the orientation of the lecture, if the point is that they learn basic concepts, my advice is to make a radical change of orientation. Of course, if other teachers want them to be able to perform some predefined tasks, you are a bit stuck. So, in my opinion, they will like it if you present inference from a "learning" point of view, and if you present tests from a "decision theory" or "classification" point of view -- in short, they're supposed to like algorithms. To grok algorithms! Also, try to find CS related datasets ; e.g. the duration of connections and the number of request per unit of time to an html server can help to illustrate many concepts. They will love to learn simulation techniques. Lehmer generators are easy to implement. Show them how to simulate other distributions by inverting the cdf. If you're into this, show them Marsaglia's Ziggurat algorithm. Oh, and the MWC256 generator by Marsaglia is a little gem. The Diehard tests by Marsaglia (tests for fairness of uniform generators) can help to illustrate many concepts of probability and statistics. You can even chose to present probability theory based on "(independent) streams of random doubles, oups, I mean reals" -- this is a bit cheeky, but it can be grand. Also, remember that page rank is based on a Markov chain. This is not easy matter but following the presentation from Arthur Engel (I think the reference is the probabilistic abacus -- if you read French, this book is absolutely a must read), you can easily present a few toy examples that they'll like. 
I think that CS students will like discrete Markov chains much more than $t$-tests, even if it seems like more difficult material (Engel's presentation makes it very easy). If you master your subject well enough, don't hesitate to be original. "Classical" lectures are OK when you teach something you are not fully familiar with. Good luck, and if you release some lecture notes please let me know!
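The PageRank remark can also be turned into a five-minute demo; the sketch below (illustrative, with a made-up three-page web) is just power iteration on the underlying Markov chain, with the usual damping factor of 0.85:

```python
# Toy web: page 0 links to 1 and 2, page 1 links to 2, page 2 links to 0.
links = {0: [1, 2], 1: [2], 2: [0]}
n, d = 3, 0.85
rank = [1.0 / n] * n
for _ in range(100):
    new = [(1 - d) / n] * n           # teleportation mass
    for page, outs in links.items():
        for target in outs:           # spread each page's rank over its links
            new[target] += d * rank[page] / len(outs)
    rank = new
# rank now approximates the stationary distribution of the damped chain.
```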
32,972
What are good examples to show to undergraduate students?
You say these are computer-science students. What are their interests: is this mainly theoretical computer science, or are the students mainly motivated by preparing for jobs? You could also tell us the course description! But, whatever your answers to those questions, you could start with some practical statistics occurring in informatics contexts, such as (for example) web design. This site from time to time has questions about this, such as Conversion rates over time or https://stats.stackexchange.com/questions/96853/comparing-sales-person-conversion-rates or AB Testing other factors besides conversion rate. There are lots of questions here such as these, seemingly from people involved in web design. The situation is that you have some web page (say, you sell something). The "conversion rate", as I understand it, is the percentage of visitors who go on to some preferred task (such as buying, or some other goal you have for your visitors). Then you, as web designer, ask whether your layout of the page influences this behavior. So you program two (or more) versions of the web page, choose randomly which version to present to each new visitor, and can thus compare the conversion rates, finally implementing the version with the highest conversion rate. This is a problem of designing a comparison experiment, and you need statistical methods to compare percentages, or maybe to analyze directly the contingency table of designs versus convert/no convert. That example could show them that statistics could actually be useful for them in some web development job! And, from the statistical side, it opens up a lot of interesting questions about validity of assumptions... To connect to what you say about the central limit theorem, you can ask how many observations you need before you can treat the percentages as normally distributed, and have them study that using simulation... You can search this site for other stats questions posed by programmer types...
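To make the comparison concrete, here is an illustrative Python sketch (not from the original answer; the function name and counts are made up) of the standard two-proportion z-test one would apply to the two page versions:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates (normal approx.)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)   # common rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

Whether the normal approximation behind this test is reasonable at a given sample size is exactly the simulation question raised above.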
32,973
What are good examples to show to undergraduate students?
I suggest that, before any good examples, it is better to focus on clear definitions. In my experience, undergraduate probability and statistics is a course filled with words that none of the students understand. As an experiment, ask students who just finished a probability course what a "random variable" is. They might give you examples, but I doubt that most will give you a clear definition of it. What exactly is "probability"? What is a "distribution"? The terminology in statistics is even more confusing. Most undergraduate books I have seen do a very bad job at explaining this. Examples and computations are nice, but without clear definitions they are not as helpful as one would think. Speaking from my experience, this was exactly why I hated probability theory as an undergraduate. Even though my interests are about as far removed from probability as one can get, I now appreciate the subject, because I eventually taught myself what all the terminology really means. I apologize that this is not exactly what you asked, but given that you are teaching such a class I thought that this would be useful advice.
32,974
Nadaraya-Watson Optimal Bandwidth
It's optimal in that it minimizes the mean (integrated) squared error for a given data-generating process, as a function of its parameters and the sample size. The trick is that "proportional to" means there's an unknown factor multiplying $n^{-\frac{1}{5}}$. There are various candidates that are more or less data-driven, but the simplest rule-of-thumb (RoT) bandwidth when using a second-order kernel is $$h=\sigma_x \cdot n^{-\frac{1}{5}}.$$ See Li and Racine, Nonparametric Econometrics: Theory and Practice, bottom of p. 66. Usually, one can do much better than this by using cross-validation (CV) to pick $h$ instead.
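As an illustrative sketch (not from Li and Racine; a Gaussian kernel and the sample standard deviation of $x$ are assumed), the rule-of-thumb bandwidth and the Nadaraya-Watson estimator look like this in Python:

```python
import math

def rot_bandwidth(x):
    """Rule-of-thumb bandwidth h = sd(x) * n^(-1/5)."""
    n = len(x)
    mean = sum(x) / n
    sd = math.sqrt(sum((xi - mean) ** 2 for xi in x) / (n - 1))
    return sd * n ** (-1 / 5)

def nw_estimate(x, y, x0, h):
    """Nadaraya-Watson estimate at x0: a kernel-weighted average of the y's."""
    w = [math.exp(-0.5 * ((xi - x0) / h) ** 2) for xi in x]
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
```

Replacing rot_bandwidth with a bandwidth chosen by leave-one-out CV is the improvement mentioned above.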
32,975
Type of inference to use with log-linear Poisson glm on contingency table frequency counts
If I had to choose based on how you set this up, I guess I would go with Anova(), but neither makes much sense. The order in which R enters the variables into the model is standardized and arbitrary; I would not use that to define the tests I would run. Instead, use ?loglin (in base R) or ?loglm (in the MASS library), and then drop the specific variables / combinations that you are interested in testing. There is an R loglm tutorial using the Titanic dataset here. As @DWin notes, the sequential vs. non-sequential distinction corresponds to meaningfully different hypotheses, so that cannot be answered except by the researcher. The standard version of this point in the R world is Venables' paper, Exegeses on Linear Models (pdf). Given that you state you just wonder "which factors are associated with each other", that seems less like a conditional inference and more like dropping the specified association from the full model and testing that, or perhaps testing associations dropped from stripped-down models where the other variables aren't included at all.
32,976
Type of inference to use with log-linear Poisson glm on contingency table frequency counts
The approach recommended in car (the Companion to Applied Regression package) for this type of analysis is to use Anova Type II tests, which conform to the principle of marginality.

Fit the saturated model:

library(COUNT)
library(car)
data(titanic)
titanic = droplevels(titanic)
mytable = xtabs(~ class + age + sex + survived, data = titanic)
freqdata = data.frame(mytable)
fullmodel = glm(Freq ~ class * age * sex * survived, family = poisson, data = freqdata)

Then the Type II Anova:

> Anova(fullmodel)
Analysis of Deviance Table (Type II tests)

Response: Freq
                       LR Chisq Df Pr(>Chisq)    
class                    231.18  2  < 2.2e-16 ***
age                     1072.61  1  < 2.2e-16 ***
sex                      137.74  1  < 2.2e-16 ***
survived                  77.61  1  < 2.2e-16 ***
class:age                 41.24  2  1.107e-09 ***
class:sex                  2.30  2   0.316761    
age:sex                    0.27  1   0.604214    
class:survived           114.88  2  < 2.2e-16 ***
age:survived              20.34  1  6.486e-06 ***
sex:survived             318.53  1  < 2.2e-16 ***
class:age:sex              9.78  2   0.007509 ** 
class:age:survived        37.26  2  8.101e-09 ***
class:sex:survived        64.07  2  1.220e-14 ***
age:sex:survived           1.69  1   0.194209    
class:age:sex:survived     0.00  2   1.000000    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

And you work your way around this table from bottom to top. It is clear that the terms class:age:sex:survived and age:sex:survived have large p values, so they can probably be ignored. We refit the model without those terms:

secondmodel = glm(Freq ~ class * age * sex * survived - age:sex:survived - class:age:sex:survived,
                  family = poisson, data = freqdata)
Anova(secondmodel)

Yielding:

> Anova(secondmodel)
Analysis of Deviance Table (Type II tests)

Response: Freq
                   LR Chisq Df Pr(>Chisq)    
class                231.18  2  < 2.2e-16 ***
age                 1072.61  1  < 2.2e-16 ***
sex                  137.74  1  < 2.2e-16 ***
survived              77.61  1  < 2.2e-16 ***
class:age             48.00  2  3.767e-11 ***
class:sex              0.85  2     0.6530    
age:sex                0.27  1     0.6042    
class:survived       115.42  2  < 2.2e-16 ***
age:survived          20.34  1  6.486e-06 ***
sex:survived         318.53  1  < 2.2e-16 ***
class:age:sex         20.27  2  3.971e-05 ***
class:age:survived    44.21  2  2.507e-10 ***
class:sex:survived    73.71  2  < 2.2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

We can inspect the loglinear fit itself:

> secondmodel

Call:  glm(formula = Freq ~ class * age * sex * survived - age:sex:survived -
    class:age:sex:survived, family = poisson, data = freqdata)

Coefficients:
                          (Intercept)                        class2nd class  
                            -25.43983                               1.08137  
                       class3rd class                             ageadults  
                             28.11861                              26.82613  
                               sexman                           survivedyes  
                              5.89242                              25.43983  
             class2nd class:ageadults              class3rd class:ageadults  
                              0.09728                             -24.98930  
                class2nd class:sexman                 class3rd class:sexman  
                             -1.84450                              -4.94864  
                     ageadults:sexman            class2nd class:survivedyes  
                             -2.50803                               1.48358  
           class3rd class:survivedyes                 ageadults:survivedyes  
                            -25.31933                             -21.88449  
                   sexman:survivedyes       class2nd class:ageadults:sexman  
                             -4.28298                               0.93211  
      class3rd class:ageadults:sexman  class2nd class:ageadults:survivedyes  
                              3.00077                              -3.22185  
 class3rd class:ageadults:survivedyes     class2nd class:sexman:survivedyes  
                             21.54657                               0.06801  
    class3rd class:sexman:survivedyes  
                              2.89768  

Degrees of Freedom: 23 Total (i.e. Null);  3 Residual
Null Deviance:      2173
Residual Deviance: 1.685    AIC: 147.8

> 1 - pchisq(1.685, 3)
[1] 0.6402738

We do not reject the null for the lack-of-fit test (fitted model vs. saturated model), so we conclude that the model fits the data well. You can then inspect the coefficients from summary(secondmodel).
32,977
Violation of Cox Proportional Hazards by a continuous variable
I think you've presented your question very well. I often use the extended Cox model (with time-varying covariates) and I've been struck by the same obstacles several times. Like you pointed out:

> cox.zph(model)
            rho  chisq        p
s        0.2066 12.536 0.000399
CTRUE    0.0453  2.212 0.136984
l        0.0461  0.835 0.360842
GLOBAL       NA 14.684 0.002108

Your model violates the PH assumption; the model as a whole violates it due to the marked violation for the s variable.

Then you show us this plot. I rarely bother to assess the plots if my cox.zph() is alright. However, I guess that the plot's y axis presents the beta coefficient of s, and it appears to vary during follow-up. The proportional hazards assumption states that the effect (hazard ratio) of any variable must be constant throughout the study period. That is, the effect of s should not fluctuate (vary) with time. This plot shows that the hazard associated with s is less pronounced between 200 and 700 days. To me, that's the graphical evidence for the p = 0.000399 in the cox.zph() output. Why did that happen? Subject-matter knowledge will be your guide to decide why this occurred. I have not seen violations as sudden as this one; the patterns are usually more gradual than your graph presents. More importantly, you did the right thing by including the interaction term between s and stop. See John Fox's example here. Some would argue that you should be satisfied with that (without rechecking cox.zph()). I would recommend a recheck, however.
32,978
How do I cite the iris dataset in a paper?
I would cite both papers (Anderson, 1936; Fisher, 1936), but not scikit-learn, as the dataset is simply bundled with the library but is not unique to it (for example, the same iris dataset is bundled with the R environment as well). Having said that, scikit-learn certainly has to be cited as well, if used, but not due to use of the dataset.
32,979
How do I cite the iris dataset in a paper?
I think that citing scikit-learn is sufficient. According to the scikit-learn documentation, you should cite their paper. You can always add a reference to the Iris dataset in scikit-learn by providing a link to the page. EDIT - I stand corrected. The accepted answer is spot on.
32,980
Is multicollinearity a problem with gradient boosted trees (i.e. GBM)?
I believe I can answer this, although it is an old question. Boosted trees are immune to multicollinearity as far as prediction is concerned: https://datascience.stackexchange.com/questions/12554/does-xgboost-handle-multicollinearity-by-itself See also the newest implementation of boosted trees with EBM from Microsoft: https://interpret.ml/docs/ebm.html "The boosting procedure is carefully restricted to train on one feature at a time in round-robin fashion using a very low learning rate so that feature order does not matter. It round-robin cycles through features to mitigate the effects of co-linearity and to learn the best feature function for each feature to show how each feature contributes to the model's prediction for the problem." But! As the second answer at the first link highlights, boosted trees cannot work out multicollinearity when it comes to inference or feature importance. Boosted trees do not know if, for example, you have added a second feature that is perfectly linearly dependent on another. The trees will just say that both features (the original one and the artificial one) are important, and they may share the feature importance between them. Just run a simple experiment on this and you will see that they cannot deal with multicollinearity in terms of, let's say, causal interpretation. If you want such a thing, you first need to aggregate features or use a regularization method. Update 2022/1/17: I ran an experiment examining the explanatory part of multicollinearity in boosted trees, GAMs and decision trees. While multicollinearity has no effect on prediction, the explanatory part is highly influenced by it. So far, only the EBM offers a handling of multicollinearity, due to its round-robin procedure. See my other post: How shap values behave in terms of multicollinearity in Trees, Ensemble, GradientBoosting and GAM/Boosting
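The "simple experiment" suggested above can be sketched even without trees; the illustrative NumPy snippet below (not from the original answer) shows the same split in linear form: adding an exact copy of a feature leaves predictions untouched, but the minimum-norm least-squares fit divides the original coefficient arbitrarily (here, evenly) between the two copies, just as tree importance gets shared:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
y = 2.0 * x[:, 0] + rng.normal(scale=0.1, size=200)

b1, *_ = np.linalg.lstsq(x, y, rcond=None)       # fit with one feature
X2 = np.hstack([x, x])                           # add a perfect duplicate
b2, *_ = np.linalg.lstsq(X2, y, rcond=None)      # minimum-norm solution

pred1, pred2 = x @ b1, X2 @ b2
# pred1 equals pred2 (prediction is unaffected), but the coefficient of ~2
# is now split evenly across the two collinear copies of the feature.
```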
32,981
What interpretation do the parameters of a generalised linear model with effect coding have?
Under effect coding, the intercept in the summary table summary(out) is the average logit (log odds) across all four periods in your case, and each of the other effects is the logit difference of the corresponding period relative to that average logit. You can easily verify this interpretation by comparing your current results to a different coding method, such as dummy coding, on your data:

out2 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
              data = cbpp, family = binomial,
              contrasts = list(period = "contr.treatment"))
summary(out2)
32,982
Quantifying how much "more correlation" a correlation matrix A contains compared to a correlation matrix B
The determinant of the covariance isn't a terrible idea, but you probably want to use the inverse of the determinant. Picture the contours (lines of equal probability density) of a bivariate distribution. You can think of the determinant as (approximately) measuring the volume of a given contour. Then a highly correlated set of variables actually has less volume, because the contours are so stretched. For example: If $X \sim N(0, 1)$ and $Y = X + \epsilon$, where $\epsilon \sim N(0, .01)$, then $$ Cov (X, Y) = \begin{bmatrix} 1 & 1 \\ 1 & 1.01 \end{bmatrix} $$ so $$ Corr (X, Y) \approx \begin{bmatrix} 1 & .995 \\ .995 & 1 \end{bmatrix} $$ so the determinant is $\approx .0099$. On the other hand, if $X, Y$ are independent $N(0, 1)$, then the determinant is 1. As any pair of variables becomes more nearly linearly dependent, the determinant approaches zero, since it's the product of the eigenvalues of the correlation matrix. So the determinant may not be able to distinguish between a single pair of nearly-dependent variables, as opposed to many pairs, and this is unlikely to be a behavior you desire. I would suggest simulating such a scenario. You could use a scheme like this:

Fix a dimension P, an approximate rank r, and let s be a large constant
Let A[1], ..., A[r] be random vectors, drawn iid from the N(0, s) distribution
Set Sigma = Identity(P)
For i = 1..r: Sigma = Sigma + A[i] * A[i]^T
Set rho to be Sigma scaled as a correlation matrix

Then rho will have approximate rank r, which determines how many nearly linearly independent variables you have. You can see how the determinant reflects the approximate rank r and scaling s.
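A rough Python rendering of this scheme (the helper name and the specific P, r, s values are ours) shows the determinant of the resulting correlation matrix collapsing toward zero as s grows:

```python
import numpy as np

def make_corr(P, r, s, rng):
    """Build a correlation matrix with approximate rank r (scheme above)."""
    sigma = np.eye(P)
    for _ in range(r):
        a = rng.normal(0.0, np.sqrt(s), P)   # random vector, variance s
        sigma += np.outer(a, a)              # add a rank-one component
    d = np.sqrt(np.diag(sigma))
    return sigma / np.outer(d, d)            # rescale to a correlation matrix

rng = np.random.default_rng(1)
rho_weak = make_corr(P=6, r=3, s=1e-4, rng=rng)     # s tiny: near-identity
rho_strong = make_corr(P=6, r=3, s=100.0, rng=rng)  # s large: approx rank 3
print(np.linalg.det(rho_weak))    # close to 1 (little correlation)
print(np.linalg.det(rho_strong))  # close to 0 (much correlation)
```

Comparing a tiny s with a large one makes the inverse-determinant reading concrete: near-independence gives a determinant near 1, strong low-rank structure a determinant near 0.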
32,983
Why does $r^2$ between two variables represent proportion of shared variance?
One can only guess what one particular author might mean by "shared variance." We might hope to circumscribe the possibilities by considering what properties this concept ought (intuitively) to have. We know that "variances add": the variance of a sum $X+\varepsilon$ is the sum of the variances of $X$ and $\varepsilon$ when $X$ and $\varepsilon$ have zero covariance. It is natural to define the "shared variance" of $X$ with the sum to be the fraction of the variance of the sum represented by the variance of $X$. This is enough to imply the shared variances of any two random variables $X$ and $Y$ must be the square of their correlation coefficient. This result gives meaning to the interpretation of a squared correlation coefficient as a "shared variance": in a suitable sense, it really is a fraction of a total variance that can be assigned to one variable in the sum. The details follow. Principles and their implications Of course if $Y=X$, their "shared variance" (let's call it "SV" from now on) ought to be 100%. But what if $Y$ and $X$ are just scaled or shifted versions of one another? For instance, what if $Y$ represents the temperature of a city in degrees F and $X$ represents the temperature in degrees C? I would like to suggest that in such cases $X$ and $Y$ should still have 100% SV, so that this concept will remain meaningful regardless of how $X$ and $Y$ might be measured: $$\operatorname{SV}(\alpha + \beta X, \gamma + \delta Y) = \operatorname{SV}(X,Y)\tag{1}$$ for any numbers $\alpha, \gamma$ and nonzero numbers $\beta, \delta$. 
Another principle might be that when $\varepsilon$ is a random variable independent of $X$, then the variance of $X+\varepsilon$ can be uniquely decomposed into two non-negative parts, $$\operatorname{Var}(X+\varepsilon) = \operatorname{Var}(X) + \operatorname{Var}(\varepsilon),$$ suggesting we attempt to define SV in this special case as $$\operatorname{SV}(X, X+\varepsilon) = \frac{\operatorname{Var}(X)}{\operatorname{Var}(X) + \operatorname{Var}(\varepsilon)}.\tag{2}$$ Since all these criteria are only up to second order--they only involve the first and second moments of the variables in the forms of expectations and variances--let's relax the requirement that $X$ and $\varepsilon$ be independent and only demand that they be uncorrelated. This will make the analysis much more general than it otherwise might be. The results These principles--if you accept them--lead to a unique, familiar, interpretable concept. The trick will be to reduce the general case to the special case of a sum, where we can apply definition $(2)$. Given $(X,Y)$, we simply attempt to decompose $Y$ into a scaled, shifted version of $X$ plus a variable that is uncorrelated with $X$: that is, let's find (if it's possible) constants $\alpha$ and $\beta$ and a random variable $\varepsilon$ for which $$Y = \alpha + \beta X + \varepsilon\tag{3}$$ with $\operatorname{Cov}(X, \varepsilon)=0$. For the decomposition to have any chance of being unique, we should demand $$\mathbb{E}[\varepsilon]=0$$ so that once $\beta $ is found, $\alpha$ is determined by $$\alpha = \mathbb{E}[Y] - \beta\, \mathbb{E}[X].$$ This looks an awful lot like linear regression and indeed it is. 
The first principle says we may rescale $X$ and $Y$ to have unit variance (assuming they each have nonzero variance) and that when it is done, standard regression results assert the value of $\beta$ in $(3)$ is the correlation of $X$ and $Y$: $$\beta = \rho(X,Y)\tag{4}.$$ Moreover, taking the variances of $(3)$ gives $$1 = \operatorname{Var}(Y) = \beta^2 \operatorname{Var}(X) + \operatorname{Var}(\varepsilon) = \beta^2 + \operatorname{Var}(\varepsilon),$$ implying $$\operatorname{Var}(\varepsilon) = 1-\beta^2 = 1-\rho^2.\tag{5}$$ Consequently $$\eqalign{ \operatorname{SV}(X,Y) &= \operatorname{SV}(X, \alpha+\beta X + \varepsilon) &\text{(Model 3)}\\ &= \operatorname{SV}(\beta X, \beta X + \varepsilon) &\text{(Property 1)}\\ &= \frac{\operatorname{Var}(\beta X)}{\operatorname{Var}(\beta X) + \operatorname{Var}(\varepsilon)} & \text{(Definition 2)}\\ &= \frac{\beta^2}{\beta^2 + (1-\beta^2)} = \beta^2 &\text{(Result 5)}\\ & = \rho^2 &\text{(Relation 4)}. }$$ Note that because the regression coefficient on $Y$ (when standardized to unit variance) is $\rho(Y,X)=\rho(X,Y)$, the "shared variance" itself is symmetric, justifying a terminology that suggests the order of $X$ and $Y$ does not matter: $$\operatorname{SV}(X,Y) = \rho(X,Y)^2 = \rho(Y,X)^2 = \operatorname{SV}(Y,X).$$
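A quick Monte Carlo check of this conclusion (our own sketch; the coefficient 0.6 and the sample size are arbitrary): standardize both variables, regress one on the other, and form the ratio in Definition (2) — it reproduces the squared sample correlation.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(size=n)      # true rho = 0.6 / sqrt(1.36) ~ 0.515

xs = (x - x.mean()) / x.std()         # rescale to unit variance
ys = (y - y.mean()) / y.std()
beta = (xs * ys).mean()               # slope of ys on xs = sample correlation
resid = ys - beta * xs                # the uncorrelated remainder epsilon
shared = beta**2 / (beta**2 + resid.var())   # Definition (2)
rho2 = np.corrcoef(x, y)[0, 1] ** 2
print(shared, rho2)                   # equal, both ~ 0.36 / 1.36
```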
32,984
Concentration inequality of weighted sum of random variables given a tail inequality
Here's some work. I don't think this counts as a complete answer, though. Sorry. First, $$ \left|\sum_i s_iX_i\right| \le \sum_i|s_i X_i| \le \left(\sum_i |s_i|^q\right)^{1/q}\left(\sum_i |X_i|^p \right)^{1/p}, $$ by the triangle inequality and Holder's inequality. Using this, \begin{align*} P(|Z|>t) &\le P\left( \sum_i |X_i|^p > t^p\left(\sum_i |s_i|^q\right)^{-p/q}\right) \\ &= P\left( \sum_i |X_i|^p > \frac{t^p}{ \left\Vert s\right\Vert_q^p }\right) . \end{align*} I don't know how to sharpen it with the minimum, though. I tried using the fact that $P(\max_i |X_i| \le t) = P(|X_i|\le t)^n$ and the complement rule to get: $$ P\left( \sum_i |X_i|^p > \frac{t^p}{ \left\Vert s\right\Vert_q^p }\right) \le P\left( \max_i |X_i|^p > \frac{t^p}{ n \left\Vert s\right\Vert_q^p }\right) \le 1 - \left[1 - \exp\left(-\frac{t^p}{ n \left\Vert s\right\Vert_q^p } \right) \right]^n. $$ Not sure though.
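The first inequality chain is easy to sanity-check by simulation (our own sketch; p = q = 2, Gaussian $X_i$, and arbitrary weights — none of these choices come from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 2.0, 2.0                    # Holder conjugates: 1/p + 1/q = 1
n, reps, t = 5, 100_000, 3.0
s = np.array([0.5, -1.0, 2.0, 1.5, -0.3])
s_norm_q = np.sum(np.abs(s) ** q) ** (1 / q)   # ||s||_q

X = rng.standard_normal((reps, n))
Z = X @ s
lhs = np.mean(np.abs(Z) > t)                                   # P(|Z| > t)
rhs = np.mean(np.sum(np.abs(X) ** p, axis=1) > (t / s_norm_q) ** p)
print(lhs, rhs)
```

Because Holder's inequality holds sample by sample, the event $\{|Z|>t\}$ is contained in the event on the right, so the empirical frequencies obey the bound exactly, not just on average.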
32,985
Why doesn't correlation of residuals matter when testing for normality?
In your notation, $H$ is the projection onto the column space of $X$, i.e. the subspace spanned by all regressors. Therefore $M:=I_{n}-H$ is the projection onto everything orthogonal to the subspace spanned by all regressors. If $X\in\mathbb{R}^{n\times k}$, then $\hat{e}\in\mathbb{R}^{n}$ has a singular normal distribution and its elements are correlated, as you state. The errors $\varepsilon$ are unobservable and are in general not orthogonal to the subspace spanned by $X$. For the sake of argument, assume that the error $\varepsilon\perp\operatorname{span}\left(X\right)$. If this were true, we would have $y=X\beta+\varepsilon=\tilde{y}+\varepsilon$ with $\tilde{y}\perp\varepsilon$. Since $\tilde{y}=X\beta\in\operatorname{span}\left(X\right)$, we could decompose $y$ and recover the true $\varepsilon$. Assume we have a basis $b_{1},\ldots,b_{n}$ of $\mathbb{R}^{n}$, where the first $k$ basis vectors $b_{1},\ldots,b_{k}$ span the subspace $\operatorname{span}\left(X\right)$ and the remaining $b_{k+1},\ldots,b_{n}$ span $\operatorname{span}\left(X\right)^{\perp}$. In general, the error $\varepsilon=\alpha_{1}b_{1}+\ldots+\alpha_{n}b_{n}$ will have non-zero components $\alpha_{i}$ for $i\in\left\{1,\ldots,k\right\}$. These non-zero components get mixed up with $X\beta$ and therefore cannot be recovered by projection onto $\operatorname{span}\left(X\right)$. Since we can never hope to recover the true errors $\varepsilon$, and $\hat{e}$ is correlated singular $n$-dimensional normal, we could instead transform $\hat{e}\in\mathbb{R}^{n}\mapsto e^{*}\in\mathbb{R}^{n-k}$ such that \begin{equation} e^{*}\sim\mathcal{N}_{n-k}\left(0,\sigma^{2}I_{n-k}\right) \textrm{,} \end{equation} i.e. $e^{*}$ is non-singular, uncorrelated, and homoscedastic normal. The residuals $e^{*}$ are called Theil's BLUS residuals. In the short paper On the Testing of Regression Disturbances for Normality you find a comparison of OLS and BLUS residuals. 
In the tested Monte Carlo setting the OLS residuals are superior to BLUS residuals. But this should give you some starting point.
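Both facts are easy to exhibit numerically (a numpy sketch of ours, not Theil's actual BLUS construction): $\operatorname{Cov}(\hat{e})=\sigma^{2}M$ has nonzero off-diagonal entries, yet projecting $\hat{e}$ onto an orthonormal basis of $\operatorname{span}(X)^{\perp}$ gives $n-k$ uncorrelated, homoscedastic components.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3
X = rng.normal(size=(n, k))
H = X @ np.linalg.inv(X.T @ X) @ X.T    # projection onto span(X)
M = np.eye(n) - H                        # Cov(e_hat) = sigma^2 * M

# Off-diagonal entries of M are generally nonzero: correlated residuals.
offdiag = M - np.diag(np.diag(M))
print(np.abs(offdiag).max())

# M is a projection, so its eigenvalues are 0 (k times) and 1 (n - k times).
# The eigenvalue-1 eigenvectors form an orthonormal basis Q of span(X)-perp,
# and Q^T M Q = I_{n-k}, so Q^T e_hat has covariance sigma^2 * I_{n-k}.
eigval, eigvec = np.linalg.eigh(M)
Q = eigvec[:, np.isclose(eigval, 1.0)]
print(np.allclose(Q.T @ M @ Q, np.eye(n - k)))
```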
32,986
What is the autocorrelation function of a time series arising from computing a moving standard deviation?
The ACF of the rolling standard deviation cannot generally be obtained from the ACF of the time series, because the rolling standard deviation is fundamentally a nonlinear filter. To avoid boundary effects take $(X_t)_{t \in\mathbb{Z}}$ to be a doubly infinite stationary process with mean 0. As I understand the rolling window computation we introduce the rolling variance estimator $$s_t^2 = \sum_{i=0}^w \frac{1}{w+1} X_{t-i}^2,$$ which is a backward moving average of the squared process. The standard deviation, being $s_t = \sqrt{s_t^2}$, is even more so a nonlinear filter. However, $(s_t^2)_{t \in \mathbb{Z}}$ is a causal linear filter of the squared process, and its ACF can therefore be derived from the ACF of $(X_t^2)_{t \in \mathbb{Z}}$. If the time series is a sequence of i.i.d. variables, so is the squared process, in which case $(s_t^2)_{t \in \mathbb{Z}}$ is an MA$(w)$ process with all weights equal to $1/(w+1)$. Using an ARCH(1) model we can, on the other hand, find an example where the process itself is a white noise process, but the squared process is not. In fact, for the ARCH(1) model the ACF for the squared process coincides with the ACF for an AR(1) process, in which case the ACF for the rolling variance is the same as for a moving average of an AR(1) process. Clearly, the computations above are idealized, since we would probably also use a rolling mean in practice to center the time series. As I see it, this would just mess up explicit computations even more. With explicit assumptions about the time series (ARCH-structure, or a Gaussian distribution) there is a certain chance you can compute the ACF for the squared process, and from this the ACF for the rolling variance. On a more qualitative level the rolling variance and rolling standard deviation will inherit ergodicity and various mixing properties from the time series itself. 
This is useful if you want to apply general tools from (nonlinear) time series analysis and stochastic processes to assess if the rolling standard deviation is stationary (which I understand is of interest).
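The i.i.d. special case is easy to verify by simulation (a sketch of ours, taking the mean known to be 0 so no rolling mean is subtracted): the ACF of the rolling variance should be roughly $(w+1-h)/(w+1)$ for lags $h \le w$ and vanish beyond lag $w$.

```python
import numpy as np

rng = np.random.default_rng(3)
T, w = 200_000, 4
x2 = rng.standard_normal(T) ** 2                 # squared i.i.d. series
# Rolling variance: MA(w) of the squared process with equal weights.
s2 = np.convolve(x2, np.ones(w + 1) / (w + 1), mode="valid")

def acf(z, h):
    """Sample autocorrelation of z at lag h."""
    z = z - z.mean()
    return np.mean(z[:-h] * z[h:]) / np.mean(z * z)

print([acf(s2, h) for h in (1, 2, 5)])   # ~ 0.8, 0.6, 0.0 for w = 4
```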
32,987
How would you visualize a segmented funnel? (and could you do it with Python?)
This plot displays a two-way contingency table whose data are approximately these:

                       Branded  Unbranded  Social  Referring  Direct    RSS
First-time...           177276     472737   88638     265915  472737  59092
Return Visits...        236002     629339  118001     354003  629339  78667
4+ Visits in ...        166514     444037   83257     249771  444037  55505
10+ Visit in ...         28782      76751   14391      43172   76751   9594
At Least One Visit...     6707      17886    3354      10061   17886   2236
Last Touch...              660       1759     330        989    1759    220

There are myriad ways to construct this plot. For instance, you could calculate the positions of each rectangular patch of color and separately plot each patch. In general, though, it helps to find a succinct description of how a plot represents data. As a point of departure, we may view this one as a variation of a stacked bar chart. This plot scarcely needs a description: through familiarity we know that each row of rectangles corresponds to each row of the contingency table; that lengths of the rectangles are directly proportional to their counts; that they do not overlap; and that the colors correspond to the columns of the table. If we convert this table into a "data frame" or "data table" $X$ having one row per count with fields indicating the row name, column name, and count, then plotting it typically amounts to calling a suitable function and stipulating where to find the row names, the column names, and the counts. Using a Grammar of Graphics implementation (the ggplot2 package for R) this would look something like

ggplot(X, aes(Outcome, Count, fill=Referral)) + geom_col()

The details of the graphic, such as how wide a row of bars is and what colors to use, typically need to be stipulated explicitly. How that is done depends on the plotting environment (and so is of relatively little interest: you just have to look it up). This particular implementation of the Grammar of Graphics provides little flexibility in positioning the bars. 
One way to produce the desired look, with minimal effort, is to insert an invisible category at the base of each bar so that the bars are centered. A little thinking shows the fake count needed to center each bar is half the difference between the longest bar's total length and that bar's own length. For this example this would be an initial column with the values

254478.0
0.0
301115.0
897955.0
993610.5
1019817.0

Here is the resulting stacked bar chart showing the fake data in light gray: The desired figure is created by making the graphics for the fake column invisible: The Grammar of Graphics description of the plot does not need to change: we have simply supplied a different contingency table to be rendered according to the same description (and overridden the default color assignment for the fake column). Comments These graphics are honest: the horizontal extent of each colored patch is directly proportional to the underlying data, without distortion. Comparing them to the original (in the question) reveals how extreme its distortion is (Tufte's Lie Factor). If it is desired to show details at the bottom of the "funnel," consider representing counts by area rather than length. You could make the lengths of the bars proportional to the square roots of the total lengths and their widths (in the vertical direction) also proportional to the square roots. Now the bottom of the "funnel" would be about one-twentieth the longest length, rather than one four-hundredth of it, permitting some detail to show. Unfortunately, the ggplot2 implementation does not allow one to map a variable to the bar width, and so a more involved work-around is needed (one which indeed describes each rectangle individually). Perhaps there is a Python implementation that is more flexible.

References

Edward Tufte, The Visual Display of Quantitative Information. Graphics Press, 1983.
Leland Wilkinson, The Grammar of Graphics. Springer, 2005.
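The "fake count" computation can be sketched in a few lines of numpy (ours; the counts are the approximate table above, and in matplotlib the offsets would be supplied as the left= start of each horizontally stacked bar):

```python
import numpy as np

# Rows: First-time, Return, 4+, 10+, At Least One, Last Touch
# Cols: Branded, Unbranded, Social, Referring, Direct, RSS
counts = np.array([
    [177276, 472737,  88638, 265915, 472737, 59092],
    [236002, 629339, 118001, 354003, 629339, 78667],
    [166514, 444037,  83257, 249771, 444037, 55505],
    [ 28782,  76751,  14391,  43172,  76751,  9594],
    [  6707,  17886,   3354,  10061,  17886,  2236],
    [   660,   1759,    330,    989,   1759,   220],
])
totals = counts.sum(axis=1)
# Invisible first segment: half the gap to the longest bar centers each bar.
offsets = (totals.max() - totals) / 2.0
print(offsets)   # 254478, 0, 301115, 897955, 993610.5, 1019817
```

These reproduce the fake-column values in the text exactly, confirming the centering rule.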
How would you visualize a segmented funnel? (and could you do it with Python?)
This plot displays a two-way contingency table whose data are approximately these: Branded Unbranded Social Referring Direct RSS First-time... 177276 472737 88638
How would you visualize a segmented funnel? (and could you do it with Python?) This plot displays a two-way contingency table whose data are approximately these: Branded Unbranded Social Referring Direct RSS First-time... 177276 472737 88638 265915 472737 59092 Return Visits... 236002 629339 118001 354003 629339 78667 4+ Visits in ... 166514 444037 83257 249771 444037 55505 10+ Visit in ... 28782 76751 14391 43172 76751 9594 At Least One Visit... 6707 17886 3354 10061 17886 2236 Last Touch... 660 1759 330 989 1759 220 There are myriad ways to construct this plot. For instance, you could calculate the positions of each rectangular patch of color and separately plat each patch. In general, though, it helps to find a succinct description of how a plot represents data. As a point of departure, we may view this one as a variation of a stacked bar chart. This plot scarcely needs a description: through familiarity we know that each row of rectangles corresponds to each row of the contingency table; that lengths of the rectangles are directly proportional to their counts; that they do not overlap; and that the colors correspond to the columns of the table. If we convert this table into a "data frame" or "data table" $X$ having one row per count with fields indicating the row name, column name, and count, then plotting it typically amounts to calling a suitable function and stipulating where to find the row names, the column names, and the counts. Using a Grammar of Graphics implementation (the ggplot2 package for R) this would look something like ggplot(X, aes(Outcome, Count, fill=Referral)) + geom_col() The details of the graphic, such as how wide a row of bars is and what colors to use, typically need to be stipulated explicitly. How that is done depends on the plotting environment (and so is of relatively little interest: you just have to look it up). This particular implementation of the Grammar of Graphics provides little flexibility in positioning the bars. 
One way to produce the desired look, with minimal effort, is to insert an invisible category at the base of each bar so that the bars are centered. A little thinking shows the fake count needed to center each bar must be half the difference between the longest bar's total length and this bar's total length. For this example this would be an initial column with the values

254478.0
0.0
301115.0
897955.0
993610.5
1019817.0

Here is the resulting stacked bar chart showing the fake data in light gray. The desired figure is created by making the graphics for the fake column invisible. The Grammar of Graphics description of the plot does not need to change: we have simply supplied a different contingency table to be rendered according to the same description (and overridden the default color assignment for the fake column).

Comments

These graphics are honest: the horizontal extent of each colored patch is directly proportional to the underlying data, without distortion. Comparing them to the original (in the question) reveals how extreme its distortion is (Tufte's Lie Factor).

If it is desired to show details at the bottom of the "funnel," consider representing counts by area rather than length. You could make the lengths of the bars proportional to the square roots of the total lengths and their widths (in the vertical direction) also proportional to the square roots. Now the bottom of the "funnel" would be about one-twentieth the longest length, rather than one four-hundredth of it, permitting some detail to show. Unfortunately, the ggplot2 implementation does not allow one to map a variable to the bar width, and so a more involved work-around is needed (one which indeed describes each rectangle individually). Perhaps there is a Python implementation that is more flexible.

References

Edward Tufte, The Visual Display of Quantitative Information. Graphics Press 1983.
Leland Wilkinson, The Grammar of Graphics. Springer 2005.
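The centering arithmetic is easy to check (or reuse) outside of ggplot2. Here is a minimal Python sketch, using the approximate counts from the table, that computes the invisible padding column:

```python
# Approximate counts from the contingency table (6 referral sources per row).
rows = {
    "First-time":         [177276, 472737,  88638, 265915, 472737, 59092],
    "Return Visits":      [236002, 629339, 118001, 354003, 629339, 78667],
    "4+ Visits":          [166514, 444037,  83257, 249771, 444037, 55505],
    "10+ Visits":         [ 28782,  76751,  14391,  43172,  76751,  9594],
    "At Least One Visit": [  6707,  17886,   3354,  10061,  17886,  2236],
    "Last Touch":         [   660,   1759,    330,    989,   1759,   220],
}
totals = {name: sum(counts) for name, counts in rows.items()}
longest = max(totals.values())

# The invisible "fake" count at the base of each bar is half the gap
# between the longest bar and this bar -- that is what centers it.
fake = {name: (longest - total) / 2 for name, total in totals.items()}
for name, pad in fake.items():
    print(f"{name}: {pad}")
```

Running this reproduces the padding column above (254478.0, 0.0, 301115.0, ...), which can then be prepended to the data frame before stacking.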
32,988
How would you visualize a segmented funnel? (and could you do it with Python?)
You can try using Plotly's segmented funnel charts in Python to build it. Here's a tutorial: https://moderndata.plot.ly/segmented-funnel-charts-in-python-using-plotly/ Hope this helps.
32,989
Testing a 2x2 contingency table: male/female, employed/unemployed
Some immediate responses:

1) Your lecturer means that the data show autocorrelation. This leads to inefficient estimates of regression coefficients in simple linear regression. Depending on whether it was covered in your course, that's a mistake.

2) Maybe I do not understand the problem fully, but IMHO the chi-squared test of independence is used correctly here, except for two other issues:

3) Your chi-square test has immense power because of the sample size. It's hard not to be significant even if the effects were very small. Furthermore, it appears you have a census of the population. In this situation statistical inference is unnecessary, because you observe all population units. But that's not what the lecturer remarks.

4) You seem to aggregate the data across time points. You should actually test once per time point, since otherwise you aggregate effects over time (you count units multiple times). But that's also not what the lecturer remarks.

The lecturer actually remarks that you want to test the null of homogeneity, whereas you test the null of independence. So what does he mean by homogeneity? I suppose he refers to the test of marginal homogeneity in paired data. This test is used to assess whether there was a change across time (repeated measures). This is however not what you want to assess in the first place. My guess is that he did not understand that you want to test whether gender and employment at time point x are related. Maybe he also tried to suggest that what you should test is change across time (or no change, in which case the repeated contingency tables would indeed be called homogeneous).
32,990
Testing a 2x2 contingency table: male/female, employed/unemployed
It is very opaque feedback - sounds to me like they're saying "you didn't do well this time - try harder next time". The only way to understand it is to be brave, and ask your lecturer for a meeting to discuss things further. Your lecturer seems to be disappointed with your choice of research questions, perhaps? I think they may have been looking for some "buzz words" like "auto-/serial-correlation", "time series", "seasonal effects/adjustment", "business cycles", "trend". I don't know what you were expected to know when doing the assignment. Anyway, here's what I think.

Your assignment shows a good ability to perform a statistical test, but from a data analysis perspective it shows a strange choice of examples. Analysis should be about telling a story. Personally I liked the choice of male vs female employment as a theme. However, I would have put the "second example" first, as it is the simpler question: "is there a gender difference now?". After showing that there clearly is a difference (as you do), you could then have moved on to the more complex question of "has there been a consistent gender difference over time?". Of course this question may be beyond the scope of your "statistical toolbox" to answer in a formal manner.

One way you could do this with linear regression is to model the odds of being employed vs unemployed (or log-odds if this gives a better fit) for males and females. You then have a simple OLS model of
$$ y_i=\beta_0+\beta_1x_i +e_i $$
where $ y_i $ is the ratio "employed"/"unemployed", $ x_i $ is a dummy variable equal to one if the ratio is for males and zero otherwise, and $ e_i $ is the residual. You then test whether $\beta_1=0 $. You could take the model further, and include a time covariate as well as an interaction between time and gender. This is all part of building your analysis work as a story ("the plot thickens", so to speak). This of course depends on knowing about multiple regression (which may be outside the course content).

I wouldn't have used that first example at all; of course linear regression was inappropriate there. Your lecturer (probably) wants to see an example of a good use of linear regression. Of course, the OLS example I gave above may also not be appropriate - this depends on assessing the model.
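As a sketch of that dummy-variable regression, here is a NumPy version with entirely hypothetical log-odds values (one per gender per time point). With a single dummy regressor, the OLS slope is exactly the male-minus-female difference in means:

```python
import numpy as np

# Hypothetical data: one employed/unemployed log-odds value per gender
# per time point, regressed on a gender dummy (1 = male).
rng = np.random.default_rng(0)
t = 12                                   # number of time points
log_odds_f = rng.normal(0.8, 0.05, t)    # females
log_odds_m = rng.normal(1.0, 0.05, t)    # males

y = np.concatenate([log_odds_f, log_odds_m])
dummy = np.concatenate([np.zeros(t), np.ones(t)])
X = np.column_stack([np.ones(2 * t), dummy])

beta = np.linalg.lstsq(X, y, rcond=None)[0]
# beta[0] recovers the female mean, beta[1] the gap in mean log-odds.
print(beta)
```

A test of $\beta_1 = 0$ would then use the usual OLS standard error; adding a time column (and a time-by-gender interaction) to X extends the model as described above.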
32,991
Instrumental variables and mixed/multilevel models
The paper by Ebbes et al. (2005) proposes a latent IV estimation, where you do not need external IVs. Ebbes, Peter; Wedel, Michel; Böckenholt, Ulf; Steerneman, Ton (2005). "Solving and Testing for Regressor-Error (in)Dependence When no Instrumental Variables are Available: With New Evidence for the Effect of Education on Income." Quantitative Marketing and Economics 3(4): 365-392. http://hdl.handle.net/2027.42/47579 Also, the paper by Kim and Frees (2007) proposes a GMM estimation that helps you address the endogeneity problems in MLM. Jee-Seon Kim & Edward W. Frees (2007). "Multilevel Modelling with Correlated Effects". Psychometrika, 72(4), pp. 505-533. However, I have not seen any R code for either of the two approaches :(.
32,992
Getting the prediction standard error from a natural spline fit
You can get the design matrix for a linear model in R using model.matrix():

X <- model.matrix(myFit)
sigma <- summary(myFit)$sigma
var.Yhat <- (diag(X %*% solve(t(X) %*% X) %*% t(X)) + 1) * sigma^2

Or, if you want the prediction variance at new values of $X$, use ns() to transform them into the natural spline basis first (be sure to use the same knots, and the same boundary knots, as in the original fit):

X.new <- cbind(1, ns(x.new, knots=knots))
var.Yhat <- (diag(X.new %*% solve(t(X) %*% X) %*% t(X.new)) + 1) * sigma^2

Note that solve(t(X) %*% X) still uses the original (training) design matrix X.
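For anyone working in Python, the same linear algebra can be sketched with NumPy. This is an illustrative translation, shown for a plain linear fit rather than a spline (with a spline, X would simply hold the intercept-augmented basis expansion instead), and checked against the textbook prediction-variance formula for simple regression:

```python
import numpy as np

# NumPy version of (diag(X (X'X)^{-1} X') + 1) * sigma^2.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 50)
X = np.column_stack([np.ones_like(x), x])
y = 2 + 3 * x + rng.normal(0, 1, 50)

coef, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma2 = rss[0] / (len(x) - X.shape[1])      # residual variance estimate

XtX_inv = np.linalg.inv(X.T @ X)
# einsum computes diag(X @ XtX_inv @ X.T) without forming the full matrix.
var_pred = (np.einsum("ij,jk,ik->i", X, XtX_inv, X) + 1) * sigma2

# Sanity check: sigma^2 * (1 + 1/n + (x - xbar)^2 / Sxx) for simple regression.
xc = x - x.mean()
closed = sigma2 * (1 + 1 / len(x) + xc**2 / (xc**2).sum())
print(np.allclose(var_pred, closed))  # True
```

Taking square roots of var_pred gives the prediction standard errors.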
32,993
Leave-one-out cross validation: Relatively unbiased estimate of generalization performance?
I don't think there is a need for a mathematical derivation of the fact that in ML, with increasing training set size, the prediction error rates decrease. LOO -- compared to k-fold validation -- maximizes the training set size, as you have observed.

However, LOO can be sensitive to "twinning" -- when you have highly correlated samples, with LOO you have the guarantee that for each sample used as a test set, the remaining "twins" will be in the training set. This can be diagnosed by a rapid decrease in accuracy when LOO is replaced by, say, 10-fold cross-validation (or a stratified validation if, for example, the samples are paired). In my experience, this can lead to a disaster if your data set is small.

In a perfect world, you also have a validation set that you never use to train your model, not even in a CV setting. You keep it for the sole purpose of testing the final performance of a model before you send off the paper :-)
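A tiny, self-contained illustration of the twinning problem (hypothetical data, a hand-rolled 1-nearest-neighbour classifier in NumPy rather than any particular ML library): labels are random, so honest accuracy should hover around 50%, but when every sample has a near-duplicate, LOO reports perfect accuracy.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 5))
y = rng.integers(0, 2, 30)
X_twin = np.vstack([X, X + rng.normal(0, 1e-6, X.shape)])  # each sample twice
y_twin = np.concatenate([y, y])

def loo_1nn_accuracy(X, y):
    correct = 0
    for i in range(len(y)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                       # leave sample i out
        correct += y[d.argmin()] == y[i]    # predict via nearest neighbour
    return correct / len(y)

print(loo_1nn_accuracy(X_twin, y_twin))  # 1.0: the twin is always in "training"
```

Grouped or stratified folds that keep twins together would expose the true (chance-level) performance here.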
32,994
Leave-one-out cross validation: Relatively unbiased estimate of generalization performance?
I am new to this topic but I think I can give you a concrete and elementary example of how least-squares cross-validation with the leave-one-out method produces an "unbiased" estimate of the integrated mean square error ($\mathrm{IMSE}$ for short) in the context of kernel density estimation. Recall a typical univariate kernel density estimator $\widehat{f}$: $$ \widehat{f}(x)=\frac{1}{n h} \sum_{i=1}^n K\left(\frac{X_i-x}{h}\right), $$ where we have imposed the following assumptions: A.1 $X_1, \cdots, X_n \stackrel{i.i.d.}{\sim} f$. A.2 $f^{\prime\prime}(x)$ is continuous and bounded in the neighborhood of $x$. A.3 The kernel function $K(\cdot)$ is a symmetric pdf which is maximized at $0$ and satisfies: $$ (i)\ \int K(u) d u=1,\quad (ii)\ \nu_2:=\int K^2(u) d u<\infty,\quad (iii)\ \kappa_2:=\int u^2 K(u) d u \in(0, \infty). $$ A.4 $h \rightarrow 0$ and $n h \rightarrow \infty$ as $n \rightarrow \infty$, which means $n^{-1}=o\left((n h)^{-1}\right)$. Under these assumptions, the integrated mean square error can be computed as: $$ \begin{aligned} \mathrm{IMSE}\left(\widehat{f}\right) =& \int \operatorname{MSE} \widehat{f}(x) d x\\ =& \frac{1}{n h}\int K(u)^2 d u+\frac{h^4}{4}\left(\int u^2 K(u) d u\right)^2 \int\left[f^{\prime\prime}(x)\right]^2 d x+o\left(h^4+\frac{1}{n h}\right), \end{aligned} $$ which follows readily from the bias and variance of the KDE (the derivation can be found in this question). Equipped with this knowledge, let us see how least-squares cross-validation with the leave-one-out method produces an "unbiased" estimate of $\mathrm{IMSE}\left(\widehat{f}\right)$.
Let us define the objective function of least-squares cross-validation as follows: $$ \begin{aligned} LSCV&=\int[\widehat{f}(x)-f(x)]^2 d x \\ &=\underbrace{\int[\widehat{f}(x)]^2 d x}_{=:\mathcal{I}_{1n}}-2 \underbrace{\int \widehat{f}(x) f(x) d x}_{=:\mathcal{I}_{2n}}+\int f^2(x) d x \\ &=\mathcal{I}_{1n} - 2\mathcal{I}_{2n}+\int f^2(x) d x\\ &={\mathcal{I}}_{1n} - 2\widehat{\mathcal{I}}_{2n}+\int f^2(x) d x + o_{\mathbb{P}}(1),\\ \end{aligned} $$ where $\widehat{\mathcal{I}}_{2n}$ denotes an estimator of ${\mathcal{I}}_{2n}$ (the term ${\mathcal{I}}_{1n}$ is directly computable, but ${\mathcal{I}}_{2n}$ involves the unknown $f$). The leave-one-out method is applied in the computation of $\widehat{\mathcal{I}}_{2n}$. For ease of exposition, let me define two versions of the estimator: $$ \begin{aligned} \widehat{\mathcal{I}_{2n}}^{\text{LOO}} :=& \frac{1}{n^2 h} \sum_{i=1}^n \sum_{j=1, j \neq i}^n K\left(\frac{X_j-X_i}{h}\right) = \frac{n-1}{n^{2}} \sum_{i=1}^n \widehat{f}_{-i}\left(X_i\right),\\ \widehat{\mathcal{I}_{2n}}^{\text{without LOO}} :=& \frac{1}{n} \sum_{i=1}^n \widehat{f}\left(X_i\right)=\frac{1}{n^{2}h} \sum_{i=1}^n\sum_{j=1}^n K\left(\frac{X_j-X_i}{h}\right), \end{aligned} $$ where $\widehat{f}_{-i}$ is the kernel density estimator computed from the $n-1$ observations other than $X_i$ (so it is normalized by $(n-1)h$). Notice that their only difference lies in: $$ \widehat{\mathcal{I}_{2n}}^{\text{without LOO}} = \widehat{\mathcal{I}_{2n}}^{\text{LOO}} + \frac{1}{n h} K(0). $$ But as we will see later, this difference plays a key role in the conclusion that, without leave-one-out, the objective function $LSCV$ is biased for $\mathrm{IMSE}$ and is minimized at $h=0$, which violates the condition $n h \rightarrow \infty$ as $n \rightarrow \infty$. Now, taking the expectation $\mathbb{E}_{X}(\cdot)$ of $LSCV^{\text{LOO}}$ gives: $$ \mathbb{E}_{X}\left(LSCV^{\text{LOO}}\right) = \mathbb{E}_{X}\left[{\mathcal{I}}_{1 n}\right]-2 \mathbb{E}_{X}\left[\widehat{\mathcal{I}}_{2 n}^{\text{LOO}}\right]+\int f^2(x) d x. 
$$ The first term is computed as: $$ \begin{aligned} \mathbb{E}_{X}\left[{\mathcal{I}}_{1 n}\right]=& \mathbb{E}_{X}\left[\int[\widehat{f}(x)]^2 d x\right] = \int\mathbb{E}_{X}\left[[\widehat{f}(x)]^2\right] d x\\ =& \int\mathrm{Var}\widehat{f}(x) d x + \int\mathbb{E}_{X}^2\left[\widehat{f}(x)\right] d x \\ =&\int \frac{f(x)}{n h} \nu_2+o\left(\frac{1}{n h}\right) + \left(f(x)+\frac{h^2}{2} f^{\prime \prime}(x) \kappa_2 + o\left(h^2\right)\right)^{2} dx \\ =& \frac{\nu_2}{n h}+o\left(\frac{1}{n h}\right) + \int f^{2}(x) dx + \frac{h^4}{4} \left(\int u^2 K(u) d u\right)^{2} \int\left[f^{\prime \prime}(x)\right]^2 d x+o\left(h^4\right)\\ & + \int h^{2}f(x)f^{\prime \prime}(x) \kappa_2 dx + 2 \int f(x)o\left(h^2\right) dx , \end{aligned} $$ where the third line again follows from the derivation linked above (recall $\nu_2:=\int K^2(u) d u$). The second term is computed as: $$ \begin{aligned} 2 \mathbb{E}_X\left[\widehat{\mathcal{I}}_{2 n}^{\mathrm{LOO}}\right] =& \mathbb{E}_{X}\left[\frac{1}{n^2 h} \sum_{i=1}^n \sum_{j=1, j \neq i}^n2 K\left(\frac{X_j-X_i}{h}\right)\right] = \mathbb{E}_{X}\left[\frac{2(n-1)}{n^{2}} \sum_{i=1}^n \widehat{f}_{-i}(X_{i})\right] \\ =& \frac{2(n-1)}{n^{2}} \sum_{i=1}^n \mathbb{E}_{X_{i}}\left[\mathbb{E}_{X_{-i}}\left[\widehat{f}_{-i}(X_{i})\right]\right] \\ =& \frac{2(n-1)}{n^{2}} \sum_{i=1}^n \mathbb{E}_{X_{i}}\left[f(X_{i})+\frac{h^2}{2} f^{\prime \prime}(X_{i}) \kappa_2+o\left(h^2\right)\right] \\ =& \frac{2(n-1)}{n} \int\left[f(X_{i})+\frac{h^2}{2} f^{\prime \prime}(X_{i}) \kappa_2+o\left(h^2\right)\right]f(X_{i}) d X_{i}\\ =& \frac{2(n-1)}{n} \int\left[f(x)+\frac{h^2}{2} f^{\prime \prime}(x) \kappa_2+o\left(h^2\right)\right]f(x) d x\\ =& \frac{2(n-1)}{n} \int f^{2}(x) dx +\frac{2(n-1)}{n} \int \frac{h^2}{2} f^{\prime \prime}(x)f(x) \kappa_2 dx + \frac{2(n-1)}{n} o\left(h^2\right) \\ \end{aligned} $$ Inserting the two expressions back into $\mathbb{E}_{X}\left[{LSCV}^{\text{LOO}}(h)\right]$ gives: $$ \begin{aligned} & \mathbb{E}_{X}\left[{LSCV}^{\text{LOO}}(h)\right]\\ 
=& \frac{\nu_2}{n h}+o\left(\frac{1}{n h}\right) + \int f^{2}(x) dx + \frac{h^4}{4} \left(\int u^2 K(u) d u\right)^{2} \int\left[f^{\prime \prime}(x)\right]^2 d x+o\left(h^4\right) \\ &+ \int h^{2}f(x)f^{\prime \prime}(x) \kappa_2 dx + 2 \int f(x)o\left(h^2\right) dx \\ &-\left[\frac{2(n-1)}{n} \int f^{2}(x) dx +\frac{2(n-1)}{n} \int \frac{h^2}{2} f^{\prime \prime}(x)f(x) \kappa_2 dx + \frac{2(n-1)}{n} o\left(h^2\right)\right]+\int f^2(x) d x\\ =& \frac{\nu_2}{n h}+o\left(\frac{1}{n h}\right) + \frac{h^4}{4} \left(\int u^2 K(u) d u\right)^{2} \int\left[f^{\prime \prime}(x)\right]^2 d x+o\left(h^4\right) + O\left(\frac{1}{n}\right). \end{aligned} $$ Now we can see that $$ \mathbb{E}_{X}\left[{LSCV}^{\text{LOO}}(h)\right]\sim \operatorname{IMSE}(\widehat{f}). $$ And it is evident that $$ \mathbb{E}_{X}\left[LSCV^{\text{without LOO}}(h)\right]=\mathbb{E}_{X}\left[LSCV^{\text{LOO}}(h)\right]-\frac{2}{n h} K(0). $$ Since $K$ is maximized at $0$, we have $\nu_2=\int K^2(u) d u \le K(0) < 2K(0)$, so the coefficient of the leading $(nh)^{-1}$ term in the without-LOO criterion is negative: its expectation diverges to $-\infty$ as $h \rightarrow 0$, i.e. it is minimized at $h=0$, as claimed. Thus, in the context of KDE, leave-one-out cross-validation provides a relatively "unbiased estimate of the true generalization performance".
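The only difference between the two estimators of $\mathcal{I}_{2n}$ (the diagonal $K(0)$ terms) can also be verified numerically. A small NumPy sketch with a Gaussian kernel and an arbitrary simulated sample (sample size and bandwidth are illustrative choices):

```python
import numpy as np

# Checks the identity I2_hat(without LOO) - I2_hat(LOO) = K(0) / (n h).
def K(u):
    return np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)

rng = np.random.default_rng(3)
x = rng.normal(size=200)
n, h = len(x), 0.4

D = (x[:, None] - x[None, :]) / h                 # pairwise (X_j - X_i) / h
i2_without = K(D).sum() / (n**2 * h)              # keeps the i == j terms
i2_loo = (K(D).sum() - n * K(0.0)) / (n**2 * h)   # drops the diagonal

print(i2_without - i2_loo, K(0.0) / (n * h))      # the two numbers agree
```

The diagonal contributes exactly $n K(0) / (n^2 h) = K(0)/(nh)$, which is the term that drags the without-LOO criterion toward $-\infty$ for small $h$.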
Leave-one-out cross validation: Relatively unbiased estimate of generalization performance?
I am new to this topic but I think I can give you a concrete and elementary example of how least square cross-validation with the leave-one-out method produces an "unbiased" estimate of integrated mea
Leave-one-out cross validation: Relatively unbiased estimate of generalization performance? I am new to this topic but I think I can give you a concrete and elementary example of how least square cross-validation with the leave-one-out method produces an "unbiased" estimate of integrated mean square error ($\mathrm{IMSE}$ for short) in the context of kernel density estimation. Recall a typical univariate kernel density estimator $\widehat{f}$: $$ \widehat{f}(x)=\frac{1}{n h} \sum_{i=1}^n K\left(\frac{X_i-x}{h}\right), $$ where we have imposed the following assumptions: A.1 $X_1, \cdots, X_n \stackrel{i . i . d .}{\sim} f$. A.2 $f^{\prime\prime}(x)$ is continuous and bounded in the neighborhood of $x$. A.3 The kernel function $K(\cdot)$ is a symmetric pdf which maximized at $0$ and satisfies: $$ (i)\begin{aligned}\int K(u) d u=1\end{aligned},\quad (ii)\begin{aligned}\nu_2:=\int K^2(u) d u<\infty\end{aligned},\quad (iii)\begin{aligned}\kappa_2:=\int u^2 K(u) d u \in(0, \infty)\end{aligned} $$ A.4 $h \rightarrow 0$, $n h \rightarrow \infty$ as $n \rightarrow \infty$, which means $n^{-1}=o\left((n h)^{-1}\right)$. Under the above assumptions, the integrated mean square error can be computed as: $$ \begin{aligned} \mathrm{IMSE}\left(\widehat{f}\right) =& \int \operatorname{MSE} \widehat{f}(x) d x\\ =& \frac{1}{n h}\int K(u)^2 d u+\frac{h^4}{4}\left(\int u^2 K(u) d u\right)^2 \int\left[f^{\prime\prime}(x)\right]^2 d x+o\left(h^4+\frac{1}{n h}\right), \end{aligned} $$ which can be easily derived from the bias rendered by KDE (the derivation can be referred to in this question). Accompanied with this knowledge, let us see how least square cross-validation with the leave-one-out method produces an "unbiased" estimate of $\mathrm{IMSE}\left(\widehat{f}\right)$. 
Let us define the objective function of least square cross-validation as follows: $$ \begin{aligned} LSCV&=\int[\widehat{f}(x)-f(x)]^2 d x \\ &=\underbrace{\int[\widehat{f}(x)]^2 d x}_{=:\mathcal{I}_{1n}}-2 \underbrace{\int \widehat{f}(x) f(x) d x}_{=:\mathcal{I}_{2n}}+\int f^2(x) d x \\ &=\mathcal{I}_{1n} - 2\mathcal{I}_{2n}+\int f^2(x) d x\\ &={\mathcal{I}}_{1n} - 2\widehat{\mathcal{I}}_{2n}+\int f^2(x) d x + o_{\mathbb{P}}(1),\\ \end{aligned} $$ where $\widehat{\mathcal{I}}_{2n}$ denotes the estimand of ${\mathcal{I}}_{2n}$. The leave-one-out method is applied to the computation of $\widehat{\mathcal{I}}_{2n}$. To ease of exposition, let me define two versions of $\mathcal{I}_{2n}$: $$ \begin{aligned} \widehat{\mathcal{I}_{2n}}^{\text{LOO}} :=& \frac{1}{n} \sum_{i=1}^n \widehat{f}_{-i}\left(X_i\right) =\frac{1}{n^2 h} \sum_{i=1}^n \sum_{j=1, j \neq i}^n K\left(\frac{X_j-X_i}{h}\right),\\ \widehat{\mathcal{I}_{2n}}^{\text{without LOO}} :=& \frac{1}{n} \sum_{i=1}^n \widehat{f}\left(X_i\right)=\frac{1}{n^{2}h} \sum_{i=1}^n\sum_{j=1}^n K\left(\frac{X_j-X_i}{h}\right). \end{aligned} $$ Notice that their only difference lies in: $$ \widehat{\mathcal{I}_{2n}}^{\text{without LOO}} = \widehat{\mathcal{I}_{2n}}^{\text{LOO}} + \frac{1}{n h} K(0). $$ But as we will see later, this difference plays a key role in the conclusion that without leave-one-out, the objective function $LSCV$ is biased concerning $\mathrm{IMSE}$ and is minimized at $h=0$, which violates the condition of $n h \rightarrow \infty$ as $n \rightarrow \infty$. Now, taking the expectation $\mathbb{E}_{X}(\cdot)$ to $LSCV^{\text{LOO}}$ gives: $$ \mathbb{E}_{X}\left(LSCV^{\text{LOO}}\right) = \mathbb{E}_{X}\left[{\mathcal{I}}_{1 n}\right]-2 \mathbb{E}_{X}\left[\widehat{\mathcal{I}}_{2 n}^{\text{LOO}}\right]+\int f^2(x) d x. 
$$ The first term is computed as: $$ \begin{aligned} \mathbb{E}_{X}\left[{\mathcal{I}}_{1 n}\right]=& \mathbb{E}_{X}\left[\int[\widehat{f}(x)]^2 d x\right] = \int\mathbb{E}_{X}\left[[\widehat{f}(x)]^2\right] d x\\ =& \int\mathrm{Var}\widehat{f}(x) d x + \int\mathbb{E}_{X}^2\left[\widehat{f}(x)\right] d x \\ =&\int \frac{f(x)}{n h} \nu_2+o\left(\frac{1}{n h}\right) + \left(f(x)+\frac{h^2}{2} f^{\prime \prime}(x) \kappa_2 + o\left(h^2\right)\right)^{2} dx \\ =& \frac{\nu_2}{n h}+o\left(\frac{1}{n h}\right) + \int f^{2}(x) dx + \frac{h^4}{4} \int u^2 K(u) d u \int\left[f^{\prime \prime}(x)\right]^2 d x+o\left(h^4\right)\\ & + \int h^{2}f(x)f^{\prime \prime}(x) \kappa_2 dx + 2 \int f(x)o\left(h^2\right) dx , \end{aligned} $$ where the third line can be referred (again) to the link and $$ \nu_2:=\int K^2(u) d u. $$ The second term is computed as: $$ \begin{aligned} 2 \mathbb{E}_X\left[\widehat{\mathcal{I}}_{2 n}^{\mathrm{LOO}}\right] =& \mathbb{E}_{X}\left[\frac{1}{n^2 h} \sum_{i=1}^n \sum_{j=1, j \neq i}^n2 K\left(\frac{X_j-X_i}{h}\right)\right] = \mathbb{E}_{X}\left[\frac{2(n-1)}{n^{2}} \sum_{i=1}^n \widehat{f}_{-i}(X_{i})\right] \\ =& \frac{2(n-1)}{n^{2}} \sum_{i=1}^n \mathbb{E}_{X_{i}}\left[\mathbb{E}_{X_{-i}}\left[\widehat{f}_{-i}(X_{i})\right]\right] \\ =& \frac{2(n-1)}{n^{2}} \sum_{i=1}^n \mathbb{E}_{X_{i}}\left[f(X_{i})+\frac{h^2}{2} f^{\prime \prime}(X_{i}) \kappa_2+o\left(h^2\right)\right] \\ =& \frac{2(n-1)}{n} \int\left[f(X_{i})+\frac{h^2}{2} f^{\prime \prime}(X_{i}) \kappa_2+o\left(h^2\right)\right]f(X_{i}) d X_{i}\\ =& \frac{2(n-1)}{n} \int\left[f(x)+\frac{h^2}{2} f^{\prime \prime}(x) \kappa_2+o\left(h^2\right)\right]f(x) d x\\ =& \frac{2(n-1)}{n} \int f^{2}(x) dx +\frac{2(n-1)}{n} \int \frac{h^2}{2} f^{\prime \prime}(x)f(x) \kappa_2 dx + \frac{2(n-1)}{n} o\left(h^2\right) \\ \end{aligned} $$ Inserting the two expressions back $\mathbb{E}_{X}\left[{LSCV}^{\text{LOO}}(h)\right]$ gives: $$ \begin{aligned} & \mathbb{E}_{X}\left[{LSCV}^{\text{LOO}}(h)\right]\\ 
=& \frac{\nu_2}{n h}+o\left(\frac{1}{n h}\right) + \int f^{2}(x) dx + \frac{h^4}{4} \int u^2 K(u) d u \int\left[f^{\prime \prime}(x)\right]^2 d x+o\left(h^4\right) \\ &+ \int h^{2}f(x)f^{\prime \prime}(x) \kappa_2 dx + 2 \int f(x)o\left(h^2\right) dx \\ &-\left[\frac{2(n-1)}{n} \int f^{2}(x) dx +\frac{2(n-1)}{n} \int \frac{h^2}{2} f^{\prime \prime}(x)f(x) \kappa_2 dx + \frac{2(n-1)}{n} o\left(h^2\right)\right]+\int f^2(x) d x\\ =& \frac{\nu_2}{n h}+o\left(\frac{1}{n h}\right) + \frac{h^4}{4} \int u^2 K(u) d u \int\left[f^{\prime \prime}(x)\right]^2 d x+o\left(h^4\right) + O\left(\frac{1}{n}\right), \end{aligned} $$ Now, we can see that $$ \mathbb{E}_{X}\left[{LSCV}^{\text{LOO}}(h)\right]\sim \operatorname{IMSE}(\widehat{f}). $$ And it is quite evident to see that $$ \mathbb{E}_{X}\left[LSC V^{\text {without } L O O}(h)\right]=\mathbb{E}_{X}\left[LSC V^{L O O}(h)\right]-\frac{2}{n h} K(0). $$ To this end, one can be informed in the context of KDE that the leave-one-out cross-validation provides a relatively “unbiased estimate of the true generalization performance”.
32,995
What is the meaning of subscript in $p_{\theta}(x)$ and ${\mathbb E}_{\theta}\left[S(\theta)\right]$?
This is mostly answered in comments, which I will summarize here. $p_\theta(x)$ means the same as $p(x; \theta)$; it's a shorthand. This is a density with respect to $x$, not with respect to $\theta$. So while by necessity $\int p(x;\theta)\; dx = 1$, it does not follow that $\int p(x;\theta)\; d\theta=1$; it could be anything, including $\infty$. So ${\mathbb E}_{\theta}\left[S(\theta)\right]$ is the expectation of $S(\theta)$ with respect to the distribution $p_\theta(x)$. The subscript $\theta$ is there for clarity, not because it is necessary, so ${\mathbb E}\left[S(\theta)\right]$ has the same meaning. The distribution with respect to which we calculate the expectation should be clear from context, or indicated somehow (for example by a subscript).
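A quick numerical illustration of the point (my own hypothetical example, using the exponential density $p(x;\lambda)=\lambda e^{-\lambda x}$ on $x \ge 0$):

```python
import numpy as np

def p(x, lam):
    # exponential density p(x; lambda) = lambda * exp(-lambda * x), x >= 0
    return lam * np.exp(-lam * x)

dx = 1e-3
x = np.arange(0, 30, dx) + dx / 2     # midpoint grid on [0, 30]

# integrating over x for a fixed lambda gives 1: it is a density in x
assert abs(np.sum(p(x, 2.0)) * dx - 1.0) < 1e-3

# integrating over lambda for a fixed x does NOT give 1:
# here the exact value is 1/x0^2 = 4 for x0 = 0.5
lam = np.arange(0, 80, dx) + dx / 2   # midpoint grid on [0, 80]
assert abs(np.sum(p(0.5, lam)) * dx - 4.0) < 1e-3
```

So $\int p(x;\theta)\,d\theta$ can be any positive number; the subscript never turns $\theta$ into the integration variable.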
32,996
Visualizing results from multiple latent class models
So far, the best options I've found, thanks to your suggestions, are these:

library(igraph)
library(ggparallel)
library(plyr)   # for count()

# Generate random data
x1 <- sample(1:1, 1000, replace=T)
x2 <- sample(2:3, 1000, replace=T)
x3 <- sample(4:6, 1000, replace=T)
x4 <- sample(7:10, 1000, replace=T)
x5 <- sample(11:15, 1000, replace=T)
results <- cbind(x1, x2, x3, x4, x5)
results <- as.data.frame(results)

# Make a data frame for the edges and counts
g1 <- count(results, c("x1", "x2"))
g2 <- count(results, c("x2", "x3"))
colnames(g2) <- c("x1", "x2", "freq")
g3 <- count(results, c("x3", "x4"))
colnames(g3) <- c("x1", "x2", "freq")
g4 <- count(results, c("x4", "x5"))
colnames(g4) <- c("x1", "x2", "freq")
edges <- rbind(g1, g2, g3, g4)

# Make a data frame for the class sizes
h1 <- count(results, c("x1"))
h2 <- count(results, c("x2"))
colnames(h2) <- c("x1", "freq")
h3 <- count(results, c("x3"))
colnames(h3) <- c("x1", "freq")
h4 <- count(results, c("x4"))
colnames(h4) <- c("x1", "freq")
h5 <- count(results, c("x5"))
colnames(h5) <- c("x1", "freq")
cSizes <- rbind(h1, h2, h3, h4, h5)

# Graph with igraph
gph <- graph.data.frame(edges, directed=TRUE)
layout <- layout.reingold.tilford(gph, root = 1)
plot(gph, layout = layout, edge.label = edges$freq, edge.curved = FALSE,
     edge.label.cex = .8, edge.label.color = "black", edge.color = "grey",
     edge.arrow.mode = 0, vertex.label = cSizes$x1, vertex.shape = "square",
     vertex.size = cSizes$freq/20)

# The same idea, using ggparallel
a <- c("x1", "x2", "x3", "x4", "x5")
ggparallel(list(a), data = results, method = "hammock", asp = .7,
           alpha = .5, width = .5, text.angle = 0)

Done with igraph Done with ggparallel Still too rough to share in a journal, but I've certainly found having a quick look at these very useful. There is also a possible option from this question on stack overflow, but I haven't had a chance to implement it yet; and another possibility here.
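For anyone prototyping outside R: the edge and class-size data frames above are just co-occurrence tallies between consecutive models' class assignments. A hypothetical Python equivalent of that counting step (random labels, mirroring the simulated data above):

```python
import random
from collections import Counter

random.seed(1)
n = 1000

# hypothetical class assignments from a 2-class and a 3-class model
# fitted to the same n cases
k2 = [random.choice([1, 2]) for _ in range(n)]
k3 = [random.choice([3, 4, 5]) for _ in range(n)]

# edge weights: how many cases land in class i of the first model
# and class j of the second
edges = Counter(zip(k2, k3))

# class sizes for one model (node sizes in the graph)
sizes2 = Counter(k2)

# every case contributes to exactly one edge and one node
assert sum(edges.values()) == n
assert sum(sizes2.values()) == n
```

The resulting `edges` dictionary plays the same role as the `edges` data frame fed to igraph.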
32,997
Interpretation of main effect when interaction term is significant (ex. lme)
If there is an interaction in the model, the interpretation of the main effects changes. For example, in your model, if there is no interaction between Time and Diet, Diet2 means the difference between Diet2 and Diet1 regardless of the value of Time; however, if you add the interaction Time*Diet, Diet2 means the difference between Diet2 and Diet1 when Time equals 0, i.e. the difference of intercepts. This depends on the model formula, not on significance. You can say that "body mass increased significantly with Time for Diet1" based on the significance of Time alone; but for Diet2 and Diet3, to test the slopes you need to test linear combinations of parameters: say for Diet2, the significance of Time + Time*Diet2. for Diet1: weight = 251.60 + 0.36*Time; for Diet2: weight = (251.60 + 200.78) + (0.36 + 0.60)*Time; for Diet3: weight = (251.60 + 252.17) + (0.36 + 0.30)*Time. You could split the data and run the tests separately, but it is more formal to integrate the three regressions into one by using the interaction. By the way, if you need to test the significance of Time*Diet, you may use anova(), since Diet is actually a factor.
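To make the arithmetic concrete, here is a small Python sketch (just bookkeeping with the coefficients quoted above; the numbers come from the fitted lme model in the question) assembling the per-diet intercepts and slopes implied by treatment coding with Diet1 as reference:

```python
# fixed-effect estimates quoted above
b0, b_diet2, b_diet3 = 251.60, 200.78, 252.17        # intercept and diet offsets
b_time, b_time_d2, b_time_d3 = 0.36, 0.60, 0.30      # Time slope and interactions

intercepts = {"Diet1": b0,
              "Diet2": b0 + b_diet2,
              "Diet3": b0 + b_diet3}
slopes = {"Diet1": b_time,
          "Diet2": b_time + b_time_d2,
          "Diet3": b_time + b_time_d3}

# e.g. the fitted line for Diet2 is weight = 452.38 + 0.96 * Time
assert round(intercepts["Diet2"], 2) == 452.38
assert round(slopes["Diet2"], 2) == 0.96
```

Testing whether the Diet2 slope differs from zero is then a test on the sum b_time + b_time_d2, not on either coefficient alone.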
32,998
Term used to describe two lines appearing closer to each other when lines are at a greater gradient
This psycho-visual problem is the consequence of the mind making effective bisquare approximations when thinking about "distance". It is most often addressed by plotting the variable of interest, in this case the difference, as its own variable. Personally I would put this into a subplot with y-axis label as "difference between domestic and international" or such.
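As a minimal sketch of that suggestion (entirely hypothetical numbers), you would compute the difference once and give it its own axis rather than asking readers to judge the vertical distance between two steep lines:

```python
import numpy as np

# hypothetical series: domestic and international counts over time
t = np.arange(10)
domestic = 100 + 12.0 * t
international = 80 + 10.0 * t

# this series is what would go in its own subplot, with y-axis label
# "difference between domestic and international"
gap = domestic - international

# the gap is actually widening (20 -> 38), even though steep parallel-ish
# lines can look like they are converging
assert gap[0] == 20 and gap[-1] == 38
```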
32,999
Term used to describe two lines appearing closer to each other when lines are at a greater gradient
It would be interesting to see what the Cognitive Sciences stack exchange calls it, and I am not an expert in this, but I have been reading on it recently, and I think you would be justified in calling it a "subjective contour illusion". The textbook Vision Science focuses on illusory contours like Kanizsa's Triangle, where viewers perceive a white triangle in an image that shows only lines and pac men: It seems to me that mentally filling in the area between the two curves is just what is going wrong when viewers underestimate the gap between the lines in your example. If it doesn't have a better name already, let's call it a subjective contour illusion.
33,000
Type III sums of squares
I've found differences in the estimation of regressors between R 2.15.1 and SAS 9.2, but after updating R to version 3.0.1 the results were the same. So, first, I suggest you update R to the latest version. You're using the wrong approach because you're calculating the sum of squares against two different models, which implies two different design matrices. This leads you to totally different estimates of the regressors used by lm() to compute the predicted values (you're using regressors with different values between the two models). SS3 is computed based on a hypothesis test, assuming that all the conditioning regressors equal zero while the conditioned regressor equals 1. For the computations, you use the same design matrix used to estimate the full model, along with the regressors estimated in the full model. Remember that the SS3s aren't fully additive: if you sum the estimated SS3s, you don't obtain the model SS (SSM). Below is an R implementation of the mathematics of the GLS algorithm used to estimate SS3 and the regressors. The values generated by this code are exactly the same as those generated by SAS 9.2 for the results you gave in your code, while SS3(B|A,AB) is 0.167486 instead of 0.15075. For this reason I suggest again that you update your R version to the latest available. 
Hope this helps :)

A <- as.factor(rep(c("male","female"), each=5))
set.seed(1)
B <- runif(10)
set.seed(5)
y <- runif(10)

# Create a dummy vector of 0s and 1s
dummy <- as.numeric(A=="male")

# Create the design matrix
R <- cbind(rep(1, length(y)), dummy, B, dummy*B)

# Estimate the regressors
bhat <- solve(t(R) %*% R) %*% t(R) %*% y
yhat <- R %*% bhat
ehat <- y - yhat

# Sum of Squares Total
SST <- t(y) %*% y - length(y)*mean(y)**2
# Sum of Squares Error
SSE <- t(ehat) %*% ehat
# Sum of Squares Model
SSM <- SST - SSE

# used for ginv()
library(MASS)

# Returns the Sum of Squares of the hypothesis test contained in the C matrix
SSH_estimate <- function(C) {
  teta <- C %*% bhat
  M <- C %*% ginv(t(R) %*% R) %*% t(C)
  SSH <- t(teta) %*% ginv(M) %*% teta
  SSH
}

# SS(A|B,AB) # 0.001481682
SSH_estimate(matrix(c(0, 1, 0, 0), nrow=1, ncol=4))
# SS(B|A,AB) # 0.167486
SSH_estimate(matrix(c(0, 0, 1, 0), nrow=1, ncol=4))
# SS(AB|A,B) # 0.01627824
SSH_estimate(matrix(c(0, 0, 0, 1), nrow=1, ncol=4))
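For what it's worth, the same hypothesis-test computation is easy to sanity-check in Python: for a single-coefficient C matrix, the SSH above must equal the increase in SSE from dropping that column from the full model. This is my own synthetic-data sketch (not the exact numbers from the R example, since the RNGs differ):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10
dummy = np.repeat([1.0, 0.0], 5)   # A coded as male/female dummy
B = rng.uniform(size=n)
y = rng.uniform(size=n)

# full design matrix: intercept, A dummy, B, A:B interaction
R = np.column_stack([np.ones(n), dummy, B, dummy * B])
bhat = np.linalg.lstsq(R, y, rcond=None)[0]
SSE_full = np.sum((y - R @ bhat) ** 2)

def ss3(col):
    # SSH = theta' [C (R'R)^- C']^-1 theta, with C selecting one coefficient
    C = np.zeros((1, R.shape[1]))
    C[0, col] = 1.0
    theta = C @ bhat
    M = C @ np.linalg.pinv(R.T @ R) @ C.T
    return float(theta @ np.linalg.pinv(M) @ theta)

# dropping column 1 (the A dummy) refits under the constraint beta_A = 0
R_red = np.delete(R, 1, axis=1)
b_red = np.linalg.lstsq(R_red, y, rcond=None)[0]
SSE_red = np.sum((y - R_red @ b_red) ** 2)

# SS(A|B,AB) equals the SSE increase from the constrained fit
assert np.isclose(ss3(1), SSE_red - SSE_full)
```

This equivalence (general-linear-hypothesis SS equals the difference of residual sums of squares between the constrained and full fits) is why SS3 depends only on the full model's design matrix, never on refitting with a different one.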