How do Bayesians interpret $P(X=x|\theta=c)$, and does this pose a challenge when interpreting the posterior?
You play a coin-flip game with your friend, and you know that your friend somehow tends to toss heads almost every time. So you can say something like "Ha, I know my friend (prior belief); he always tosses heads, so the probability of him tossing heads again will be somewhere in $[0.7,0.9]$", i.e. $\theta$ can be any value inside that interval. A natural way of saying in probabilistic language that the probability of success (for your friend) lies inside $[0.7,0.9]$ is to let $\theta$ be a random variable. Now that $\theta$ is a random variable you can assign it a distribution, but the distribution has to reflect your prior belief that the probability of success for your friend lies inside the interval $[0.7,0.9]$. A good choice of distribution for $\theta$ is a $\mathrm{Beta}(a,b)$ distribution, as it takes values inside $[0,1]$, where probabilities also live. However, this $\mathrm{Beta}(a,b)$ distribution must put most of its mass on values inside $[0.7,0.9]$, which is your prior belief that your friend almost always tosses heads. To do that you can center the distribution at $0.8$, the midpoint of the interval $[0.7,0.9]$, by solving $\frac{a}{a+b}=0.8$; one possible solution is $a=10$ and $b=2.5$. So $\pi(\theta)= \mathrm{Beta}(\theta;10,2.5)$ reflects your prior belief that the success probability of your friend lies inside the interval $[0.7,0.9]$. Now if you want to say something about what happens as $n$ (the number of samples) tends to infinity, note that the mean of the posterior is $$\text{Mean} = \frac{a+\sum x_i}{a + b + n} = \frac{a}{a+b+n} + \frac{\sum x_i}{a+b+n},$$ so for $n\rightarrow \infty$ the mean of your posterior belief $\pi(\theta|x)$ goes to $\bar{x}$.
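The Beta–Bernoulli update described above is easy to check numerically. Below is a minimal sketch of mine (not part of the original answer): it simulates flips from a hypothetical friend whose true heads probability is 0.8, applies the conjugate update, and shows the posterior mean approaching the sample mean $\bar{x}$ for large $n$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior Beta(a, b) centered near 0.8, as chosen in the text.
a, b = 10.0, 2.5

# Hypothetical friend whose true heads probability is 0.8 (an assumption).
theta_true = 0.8
n = 100_000
x = rng.random(n) < theta_true   # simulated coin flips (True = heads)
heads = x.sum()

# Conjugate update: the posterior is Beta(a + sum(x), b + n - sum(x)),
# so its mean is (a + sum(x)) / (a + b + n).
post_mean = (a + heads) / (a + b + n)

print(f"sample mean xbar: {x.mean():.4f}")
print(f"posterior mean  : {post_mean:.4f}")
# For large n, the prior's contribution a/(a+b+n) vanishes and the
# posterior mean is pulled to the sample mean xbar.
```

With only a handful of flips the prior dominates; with $10^5$ flips the two numbers agree to several decimal places.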
I think part of the problem is that there are some notational problems in the question, and a degree of people talking past each other due to having different backgrounds/positions, so I'll go through the question trying to understand what was meant. I will be happy to be corrected if I am wrong and will edit the answer until we understand each other.

The first issue is: what does the author mean by $P(X=x|\theta=c)$? I think this is intended to mean the probability that the random variable $X$ has the value $x$ if the parameter of the model, $\theta$, has its "true" value, $c$.

"How do Bayesians interpret $\theta=c$, the probability of heads? $\theta$ of course is an unrealized or unobservable realization of a random variable"

This is a problematic line for me, as $\theta$ is not a random variable but a parameter of the model. If we knew what $c$ was, we would just set $\theta = c$ and there would be no need for a prior or a posterior. But we don't know the optimal value of the parameter, so what do we do? The traditional Bayesian approach is to construct a prior for the unknown parameter value, $\pi(\theta)$, that represents what we know about the parameter a priori (which may be very little). If we want to know what values of $\theta$ are plausible, given our prior and our data point, $X = x$, then we use Bayes' rule, giving $$p(\theta|X = x) = \frac{P(X = x|\theta)\,\pi(\theta)}{P(X=x)}.$$ Notice I have written $P(X=x|\theta)$ rather than $P(X = x|\theta = c)$. This is because we are not interested in a single number telling us the probability of a head; we want to continue representing our knowledge in the form of a distribution of relative plausibilities of all possible values of $\theta$. Representing knowledge in the form of distributions, rather than point values, is fairly central to Bayesianism. If we wanted to give a single number representing the probability of a head, then we might take the mode of $P(\theta|X=x)$ or the expectation of $\theta$ with respect to $P(\theta|X=x)$.
But asking how Bayesians interpret $\theta = c$ seems meaningless; it is just setting a parameter of our model to a particular value.

"For instance, if $P(X=1|\theta=c)=c$ is my belief that the coin will land heads, then $\pi(\theta)$ is my belief about my belief, and in some sense so too is the prior predictive distribution $P(X=1)=\int \theta\,\pi(\theta)\,d\theta=\frac{a}{a+b}$. To say 'if $\theta=c$ is known' is to say that I know my own beliefs. To say 'if $\theta$ is unknown' is to say I only have a belief about my beliefs."

This seems very confused. In the case of flipping a coin (a Bernoulli trial), $P(X=1|\theta=c) = c$ is a tautology, as the parameter of a Bernoulli distribution is the probability that $X=1$, so this equation only holds when the parameter of the distribution is equal to its true value. But we don't know the value of $c$, so Bayesians wouldn't encounter this. $\theta$ is a parameter of a model, $c$ is its true value; what more could there be? $P(X=1|\theta=c)=c$ is not my belief that the coin will land heads; it is the true probability that it will land heads. It can't be my belief, as it relies on me knowing the correct value of the parameter $\theta$, but I don't. This means that "then $\pi(\theta)$ is my belief about my belief" is incorrect, because the premise was incorrect. It is just your belief about the relative plausibilities of different values of the parameter $\theta$.

"To say 'if $\theta=c$ is known' is to say that I know my own beliefs."

No, this would be equivalent to saying that you know the true value of the parameter $\theta$, so it is just saying the prior should be a delta function centered on $c$. It is just a direct statement of your prior belief/state of knowledge.

"To say 'if $\theta$ is unknown' is to say I only have a belief about my beliefs."

Again, this is incorrect because the premise at the start of the paragraph was false.
It just means you don't know the true value of the parameter $\theta$, so perhaps a flat prior distribution on the interval 0 to 1 would be appropriate (encoded as a Beta distribution for convenience). I think I'll leave it at that for now; adding more is likely to just be further talking past each other, so I will wait for @GeoffreyJohnson's comments/corrections.
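As a concrete illustration of this answer's suggestion (a flat prior on $[0,1]$, then Bayes' rule), here is a minimal grid-approximation sketch of mine for a single observation $X=1$; the grid size is an arbitrary choice, and the exact posterior is $\mathrm{Beta}(2,1)$.

```python
import numpy as np

# Grid approximation of the posterior for a single observation X = 1
# under a flat Beta(1, 1) prior.
theta = np.linspace(0.0, 1.0, 1001)
prior = np.ones_like(theta) / len(theta)  # flat prior, discretized on the grid
likelihood = theta                         # P(X = 1 | theta) = theta
posterior = likelihood * prior
posterior /= posterior.sum()               # Bayes' rule, normalized on the grid

# Two single-number summaries mentioned in the answer:
map_est = theta[np.argmax(posterior)]      # posterior mode of Beta(2, 1) is 1
post_mean = (theta * posterior).sum()      # posterior mean of Beta(2, 1) is 2/3

print(map_est, post_mean)
```

The point of the grid view is exactly the answer's point: the full `posterior` array carries the relative plausibility of every $\theta$, and the mode or mean only summarizes it.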
Let us think about what you are attempting to ask. If we define $X=x\in\chi$, with $\chi$ being the sample space, as observed, then in Bayesian thought it is a constant. It is an observable. Instead of using the language of parameters and data, we can think in terms of observables and unobservables. There is no randomness here.

$\theta$ is normally a random variable in the parameter space, but it is now a constant. It is crucial that we know how it became so. It appears from the language of your posting that we are conditioning on it in the likelihood function so that, for our purposes, the likelihood is now $0.82^1$. So it is not a random variable either. It is therefore senseless to talk about a probability when everything is a constant. It would be like discussing the probability that $2+2=4$. It isn't impossible to discuss this, but it is difficult for several reasons.

First, the interpretation can change depending on the axioms used to derive Bayes' rule. For example, if we are conditioning on $\theta=0.82$, then de Finetti's axioms would require a prior with mass only on $0.82$. What if we were using some other axiomatization and the mass of the prior was zero at the point on which we conditioned? Cox's axioms would find that problematic as well. Savage's might not, if we allowed for time inconsistency, though why you would change your mind on the prior and not the likelihood is beyond me.

We also need a better definition of what a constant is. For example, conditioning some parameters on constants is not that unusual in Bayesian thinking. Sometimes you do know one of them. There is another case, though, that wrenches up even the frequentist toolset. To give an example, the speed of light is known precisely. However, as distance is now normed against the speed of light, distance is uncertain. We used to measure the speed of light with uncertainty; we now measure distance with uncertainty.
Let us imagine we get out our carefully built scientific equipment and decide to measure out five kilometers for our morning run. Our equipment is accurate to within plus or minus twenty meters. When our device reads five kilometers, we know the true distance is somewhere between 4980 and 5020 meters. It is close enough. If this is part of our measuring, we could condition on it being five kilometers, as it is close enough for our purposes. It is also definitely wrong. Because distance is a value in the real numbers, the event that our actual distance is exactly five kilometers when the device registers five kilometers is a measure-zero event. Our conditioning is wrong with certainty.

A second issue with this type of conditioning is non-mathematical. If, instead, we were running a wrecking ball and aimed at our intended building, plus or minus twenty meters, we could be hitting the wrong building. At the same time, we have conditioned our uncertainty away. Had our wrecking ball been run by a robot, à la E. T. Jaynes, we would have no way to know our decision process was bad. On the surface, you may think that would not matter, but de Finetti's coherence wrecks that idea if we are gambling money. A bad constant could create a Dutch Book.

As I see it, there is no randomness in your problem. We observed the outcome; it is a certainty. We observed the parameter; it is being treated as a certainty. We are being dogmatic in our conditioning in that we are saying there is no uncertainty. What is probability in the face of perfect certainty? What do you mean by your question?
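The measure-zero point can be illustrated with a small simulation. This is my own sketch, with an assumed uniform error model: among a million continuously distributed true distances consistent with a "5 km" reading, essentially none is exactly 5000 m.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumption for illustration: the device reads "5 km" whenever the true
# distance lies anywhere in [4980, 5020] m, uniformly.
true_dist = rng.uniform(4980.0, 5020.0, size=1_000_000)

# Conditioning on the distance being exactly 5000 m is a measure-zero event:
# a continuously distributed draw essentially never hits 5000.0 exactly.
exact_hits = int(np.sum(true_dist == 5000.0))

print(exact_hits)  # expected to be 0 (a measure-zero event)
```

Every draw is "wrong" relative to the exact constant we conditioned on, which is the answer's point about conditioning uncertainty away.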
$P(x=1|\theta=c)=c=1-P(x=0|\theta=c)$, and writing $\theta_i$ for a candidate value, $P(x=1|\theta_i)=\theta_i$.

The parameter $\theta$ captures the aleatoric uncertainty inherited from the coin. Before we observe any events we can only guess which fraction is most plausible, for instance 0.5, but $\theta_i$ can take any value between 0 and 1. For simplicity we assume it is discrete and can only take values from this list of 11 values: $\theta_i \in \{0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0\}$.

$P(\theta)$: this is a distribution over $\theta$ (a Beta distribution in the continuous case), saying how probable it is that the coin's probability of heads is high or low, given the observations or our belief. It acts as a prior: after we have seen data $X$, we set $P(\theta)=P(\theta|X)$ for the next batch of observations. Since we assume $\theta$ is discrete, every $\theta_i$ is assigned a probability, and $\theta_i$ itself is the value of $P(x=1|\theta_i)$.

In the frequentist treatment, however, the estimate is a delta distribution. Say we see three heads in three tosses; then $P(\theta=1)=1$ and $0$ elsewhere for $\theta$. Its aleatoric uncertainty is 0. If we observe 3 more events, all tails, the delta distribution changes to $P(\theta=0.5)=1$ and $0$ elsewhere, and $P(\theta=1)$ becomes $0$. Its aleatoric uncertainty is 0.5 now.

$P(x=1)=\sum_{i=1}^{11}P(x=1|\theta_i)P(\theta_i)$. Say we don't know $P(\theta)$, meaning we take every $P(\theta_i)$ to be $1/11$. Then $$P(x=1)=\sum_{i=1}^{11}P(x=1|\theta_i)P(\theta_i)=\sum_{i=1}^{11}\theta_i P(\theta_i)=\frac{1}{11}\sum_{i=1}^{11}\theta_i=\frac{0.0+0.1+\cdots+1.0}{11}=0.5.$$

$P(\theta|X)$: following the above, suppose we observe some data $X$.
It is a distribution, and for every $\theta_i$, $$P(\theta_i|X)=\frac{P(X|\theta_i)P(\theta_i)}{\sum_{j=1}^{11}P(X|\theta_j)P(\theta_j)}=\frac{P(x=1|\theta_i)^k \left(1-P(x=1|\theta_i)\right)^{N-k}P(\theta_i)}{\sum_{j=1}^{11}P(x=1|\theta_j)^k \left(1-P(x=1|\theta_j)\right)^{N-k}P(\theta_j)},$$ where $k$ is the number of heads observed and $N$ is the total number of tosses. After we have obtained $P(\theta_i|X)$, say we observe some other data $Z$; then to calculate $P(\theta_i|Z)$ we would treat $P(\theta) = P(\theta|X)$ for each of the 11 values of $\theta_i$. With enough observations, the influence of the initial prior $P(\theta_i) = 1/11$ would vanish. For $P(\theta\leq s|X)$ you just sum up all the $P(\theta_i|X)$ with $\theta_i \leq s$.
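The discrete 11-value construction above can be written out directly. This is a sketch of mine following the answer's definitions ($k$ heads in $N$ tosses, uniform prior $1/11$); the choice $k=N=3$ mirrors the "three heads in three tosses" example.

```python
import numpy as np

# The 11-point discrete grid for theta, with a uniform prior of 1/11 each.
theta = np.linspace(0.0, 1.0, 11)   # 0.0, 0.1, ..., 1.0
prior = np.full(11, 1 / 11)

# Prior predictive: P(x = 1) = sum_i P(x = 1 | theta_i) P(theta_i) = 0.5.
p_heads = (theta * prior).sum()

# Posterior after observing k heads in N tosses.
k, N = 3, 3                          # e.g. three heads in three tosses
like = theta**k * (1 - theta)**(N - k)
posterior = like * prior / (like * prior).sum()

# P(theta <= s | X) is just a sum over the grid points with theta_i <= s.
s = 0.5
p_le_s = posterior[theta <= s].sum()

print(p_heads, theta[np.argmax(posterior)], p_le_s)
```

Unlike the frequentist delta at $\hat\theta = 1$, the Bayesian posterior still spreads mass over every $\theta_i < 1$, just much less of it.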
It is a measure of $X$ being equal to $x$ given $\theta=c$, under the assumption that I view $\theta$ through its distribution (its stochastic variability, which depends on my distributional belief), restricted to the particular event $\theta=c$. That is, I am measuring the odds of $X=x$ assuming both that $\theta=c$ and that $\theta$ carries probabilistic uncertainty. Furthermore, the mathematics will show that $P(X=x|\theta=c)$ depends on my belief about $\theta=c$, because the probability assigned to the event $\theta=c$ (which enters the calculation of $P(X=x|\theta=c)$) comes from my prior distribution for $\theta$, that is, from another belief, the one expressed by $P(\theta=c)$; so $P(X=x|\theta=c)$ depends on this other belief. In short, you are measuring the odds of $X=x$ while taking into account your belief about $\theta=c$, although this was not made explicit.
Below are four different interpretations using the coin toss example that was provided in the original question. Option 1 a) appears to be the appropriate interpretation under the Bayesian paradigm. If you find one of these that maps to your answer, please identify it and offer suggestions for improvement if needed.

Option 1: Probability statements about $X$ and probability statements about $\theta$ are both statements of personal belief.

a) $\theta$ is the limiting proportion of heads as the number of flips tends to infinity for the coin under investigation, an unknown fixed constant with value $c$, and is not a probability. The only valid probability is my belief about any given flip, which I set equal to this unknown fixed constant. Therefore, $P\{X=1|\theta=c\}=c$ is my personal belief about the coin landing heads in any given throw if I know this limiting proportion. Since I do not know what to believe, $\pi(\theta)$ is my belief about the limiting proportion (not my belief about the probability of heads). If I were to integrate the data pmf using the prior distribution I would get the prior predictive distribution. Then $P\{X=1\}=\frac{a}{a+b}$ where $\frac{a}{a+b}$ is a "known" constant. In a different sense this would be my belief about the coin landing heads when not knowing the limiting proportion. The posterior is my belief about the limiting proportion given the observed data. Nevertheless, the prior and posterior probabilities do not represent factual statements about the limiting proportion of heads for the coin under investigation, nor are they statements about the experiment. Option 1 a) amounts to Option 2 b) since this is how the posterior is operationalized in practice using Monte Carlo simulations.

b) $c$ is the limiting proportion of heads as the number of flips tends to infinity for the coin under investigation, an unknown fixed constant and not a probability. The only valid probability is my belief about any given flip, which I set equal to this unknown fixed constant. Therefore $\theta=c$, and equivalently $P\{X=1|\theta=c\}=c$, is my personal belief about the coin landing heads in any given throw if I know this limiting proportion. Since I do not know what to believe, $\pi(\theta)$ is my belief… about my belief. If I were to integrate the data pmf using the prior distribution I would get the prior predictive distribution. Then $P\{X=1\}=\frac{a}{a+b}$ where $\frac{a}{a+b}$ is a "known" constant. In a different sense this would also be my belief about my belief. This option has us applying a belief probability measure to a belief probability measure. Similarly for $\pi(\theta|\boldsymbol{x})$ and $P\{X=1|\boldsymbol{x}\}$. However, because of my original correspondence we can interpret prior and posterior probabilities as beliefs about the limiting proportion of heads for the coin under investigation. Nevertheless, these prior and posterior probabilities do not represent factual statements about the limiting proportion of heads for the coin under investigation, nor are they statements about the experiment. Option 1 b) amounts to Option 2 b) since this is how the posterior is operationalized in practice using Monte Carlo simulations.

Option 2: Probability statements about $X$ and probability statements about $\theta$ both have a frequentist interpretation.

a) $\theta=c$, and equivalently $P\{X=1|\theta=c\}=c$, is the limiting proportion of heads as the number of flips tends to infinity for the coin under investigation, an unknown fixed constant. The density $\pi(\theta)$ depicts a collection of $\theta$'s (coins), or the limiting proportions of randomly selected $\theta$'s (coins) as the number of draws from $\pi(\theta)$ tends to infinity. These probabilities are considered known constants. The unknown true $\theta=c$ under investigation was randomly selected from the known collection or prevalence of $\theta$'s according to $\pi(\theta)$, and the observed data is used to subset this collection, forming the posterior. If we are to apply these posterior probability statements to make inference on the unknown true $\theta$ (coin) under investigation, we have to change our sampling frame and imagine instead that the unknown true $\theta$ was randomly selected from the posterior. This, then, has cause and effect reversed, since the posterior distribution, from which we selected $\theta$, depends on the data… but the data depended on the $\theta$ we had not yet selected from the posterior. We could imagine drawing a new $\theta$ (coin) from the posterior, but this would not be the same $\theta=c$ we started with under investigation. The challenge here is applying the probability statement in the posterior distribution to the unknown true $\theta$ (coin) under investigation in a meaningful way.

b) $\theta=c$, and equivalently $P\{X=1|\theta=c\}=c$, is the limiting proportion of heads as the number of flips tends to infinity for the coin under investigation, an unknown fixed constant. The density $\pi(\theta)$ depicts a collection of other $\theta$'s (coins) I have given to myself, or the limiting proportions of randomly selected $\theta$'s (coins) as the number of draws from $\pi(\theta)$ tends to infinity. These probabilities are considered known constants. The observed data is used to subset this collection, forming the posterior. The posterior is a legitimate sampling distribution of $\theta$'s (coins) I have given myself. The challenge here is applying the probability statements in the posterior distribution to the unknown true $\theta$ (coin) under investigation in a meaningful way, since at no point was the true $\theta$ (coin) sampled from the posterior.

Option 3: Probability statements about $X$ have a frequentist interpretation and probability statements about $\theta$ represent personal belief.

$\theta=c$, and equivalently $P\{X=1|\theta=c\}=c$, is the limiting proportion of heads as the number of flips tends to infinity for the coin under investigation. Since I do not know this limiting proportion, $\pi(\theta)$ is my personal belief about this unknown fixed quantity. On the surface this seems the most reasonable. However, this would have Bayes' theorem blending two different interpretations of probability as if they are compatible or equivalent, and it does not provide a clear link between posterior probability and the unknown fixed true $\theta$ under investigation. This would mean we are dealing with Option 1 or Option 2. Even if one insists on two different yet compatible interpretations of probability, Option 3 amounts to Option 2 b) since this is how the posterior is operationalized in practice using Monte Carlo simulations.

Option 4: Probability statements about $X$ have a frequentist interpretation and there are no probability statements about $\theta$.

$\theta=c$, and equivalently $P\{X=1|\theta=c\}=c$, is the limiting proportion of heads for the coin under investigation. The reason Bayesian statistics can provide reasonable point and interval estimates despite the shortcomings above regarding interpretation is that at the core of every prior is a likelihood. Something was witnessed or observed that gave rise to a likelihood, and therefore the prior. There are in fact no probability statements about $\theta$, belief, long-run, or otherwise. Bayes' theorem amounts to multiplying independent likelihoods, equivalent to a fixed-effect meta-analysis, except the Bayesian normalizes the joint likelihood instead of inverting a hypothesis test. If we view the Bayesian inference machine as a frequentist meta-analytic testing procedure, the shortcomings above vanish. The posterior is an asymptotic confidence distribution. Bayesian belief is more objectively viewed as confidence based on the frequency probability of the experiment.
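The Monte Carlo operationalization of the posterior mentioned in Options 1–3 can be sketched for the coin example. This is a minimal illustration under numbers of my own (a Beta(1, 1) prior and 7 heads in 10 flips), not values from the original question:

```python
import random

# Hypothetical sketch of how "belief about theta" is operationalized in
# practice: draw from the conjugate Beta posterior via Monte Carlo.
# Prior and data below are my own illustrative choices.
random.seed(0)

a, b = 1.0, 1.0          # Beta(a, b) prior on theta
n, s = 10, 7             # observed data: s heads in n flips

# Conjugacy: the posterior is Beta(a + s, b + n - s).
draws = [random.betavariate(a + s, b + n - s) for _ in range(100_000)]
post_mean = sum(draws) / len(draws)

print(round(post_mean, 3))   # close to the exact (a + s)/(a + b + n) = 2/3
print(a / (a + b))           # prior predictive P{X = 1} = a/(a + b)
```

Whatever interpretation one attaches to $\pi(\theta|\boldsymbol{x})$, this resampling is what the posterior amounts to computationally.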
29,907
What are the chances rolling 6, 6-sided dice that there will be a 6?
The probability of a single die not turning up $n$ is $1-1/n$. The probability of $n$ not turning up on any of the $n$ dice is $(1-1/n)^n$. Subtracting this from $1$ gives the probability of at least one $n$ turning up when one throws $n$ dice, i.e. $$p=1-(1-1/n)^n\rightarrow 1-e^{-1}$$ as $n$ goes to $\infty.$
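The closed form and its limit can be checked with a quick sketch; the helper name and the values of $n$ tried are my own choices:

```python
import math

# p = 1 - (1 - 1/n)^n, the probability of at least one n in n throws
# of a fair n-sided die, approaches 1 - 1/e as n grows.
def p_at_least_one(n: int) -> float:
    """Probability of at least one n in n throws of a fair n-sided die."""
    return 1 - (1 - 1 / n) ** n

print(p_at_least_one(6))     # six 6-sided dice: ~0.6651
print(1 - math.exp(-1))      # the n -> infinity limit: ~0.6321
```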
29,908
What are the chances rolling 6, 6-sided dice that there will be a 6?
The event $A:=$ "at least one die turns up on side $n$" is the complement of the event $B:=$ "all dice turn up on non-$n$ sides". So $P(A)=1-P(B)$. What's $P(B)$? All dice are independent, so
$$ P(\text{all $n$ dice turn up on non-$n$ sides}) = P(\text{a single die turns up non-$n$})^n = \bigg(\frac{n-1}{n}\bigg)^n.$$
So
$$ P(A) = 1-\bigg(\frac{n-1}{n}\bigg)^n. $$
Trust but verify. I like to do so in R:

> nn <- 6
> n_sims <- 1e5
> sum(replicate(n_sims, any(sample(1:nn, nn, replace=TRUE) == nn)))/n_sims
[1] 0.66355
> 1-((nn-1)/nn)^nn
[1] 0.665102

Looks good. Try this with other values of nn. Here is a plot:

nn <- 2:100
plot(nn, 1-((nn-1)/nn)^nn, type="o", pch=19, ylim=c(1-1/exp(1), 1))
abline(h=1-1/exp(1), col="red")

We note how our probability in the limit is
$$ P(A) = 1-\bigg(\frac{n-1}{n}\bigg)^n =1-\bigg(1-\frac{1}{n}\bigg)^n \to 1-\frac{1}{e}\approx 0.6321206 \quad\text{as }n\to\infty, $$
because of an identity involving $e$.
29,909
What are the chances rolling 6, 6-sided dice that there will be a 6?
Answers by @StephanKolassa (+1) and @gunes (+1) are both fine. But this problem can also be solved with reference to the binomial and Poisson distributions, as follows: If $X_n$ is the number of $n$'s seen in $n$ rolls of a fair $n$-sided die, then $X_n \sim \mathsf{Binom}(n, 1/n),$ so that $P(X_n \ge 1) = 1 - P(X_n = 0)= 1-(1-1/n)^n.$ As $n\rightarrow\infty,$ one has $X_n \stackrel{d}{\rightarrow} Y \sim\mathsf{Pois}(\lambda=1)$ (convergence in distribution), with $P(Y \ge 1) = 1 - P(Y = 0) = 1 - e^{-1}.$
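The binomial-to-Poisson step can be illustrated numerically; a small sketch with increasing $n$ (the values of $n$ tried are my own choices):

```python
import math

# Under Binom(n, 1/n), P(X_n = 0) = (1 - 1/n)^n, which approaches
# P(Y = 0) = e^{-1} for Y ~ Pois(1) as n grows.
def binom_p0(n: int) -> float:
    return (1 - 1 / n) ** n      # P(X_n = 0) under Binom(n, 1/n)

pois_p0 = math.exp(-1)           # P(Y = 0) under Pois(lambda = 1)

for n in (6, 60, 600, 6000):
    print(n, 1 - binom_p0(n))    # P(X_n >= 1), decreasing toward the limit
print(1 - pois_p0)               # 1 - e^{-1} ~ 0.6321
```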
29,910
What are the chances rolling 6, 6-sided dice that there will be a 6?
The answer can be arrived at by purely counting the described events as well, although the accepted answer is more elegant. We'll consider the case of the die, and hopefully the generalization is obvious. We'll let the event space be all sequences of numbers from $\{1,2,...,6\}$ of length $6$. Here are a few examples (chosen at random):

3 2 3 5 6 1
1 1 2 5 2 4
1 2 1 1 6 3
4 4 3 3 4 2
6 1 1 6 3 4
6 3 5 4 5 1

The point is, our space has a total of $6^6$ events, and due to independence we suppose that any one of them is as probable as any other (uniformly distributed). We need to count how many sequences have at least one $6$ in them. We partition the space we are counting by how many $6$'s appear, so consider the case that exactly one $6$ appears. How many ways can this happen? The six may appear in any position (6 different positions), and when it does, the other 5 positions can have any of 5 different symbols (from $\{1,2,...,5\}$). Then the total number of sequences with exactly one $6$ is $\binom{6}{1}5^5$. Similarly for the case where there are exactly two $6$'s: there are exactly $\binom{6}{2}5^4$ such sequences. Now it's time for fun with sums:
$$ \sum_{k=1}^6 \binom{6}{k}5^{6-k} = \sum_{k=0}^6 \binom{6}{k}5^{6-k}1^k - 5^6 = (5+1)^6 - 5^6$$
To get a probability from this count, we divide by the total number of events:
$$ \frac{6^6 - 5^6}{6^6} = 1 - (5/6)^6 = 1 - (1-1/6)^6 $$
I think this generalizes pretty well, since for any $n$ other than $6$ the exact same argument holds; only replace each occurrence of $6$ with $n$, and $5$ with $n-1$. It's also worth noting that the number $5^6 = \binom{6}{0}5^6$ is the contribution of sequences in which no $6$ occurs, and is much easier to calculate (as used in the accepted answer).
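The sum over $k$ can be checked directly; a small sketch (`math.comb` is the standard-library binomial coefficient):

```python
from math import comb

# Summing the counts of sequences with exactly k sixes over k = 1..6
# gives 6^6 - 5^6, matching the binomial-theorem shortcut above.
total = 6 ** 6
count = sum(comb(6, k) * 5 ** (6 - k) for k in range(1, 7))

print(count == total - 5 ** 6)    # True: binomial theorem check
print(count / total)              # 1 - (5/6)^6 ~ 0.6651
```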
29,911
What are the chances rolling 6, 6-sided dice that there will be a 6?
I found BruceET's answer interesting, relating to the number of events. An alternative way to approach this problem is to use the correspondence between waiting time and number of events. The benefit is that the problem can then be generalized more easily in some ways.

Viewing the problem as a waiting time problem

This correspondence, as for instance explained/used here and here, is the following. For the number of dice rolls $m$ and number of hits/events $k$ you get:
$$\begin{array}{ccc} \overbrace{P(K \geq k| m)}^{\text{this is what you are looking for}} &=& \overbrace{P(M \leq m|k)}^{\text{we will express this instead}} \\ {\small\text{$\mathbb{P}$ $k$ or more events in $m$ dice rolls}} &=& {\small\text{$\mathbb{P}$ dice rolls below $m$ given $k$ events}} \end{array} $$
In words: the probability to get $K \geq k$ events (e.g. $\geq 1$ times rolling 6) within a number of dice rolls $m$ equals the probability to need $m$ or fewer dice rolls to get $k$ such events. This approach relates many distributions:

Distribution of waiting time between events    Distribution of number of events
Exponential                                    Poisson
Erlang/Gamma                                   over/under-dispersed Poisson
Geometric                                      Binomial
Negative Binomial                              over/under-dispersed Binomial

So in our situation the waiting time is a geometric distribution. The probability that the number of dice rolls $M$ before you roll the first $n$ is less than or equal to $m$ (given that the probability to roll $n$ equals $1/n$) is the following CDF for the geometric distribution:
$$P(M \leq m) = 1-\left(1-\frac{1}{n}\right)^m$$
and we are looking for the situation $m=n$, so you get:
$$P(\text{there will be an $n$ rolled within $n$ rolls}) = P(M \leq n) = 1-\left(1-\frac{1}{n}\right)^n$$

Generalizations, when $n \to \infty$

The first generalization is that for $n \to \infty$ the distribution of the number of events becomes Poisson with factor $\lambda$, and the waiting time becomes an exponential distribution with factor $\lambda$.
So the waiting time for rolling an event in the Poisson dice-rolling process becomes $(1-e^{-\lambda \times t})$, and with $t=1$ we get the same $\approx 0.632$ result as the other answers. This generalization is not yet so special, as it only reproduces the other results, but for the next one I do not see so directly how the generalization could work without thinking about waiting times.

Generalizations, when the dice are not fair

You might consider the situation where the dice are not fair. For instance, one time you will roll with a die that has 0.17 probability to roll a 6, and another time you roll a die that has 0.16 probability to roll a 6. This means that the 6's get more clustered around the dice with a positive bias, and that the probability to roll a 6 in 6 turns will be less than the $1-1/e$ figure. (It means that, based on the average probability of a single roll, say determined from a sample of many rolls, you cannot determine the probability in many rolls with the same die, because you need to take into account the correlation within each die.) So say a die does not have a constant probability $p = 1/n$, but instead $p$ is drawn from a beta distribution with mean $\bar{p} = 1/n$ and some shape parameter $\nu$:
$$p \sim \text{Beta} \left( \alpha = \nu \frac{1}{n}, \beta = \nu \frac{n-1}{n} \right)$$
Then the number of events for a particular die rolled $n$ times will be beta-binomial distributed, and the probability of 1 or more events will be:
$$P(k \geq 1) = 1 - \frac{B(\alpha, n + \beta)}{B(\alpha, \beta)} = 1 - \frac{B(\nu \frac{1}{n}, n +\nu \frac{n-1}{n})}{B(\nu \frac{1}{n}, \nu \frac{n-1}{n})} $$
I can verify computationally that this works…
### compute outcome for rolling an n-sided die n times
rolldice <- function(n, nu) {
  p <- rbeta(1, nu*1/n, nu*(n-1)/n)
  k <- rbinom(1, n, p)
  out <- (k > 0)
  out
}

### compute the average for a sample of dice
meandice <- function(n, nu, reps = 10^4) {
  sum(replicate(reps, rolldice(n, nu)))/reps
}
meandice <- Vectorize(meandice)

### simulate and compare with the formula
set.seed(1)
n <- 6
nu <- 10^seq(-1, 3, 0.1)
y <- meandice(n, nu)

plot(nu, 1 - beta(nu*1/n, n + nu*(n-1)/n)/beta(nu*1/n, nu*(n-1)/n),
     log = "x", xlab = expression(nu), ylab = "fraction of dice",
     main = "comparing simulation (dots) \n with formula based on beta (line)",
     cex.main = 1, type = "l")
points(nu, y, lty = 1, pch = 21, col = "black", bg = "white")

But I have no good way to solve the expression analytically for $n \to \infty$.

With the waiting time

However, with waiting times I can express the limit of the beta-binomial distribution (which is now more like a beta-Poisson distribution) with a variance in the exponential factor of the waiting times. So instead of $1-e^{-1}$ we are looking for
$$1- \int e^{-\lambda} p(\lambda) \, \text{d} \lambda.$$
Now that integral term is related to the moment generating function (with $t=-1$). So if $\lambda$ is normally distributed with $\mu = 1$ and variance $\sigma^2$, then we should use:
$$ 1-e^{-(1-\sigma^2/2)} \quad \text{instead of} \quad 1-e^{-1}$$
Application

These dice rolls are a toy model. Many real-life problems will have variation and not completely fair dice. For instance, say you wish to study the probability that a person gets sick from a virus given some contact time. One could base calculations for this on some experiments that verify the probability of a transmission (e.g. either some theoretical work, or some lab experiments measuring/determining the number/frequency of transmissions in an entire population over a short duration), and then extrapolate this transmission rate to an entire month.
Say you find that the rate is 1 transmission per month per person; then you could conclude that a fraction $1-1/e \approx 0.63$ of the population will get sick. However, this might be an overestimation, because not everybody gets sick/transmission at the same rate; the percentage will probably be lower. This is only true if the variance is very large, though. For that, the distribution of $\lambda$ must be very skewed. Because, although we expressed it as a normal distribution before, negative values are not possible, and distributions without negative values will typically not have large ratios $\sigma/\mu$ unless they are highly skewed. A situation with high skew is modeled below. Now we use the MGF for a Bernoulli distribution (the exponent of it), because we model the rate as either $\lambda = 0$ with probability $1-p$ or $\lambda = 1/p$ with probability $p$.

set.seed(1)
rate <- 1
time <- 1
CV <- 1

### compute outcome for getting sick with variable rate
getsick <- function(rate, CV = 0.1, time = 1) {
  ### truncating changes sd and mean, but not so much if CV is small
  p <- 1/(CV^2 + 1)
  lambda <- rbinom(1, 1, p)/p * rate
  k <- rpois(1, lambda*time)
  out <- (k > 0)
  out
}

CV <- seq(0, 2, 0.1)
plot(-1, -1, xlim = c(0, 2), ylim = c(0, 1),
     xlab = "coefficient of variation", ylab = "fraction",
     cex.main = 1,
     main = "if rates are Bernoulli distributed \n fraction p with lambda/p and 1-p with 0")
for (cv in CV) {
  points(cv, sum(replicate(10^4, getsick(rate = 1, cv, time = 1)))/10^4)
}
p <- 1/(CV^2 + 1)
lines(CV, 1 - (1-p) - p*exp(-1/p), col = 1)
lines(CV, p, col = 2, lty = 2)
legend(2, 1, c("simulation", "computed", "percent of susceptible population"),
       col = c(1, 1, 2), lty = c(NA, 1, 2), pch = c(1, NA, NA), xjust = 1, cex = 0.7)

The consequence is: say you have a high $n$ and no possibility to observe $n$ dice rolls (e.g. it takes too long), and instead you screen the number of $n$ rolls only for a short time for many different dice. Then you could compute the number of dice that rolled a number $n$ during this short time and, based on that, compute what would happen in $n$ rolls. But you would not know how much the events correlate within each die. It could be that you are dealing with a high probability in a small group of dice, instead of an evenly distributed probability among all dice. This 'error' (or you could say simplification) relates to the situation with COVID-19, where the idea goes around that we need 60% of the people immune in order to reach herd immunity. However, that may not be the case. The current infection rate is determined for only a small group of people; it can be that this is only an indication of the infectiousness among a small group of people.
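The moment-generating-function step used above, replacing $1-e^{-1}$ with $1-e^{-(1-\sigma^2/2)}$ when the rate $\lambda$ is normally distributed around 1, can also be checked numerically. A minimal sketch in Python rather than the answer's R; the seed, sample size, and value of $\sigma$ are my own choices:

```python
import math
import random

# For lambda ~ Normal(mu = 1, sd = sigma), the MGF gives
# E[exp(-lambda)] = exp(-(mu - sigma^2/2)), so the event probability
# 1 - E[exp(-lambda)] becomes 1 - exp(-(1 - sigma^2/2)).
random.seed(0)
mu, sigma = 1.0, 0.4
n_sims = 200_000

sim = sum(math.exp(-random.gauss(mu, sigma)) for _ in range(n_sims)) / n_sims
mgf = math.exp(-(mu - sigma ** 2 / 2))

print(round(sim, 3), round(mgf, 3))  # the two should agree closely
print(1 - mgf)                       # probability of at least one event
```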
29,912
What are the chances rolling 6, 6-sided dice that there will be a 6?
Simplify and then extend. Start with a coin. A coin is a die with 2 sides (S = 2). The exhaustive probability space is

T | H

Two possibilities. One satisfies the condition of all heads. So your odds of all heads with one coin (n = 1) are 1/2.

So try two coins (n = 2). All outcomes:

TT | TH | HT | HH

Four possibilities. Only one matches your criteria. It is worth noting that the probability of one being heads and the other being tails is 2/4, because two possibilities of the four match that criteria. But there is only one way to get all heads.

Add one more coin (n = 3)...

TTT | THT | HTT | HHT
TTH | THH | HTH | HHH

8 possibilities. Only one fits the criteria, so the chance of all heads is 1/8. The pattern is (1/S)^n, here (1/2)^3.

For dice, S = 6, and we have 6 of them. The probability of getting a 6 on any given roll is 1/6. Rolls are independent events, so using 2 dice the chance of all 6's is (1/6)*(1/6), or 1/36. (1/6)^6 is 1 in 46,656.
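The enumeration argument can be reproduced by brute force (a small Python sketch, counting outcomes exactly as above):

```python
from itertools import product

def count_all(sides, n, face):
    """Enumerate every outcome of n rolls and count those that are all `face`."""
    outcomes = list(product(range(1, sides + 1), repeat=n))
    hits = sum(1 for o in outcomes if all(r == face for r in o))
    return hits, len(outcomes)

print(count_all(2, 1, 1))   # one coin:    (1, 2)     -> 1/2 all heads
print(count_all(2, 3, 1))   # three coins: (1, 8)     -> 1/8 all heads
print(count_all(6, 6, 6))   # six dice:    (1, 46656) -> (1/6)^6 all sixes
```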
29,913
Sum of squares of residuals instead of sum of residuals [duplicate]
The sum of the residuals will always be 0 (whenever the model includes an intercept), so that won't work. A more interesting question is why we use the sum of squared residuals vs. the sum of the absolute values of the residuals. Squaring penalizes large residuals more than small ones. I believe the reason this is done is that the math works out more easily and, back before computers, it was much easier to estimate the regression using squared residuals. Nowadays this reason no longer applies, and mean absolute deviation regression is, indeed, possible. It is one form of robust regression.
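The first point, that least-squares residuals from a fit with an intercept always sum to zero, is easy to verify numerically (a minimal Python sketch using the closed-form simple-regression fit):

```python
import random

random.seed(1)
x = [random.gauss(0, 1) for _ in range(50)]
y = [1.0 + 2.0 * xi + random.gauss(0, 0.5) for xi in x]

# Closed-form least-squares fit with an intercept
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
intercept = ybar - slope * xbar

residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
print(abs(sum(residuals)) < 1e-9)   # True: the residuals cancel exactly
```

This is why minimizing the plain sum of residuals is a non-starter: it is already zero at the least-squares solution (and can be driven to $-\infty$ by moving the line upward).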
29,914
Sum of squares of residuals instead of sum of residuals [duplicate]
Another way to motivate the squared residuals is by making the often reasonable assumption that the residuals are Gaussian distributed. In other words, we assume that $$y = ax + b + \varepsilon$$ for Gaussian noise $\varepsilon$. In this case, the log-likelihood of the parameters $a, b$ is given by $$\log p(y \mid x, a, b) = \log \mathcal{N}(y; ax + b, 1) = -\frac{1}{2} (y - [ax + b])^2 + \text{const},$$ so that maximizing the likelihood amounts to minimizing the squared residuals. If the noise $\varepsilon$ were Laplace distributed, the absolute value of the residuals would be more appropriate. But because of the central limit theorem, Gaussian noise is much more common.
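This equivalence can be checked numerically: under unit noise variance, the negative log-likelihood is exactly half the sum of squared residuals plus a constant, so both criteria rank any two candidate fits identically and share the same minimizer (a small Python sketch; the data are made up for illustration):

```python
import math, random

random.seed(0)
x = [random.uniform(-2, 2) for _ in range(30)]
y = [0.7 * xi + 0.3 + random.gauss(0, 1) for xi in x]

def ssr(a, b):
    """Sum of squared residuals for the line y = a*x + b."""
    return sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))

def nll(a, b):
    """Negative log-likelihood under y = a*x + b + eps, eps ~ N(0, 1)."""
    return 0.5 * ssr(a, b) + 0.5 * len(x) * math.log(2 * math.pi)

# The two criteria differ only by a positive factor and an additive constant:
d_nll = nll(0.7, 0.3) - nll(1.0, 0.0)
d_ssr = ssr(0.7, 0.3) - ssr(1.0, 0.0)
print(abs(d_nll - 0.5 * d_ssr) < 1e-9)   # True
```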
29,915
Sum of squares of residuals instead of sum of residuals [duplicate]
Good answers, but maybe I can give a more intuitive answer. Suppose you are fitting a linear model, represented here by a straight line parameterized by a slope and intercept. Each residual is a spring between each data point and the line, and it is trying to pull the line to itself. A sensible thing to do is find the slope and intercept that minimizes the energy of the system. The energy in each spring (i.e. residual) is proportional to its length squared. So what the system does is minimize the sum of the squared residuals, i.e. minimize the sum of energy in the springs.
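The analogy can be made concrete: for ideal springs of stiffness $k$, a spring stretched to length $\ell$ stores energy $\frac{1}{2}k\ell^2$, so the total energy is proportional to the sum of squared residuals and its minimum sits exactly at the least-squares fit (a Python sketch for illustration):

```python
def spring_energy(x, y, slope, intercept, k=1.0):
    """Total energy of vertical springs connecting each point to the line."""
    return sum(0.5 * k * (yi - (slope * xi + intercept)) ** 2
               for xi, yi in zip(x, y))

x = [0.0, 1.0, 2.0, 3.0]
y = [0.1, 1.1, 1.9, 3.1]

# Equilibrium of the spring system = ordinary least-squares fit (closed form)
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
intercept = ybar - slope * xbar

e_min = spring_energy(x, y, slope, intercept)
# Nudging the line in any direction stretches the springs and raises the energy:
print(e_min < spring_energy(x, y, slope + 0.05, intercept))   # True
print(e_min < spring_energy(x, y, slope, intercept + 0.05))   # True
```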
29,916
Sum of squares of residuals instead of sum of residuals [duplicate]
In addition to the points made by Peter Flom and Lucas, a reason for minimizing the sum of squared residuals is the Gauss-Markov Theorem. This says that if the assumptions of classical linear regression are met, then the ordinary least squares estimator is more efficient than any other linear unbiased estimator. 'More efficient' implies that the variances of the estimated coefficients are lower; in other words, the estimated coefficients are more precise. The theorem holds even if the residuals do not have a normal or Gaussian distribution. However, the theorem is not relevant to the specific comparison between minimizing the sum of absolute values and minimizing the sum of squares since the former is not a linear estimator. See this table contrasting their properties, showing advantages of least squares as stability in response to small changes in data, and always having a single solution.
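The efficiency claim can be illustrated by simulation, comparing OLS to another linear unbiased estimator of the slope; the "endpoint" estimator $(y_n - y_1)/(x_n - x_1)$ used below is just one convenient choice (a Python sketch):

```python
import random

random.seed(42)
x = [float(i) for i in range(10)]          # fixed design
true_slope, true_intercept = 2.0, 1.0

n = len(x)
xbar = sum(x) / n
sxx = sum((xi - xbar) ** 2 for xi in x)

def simulate_slopes(reps=2000):
    """Draw many samples and record two linear unbiased slope estimators."""
    ols, endpoint = [], []
    for _ in range(reps):
        y = [true_intercept + true_slope * xi + random.gauss(0, 1) for xi in x]
        ybar = sum(y) / n
        ols.append(sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx)
        endpoint.append((y[-1] - y[0]) / (x[-1] - x[0]))   # linear and unbiased too
    return ols, endpoint

def var(v):
    m = sum(v) / len(v)
    return sum((vi - m) ** 2 for vi in v) / len(v)

ols, endpoint = simulate_slopes()
print(var(ols) < var(endpoint))   # True: the OLS slope is the more precise one
```

Both estimators average out to the true slope of 2, but the OLS estimator has visibly smaller variance, as Gauss-Markov guarantees.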
29,917
Sum of squares of residuals instead of sum of residuals [duplicate]
This is more a response to @PeterFlom's comment on my comment, but it is too big to fit in a comment (and does relate to the original question).

Here is some R code to show a case where there are multiple lines that all give the same minimum MAD/SAD values. The first part of the example is clearly contrived data, but the end includes more of a random element to demonstrate that the general concept will still hold in some more realistic cases.

x <- rep(1:10, each=2)
y <- x/10 + 0:1

plot(x, y)

sad <- function(x, y, coef) {
  # mad is sad/n
  yhat <- coef[1] + coef[2]*x
  resid <- y - yhat
  sum( abs( resid ) )
}

library(quantreg)

fit0 <- rq( y ~ x )
abline(fit0)

fit1 <- lm( y ~ x, subset = c(1,20) )
fit2 <- lm( y ~ x, subset = c(2,19) )
fit3 <- lm( y ~ x, subset = c(2,20) )
fit4 <- lm( y ~ x, subset = c(1,19) )
fit5.coef <- c(0.5, 1/10)

abline(fit1)
abline(fit2)
abline(fit3)
abline(fit4)
abline(fit5.coef)

for (i in seq( -0.5, 0.5, by=0.1 )) {
  abline( fit5.coef + c(i,0) )
}

tmp1 <- seq( coef(fit1)[1], coef(fit2)[1], len=10 )
tmp2 <- seq( coef(fit1)[2], coef(fit2)[2], len=10 )
for (i in seq_along(tmp1)) {
  abline( tmp1[i], tmp2[i] )
}

sad(x, y, coef(fit0))
sad(x, y, coef(fit1))
sad(x, y, coef(fit2))
sad(x, y, coef(fit3))
sad(x, y, coef(fit4))
sad(x, y, fit5.coef)

for (i in seq( -0.5, 0.5, by=0.1 )) {
  print(sad(x, y, fit5.coef + c(i,0)))
}
for (i in seq_along(tmp1)) {
  print(sad(x, y, c(tmp1[i], tmp2[i])))
}

set.seed(1)
y2 <- y + rnorm(20, 0, 0.25)
plot(x, y2)
fitnew <- rq(y2 ~ x)  # note the still non-unique warning
abline(fitnew)
abline(coef(fitnew) + c(0.1, 0))
abline(coef(fitnew) + c(0, 0.01))
sad(x, y2, coef(fitnew))
sad(x, y2, coef(fitnew) + c(0.1, 0))
sad(x, y2, coef(fitnew) + c(0, 0.01))
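The same non-uniqueness can be demonstrated without quantreg: on the contrived data above, every line $y = x/10 + c$ with $0 \le c \le 1$ passes between the two points at each $x$, so each pair contributes $c + (1-c) = 1$ to the sum of absolute deviations and the total is identical (a Python sketch of the check):

```python
# Contrived data: two points per x-value, offset by 0 and 1 (mirrors the R example).
x = [i for i in range(1, 11) for _ in range(2)]
y = [xi / 10 + off for xi in range(1, 11) for off in (0, 1)]

def sad(intercept, slope):
    """Sum of absolute deviations for the line y = intercept + slope*x."""
    return sum(abs(yi - (intercept + slope * xi)) for xi, yi in zip(x, y))

# Five distinct lines, all with the same minimal SAD:
values = [round(sad(c, 0.1), 9) for c in (0.0, 0.25, 0.5, 0.75, 1.0)]
print(values)   # [10.0, 10.0, 10.0, 10.0, 10.0]
```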
29,918
I’d like to learn about probability theory, measure theory and finally machine learning. Where do I start? [closed]
I think there are two very good and popular references for you (I started with these ones as well, having a background of a master's in actuarial science): An Introduction to Statistical Learning (with Applications in R) by Gareth James, Daniela Witten, Trevor Hastie, Robert Tibshirani. It is freely available on the authors' site, pretty comprehensive, and easy to understand, with practical examples. You can start learning many things even without a very strong statistical background; this reference is good for various profiles and includes an adequate number of popular algorithms together with their implementation in R, without going deep into the mathematical details. The Elements of Statistical Learning by Trevor Hastie, Robert Tibshirani, Jerome Friedman. Compared to the first one, this book goes deeper into the mathematical aspects if you want to explore further the particular algorithms that you find useful. (It is free as well.) And of course Cross Validated is one of the best sources where you can learn many things; for me: best practices, statistical misunderstanding and misuse, and many more. After several years of learning at schools/universities as well as self-learning, I found that my knowledge was too limited when I first came to Cross Validated. I have continued to come here every day since the first visit, and I learn so much.
29,919
I’d like to learn about probability theory, measure theory and finally machine learning. Where do I start? [closed]
Here are a couple of free online courses that I've heard are highly recommended: http://projects.iq.harvard.edu/stat110/home (Depending on your current comfort with probability theory. Dr. Blitzstein's course became very popular at Harvard even for those who weren't into stats/probability. I've watched a few of the lectures for my own review and found them very helpful. ) https://www.coursera.org/learn/machine-learning (This is the current version of one of Stanford's first massive online courses by Andrew Ng, who ended up co-founding Coursera. I've been meaning to take this course, but haven't had the time.)
29,920
I’d like to learn about probability theory, measure theory and finally machine learning. Where do I start? [closed]
You don't need measure theory. Measure theory is used by mathematicians to justify other mathematical procedures, e.g. taking limits of integral approximations. Most engineers have not studied measure theory; they just use the results. The math knowledge required for ML is roughly characterised by being able to integrate a multivariate Gaussian: if you are confident about that, then you probably have the multivariable calculus, linear algebra and probability theory background necessary. I would recommend Think Stats by Allen Downey, which aims to teach probability/statistics to programmers. The idea is to leverage programming expertise to do simulations and thereby understand probability theory and statistical methods. See Allen Downey's blog (he has written other books as well) and Think Stats (free PDF).
29,921
I’d like to learn about probability theory, measure theory and finally machine learning. Where do I start? [closed]
Since you're interested in machine learning, I'd skip probability and measure, and jump right into the ML. Andrew Ng's course is a great place to start. You can literally finish it in two weeks. Play with what you've learned for a few weeks, then go back to the roots and study some probability. If you're an engineer, then I'm puzzled how you managed to skip it in college; it used to be a required course in engineering. Anyhow, you can catch up by taking the MIT OCW course here. I don't think you need measure theory. Nobody needs measure theory. Those who do won't come here to ask, because their advisor will tell them which course to take; if you don't have an advisor, then you definitely don't need it. Tautology, but true. The thing with measure theory is that you can't learn it by "easy reading". You have to do the exercises and problems, basically do it the hard way. That's virtually impossible outside of the classroom, in my opinion. The best option here is to take a class at the local college, if they offer one. Sometimes a PhD-level probability course will do measure and probability in one class, which is probably the best deal. I would not recommend taking a pure measure theory class in the Math department, unless you really want to torture yourself, though in the end you'd be greatly satisfied.
I’d like to learn about probability theory, measure theory and finally machine learning. Where do I start? [closed]
For machine learning, I think Machine Learning: The Art and Science of Algorithms that Make Sense of Data by Peter Flach can be a good resource to start with. It gives a general introduction to machine learning with intuitive examples, and is suitable for beginners. I like this book particularly because of the last chapter, which deals with machine learning experiments. While learning about machine learning, getting to know different models is not enough, and one should be able to compare different machine learning algorithms. I think this book has made it easier to understand how to compare those algorithms. Lecture slides can be found here.
I’d like to learn about probability theory, measure theory and finally machine learning. Where do I start? [closed]
To add to the excellent suggestions above, I would say if you are interested in getting a firm grasp on more basic concepts in probability and statistics, "From Algorithms to Z-Scores: Probabilistic Computing in Statistics" is an excellent primer on using computers to understand some of the most important beginner/intermediate concepts in probability theory and stochastic processes. I'll also second either "An Introduction to Statistical Learning" or "Elements of Statistical Learning" (ESL) as an introduction to machine learning (ML). I think ESL in particular is amazing, but it does take a much more mathematics-heavy look at the ML concepts, so if you only consider yourself "okay" at stats, you might want to give it a read once you've gotten more experience with ML. If you're interested in Machine Learning for the sake of being employed or solving problems, getting hands-on experience is key. Take some introduction to data science/machine learning courses. Andrew Ng does an amazing introduction to machine learning in his course at Coursera here. I would also suggest you download some datasets and start playing around with them. If you haven't already, download R and RStudio (in my opinion, more friendly to beginners than Python or Matlab), and sign up at kaggle and do some of their beginner problems. They have great walkthroughs that can get you using ML with basically no idea what's actually happening, but it gives you an idea about the kind of steps you'd need to take to actually implement an ML solution. I'd personally encourage a combination of starting off using ML tools without really knowing what they do (using Kaggle datasets or similar); and learning fundamental concepts like cross-validation, overfitting, using confusion matrices, different measures of how good a model is, etc. 
To me, it's much more important to know how to use the algorithms, and knowing how to identify when things are working/aren't working, than it is to understand how the algorithms work.
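The fundamentals listed above (cross-validation, confusion matrices, measures of model quality) can be tried in a few lines of scikit-learn. This is an illustrative sketch; the toy dataset and logistic-regression model are arbitrary placeholder choices, not part of the original recommendation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import confusion_matrix

# A toy binary classification problem
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Cross-validation: estimate out-of-sample accuracy rather than trusting the training fit
model = LogisticRegression(max_iter=1000)
cv_scores = cross_val_score(model, X, y, cv=5)
print("5-fold CV accuracy:", cv_scores.mean())

# Confusion matrix on a held-out test set
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model.fit(X_tr, y_tr)
print(confusion_matrix(y_te, model.predict(X_te)))
```

Playing with snippets like this alongside Kaggle walkthroughs is one way to learn what "working/not working" looks like in practice.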
Correlation significant in each group but non-significant over all?
Yes, it is possible and it could happen in all sorts of ways. One obvious example is when membership of A and B is chosen in some way that reflects the values of x and y. Other examples are possible, e.g. @Macro's comment suggests an alternative possibility. Consider the example below, written in R. x and y are iid standard normal variables, but if I allocate them to groups based on the relative values of x and y, I get the situation you describe. Within group A and group B there is a strong, statistically significant correlation between x and y, but if you ignore the grouping structure there is no correlation.

> library(ggplot2)
> x <- rnorm(1000)
> y <- rnorm(1000)
> Group <- ifelse(x > y, "A", "B")
> cor.test(x, y)

        Pearson's product-moment correlation

data:  x and y
t = -0.9832, df = 998, p-value = 0.3257
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
 -0.09292  0.03094
sample estimates:
     cor
-0.03111

> cor.test(x[Group == "A"], y[Group == "A"])

        Pearson's product-moment correlation

data:  x[Group == "A"] and y[Group == "A"]
t = 11.93, df = 487, p-value < 2.2e-16
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
 0.4040 0.5414
sample estimates:
   cor
0.4756

> cor.test(x[Group == "B"], y[Group == "B"])

        Pearson's product-moment correlation

data:  x[Group == "B"] and y[Group == "B"]
t = 9.974, df = 509, p-value < 2.2e-16
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
 0.3292 0.4744
sample estimates:
   cor
0.4043

> qplot(x, y, color = Group)
Correlation significant in each group but non-significant over all?
One possibility is that the effects may be going in different directions in each group and are cancelled out when you aggregate them. This is also related to how, when you leave out an important interaction term in a regression model, the main effects can be misleading. For example, suppose in group $\rm A$, the true relationship between the response $y_i$ and the predictor $x_i$ is: $$ E(y_i|x_i, {\rm Group \ A}) = 1 + x_i $$ and in group $\rm B$, $$ E(y_i|x_i, {\rm Group \ B}) = 1 - x_i $$ Suppose group membership is distributed so that $$P({\rm Group \ A}) = 1-P( {\rm Group \ B}) = p$$ Then, if you marginalize over the group membership and calculate $E(y_i|x_i)$ by the Law of Total Expectation, you get \begin{align*} E(y_i | x_i) = E( E(y_i|x_i,{\rm Group}) ) &= p(1+ x_i) + (1-p)(1-x_i) \\ &= p + px_i + 1 - x_i - p + px_i \\ &= 1 + x_i(2p-1) \end{align*} Therefore, if $p = 1/2$, $E(y_i | x_i) = 1$ and does not depend on $x_i$ at all. So, there is a relationship within both groups but, when you aggregate them, there is no relationship. In other words, for a randomly selected individual in the population, whose group membership we don't know, there will, on average, be no relationship between $x_i$ and $y_i$. But, within each group there is. Any example where the value of $p$ perfectly balances the effect sizes within each group would also lead to this result - this was just a toy example to make the calculations easy :) Note: With normal errors, significance of a linear regression coefficient is equivalent to significance of the Pearson's correlation, so this example highlights one explanation for what you're seeing.
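A quick simulation makes the cancellation concrete. This is a sketch in Python (the noise level and sample size are arbitrary choices of mine), with slopes $+1$ and $-1$ and $p = 1/2$ as in the toy example:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)
in_a = rng.random(n) < 0.5                    # p = 1/2 balances the two slopes
# E[y|x] = 1 + x in group A, 1 - x in group B, plus some noise
y = np.where(in_a, 1 + x, 1 - x) + 0.5 * rng.normal(size=n)

r_a, p_a = stats.pearsonr(x[in_a], y[in_a])     # strongly positive
r_b, p_b = stats.pearsonr(x[~in_a], y[~in_a])   # strongly negative
r_all, p_all = stats.pearsonr(x, y)             # washes out near zero
print(r_a, r_b, r_all)
```

Both within-group correlations are large and highly significant, while the pooled correlation hovers around zero, exactly as the marginalization argument predicts.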
Datasets for data visualization examples, teaching and research
There are a large number of databases available on the internet. Depending on the subject, you can get different sources. For example, in the Human Development subject area you have data sources at http://hdrstats.undp.org/: http://hdrstats.undp.org/en/tables/default.html For climate change observations, there is a website with high-resolution climate data at http://www.ipcc-data.org/, for example: http://www.ipcc-data.org/obs/cru_ts2_1.html Both examples contain real data, used in published scientific papers, with large quantities of time-related and/or space-related data. The visualization possibilities of those data are endless.
Datasets for data visualization examples, teaching and research
I like to use the Anscombe data sets (also available in R) to show the importance of plotting when doing regressions. If you aren't familiar, you get the same regression line and diagnostics from all four data sets, even though the sets themselves all look quite different. You can take the plots below and turn them into residual plots to illustrate problems that you might look for in the residuals after performing a regression.
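The effect is easy to check numerically. Here is a sketch in Python, with the quartet's values transcribed from the published data (any transcription slip is mine), fitting the same least-squares line to each set:

```python
import numpy as np

# Anscombe's quartet (sets I-III share the same x values)
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
x4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]
quartet = {
    "I":   (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  (x4,   [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

for name, (xi, yi) in quartet.items():
    slope, intercept = np.polyfit(xi, yi, 1)   # least-squares fit
    r = np.corrcoef(xi, yi)[0, 1]              # Pearson correlation
    print(f"{name}: slope={slope:.2f} intercept={intercept:.2f} r={r:.3f}")
# All four print (approximately) slope=0.50 intercept=3.00 r=0.816
```

Identical summary statistics, radically different scatterplots: the numbers alone cannot distinguish the four sets, which is exactly the teaching point.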
Datasets for data visualization examples, teaching and research
"Which is the best example from the real world to show the advantages of graphing?" Any big table. For example, google images of "official census table". You'll see plenty of examples. Also look at Gelman et al. (2002), "Let's Practice What We Preach: Turning Tables into Graphs", The American Statistician 56:121-130.
Datasets for data visualization examples, teaching and research
William S. Cleveland has two books full of great uses of graphics, and the data and code to create the graphs in Visualizing Data are on his website.
Datasets for data visualization examples, teaching and research
Possibly you already know of these, but here they are anyway: The UCI Machine Learning Repository has many publicly accessible, real world data sets. The US Government makes many of its datasets public at data.gov. If you want some tricky visualization data, I'd suggest looking at a classification task. Seems to me that the Bag of Words set on the UCI MLR has some nice properties, but I could be mistaken (been a while since I used it).
Datasets for data visualization examples, teaching and research
Here are a few. Sci2 Tool Sample Datasets http://wiki.cns.iu.edu/display/SCI2TUTORIAL/2.5+Sample+Datasets Sample datasets that come bundled with Sci2 Tool. Tableau Sample Data Sets https://public.tableau.com/s/resources?qt-overview_resources=1#qt-overview_resources Sample data sets for getting started with Tableau. Awesome Public Datasets https://github.com/caesar0301/awesome-public-datasets/blob/master/README.rst This list of public data sources is collected and tidied from blogs, answers, and user responses. Most of the data sets are free, some are not. This thread is rather old, hoping this bump will get some new contributions!
Datasets for data visualization examples, teaching and research
I just noticed loads of datasets here: http://www.inside-r.org/howto/finding-data-internet Don't know if that's any use? I'm afraid I don't teach visualisation so I can't comment on your specific questions.
Datasets for data visualization examples, teaching and research
The datasaurus, available as an R package on CRAN, provides a great alternative to Anscombe's data - it is also simulated, but really helped me to teach the importance of exploratory data viz. It is a set of 12 datasets that all have the same summary statistics despite having radically different distributions - including a fun one that plots as a dinosaur shape. I think it is superior to Anscombe because it highlights that exploratory data viz is not just about identifying how a regression line can be fit, but about exploring the nature of the data.
Can a neural network learn "a == b" and "a != b" relationship, with limited data?
To supplement Sycorax's answer on how a neural network might represent the function, I thought I'd see whether a simple network can learn that representation. The target network has two hidden neurons with ReLU activation and an output neuron with sigmoid activation. Notebook Here's my setup:

from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
import numpy as np

n = 1000
np.random.seed(314)
x1 = np.random.randint(-100, 101, size=n)
p = np.random.poisson(size=n)
x2 = x1 + p
X = np.vstack((x1, x2)).T
X = X / 100.0
y = (p == 0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

I cannot coax scikit-learn's MLPClassifier to learn the two-neuron structure. Perhaps by trying loads of initial states I could get something close enough that the learning process would settle down to the desired state, but with just a handful of attempts I couldn't make it. Expanding to 100 hidden neurons, just a little fiddling with other hyperparameters gives perfect accuracy on an iid test set; but with that many neurons it seems to be overfitting on the training set, because it fails on an out-of-range test set (x1 defined from 200 to 300, the rest as above).
Fiddling by hand with the hyperparameters some more, I'm able to get a good-looking network with 5 hidden neurons:

model = MLPClassifier(
    (5,),
    learning_rate_init=0.05,
    learning_rate="adaptive",
    alpha=0,
    max_iter=1000,
    random_state=0,
)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))
#> 0.964
print(model.coefs_)
#> [array([[ -5.28193302,  -4.71679774,  -0.20732829,  -0.82536738,  -0.14136384],
#>         [  5.26312562,   4.70770845,   0.31451317,   1.42101998,  -0.21582437]]),
#>  array([[-16.27265811],
#>         [-18.1835566 ],
#>         [  0.1559244 ],
#>         [ -0.38534808],
#>         [  0.7400243 ]])]

You can see that the first and second neurons are finding the right idea, while the last three are a bit off; and the output neuron is starting to ignore those three in favor of the first two (with large negative coefficients, the $\delta$ of Sycorax's formula). More data would probably strengthen the correct relationship, but this already performs well on the out-of-range test data. Note that, by taking just a Poisson difference above, we always have $x_2\ge x_1$, which explains why the two important neurons are both firing on something like $x_2-x_1$ rather than one being $x_1-x_2$. Multiplying p by a random sign, I have a much harder time getting MLPClassifier to train a good model. By switching away from ReLU to tanh I can, and in fact manage with just a 2-neuron layer:

model = MLPClassifier(
    (2,),
    activation='tanh',
    solver='lbfgs',
    max_iter=1000,
    random_state=0,
)
#> [array([[-155.04975387,   62.57832368],
#>         [ 155.0491934 ,  -62.57812308]]),
#>  array([[ 75.66146126],
#>         [168.25414012]])]
Can a neural network learn "a == b" and "a != b" relationship, with limited data?
The absolute value function can be written as $$|a-b|=\text{ReLU}(a-b) + \text{ReLU}(b-a),$$ and has a minimum at 0 for $a = b$. We can compose this with a sigmoid layer $$\sigma\left(\delta\left(\text{ReLU}(a-b) + \text{ReLU}(b-a)+\epsilon \right) \right),$$ and this is very close to what is desired for $\epsilon < 0$ because this shifts the minimum below 0, and choosing $\delta < 0$ means that a negative value maps to a number greater than 0.5, and a positive value to a number less than 0.5. Naturally, there will be "wrong answers" for values of $|a-b|$ close to $|\epsilon|$. This is unavoidable with continuous functions (such as those used in neural networks). Changing the magnitude of $\epsilon$ controls this. A deficiency with this is that its outputs are not exactly 0 and exactly 1. You can't obtain these values exactly because the sigmoid function only obtains 0 and 1 in a limit of infinitely large or small values. It's probably also hard to train a neural network to find weights that work well, especially for representing $|a-b|$.
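The composition above can be sketched directly as a tiny hand-built network. The values of $\delta$ and $\epsilon$ below are hypothetical hand-picked choices (any $\delta < 0$, $\epsilon < 0$ of suitable magnitude would do), not trained weights:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def equals(a, b, delta=-20.0, eps=-0.5):
    # sigma(delta * (ReLU(a-b) + ReLU(b-a) + eps)), i.e. the formula above
    # with hypothetical hand-picked delta and eps
    return sigmoid(delta * (relu(a - b) + relu(b - a) + eps))

print(equals(3.0, 3.0))  # sigmoid(10), very close to 1 -> "equal"
print(equals(3.0, 4.0))  # sigmoid(-10), very close to 0 -> "not equal"
```

With these choices, differences smaller than $|\epsilon| = 0.5$ still land on the "equal" side — exactly the "wrong answers" near $|\epsilon|$ the text warns about.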
29,936
Can a neural network learn "a == b" and "a != b" relationship, with limited data?
Your function can be represented as: $$f(x,y) = \lim_{n\to+\infty}(\sigma(1/|x-y|))^n$$ A good approximation can be obtained with a large $n$. A neural network can further approximate that function. In fact, depending on what you consider a neural network to be, the function itself is already a neural network with special activation functions.
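A finite-$n$ version of this limit can be checked numerically; this is just a sketch of the formula above, with $n = 1000$ as an arbitrary large value:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def approx_equal(x, y, n=1000):
    # finite-n approximation of lim (sigma(1/|x-y|))^n
    d = abs(x - y)
    if d == 0:
        return 1.0  # the limit at x == y, since sigma(+inf) = 1
    return sigmoid(1.0 / d) ** n

print(approx_equal(3.0, 3.0))  # 1.0
print(approx_equal(3.0, 4.0))  # sigmoid(1)^1000, vanishingly small
```

For $x \ne y$, $\sigma(1/|x-y|)$ lies strictly between 0.5 and 1, so raising it to a large power drives the output toward 0, while $x = y$ stays at 1.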
29,937
One sentence explanation of the AIC for non-technical types
AIC is a number that is helpful for comparing models as it includes measures of both how well the model fits the data and how complex the model is.
29,938
One sentence explanation of the AIC for non-technical types
What would be the best explanation depends on what exactly is meant by "non-technical types". I like the statements that have been offered so far, but I have one quibble: They tend to use the term "complex", and what precisely that is understood to mean could vary. Let me offer this variation: The AIC is a measure of how well a model fits a dataset, while adjusting for the ability of that model to fit any dataset whether or not it's related.
29,939
One sentence explanation of the AIC for non-technical types
Here's a definition that locates AIC in the menagerie of techniques used for model selection. AIC is just one of several reasonable ways to capture the trade-off between goodness of fit (which is improved by adding model complexity in the form of extra explanatory variables, or adding caveats like "but only on Thursday, when raining") and parsimony (simpler==better) in comparing non-nested models.

Here's the fine print: I believe the OP's definition only applies to linear models. For things like probits, the AIC is usually defined in terms of the log-likelihood. Some other criteria are adjusted $R^{2}$ (which has the least adjustment for extra explanatory variables), Kullback-Leibler IC, BIC/SC, and even more exotic ones, like Amemiya's prediction criterion, rarely seen in the wilds of applied work.

These criteria differ in how steeply they penalize model complexity. Some have argued that the AIC tends to select models that are overparameterized, because the model-size penalty is pretty low. The BIC/SC also increases the penalty as the sample size increases, which seems like a handy-dandy feature.

A nice way to sidestep participating in America's Top Information Criterion is to admit that these criteria are arbitrary and considerable approximations are involved in deriving them, especially in the non-linear case. In practice, the choice of a model from a set of models should probably depend on the intended use of that model. If the purpose is to explain the main features of a complex problem, parsimony should be worth its weight in gold. If prediction is the name of the game, parsimony should be less dear. Some would even add that theory/domain knowledge should also play a bigger role. In any case, what you plan to do with the model should determine what criterion you might use. For nested models, the standard hypothesis test restricting the parameters to zero should suffice.
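The different steepness of the AIC and BIC/SC penalties is easy to see numerically. With $AIC = 2k - 2\ln L$ and $BIC = k\ln(n) - 2\ln L$, one extra parameter at the same maximized log-likelihood (a hypothetical value below) costs a flat 2 under AIC but $\ln(n)$ under BIC:

```python
import numpy as np

def aic(k, log_lik):
    return 2 * k - 2 * log_lik

def bic(k, log_lik, n):
    return k * np.log(n) - 2 * log_lik

# Same (hypothetical) maximized log-likelihood, one extra parameter,
# at a small and a large sample size: AIC's charge is constant,
# BIC's charge grows with n.
for n in (20, 20_000):
    print(n,
          aic(3, -50.0) - aic(2, -50.0),
          bic(3, -50.0, n) - bic(2, -50.0, n))
```

At $n = 20$ the BIC charge is about 3; at $n = 20{,}000$ it is about 9.9 — which is why BIC tends to pick smaller models than AIC in large samples.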
29,940
One sentence explanation of the AIC for non-technical types
How about: AIC helps you find the best-fitting model that uses the fewest variables. If that is too far in the non-technical direction, let me know in comments and I'll come up with another.
29,941
One sentence explanation of the AIC for non-technical types
AIC is a measure of how well the data is explained by the model corrected for how complex the model is.
29,942
One sentence explanation of the AIC for non-technical types
The flip side of @gung's excellent answer: The AIC is a number that measures how well a model fits a dataset, on a sliding scale that requires more elaborate models to be significantly more accurate in order to rate more highly. EDIT: The AIC is a number that measures how well a model fits a dataset, on a sliding scale that requires models that are significantly more elaborate or flexible to also be significantly more accurate.
29,943
One sentence explanation of the AIC for non-technical types
Let k be the number of parameters of a model and MaxL be the value of the likelihood function at its maximum. Then the Akaike Information Criterion is defined as $AIC=2k-2\ln\left(MaxL\right)$. The aim is to find a model which minimizes the AIC. Given this definition, the AIC is a criterion used to choose the model which yields the best compromise between sparsity in the number of parameters and the maximum likelihood for the estimation of those parameters.
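The definition can be applied directly to a least-squares fit, where the Gaussian likelihood maximized over the error variance has a closed form. The toy data and polynomial degrees below are hypothetical; the variance parameter would add 1 to each $k$, but that offset cancels in a comparison:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)  # hypothetical toy data

def aic(y, y_hat, k):
    # AIC = 2k - 2 ln(MaxL), with the Gaussian likelihood maximized
    # over the error variance (standard closed form for least squares)
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    max_log_lik = -0.5 * n * (np.log(2.0 * np.pi * rss / n) + 1.0)
    return 2 * k - 2 * max_log_lik

linear = np.polyval(np.polyfit(x, y, 1), x)  # k = 2 (slope, intercept)
cubic = np.polyval(np.polyfit(x, y, 3), x)   # k = 4
print(aic(y, linear, 2), aic(y, cubic, 4))   # the smaller value wins
```

The cubic fits the training data a little better, but its extra parameters must buy enough log-likelihood to offset the $2k$ penalty.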
29,944
Can we say 50% of data will be between 25th-75th percentile?
Yes. 75% of your data are below the 75th percentile. 25% of your data are below the 25th percentile. Therefore, 50% (=75%-25%) of your data are between the two, i.e., between the 25th and the 75th percentile. Completely analogously, 98% of your data are between the 1st and the 99th percentile. And the bottom half of your data, again 50%, are below the 50th percentile. These statements may not hold exactly, especially if you have only a small number of data points. Note also that there are different conventions on how quantiles and percentiles are actually computed.
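This is easy to verify on a larger sample; the normal distribution below is an arbitrary choice, since the statement holds for any data:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(size=10_000)  # any distribution works; normal is arbitrary

q25, q75 = np.percentile(data, [25, 75])
inside = np.mean((data >= q25) & (data <= q75))
print(inside)  # very close to 0.5
```

With a small sample, or with many tied values, the fraction can deviate noticeably from exactly 0.5.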
29,945
Can we say 50% of data will be between 25th-75th percentile?
Ideally, yes. Percentiles are usually interpreted in terms of the normal distribution (as normality is often an underlying, sometimes unstated, assumption when computing any sort of elementary statistical measures). The distribution does not have to be normal, however. According to this website... The standard normal distribution can also be useful for computing percentiles. For example, the median is the 50th percentile, the first quartile is the 25th percentile, and the third quartile is the 75th percentile. In some instances it may be of interest to compute other percentiles, for example the 5th or 95th. The formula below is used to compute percentiles of a normal distribution: $X = \mu + Z \sigma$ So, if we assume normality, we can easily compute any percentile we are looking for. Percentiles require no distributional assumptions, however, and are bound to the data from which they are computed. This means that percentiles can provide meaningful benchmarks for both normal and non-normal distributions. You may also use percentiles in a probability interpretation, of course based on the measurements you currently have, which could be good or bad indicators of the true underlying distribution. According to this site... Direct interpretation: consider the 10th ($P_{10}$) and 90th ($P_{90}$) percentiles: "given the available data, we know that soil property $p < P_{10}$ 10% of the time, and, $p < P_{90}$ 90% of the time". This same statement can be framed using probabilities or proportions: "given the available data, soil property $p$ is within the range of {$P_{10} − P_{90}$} 80% of the time".
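The formula $X = \mu + Z\sigma$ can be evaluated with the standard library's NormalDist; the values of $\mu$ and $\sigma$ below are hypothetical example choices:

```python
from statistics import NormalDist

# X = mu + Z * sigma, with Z the standard-normal quantile.
# mu and sigma are hypothetical example values.
mu, sigma = 100.0, 15.0
z75 = NormalDist().inv_cdf(0.75)   # standard-normal 75th-percentile z, ~0.674
p75 = mu + z75 * sigma
print(p75)
# equivalently, ask the scaled distribution directly:
print(NormalDist(mu, sigma).inv_cdf(0.75))
```

Both lines agree, since `inv_cdf` on the scaled distribution applies exactly the $\mu + Z\sigma$ transformation.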
29,946
Does a statistic used as a measure becomes invalid after it's reported on? [closed]
You might be thinking of Goodhart's Law. It is named after economist Charles Goodhart and was stated by Marilyn Strathern in the form: "When a measure becomes a target, it ceases to be a good measure." in ‘Improving ratings’: audit in the British University system.
29,947
Does a statistic used as a measure becomes invalid after it's reported on? [closed]
I believe you are thinking of the Hawthorne Effect, which describes a situation in which individuals modify an aspect of their behavior in response to their awareness of being observed. Another possibility is Campbell's law, which suggests "The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor." Campbell's law is very similar to Goodhart's law, suggested by @JW. I should add, based on the limited information you provided, that just because you report on a measure doesn't necessarily make it invalid. As an example, if a football coach's performance is measured on Wins and Losses (which are obviously reported on), this measure could be considered valid. What matters is whether a loophole exists in which individuals can 'game' the measurement system, making their performance appear better than it actually is.
29,948
Does a statistic used as a measure becomes invalid after it's reported on? [closed]
What you measure is what you get, so be careful what you measure. In the world of software development, people dread releasing software with bugs. Naturally, you'll want to release the software without any bugs, but how do you measure "success"?

Number of bugs found? -- Team stops reporting bugs, QA finds nitpicky bugs, Dev gets defensive about the process.
Bugs per line of code? -- Introduce lots of lines of code to rig the count even though the extra lines don't do anything.

Ultimately, having a name to put to the human instinct to game a system will probably not help your situation. People tend to fight unnaturally hard when they're told they're wrong (even if they are indeed wrong), but when you stop telling them they're wrong and instead offer help, they tend to work with you. In this case, try to work with the director: "I don't understand how tracking this helps the team, can you explain it to me? What if we tracked X instead? Can we make this a 1-hour team-building event to spitball other ways in which we can achieve your goal?" Sometimes the mere act of trying to explain something to someone who just keeps asking "Why?" and "How?" will expose flaws and get them receptive to new ideas.
29,949
What's a good book or reference for data visualization?
I think that the work of William Cleveland is going to be closer to what you want than that of Tufte. Cleveland wrote two books:

Visualizing Data (1993)
The Elements of Graphing Data (1985)

The first book, in particular, may be what you want. Here is a publisher's description: Visualizing Data is about visualization tools that provide deep insight into the structure of data. There are graphical tools such as coplots, multiway dot plots, and the equal count algorithm. There are fitting tools such as loess and bisquare that fit equations, nonparametric curves, and nonparametric surfaces to data. But the book is much more than just a compendium of useful tools. It conveys a strategy for data analysis that stresses the use of visualization to thoroughly study the structure of data and to check the validity of statistical models fitted to data. The result of the tools and the strategy is a vast increase in what you can learn from your data. The book demonstrates this by reanalyzing many data sets from the scientific literature, revealing missed effects and inappropriate models fitted to data. An even more theoretical book is The Grammar of Graphics by Leland Wilkinson. The description: This book was written for statisticians, computer scientists, geographers, researchers, and others interested in visualizing data. It presents a unique foundation for producing almost every quantitative graphic found in scientific journals, newspapers, statistical packages, and data visualization systems. While the tangible results of this work have been several visualization software libraries, this book focuses on the deep structures involved in producing quantitative graphics from data. What are the rules that underlie the production of pie charts, bar charts, scatterplots, function plots, maps, mosaics, and radar charts? 
Those less interested in the theoretical and mathematical foundations can still get a sense of the richness and structure of the system by examining the numerous and often unique color graphics it can produce. The second edition is almost twice the size of the original, with six new chapters and substantial revision. Much of the added material makes this book suitable for survey courses in visualization and statistical graphics. This book is very theoretical.
29,950
What's a good book or reference for data visualization?
Look at the series of books written by Ed Tufte. They are discussed by Wikipedia in the article https://en.wikipedia.org/wiki/Edward_Tufte. The Visual Display of Quantitative Information. 1983; Second Edition 2001. Cheshire, CT: Graphics Press. Envisioning Information. 1990. Cheshire, CT: Graphics Press. Visual Explanations: Images and Quantities, Evidence and Narrative. 1997. Cheshire, CT: Graphics Press. Beautiful Evidence. 2006. Cheshire, CT: Graphics Press. Seeing with Fresh Eyes. 2020. Cheshire, CT: Graphics Press.
29,951
What's a good book or reference for data visualization?
At the risk of being crucified, I would advise against Tufte, Wilkinson, Cleveland etc. and all other classics if you're just starting out. The reason is the following objective laid out by you (emphasis added): I'm looking for some references on creating effective graphs/data visualizations. So even though you don't explicitly want language-dependent books/tutorials, you want your knowledge to be applied rather than an abstract theoretical exercise over coffee. Starting with what I call the classics is like reading Shakespeare because you want your language to be more eloquent. The discussions in the books are excellent for laying the foundations to understand effective data visualization; but considering the technological advancements up to today, the books aren't much help in developing the applied bent of mind (The Grammar of Graphics by Wilkinson being the slight exception because of its relevance to ggplot2, but in that case I would advise reading the works of Hadley Wickham, the package author, instead). Some good resources you could look at are FlowingData (Nathan Yau), Perceptual Edge (Stephen Few) and Storytelling with Data (Cole Knaflic), along with the books by the blog authors. The reasons are as follows: these works already encompass the research from the classics; the language is less academic and easier to understand; and the regularly updated blogs act as supplemental material to the books. It's a pity Aaron Koblin hasn't published any books about his unique take on large data visualizations. I do not discount how useful Tufte, Cleveland and Wilkinson's work is, but after toiling through a few of them and still only being marginally better at modern data visualization tools, Stephen Few's "Show Me the Numbers" was like a light switch going on.
29,952
What's a good book or reference for data visualization?
It depends strongly on the language you prefer. As I am not using Python for data visualisation frequently, I can only recommend books relating to data visualisation in R. After writing this post I reread your question: Nr. 1, Nr. 2 and maybe Nr. 4 might be the most theoretical. Though Nr. 6 also explains theoretical aspects, it is specialised in visualising unsupervised machine learning techniques.
1. R Graphics by Paul Murrell. The author Paul Murrell played a significant part in developing the graphics of the R language; he developed the grid graphics system, on which the ggplot2 library is built. The book is rather advanced, although you do not necessarily need a lot of prior knowledge, and pretty theoretical. It is the best book for people who genuinely want to understand the concepts of data visualisation in R, but I do not recommend it for beginners.
2. HTML Widgets. A must for interactive data visualisation. Various JavaScript libraries are translated into and adapted to R. You can include most widgets in Shiny, R Markdown (rendered as HTML) or the console. My favourite HTML widgets are Plotly (an interactive data visualisation library which is also available for various other languages such as Python and Matlab), Leaflet (interactive visualisations with maps), dygraphs (which offers a broad variety of interactive time series visualisations) and DT (written by Yihui Xie of RStudio, who also wrote the knitr and bookdown packages; prolific for showing tables).
3. Guide to create beautiful graphics in R. This book is rather beginner friendly. Its examples are primarily shown in ggplot2. When I started learning advanced data visualisation techniques in R I primarily used this one and the official ggplot2 website.
4. The official ggplot2 website. The best starting point to learn ggplot2, but it can appear overwhelming if you are not willing to be passionate and if you don't have a lot of time. ggplot2 is awesome, but it can have a steep learning curve, e.g. you cannot write the "+" at the beginning of a line. All theoretical concepts are also explained.
5. Official Shiny gallery. Shiny is the most used R library for building apps with R. It can be substituted by BI tools like Tableau or QlikView. shinyjs is a great extension of Shiny which combines Shiny with JavaScript, but you can also include HTML, CSS and JavaScript on your own.
6. Cluster Analysis in R. This book comes from the same authors as the guide to beautiful graphics (Nr. 3). It is a specialised book for visualising unsupervised machine learning techniques, particularly clustering.
7. Easy tutorials, in case you are just starting to visualise and I overwhelmed you a little bit.
29,953
What's a good book or reference for data visualization?
Several answers here ignore the request in the question "I'm looking for a reference that explains different types of charts with respect to stats/math. I want more theory than process." In particular, books by Few, Knaflic and Yau are often weak (and even sometimes quite incorrect) in linking their discussion to statistical principles. Antony Unwin's Graphical Data Analysis with R (see the publisher's site) has much more on the logic of statistical graphics than many of the books mentioned in other answers. Its use of R as a vehicle need not be a disadvantage to people who mostly or wholly use some other software. I am one such person, but I have found the discussion rich and challenging. Even when I disagree with the author on some details, it is worth working out why. This is a 2015 book: while I suspect that the R details may be a little out of date, otherwise it wears very well and bears repeated consultation and reflection.
29,954
What's a good book or reference for data visualization?
R for Data Science by Garrett Grolemund and Hadley Wickham. Top 50 ggplot2 Visualizations. The R Graph Gallery. r4stats.com
29,955
Why can't I calculate 1.5 standard deviations using basic math?
The reason that we cannot (linearly) interpolate between 0.3413 and 0.4772 is because the pdf of the Normal distribution is not uniform (flat at a single value). Consider this simpler example, where we can use geometry to find the areas. The total area of the plot is 1 (it's a square cut diagonally, with the two pieces rearranged to be a triangle). Using Base*Height/2 we can find that the area of region A is 0.5, and the total area of regions B and C is also 0.5. But the areas of B and C are not equal. The area of region C is 0.5*0.5/2 = 0.125, and therefore the area of region B is 0.375. So even though regions B and C are equally wide along the x-axis, since the height is not constant, they have different areas. The Normal distribution that you are dealing with in your exercise is similar, but with a more complicated function for the height instead of a simple triangle. Because of this, the area between two values can't be solved as simply - hence the use of Z-scores and a table to find probabilities.
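To see how far off linear interpolation is for this specific exercise, the sketch below (standard library only; the helper name `phi` is just illustrative) computes the table's areas from the error function and compares the naive halfway guess with the true area between 0 and 1.5 standard deviations:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Areas from 0 to z -- the values a Z-table reports
a1 = phi(1) - 0.5      # area from 0 to 1 SD, ~0.3413
a2 = phi(2) - 0.5      # area from 0 to 2 SD, ~0.4772

# Naive linear interpolation: add half of the 1-to-2 SD area
naive = a1 + (a2 - a1) / 2

# True area from 0 to 1.5 SD
exact = phi(1.5) - 0.5

print(round(naive, 4), round(exact, 4))  # 0.4093 vs 0.4332
```

The gap (about 0.024) is exactly the effect described above: more of the 1-to-2 SD area sits in the left half, where the curve is taller.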
29,956
Why can't I calculate 1.5 standard deviations using basic math?
Just to provide a different illustration on the same topic... In your initial calculation you would be treating the normal curve as a uniform distribution, in which case your initial approach would be the correct mathematical calculation for the double hatched rectangle in the plot below (with different actual values), simply because you'd be able to express the area as a simple linear function of the distance along the $x$ axis: $A_{1.5\,SD} = \frac{A_{2\,SD} - A_{1\,SD}}{2} = height \times \frac{X_{2\,SD} - X_{1\,SD}}{2}$. But you want to calculate the diagonally hatched area under the curve of the Gaussian distribution, which as stated before wouldn't keep a linear relationship with the distance along the $x$ axis even if the distribution was triangular.
29,957
Why can't I calculate 1.5 standard deviations using basic math?
The formula for the Gaussian distribution is $$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}},$$ where $\sigma$ is the standard deviation and $\mu$ is the mean (taken from Wikipedia). When you are asking for the area, you are integrating this function over the range specified. This integral does not have a "closed form" solution: there is no way to come up with an expression using "normal" math functions like factorial, multiplication, exponentiation, roots, etc. that equals that integral. It's just like logarithms or trigonometric functions: you can't produce a closed-form equation for them using other algebraic functions (you can use infinite series, but that's not "closed"). So you use a table (if you are feeling retro) or a calculator (which simply uses a table embedded in its processor as a starting point) when you need to actually calculate it. In fact, the parallel with logarithms is quite apt: one can also define a logarithm by an integral, namely $\ln(x) = \int_1^x \frac{1}{t}\,dt$.
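Even without a closed form, the integral is easy to evaluate numerically. As a sketch (standard library only; the step count and function names are illustrative), the snippet below integrates the pdf from 0 to 1.5 with a simple midpoint rule and checks the result against the error-function value that tables are built from:

```python
from math import exp, pi, sqrt, erf

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the Gaussian distribution at x."""
    return exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

# Midpoint-rule integration of the standard normal pdf from 0 to 1.5
n = 100_000
a, b = 0.0, 1.5
h = (b - a) / n
area = sum(normal_pdf(a + (i + 0.5) * h) for i in range(n)) * h

# The same quantity via the error function (what a Z-table encodes)
table = 0.5 * erf(1.5 / sqrt(2))

print(round(area, 4), round(table, 4))  # both 0.4332
```

The two agree to many decimal places, which is all a table or calculator is doing behind the scenes.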
29,958
Why can't I calculate 1.5 standard deviations using basic math?
Geometrically, .4772 - .3413 represents the area under the graph between 1 standard deviation and 2 standard deviations. If you split this region half way across horizontally, the part to the left of the split will be the area between 1 and 1.5 standard deviations, as you want. Fine so far. However, when you take (.4772 - .3413) / 2 you get half the area, which is not necessarily what you were looking for, namely the area lying in the left half of the horizontal range. With this graph, that left part of the split is more than half of the area: the line slopes downward (from the top left to the bottom right), so there's more area in the left part than in the right part. If this graph were a straight horizontal line, the area you were splitting would be a rectangle, and half the area really would lie half way across.
29,959
Most efficient order to learn LaTeX, Sweave, Beamer? [closed]
Personally I would start here: http://en.wikibooks.org/wiki/LaTeX That will teach you how to make a document in LaTeX that compiles. Once you've done that I would just start working with Sweave, and learn about figures, graphics, tables etc. as you go, depending on what your needs are (the link above and the marvellous StackExchange sites (LaTeX, Cross Validated, Stack Overflow) should keep you going with all that). Note also that personally I like to have brew: http://cran.r-project.org/web/packages/brew/brew.pdf in my back pocket as well, because it's easier for big loopy bits of code where you want to make 50 million graphs or something like that. Note finally that I was reading about knitr the other day: http://yihui.name/knitr/ which apparently plays nicely with ggplot2. It's pretty similar to Sweave; I will check it out some time myself, haven't got round to it yet. RStudio: http://rstudio.org/ is an absolute delight to use with both Sweave and LaTeX documents, and a brilliant IDE to boot if you don't already use one.
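For orientation, a minimal Sweave file might look like the sketch below (the file name, chunk label and chunk options are illustrative). R code between `<<...>>=` and `@` is executed when the document is woven, and `\Sexpr{}` inlines R results directly into the prose:

```latex
% minimal-example.Rnw -- run "R CMD Sweave minimal-example.Rnw",
% then compile the resulting .tex with pdflatex
\documentclass{article}
\begin{document}

<<summary-stats, echo=TRUE, fig=TRUE>>=
x <- rnorm(100)   % this R chunk runs at weave time
hist(x)
@

The mean of our sample is \Sexpr{round(mean(x), 2)}.

\end{document}
```

Once this two-step compile cycle feels routine, everything else (figures, tables, Beamer) is just more LaTeX around the same chunks.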
29,960
Most efficient order to learn LaTeX, Sweave, Beamer? [closed]
A Relevant Question

As a complement to the excellent answers above, I would also ask: do you really want to learn to use Beamer? The package has a learning curve - perhaps more than any other slide package for LaTeX - so it's worth checking the pros and cons. For me these are...

Pros:
- Almost everybody seems to use it (but does everyone share their source with you? If not, then visual conformity is the only advantage).
- Signals that you know LaTeX and will probably have math in your slides, which may have some cachet. [This is not meant ironically; sometimes it is helpful.]

Cons:
- It's easy to make an article into a talk and vice versa by cutting and pasting.
- You fit a very large amount of text and math on a slide with the defaults.
- Code snippets can be awkward to escape properly.
- The built-in styles almost all encourage large amounts of boilerplate visuals: sequence in slide set, etc.

Things you could argue either way:
- There is a pause command for building up slides line by line (do you like this? I don't).
- Templates are difficult to change, so you mostly end up with the built-in ones.

For these reasons I've always decided against. For me, visually more pleasing and much simpler options include Foiltex or [gasp] LaTeX's own built-in slides class.

Relevance to the Original Question

The relevance of these considerations to the original question is the following: with the tools mentioned above, once one knows how to write the most basic LaTeX document and include pictures in it, there is nothing more to know to be able to make slides. Not only does this leave more time for debugging Sweave, which you'll do a lot of, but it also frees up time to figure out things like xtable, apsrtable and/or the mtable function in memisc, which will turn R model objects into nice LaTeX. These are all worth figuring out before wrestling with a slides package because they are more generally useful.
Most efficient order to learn LaTeX, Sweave, Beamer? [closed]
Although this is not exactly what you have asked for, I recommend you have a look at org-mode, an Emacs mode incorporating all your needs.

Why do I recommend org-mode? (i.e., the pros)
- org-mode allows you to write text and code within one document, emphasizing both parts equally (although I have never used Sweave, I feel its focus is more on code). To this end, org-mode allows for a lot of simplifications when writing text compared to pure LaTeX (e.g., & is & instead of \&, text becomes italic by surrounding it with /, or bold with *). These markup elements will be exported to real LaTeX but make life a lot easier.
- org-mode allows you to export your text not only as LaTeX or Beamer but even as HTML or other formats (e.g., TaskJuggler, ...).
- org-mode can be used for other tasks such as organizing one's life using GTD.
- Emacs is one of the most popular and mature text editors, available for all platforms, and productively used since the late 70s for programming tasks of all sorts. Additionally, there exists a very popular connection to R, ESS, developed by, inter alia, R core members Kurt Hornik and Martin Maechler. When using Emacs you can use it for all tasks, not only Sweave and R integration (that is one reason why some people refer to Emacs as an operating system rather than an editor). Sidenote: Emacs was initially developed by GNU mastermind Richard Stallman.

The cons:
- Instead of only learning one thing at a time, you will have to learn even more things all at once: Emacs (which arguably has a complicated handling), org-mode and LaTeX.
- Installing Emacs, org-mode and ESS can be a hassle. Especially if you (as I) know nothing about Lisp, writing your .emacs file really sucks.

If you want to give it a try (I highly recommend it), there is a very recent paper on org-mode in the Journal of Statistical Software that should get you started.

What I recommend to get started: first try to write your first documents in org-mode and export them as LaTeX or PDFs (i.e., without R). When successful, simply try to add some R code to the document and see how you can export the relevant stuff. I highly recommend obtaining the cheat sheets or reference cards for all of the programs used (Emacs, org-mode, LaTeX, TeX and ESS). Furthermore, a basic understanding of LaTeX as pointed at by Chris Beely (wikibooks) definitely helps a lot, too.

My current setup is that I usually work with three buffers in parallel: one org-mode buffer with the document, one ESS-mode R script to keep code and try out different things, and one R console accessible from both scripts. This works really great.

Some stuff that I like to use:
- ESS uses Shift-ENTER and other shortcuts
- counting words in an org-mode buffer (without counting code)
- cua mode for Emacs (to use normal copy shortcuts)
- finding and highlighting instances of words using incremental search: C-s C-w C-s
- stackoverflow (tags ess and org-mode) and tex.stackexchange
Most efficient order to learn LaTeX, Sweave, Beamer? [closed]
You should definitely learn some LaTeX before starting on beamer. How much LaTeX you want to learn before adding Sweave (or while learning Sweave) depends on what you will do with LaTeX other than write things from R code. LaTeX is huge.
Why the `cooks.distance()` function doesn't detect an obvious outlier?
It's just a simple programming mistake: the row numbers don't correspond to the row names. For example, row number 258, containing the outlier, has row name 262:

> data[258,]
    VeDBA.V13AP  VeDBA.X16
262  0.08008333 0.07891688

In your code, you turn the row names into numbers and use the numbers as if they were row numbers. If you used either the row names directly (i.e., without as.numeric()) or extracted the row numbers, everything would work fine. So either of these options will work (I prefer the second one):

influential_obs <- names(cooksD)[(cooksD >= (4/n))]  # Row names
influential_obs <- which(cooksD >= 4/n)              # Row numbers

And here are the points which your Cook's distance 'test' thinks are outliers:

plot(VeDBA.X16 ~ VeDBA.V13AP, data = data)
points(VeDBA.X16 ~ VeDBA.V13AP, data = data[influential_obs, ], col = "red", pch = 19)
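The same label-versus-position confusion exists in most data tools, not just R. Here is a minimal, hypothetical Python sketch (toy data, not the original data frame) of why a row name like "262" must never be used as a positional index once rows have been dropped:

```python
# Hypothetical sketch: row *names* survive row deletion, row *positions* do not.
row_names = ["1", "2", "5", "262"]    # names kept from the original data frame
values    = [0.10, 0.20, 0.30, 0.08]  # values stored by position

name = "262"
# Wrong: the name read as an integer (262) is not the position (3).
assert int(name) != row_names.index(name)

# Right: translate the name to its position first, then index positionally.
pos = row_names.index(name)
assert values[pos] == 0.08
```

In R the same translation is what which() does for you, which is why the second option above works.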
Why the `cooks.distance()` function doesn't detect an obvious outlier?
I think you might be misunderstanding what Cook's D does. Influential observations are those that, when removed from the sample, cause a noticeable change in the parameter of interest. In this case, it's the slope of the regression line. It would appear from your plot that there is a greater concentration of data points in the lower left quadrant, and so it's feasible that the upper left "outlier" does not sufficiently leverage the slope, whereas the lower right points do because of the relative sparseness of data in that region.
Minimum number of observations for logistic regression?
There is one way to get at a solid starting point. Suppose there were no covariates, so that the only parameter in the model were the intercept. What is the sample size required to allow the estimate of the intercept to be precise enough so that the predicted probability is within 0.1 of the true probability with 95% confidence, when the true intercept is in the neighborhood of zero? The answer is n=96. What if there were one covariate, and it was binary with a prevalence of 0.5? One would need 96 subjects with x=0 and 96 with x=1 to have an upper bound on the margin of error for estimating Prob[Y=1 | X=x] not exceed 0.1. The general formula for the sample size required to achieve a margin of error of $\delta$ in estimating a true probability of $p$ at the 0.95 confidence level is $n = (\frac{1.96}{\delta})^{2} \times p(1-p)$. Set $p = 0.5$ for the worst case.
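The margin-of-error formula is easy to check numerically. A short Python sketch (the function name is mine, not from the answer):

```python
def sample_size(delta, p=0.5, z=1.96):
    """n = (z/delta)^2 * p*(1-p): sample size for margin of error delta
    at 95% confidence (z = 1.96) when the true probability is p."""
    return (z / delta) ** 2 * p * (1 - p)

# Worst case p = 0.5 with margin of error 0.1 gives about 96, as quoted above.
n = sample_size(0.1)
print(round(n, 2))  # 96.04
```

Rounding up to the next whole subject per covariate pattern reproduces the n = 96 starting point in the answer.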
Minimum number of observations for logistic regression?
There isn't really a minimum number of observations. Essentially the more observations you have the more the parameters of your model are constrained by the data, and the more confident the model becomes. How many observations you need depends on the nature of the problem and how confident you need to be in your model. I don't think it is a good idea to rely too much on "rules of thumb" about this sort of thing, but use the all the data you can get and inspect the confidence/credible intervals on your model parameters and on predictions.
Minimum number of observations for logistic regression?
Update: I didn't see the above comment by @David Harris, which is pretty much like mine. Sorry for that. You guys can delete my answer if it is too similar.

I'd second Dikran Marsupial's post and add my two cents. Take into consideration your prior knowledge about the effects that you expect from your independent variables. If you expect small effects, then you will need a huge sample. If the effects are expected to be big, then a small sample can do the job.

As you might know, standard errors are a function of sample size, so the bigger the sample size, the smaller the standard errors. Thus, if effects are small, i.e., near zero, only a small standard error will be able to detect this effect, i.e., to show that it is significantly different from zero. On the other hand, if the effect is big (far from zero), then even a large standard error will produce significant results. If you need some reference, take a look at Andrew Gelman's blog.
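The shrinking-standard-error point can be made concrete with the standard error of a simple proportion, which falls like 1/sqrt(n). This is a hedged stand-in: the standard error of a logistic regression coefficient behaves the same way in n, though its exact formula differs.

```python
import math

def se_prop(p, n):
    # Standard error of a sample proportion: sqrt(p(1-p)/n)
    return math.sqrt(p * (1 - p) / n)

# Quadrupling the sample size halves the standard error.
print(round(se_prop(0.5, 100), 4))  # 0.05
print(round(se_prop(0.5, 400), 4))  # 0.025
```

So detecting an effect half as large requires roughly four times the sample, which is why small expected effects demand huge samples.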
Minimum number of observations for logistic regression?
It seems that to get an acceptable estimation we have to apply the rules that have been examined by other researchers. I agree with the two rules of thumb above (10 observations for each variable, and the formula by Harrell). There is also the further question of whether the data are revealed preference or stated preference: Hosmer and Lemeshow in their book provide a rule for revealed-preference data, and Louviere and Hensher in their book (The methods of Stated preference) provide a rule for stated-preference data.
Normal distribution and monotonic transformations
Very good question. I feel that the answer depends on whether you can identify the underlying process that gives rise to the measurement in question. If, for example, you have evidence that height is a linear combination of several factors (e.g., height of parents, height of grandparents, etc.), then it would be natural to assume that height is normally distributed. On the other hand, if you have evidence, or perhaps even theory, that the log of height is a linear combination of several variables (e.g., log of parents' heights, log of grandparents' heights, etc.), then the log of height will be normally distributed.

In most situations, we do not know the underlying process that drives the measurement of interest. Thus, we can do one of several things:

(a) If the empirical distribution of heights looks normal, then we use the normal density for further analysis, which implicitly assumes that height is a linear combination of several variables.

(b) If the empirical distribution does not look normal, then we can try some transformation as suggested by mbq (e.g., log(height)). In this case we implicitly assume that the transformed variable (i.e., log(height)) is a linear combination of several variables.

(c) If (a) or (b) do not help, then we have to abandon the advantages that the CLT and an assumption of normality give us and model the variable using some other distribution.
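The additive-versus-multiplicative distinction can be simulated. In this illustrative Python sketch (the factor distribution and sizes are my own choices), a quantity built as a product of many small positive factors is lognormal, so its log, being a sum of many independent terms, looks approximately normal:

```python
import math
import random
import statistics

random.seed(1)

# Each observation is a product of 50 independent positive factors,
# the multiplicative analogue of "a linear combination of several factors".
xs = [math.prod(random.uniform(0.9, 1.1) for _ in range(50))
      for _ in range(4000)]

# Taking logs turns each product into a sum, so log(x) is roughly normal:
# symmetric around its mean rather than right-skewed like x itself.
logs = [math.log(x) for x in xs]
print(round(statistics.mean(logs), 2), round(statistics.stdev(logs), 2))
```

Plotting a histogram of xs versus logs (e.g., with matplotlib) shows the skewed raw scale against the symmetric log scale.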
Normal distribution and monotonic transformations
The rescaling of a particular variable should, when possible, relate to some comprehensible scale, because that helps make the resulting model interpretable. However, the resulting transformation need not carry a physical significance. Essentially you have to make a trade-off between the violation of the normality assumption and the interpretability of your model.

What I like to do in these situations is have the original data, the data transformed in a way that makes sense, and the data transformed in a way that is most normal. If the results for the sensibly transformed data are the same as the results for the most-normal transformation, I report them in the interpretable form, with a side note that the results are the same for the optimally transformed (and/or untransformed) data. When the untransformed data behave particularly poorly, I conduct my analyses with the transformed data but do my best to report the results in untransformed units.

Also, I think you have a misconception in your statement that "quantities that occur in nature are normally distributed". This only holds true in cases where the value is determined by the additive effect of a large number of independent factors. That is, means and sums are normally distributed regardless of the underlying distribution from which they draw, whereas individual values are not expected to be normally distributed. As an example, individual draws from a binomial distribution do not look at all normal, but a distribution of the sums of 30 draws from a binomial distribution does look rather normal.
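The closing binomial example is easy to reproduce. A Python sketch with only the standard library (parameters are mine, and I read the answer's "binomial draws" as individual Bernoulli trials):

```python
import random
import statistics

random.seed(0)

# Individual Bernoulli(0.3) draws take only the values 0 and 1 -- nothing
# like a bell curve -- yet sums of 30 such draws pile up around 30*0.3 = 9
# with a roughly normal, bell-shaped spread.
sums = [sum(random.random() < 0.3 for _ in range(30)) for _ in range(5000)]

print(round(statistics.mean(sums), 1))   # close to 9
print(round(statistics.stdev(sums), 1))  # close to sqrt(30*0.3*0.7) ~ 2.5
```

A histogram of sums (say, with collections.Counter) makes the bell shape visible even though each underlying draw is binary.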
Normal distribution and monotonic transformations
I must admit that I do not really understand your question:

- Your raindrops example is not very satisfying, since it does not illustrate the fact that Gaussian behaviour comes from the "average of a large number of iid random variables".
- If the quantity $X$ that you are interested in is an average $\frac{Y_1+\ldots+Y_N}{N}$ that fluctuates around its mean in a Gaussian way, you can also expect that $\frac{f(Y_1)+\ldots+f(Y_N)}{N}$ has Gaussian behaviour.
- If the fluctuations of $X$ around its mean are approximately Gaussian and small, then so are the fluctuations of $f(X)$ around its mean (by Taylor expansion).
- Could you cite some true examples of (real-life) Gaussian behaviour coming from averaging? This is not very common!

Gaussian behaviour is often used in statistics as a first rough approximation because the computations are very tractable. As physicists use the harmonic approximation, statisticians use the Gaussian approximation.
Normal distribution and monotonic transformations
Vipul, you're not being totally precise in your question. This is typically justified using the central limit theorem, which says that when you average a large number of iid random variables, you get a normal distribution. I'm not entirely sure this is what you're saying, but keep in mind that the raindrops in your example are not iid random variables. The mean calculated by sampling a certain number of those raindrops is a random variable, and when the means are calculated using a large enough sample size, the distribution of that sample mean is approximately normal. The law of large numbers says that the value of the sample mean converges to the average value of the population (strongly or weakly, depending on the type of convergence). The CLT says that the sample mean, call it XM(n), which is a random variable, has a distribution, say G(n). As n approaches infinity, that distribution converges to the normal distribution. The CLT is all about convergence in distribution, which is not a basic concept. The observations you draw (diameter, area, volume) don't have to be normal at all. They probably won't be if you plot them. But the sample mean of each of the three quantities will have an approximately normal distribution. And the mean volume won't be the cube of the mean diameter, nor will the mean area be the square of the mean diameter. The square of the sums is not going to be the sum of the squares, unless you get oddly lucky.
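The CLT claim can be illustrated with a small simulation (my addition): individual Exp(1) draws are heavily skewed, but the mean of n = 50 draws is approximately N(1, 1/50), so roughly 68.3% of sample means should fall within one sigma of 1.

```python
# Sketch: empirical 1-sigma coverage of sample means of exponential draws,
# compared with the normal value of about 0.683.
import math
import random
import statistics

random.seed(1)

n, reps = 50, 4000
mu, sigma = 1.0, 1.0 / math.sqrt(n)  # CLT: mean of n Exp(1) draws ~ N(1, 1/n)

means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
         for _ in range(reps)]
coverage = sum(mu - sigma < m < mu + sigma for m in means) / reps
print(coverage)  # should be close to 0.683
```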
Normal distribution and monotonic transformations
Simply put, the CLT (like any other theorem) does not state that every quantity in the universe is normally distributed. Indeed, statisticians often use monotonic transformations to improve normality, so that they can use their favorite tools.
Normal distribution and monotonic transformations
I think you misunderstood (half of) the use statisticians make of the normal distribution, but I really like your question. I don't think it is a good idea to assume normality systematically, and I admit it is sometimes done (maybe because the normal distribution is tractable, unimodal ...) without verification. Hence your remark about monotonic maps is excellent! However, the powerful use of normality comes when you construct new statistics yourself, such as the one that appears when you apply the empirical counterpart of the expectation: the empirical mean. Hence the empirical mean, and more generally smoothing, is what makes normality appear everywhere...
Normal distribution and monotonic transformations
Both a random variable and many transformations of it can be approximately normal; indeed, if the variance is small compared to the mean, it can be that a very wide variety of transformations look pretty normal.
a <- rgamma(10000, 1000, 1000)
hist(a)
hist(1/a)
hist(a^2)
hist(a^(3/2))
Conditioning a variable on itself and some other variable
What may be tripping you up here is a common imprecision in notation, where people (myself included) will use the same symbol to denote both a random variable, and a particular assignment or instantiation of that variable. I wonder if things will become clearer to you if we rewrite your expectation more precisely: $$ E(X|X=x,Y=y) $$ where $x$ and $y$ are the values of $X$ and $Y$ that we condition on. That is, we're calculating the expected value of $X$ given that we know that the random variable $X$ has value $x$, and $Y$ has value $y$. Hang on, you might say, we already know the value of $X$? Exactly. So the expected value is very simple: it is the value of $X$ that we already know: $$ E(X|X=x,Y=y)=x $$ And obviously $Y$ becomes irrelevant - since we already know $X$ there is no information that any other variable can give us about its value. (This may seem a little silly, because $x$ is still a placeholder for an unknown value in this equation, but at the same time it represents a "known" value of $X$. As is typical in maths, we're using variables as stand-ins for values that we could fill in. It just gets a little more gnarly when you're dealing with random variables, which are not only unknown, but do not have a definite value. $X$ here is the random variable, which is the outcome of a random phenomenon (e.g. the roll of a die). $X$ has a distribution, expected values, etc. $x$ is a particular value taken by $X$, and does not have a distribution - it just represents that particular value.)
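A toy numeric version of $E(X|X=x,Y=y)=x$ (my construction): among simulated outcomes where $X$ happens to equal 4 and $Y$ equals "a", the average of $X$ is trivially 4, and $Y$ contributes nothing.

```python
# Simulate (die roll, letter) pairs, then average X within the cell
# where X = 4 and Y = "a"; the conditional mean of X is exactly 4.
import random

random.seed(2)
pairs = [(random.randint(1, 6), random.choice("ab")) for _ in range(10_000)]

x, y = 4, "a"
cell = [xi for xi, yi in pairs if xi == x and yi == y]
print(len(cell), sum(cell) / len(cell))  # conditional mean is exactly 4.0
```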
Conditioning a variable on itself and some other variable
It is conditional expectation (not probability), and $E[X|X,Y]=X$ because $X$ is already given.
Logistic function with a slope but no asymptotes?
You could just add a term to a logistic function: $$ f(x; a, b, c, d, e)=\frac{a}{1+b\exp(-cx)} + dx + e $$ The asymptotes will have slopes $d$. Here is an example with $a=10, b = 1, c = 2, d = \frac{1}{20}, e = -5$:
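A quick sketch of this function with the example parameters from the answer ($a=10, b=1, c=2, d=\frac{1}{20}, e=-5$); far out in either tail the logistic part flattens and the numerical slope approaches $d$.

```python
# f(x) = a / (1 + b exp(-c x)) + d x + e; check tail slopes numerically.
import math

def f(x, a=10.0, b=1.0, c=2.0, d=1 / 20, e=-5.0):
    return a / (1.0 + b * math.exp(-c * x)) + d * x + e

h = 1e-3
left = (f(-50.0 + h) - f(-50.0)) / h
right = (f(50.0 + h) - f(50.0)) / h
print(left, right)  # both approximately d = 0.05
```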
Logistic function with a slope but no asymptotes?
Initially I was thinking you did want the horizontal asymptotes at $0$ still; I moved my original answer to the end. If you instead want $\lim_{x\to\pm \infty} f(x) = \pm\infty$ then would something like the inverse hyperbolic sine work? $$ \text{asinh}(x) = \log\left(x + \sqrt{1 + x^2}\right) $$ This is unbounded but grows like $\log$ for large $|x|$. I like this function a lot as a data transformation when I've got heavy tails but possibly zeros or negative values. Another nice thing about this function is that $\text{asinh}'(x) = \frac{1}{\sqrt{1+x^2}}$, so it has a nice simple derivative.
Original answer:
$\newcommand{\e}{\varepsilon}$Let $f : \mathbb R\to\mathbb R$ be our function and we'll assume $$ \lim_{x\to\pm \infty} f(x) = 0. $$ Suppose $f$ is continuous. Fix $\e > 0$. From the asymptotes we have $$ \exists x_1 : x < x_1 \implies |f(x)| < \e $$ and analogously there's an $x_2$ such that $x > x_2 \implies |f(x)| < \e$. Therefore outside of $[x_1,x_2]$, $f$ is within $(-\e, \e)$. And $[x_1,x_2]$ is a compact interval, so by continuity $f$ is bounded on it. This means that any such function that is unbounded can't be continuous. Would something like $$ f(x) = \begin{cases} x^{-1} & x\neq 0 \\ 0 & x = 0\end{cases} $$ work?
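A quick check (my addition) of the two identities quoted for asinh: the log formula agrees with `math.asinh`, and the stated derivative $1/\sqrt{1+x^2}$ matches a central finite difference.

```python
# Verify asinh(x) = log(x + sqrt(1 + x^2)) and asinh'(x) = 1/sqrt(1 + x^2).
import math

def asinh_log(x):
    return math.log(x + math.sqrt(1.0 + x * x))

for x in (-3.0, 0.0, 0.5, 10.0):
    assert math.isclose(asinh_log(x), math.asinh(x), abs_tol=1e-12)

h = 1e-6
x0 = 2.0
numeric = (math.asinh(x0 + h) - math.asinh(x0 - h)) / (2 * h)
print(numeric, 1.0 / math.sqrt(1.0 + x0 * x0))  # the two agree
```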
Logistic function with a slope but no asymptotes?
I will go ahead and turn the comment into an answer. I suggest $$ f(x) = \operatorname{sign}(x)\log{\left(1 + |x|\right)}, $$ which has slope tending towards zero, but is unbounded. Edit, by popular demand: a plot for $|x|\le 30$ [figure not reproduced].
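A sketch of this transform (my implementation): it is odd, unbounded, and its slope $1/(1+|x|)$ tends to zero.

```python
# f(x) = sign(x) * log(1 + |x|), via copysign and log1p.
import math

def g(x):
    return math.copysign(math.log1p(abs(x)), x)

print(g(0), g(29), g(-29))  # 0, log(30), -log(30)
h = 1e-4
slope_far = (g(1000.0 + h) - g(1000.0)) / h
print(slope_far)  # about 1/1001, nearly flat
```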
What makes a classifier misclassify data? [closed]
Let's assume you are talking about misclassification on training data, i.e., difficulty minimizing the loss on the training data set, with no test-data over-fitting problem involved. You are correct that, in most cases, the misclassification can come from "the model is too simple" or "the data is too noisy". I would like to give two examples to illustrate further.
The model is "too simple" to capture the "patterns in the data". The example is shown in the left figure. Suppose we want to use logistic regression / a line to separate two classes, but the two classes are not linearly separable. In this case, there still are "notable patterns in the data", and if we change the model, we may do better. For example, if we use a KNN classifier instead of logistic regression, we can get very good performance.
The data has too much noise, so it is very hard to do the classification task. The example is shown in the right figure, where, if you check the code, you will see the two classes are very similar (both classes are 2D Gaussian; the means are $0.01\times 2$ apart, but the standard deviation for each class is $1.0$). It is essentially a very challenging task.
Note that the two examples are trivial, since we can visualize the data and the classifier. In the real world, that is not the case, when we have millions of data points and super complicated classifiers.
Code:
library(mlbench)
set.seed(0)
par(mfrow=c(1,2))
d=mlbench.spirals(500)
plot(d)
lg_fit=glm(d$classes~d$x[,1]+d$x[,2]-1,family=binomial())
abline(0,-lg_fit$coefficients[1]/lg_fit$coefficients[2])
d2=mlbench.2dnormals(500,r=0.01)
plot(d2)
What makes a classifier misclassify data? [closed]
In addition to @hxd1011 (+1).
Class imbalance in relative or absolute terms. In both cases we build an inadequate representation of the class of interest. Usually the latter is more difficult to overcome. (Example reference: Learning from Imbalanced Data by He and Garcia)
Improper classification criteria. We train our classifier using an inappropriate evaluation function and/or use inappropriate criteria to derive our final solution. A very common issue when using "canned solutions". (Example reference: Damage Caused by Classification Accuracy and Other Discontinuous Improper Accuracy Scoring Rules by Harrell)
There is no class in reality. We wish there were something there, but really there is nothing. Usually domain expertise steers people away from this, but for a newcomer this is always an issue. (Example reference: our daily life. Publication bias probably is an issue here too...)
Overfitting. We have a decent model and a decent dataset but we fail to train appropriately, building an unrealistic model. Usually this relates to point 2. (Extra points for under-fitting!) (Example reference: The Problem of Overfitting by Hawkins)
Concept drift. Things change and we don't retrain. Our classifier has excellent performance on our "Christmas sales" marketing sample - yeah, using this model in July will probably be a pain... (Example reference: A Survey on Concept Drift Adaptation by Gama et al.)
Data leakage / magic features. We train on information that will be unavailable at prediction time. Common with event/time-series-like data. (Example reference: Leakage in Data Mining: Formulation, Detection, and Avoidance by Kaufman et al.)
Is it the case that the log-likelihood *always* has negative curvature? Why?
Your conclusion doesn't follow: if the expected value of the curvature of the log-likelihood is negative, it is not necessarily everywhere negative. It just needs to be, on average, more negative than positive. Think of a bimodal likelihood: there is indeed a region in between the modes with positively curved log-likelihood, so your claim cannot be true. Note the link with maximum likelihood estimation for intuition: in the neighborhood of the MLE, you may expect the curvature to be negative because you are at a maximum (although not necessarily, e.g. if the maximum occurs on the boundary). If the curvature is negative in the most likely regions, then the average should tend to be negative, intuitively. In fact, it must always be, under the regularity conditions that allow you to use the equivalence with the "variance of the slope" definition, as you point out.
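A numeric illustration (my addition) using the Cauchy location model: the pointwise curvature of the log-likelihood is $\frac{d^2}{d\theta^2}\log f(x;\theta) = \frac{2((x-\theta)^2 - 1)}{(1+(x-\theta)^2)^2}$, which is positive whenever $|x-\theta| > 1$, yet its average over samples from the model is negative (minus the Fisher information, which is $1/2$ for Cauchy).

```python
# Pointwise curvature of the Cauchy log-likelihood can be positive,
# but its expectation under the model is -1/2.
import math
import random

random.seed(3)

def curvature(x, theta=0.0):
    u = x - theta
    return 2 * (u * u - 1) / (1 + u * u) ** 2

assert curvature(3.0) > 0   # positively curved far from theta
assert curvature(0.2) < 0   # negatively curved near theta

# Cauchy(0, 1) draws via the inverse CDF
draws = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(200_000)]
avg = sum(curvature(x) for x in draws) / len(draws)
print(avg)  # close to -0.5
```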
Is it the case that the log-likelihood *always* has negative curvature? Why?
For some classes of likelihood functions, one can prove that the likelihood is log-concave, i.e. that the log-likelihood has second derivatives $\leq 0$ everywhere, which makes life much easier (e.g. you can often prove the existence of unique global maxima, use specialized optimization methods ...). For example:
- this CV question shows that the exponential-family likelihood with the canonical link function is log-concave;
- the paper "Concavity of the Log Likelihood" (Pratt 1981, JASA) proves log-concavity for a class of models with ordinal responses.
There are certainly counterexamples as well (likelihoods that are provably non-log-concave). For example, any log-likelihood that is bi- or multimodal is non-log-concave; e.g.
- "A note on bimodality in the log-likelihood function for penalized spline mixed models", Welham and Thompson 2009, CS&DA
- "Flat and Multimodal Likelihoods and Model Lack of Fit in Curved Exponential Families", Sundberg 2010, Scand J Stat
- "Problems with Likelihood Estimation of Covariance Functions of Spatial Gaussian Processes", Warnes and Ripley 1987, Biometrika
What does that statistically mean , if $(X'X)^{-1}$ does not exist?
By 'does not exist' we mean that the matrix $X^TX$ is not invertible, i.e. its inverse $(X^TX)^{-1}$ does not exist. Usually this relates to the presence of eigenvalues of extremely small magnitude (or zero) in the matrix $X^TX$. This non-invertibility means that the matrix $X^TX$ is rank deficient. A rank-deficient matrix has a column space that does not span the vector space with the same dimensions as your data (think of having a 2D basis but wanting to map 3D points). Rank deficiency usually materialises as a problem in situations where you want to estimate $p$ parameters but your matrix rank $q$ is smaller than $p$. In this case one has an under-determined problem: $q$ equations and $p$ unknowns, where $p>q$. Statistically, we mean that the information needed to solve this problem is simply unavailable. There is already a very good thread on what rank deficiency is and how to deal with it, if you want to follow this up further.
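A tiny pure-Python sketch (my example) of rank deficiency: when one column of $X$ is a multiple of another, $X^TX$ is singular, so $(X^TX)^{-1}$ does not exist. For a $2\times 2$ $X^TX$, singularity shows up as a zero determinant.

```python
# X has a duplicated (scaled) predictor, so rank(X) = 1 < p = 2.
X = [[1, 2],
     [2, 4],
     [3, 6]]  # second column = 2 * first column

# X'X as a 2x2 matrix
xtx = [[sum(row[i] * row[j] for row in X) for j in range(2)]
       for i in range(2)]
det = xtx[0][0] * xtx[1][1] - xtx[0][1] * xtx[1][0]
print(xtx, det)  # determinant is 0: X'X is not invertible
```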
29,986
What does it mean statistically if $(X'X)^{-1}$ does not exist?
In the case of regression, consider the most basic linear model $$Y=X\beta+\varepsilon.$$ The least squares estimator $\hat{\beta}$ must satisfy $$\hat{Y}=X\hat{\beta},$$ where $\hat{Y}$ is the projection of $Y$ onto the space spanned by the columns of $X$. This leads us to the normal equation $$X'X\hat{\beta}=X'Y.$$ If $X$ has full rank then $X'X$ is invertible, so the (unique) solution to the equation is $$\hat{\beta}=(X'X)^{-1}X'Y.$$ However, if $(X'X)^{-1}$ does not exist, the solution to the normal equation is not unique (e.g. any generalized inverse yields a solution).
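The non-uniqueness can be sketched numerically (Python/NumPy; the data are invented). The Moore–Penrose pseudoinverse provides one solution of the normal equations, and adding any multiple of a null-space vector of $X$ gives another solution with identical fitted values:

```python
import numpy as np

# Third column is exactly twice the second, so X is rank deficient
X = np.array([[1.0, 2.0, 4.0],
              [1.0, 3.0, 6.0],
              [1.0, 4.0, 8.0],
              [1.0, 5.0, 10.0]])
y = np.array([1.0, 2.0, 2.0, 3.0])

# One solution of the normal equations: the minimum-norm (pseudoinverse) one
beta1 = np.linalg.pinv(X) @ y

# v lies in the null space of X (2*col2 - col3 = 0), so beta1 + t*v solves
# the normal equations for any t: same fitted values, same residuals
v = np.array([0.0, 2.0, -1.0])
beta2 = beta1 + 3.0 * v

print(np.allclose(X @ beta1, X @ beta2))  # True: identical fits
print(np.allclose(beta1, beta2))          # False: different coefficients
```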
29,987
What does it mean statistically if $(X'X)^{-1}$ does not exist?
To complement the good answers already offered: if you would like a statistical implication of the singularity (or near-singularity) of $\mathbf{X}^{T}\mathbf{X}$, you can think in terms of the variance of the OLS estimator: it explodes and all precision is lost. The confidence limits for the estimators in turn grow extremely large and inference becomes impossible. These implications often lead one to opt for ridge regression instead, as the introduction of a biasing constant makes the inverse more stable and the variances less inflated.
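A minimal sketch of the ridge remedy (Python/NumPy, invented collinear data; the choice $\lambda = 0.1$ is arbitrary, in practice it would be tuned): adding the biasing constant to the diagonal makes $X^TX + \lambda I$ positive definite, hence invertible, even when $X^TX$ is exactly singular.

```python
import numpy as np

# Exactly collinear design: X'X is singular
X = np.array([[1.0, 2.0, 2.0],
              [1.0, 3.0, 3.0],
              [1.0, 5.0, 5.0],
              [1.0, 7.0, 7.0]])
y = np.array([0.5, 1.0, 2.0, 3.0])

XtX = X.T @ X
lam = 0.1  # biasing constant (hypothetical value for illustration)

# Ridge normal equations: (X'X + lam*I) beta = X'y is now solvable
ridge_beta = np.linalg.solve(XtX + lam * np.eye(3), X.T @ y)
print(ridge_beta)

# The smallest eigenvalue moves from ~0 to ~lam, stabilising the inverse
print(np.linalg.eigvalsh(XtX).min())                  # ~0
print(np.linalg.eigvalsh(XtX + lam * np.eye(3)).min())  # ~0.1
```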
29,988
What does it mean statistically if $(X'X)^{-1}$ does not exist?
One extra answer from a more applied perspective. Imagine you want to measure how $Y =$ punching power of a person is related to:
$x_1$ = weight in kilograms of the person
$x_2$ = weight in grams of the person
$x_3$ = height of the person
Since $x_1$ is collinear with $x_2$, the matrix $(X'X)$ won't be invertible. This means the $\hat{\beta}$ solution that gives you the relationship between $Y$ and the $X$s won't really exist: there might be infinitely many solutions, or there might be none. This, in fact, is the "reasonable" answer in this context: it doesn't make sense to assign separate values to $\hat{\beta}_1$ and $\hat{\beta}_2$. You might assign the full effect of weight to $\hat{\beta}_1$, the full effect to $\hat{\beta}_2$, or some arbitrary combination. This is an extreme example, but when the $x$s approach collinearity (and, hence, the determinant of $X'X$ is near zero) you'll have a similar, albeit not as extreme, issue.
29,989
How useful is Minitab in the real world? [closed]
Rexer Analytics does a tool survey every year that you can obtain by e-mailing them. This is not the best data in the world, but Minitab is pretty far down the list, though its users do seem to like it. The rarely-seen-in-the-wild characterization is consistent with the better Muenchen data from job postings and related sources (including the Rexer data), and my own experience in industry research. Based on the above data, I would spend your time learning R, unless there are some industry-specific reasons to focus on SAS. I write this as a heavy and happy Stata user that has never had an employer balk at purchasing a license. An added complication is that most people around you will use Excel for just about everything and you should learn a tool that plays nice with it, as well as being able to query SQL databases.
29,990
How useful is Minitab in the real world? [closed]
Minitab's used a lot in production engineering, quality control, & Six Sigma (& when engineering companies use it in these areas, it may also have become the default statistical software in others). Based on my experience, I reckon graduates worry too much about software (& not enough about other things, especially consulting skills); demonstrated competence in statistical programming is generally important, but lack of familiarity with any particular language/software needed for some job is easily dealt with after starting it. I would say, though, that a Minitab (or SPSS) user, rather than a SAS (or R, or Stata) user, has perhaps to take pains to show that they can do more than point & click to run canned analyses—e.g. writing macros for non-linear regression, or whatever's not in the menus at the moment. Should you invest more time in learning SAS instead? Instead of investing more time in learning Minitab?—probably yes. Instead of investing more time in anything that gives you experience of working with real data on real problems &/or collaborating with domain experts in another field?—probably no.
29,991
How useful is Minitab in the real world? [closed]
Minitab is very popular for reliability and warranty analysis. Used all the time in these areas, especially for litigation purposes. I know a guy who has used Minitab since it was just a command line prompt. What he can do with it in a short time is very impressive. It's hard to say what to invest more time into unless you have a specific area you're interested in. Minitab is relatively easy to pick up compared to R or SAS.
29,992
How useful is Minitab in the real world? [closed]
Google Scholar search (9/15/2014) for:

Program        Hits
SAS            2,610,000
SPSS or PSPP   1,640,000
Stata          1,280,000
Statistica       459,000
JMP              249,000
R and cran        86,500
Minitab           85,800
Systat            73,800
BMDP              45,900
SUDAAN            17,100

That said: there are lots of reasons besides popularity to choose a platform:
performance for specific data set sizes
cost
extensibility
how fast new techniques are released/old techniques are updated
documentation and support
perpetual versus rental license
what your team uses
portability
multi-core support
multi-user support
the nature of the errors in the software or documentation
does it do specifically what you need it to do
interface
&c.
29,993
Is confirmatory vs exploratory statistics "induction vs deduction"?
To add to @Peter Flom's answer, it is worth defining the other terms that were used: Deductive reasoning: Derive conclusions or predictions about specific cases from fundamental rules or theories. Inductive reasoning: Derive universal rules or theories from observation of many cases. Inferential statistics use both inductive and deductive reasoning. You are trying to establish rules about the behaviour of a system based on evidence, but you are testing models against probability theories derived deductively (i.e., probability distributions in parametric models or the combinatorics that are the basis of non-parametric models). Descriptive statistics don't really qualify as "reasoning" in my book. Saying the average of something is x and the standard deviation is s isn't any more of an argument than saying the colour of something is blue. You're describing what you have in front of you, not drawing any conclusions beyond it.
29,994
Is confirmatory vs exploratory statistics "induction vs deduction"?
I don't think either the web-page or your statements are correct. I'd rather stick with more straightforward descriptions: Inferential statistics: Given a sample, what can we say about the population from which it was drawn? Descriptive statistics: Given a sample, what can we say about the sample? Both can be used as part of inductive or deductive reasoning - the type of reasoning is not supplied by the statistics.
29,995
Do we need to report the median or the mean when using a Kruskal-Wallis test?
The Wilcoxon/Kruskal-Wallis test is not for either the mean or median although the median may be closer to what the test is testing. The estimator that is consistent with the test is the Hodges-Lehmann estimator. See http://en.wikipedia.org/wiki/Mann-whitney and http://en.wikipedia.org/wiki/Hodges%E2%80%93Lehmann_estimate . In R you can do the calculations easily - see for example http://biostat.mc.vanderbilt.edu/WilcoxonSoftware .
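For two samples, the Hodges–Lehmann estimate is the median of all pairwise differences $x_i - y_j$ (and for one sample, the median of the Walsh averages $(x_i + x_j)/2$). A sketch in Python/NumPy with made-up numbers, as an alternative to the R tools linked above:

```python
import numpy as np

# Two small made-up samples
x = np.array([1.1, 2.3, 1.9, 3.0])
y = np.array([0.4, 1.2, 0.8])

# Two-sample Hodges-Lehmann estimate of the shift between the samples:
# the median of all n*m pairwise differences x_i - y_j
pairwise_diffs = np.subtract.outer(x, y).ravel()
hl_shift = np.median(pairwise_diffs)
print(hl_shift)   # 1.3

# One-sample version (location of x alone): median of the Walsh
# averages (x_i + x_j)/2 over all pairs i <= j
i, j = np.triu_indices(len(x))
walsh = (x[i] + x[j]) / 2.0
hl_loc = np.median(walsh)
print(hl_loc)     # 2.075
```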
29,996
Do we need to report the median or the mean when using a Kruskal-Wallis test?
The Kruskal-Wallis test is said to test whether the median is the same in every group. According to that simple rule, you should report the median, which is my answer to your question. However, this gives me the occasion to show that the KW test is not really a test of the median. The alternative hypothesis of the test is not that one of the distributions has a different median. It is that one of the distributions has exactly the same shape as the others, but is shifted upwards or downwards. Here is a little R snippet that demonstrates this. I create two samples with the same median (namely 0) and I apply the Kruskal-Wallis test.

set.seed(123)
x <- exp(rnorm(100))
y <- exp(rnorm(100))
x <- x - median(x)
y <- median(y) - y
# Both x and y have median 0.
kruskal.test(list(x, y))

It turns out that the p-value is 0.005676, which seems very low for two samples with exactly the same median. This is because the samples are taken from distributions that are very skewed in opposite directions (the sample x has a heavy tail on the right side and y on the left side). Is the KW test wrong? No. It is right to reject the null hypothesis that the samples are taken from the same distribution. So the conclusion is that you cannot conclude that there is a difference in medians just because you reject the null hypothesis. You might also reject the null hypothesis because of lack of independence, or, as shown in the previous example, because the distributions do not have the same shapes. I think there is no need to mention all this while reporting the median, but these are elements you should have in mind every time you do the test.
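For readers without R, the same demonstration can be sketched in Python with SciPy. Here the skewed "samples" are built deterministically from a grid of normal quantiles rather than drawn at random (an illustrative choice, so the result is exactly reproducible): both samples have median 0, yet the test still rejects because the shapes differ.

```python
import numpy as np
from scipy import stats

# Deterministic lognormal "samples" from a grid of normal quantiles,
# then centred so each sample has median exactly 0
grid = (np.arange(1, 101) - 0.5) / 100.0
z = stats.norm.ppf(grid)
x = np.exp(z)
x = x - np.median(x)                   # right-skewed, median 0
y = np.median(np.exp(z)) - np.exp(z)   # mirror image: left-skewed, median 0

stat, p = stats.kruskal(x, y)
print(np.median(x), np.median(y))   # both 0
print(p)                            # well below 0.05: the test rejects
```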
29,997
Frequentist reasoning and conditioning on observations (example from Wagenmakers et al.)
There's some intricate cheating involved. The confidence interval $(s,l)$ does not use the information that the range of the uniform is 1, and is thus non-parametric, while the claim made about the sample with $l-s=0.9$ does, and is highly model-dependent. I am pretty sure one can improve either the coverage or the (expected) length of the confidence interval if this information is taken into account. For one thing, the end points of the distribution are at most $1-(l-s)$ away from either $s$ or $l$. Hence, a 100% confidence interval for $\mu$ is $(l-1/2, s+1/2)$. This particular problem falls into the domain of inference for partially identified distributions, studied extensively in the last 10-15 years in theoretical econometrics. Likelihood, and hence Bayesian, inference for the uniform distribution is ugly, since it constitutes a non-regular problem (the support of the distribution depends on the unknown parameter).
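The point about the 100% interval is easy to check by simulation (Python/NumPy sketch; the value of $\mu$ is arbitrary): since every observation lies within $1/2$ of $\mu$, the interval $(l-1/2,\ s+1/2)$ contains $\mu$ in every sample.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 3.7          # arbitrary true mean for the demonstration
n_reps = 10_000
covered = 0
for _ in range(n_reps):
    x = rng.uniform(mu - 0.5, mu + 0.5, size=2)
    s, l = x.min(), x.max()
    # Every observation is within 1/2 of mu, so (l - 1/2, s + 1/2)
    # must contain mu
    if l - 0.5 < mu < s + 0.5:
        covered += 1
print(covered / n_reps)   # 1.0: coverage in every single replication
```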
29,998
Frequentist reasoning and conditioning on observations (example from Wagenmakers et al.)
I'm hesitant to answer this. These Frequentist vs. Bayesian spats are generally unproductive, and can be nasty and juvenile. For what it's worth, Wagenmakers is kind of a big deal, whereas largely forgotten 3k+ year old Chinese philosophers on the other hand... However, I would argue that the standard Frequentist interpretation of a 50% confidence interval is not that you should be 50% confident the true value lies within the interval, or that there is a 50% probability that it does. Rather, the idea is simply that, if the same process were repeated indefinitely, the percentage of CI's that included the true value would converge to 50%. For any given single interval, however, the probability that it includes the true value is either 0 or 1, but you don't know which.
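This long-run reading can be checked directly (Python/NumPy sketch): with $n=2$, the interval $(s,l)$ covers $\mu$ exactly when the two observations straddle it, which happens with probability $1 - 2\cdot(1/2)^2 = 1/2$.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 0.0
n_reps = 100_000
x = rng.uniform(mu - 0.5, mu + 0.5, size=(n_reps, 2))
s = x.min(axis=1)
l = x.max(axis=1)

# Each single interval either covers mu or it doesn't; the long-run
# proportion of covering intervals converges to 50%
coverage = np.mean((s < mu) & (mu < l))
print(coverage)   # ~0.5
```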
29,999
Frequentist reasoning and conditioning on observations (example from Wagenmakers et al.)
I think it is a weak argument for a strong case. $(s,l)$ may be a 50% confidence interval in the sense defined, but so too is $\left(\dfrac{3l+s-1}{4},\dfrac{3s+l+1}{4}\right)$, and I think the latter can be justified as being a better one in these circumstances, as it extends without further adjustment to larger sample sizes; note also that the latter confidence interval is never wider than $\frac12$ and its expected width for a sample of size $n$ is $\frac{1}{n+1}$.
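Both claims can be checked by simulation (Python/NumPy sketch with $n=2$ and an arbitrary $\mu$): the alternative interval also has 50% coverage, its width is $(1-(l-s))/2 \leq 1/2$, and its mean width is $1/(n+1) = 1/3$.

```python
import numpy as np

rng = np.random.default_rng(2)
mu = 0.0
n_reps = 100_000
x = rng.uniform(mu - 0.5, mu + 0.5, size=(n_reps, 2))
s = x.min(axis=1)
l = x.max(axis=1)

# The alternative interval ((3l+s-1)/4, (3s+l+1)/4): its midpoint is the
# midrange (s+l)/2 and its half-width is (1-(l-s))/4
lo = (3 * l + s - 1) / 4
hi = (3 * s + l + 1) / 4
coverage = np.mean((lo < mu) & (mu < hi))
width = hi - lo

print(coverage)      # ~0.5: also a 50% confidence interval
print(width.max())   # < 0.5: never wider than 1/2
print(width.mean())  # ~1/3: expected width 1/(n+1) for n = 2
```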
What is the meaning of an F value less than 1 in one-way ANOVA?
The F ratio is a statistic. When the null hypothesis of no group differences is true, the expected value of the numerator and the denominator of the F ratio will be equal. As a consequence, the expected value of the F ratio when the null hypothesis is true is also close to one (not exactly one, because of the properties of expected values of ratios).

When the null hypothesis is false and there are differences between the group means, the expected value of the numerator will be larger than that of the denominator. As such, the expected value of the F ratio will be larger than under the null hypothesis, and the observed F ratio will also more likely be larger than one.

However, the point is that both the numerator and denominator are random variables, and so is the F ratio. The F ratio is drawn from a distribution. If we assume the null hypothesis is true we get one distribution, and if we assume that it is false, with various assumptions about effect size, sample size, and so forth, we get another distribution. We then do a study and get an F value.

When the null hypothesis is false, it is still possible to get an F ratio less than one. The larger the population effect size is (in combination with sample size), the more the F distribution will move to the right, and the less likely we will be to get a value less than one.

The following graphic, extracted from the G*Power 3 program, demonstrates the idea under various assumptions. The red distribution is the distribution of F when H0 is true. The blue distribution is the distribution of F when H0 is false given various assumptions. Note that the blue distribution does include values less than one, yet such values are very unlikely.
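The same point can be made with a quick simulation instead of the theoretical F distributions. This is a sketch with made-up settings (three groups of 20, unit-variance normal data): under H0 an F value below one is quite common, while under a fairly large effect it becomes rare, but not impossible.

```python
import random

random.seed(1)

def f_stat(groups):
    # one-way ANOVA F ratio computed from first principles
    k = len(groups)
    n = sum(len(g) for g in groups)
    means = [sum(g) / len(g) for g in groups]
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def prop_f_below_one(group_means, n_per_group=20, n_sim=2000):
    # fraction of simulated studies whose F ratio falls below one
    below = 0
    for _ in range(n_sim):
        groups = [[random.gauss(m, 1.0) for _ in range(n_per_group)]
                  for m in group_means]
        if f_stat(groups) < 1:
            below += 1
    return below / n_sim

p_null = prop_f_below_one([0.0, 0.0, 0.0])   # H0 true: F < 1 is common
p_alt  = prop_f_below_one([0.0, 0.5, 1.0])   # H0 false: F < 1 is rare
print(p_null, p_alt)
```

With these settings F has 2 and 57 degrees of freedom, so under H0 a value below one occurs well over half the time, while under the alternative the distribution shifts right, exactly as in the G*Power 3 picture.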