Columns: idx (int64, 1–56k), question (string, 15–155 chars), answer (string, 2–29.2k chars), question_cut (string, 15–100 chars), answer_cut (string, 2–200 chars), conversation (string, 47–29.3k chars), conversation_cut (string, 47–301 chars)
15,101
What do/did you do to remember Bayes' rule?
Here's my little unorthodox (and dare I say unscientific) trick for remembering Bayes Rule. I simply say --- "A given B equals the reverse times A over B" That is to say, The probability of A given B P(A | B) equals the reverse (B | A) times A over B P(A) / P(B). Put in full, $$P(A | B) = \frac{P(B | A) * P(A)}{P(B)}...
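The mnemonic can be sanity-checked numerically; the probabilities below are invented purely for illustration (a minimal Python sketch, not part of the original answer):

```python
# Invented numbers, only to check the mnemonic
# "A given B equals the reverse times A over B".
p_a = 0.3            # P(A)
p_b_given_a = 0.5    # P(B | A)
p_b_given_not_a = 0.2

# Law of total probability: P(B) = P(B|A)P(A) + P(B|~A)P(~A)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print(p_a_given_b)
```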
15,102
What do/did you do to remember Bayes' rule?
If you are clear about which terms go into the equation ("it is a formula that shows a direct proportionality between $P(A|B)$ and $P(B|A)$ using $P(B)$ and $P(A)$"), there is really only one possible point of confusion: $$ P(B|A)=\frac{P(A|B)P(B)}{P(A)} \quad \text{vs} \quad P(B|A)=\frac{P(A|B)P(A)}{P(B)}. $$ To remem...
15,103
What do/did you do to remember Bayes' rule?
A person --> disease --> test positive (red)
A person --> disease --> test negative (yellow)
A person --> no disease --> test positive (blue)
A person --> no disease --> test negative (green)
To better remember Bayes' rule, draw the above into a tree structure and mark the edges with color. Say we want to know P(diseas...
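The colored-tree mnemonic translates directly into arithmetic. The rates below are made up for illustration (the answer itself gives no numbers):

```python
# Made-up rates for the colored branches of the tree:
p_disease = 0.01        # P(disease)
p_pos_given_d = 0.95    # red:  disease --> test positive
p_pos_given_nd = 0.05   # blue: no disease --> test positive

# P(positive) = red path + blue path
p_pos = p_pos_given_d * p_disease + p_pos_given_nd * (1 - p_disease)

# Bayes: P(disease | positive) = red path / (red + blue)
p_d_given_pos = p_pos_given_d * p_disease / p_pos
print(p_d_given_pos)
```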
15,104
Standard error of the median
Based on some of @mary's comments I think the following is appropriate. She seems to be selecting the median because the sample is small. If you were selecting median because it's a small sample that's not a good justification. You select median because the median is an important value. It says something different fr...
15,105
Standard error of the median
The magic number 1.253 comes from the asymptotic variance formula: $$ {\rm As. Var.}[\hat m] = \frac1{4f(m)^2 n} $$ where $m$ is the true median, and $f(m)$ is the true density at that point. The magic number 1.253 is $\sqrt{\pi/2}$ from the normal distribution so... you still are assuming normality with that. For any ...
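Plugging the normal density $f(m)=1/\sqrt{2\pi\sigma^2}$ into the asymptotic variance formula gives ${\rm SE}(\hat m) \approx \sqrt{\pi/2}\,\sigma/\sqrt{n} \approx 1.253\,\sigma/\sqrt{n}$, which a quick simulation can confirm (a sketch with arbitrary seed and sizes):

```python
import math
import random
import statistics

# For N(0, 1) data: f(m) = 1/sqrt(2*pi), so As.Var = 1/(4 f(m)^2 n) = pi/(2n),
# i.e. SE(median) = sqrt(pi/2)/sqrt(n) ~= 1.253/sqrt(n).
random.seed(0)
n, reps = 101, 2000
medians = [statistics.median(random.gauss(0, 1) for _ in range(n))
           for _ in range(reps)]
empirical_se = statistics.stdev(medians)
asymptotic_se = math.sqrt(math.pi / 2) / math.sqrt(n)
print(empirical_se, asymptotic_se)
```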
15,106
Standard error of the median
Sokal and Rohlf give this formula in their book Biometry (page 139). Under "Comments on applicability" they write: Large samples from normal populations. Thus, I am afraid that the answer to your question is no. See also here. One way to obtain the standard error and confidence intervals for the median in small samples...
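One small-sample route the answer alludes to is the bootstrap; a sketch with invented data:

```python
import random
import statistics

random.seed(1)
data = [2.1, 3.4, 3.9, 4.2, 5.0, 5.5, 6.1, 7.8, 9.0]  # invented small sample

# Resample with replacement and take the median of each resample
boot = sorted(statistics.median(random.choices(data, k=len(data)))
              for _ in range(5000))
se_median = statistics.stdev(boot)
ci_95 = (boot[124], boot[4874])  # ~2.5th and ~97.5th percentiles of 5000 draws
print(se_median, ci_95)
```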
15,107
Standard error of the median
Not a solution here, but perhaps helpful: Suppose your data distribution is $p(x)$, and let $P(x) = \int_{-\infty}^x p$ be the cumulative density function. So the median of the distribution is the number m such that P(m) = 1/2. Following this helpful page we can compute the distribution of a number $x$ being the median...
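The order-statistic derivation can be checked in a case where it is exact: for $n=2k+1$ iid Uniform(0,1) draws, the sample median is Beta($k+1$, $k+1$), with mean $1/2$ and variance $1/(4(2k+3))$. A simulation sketch:

```python
import random
import statistics

random.seed(2)
k = 5
n = 2 * k + 1  # 11 observations per sample
medians = [statistics.median(random.random() for _ in range(n))
           for _ in range(20000)]

mean_hat = statistics.mean(medians)
var_hat = statistics.variance(medians)
theory_var = 1 / (4 * (2 * k + 3))  # Beta(k+1, k+1) variance
print(mean_hat, var_hat, theory_var)
```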
15,108
Standard error of the median
There is an empirical procedure for obtaining a confidence interval for the sample median. The procedure is non-parametric and relies on the binomial distribution. It can be found in Ott and Longnecker, 2015 in the section named ‘Inferences about the median’. Stata implements the procedure as the ‘centiles’ command an...
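A sketch of the binomial procedure (my reconstruction of the usual distribution-free interval, not Ott and Longnecker's exact algorithm): the count of observations below the true median is Binomial($n$, 1/2), so the order-statistic pair $(x_{(r)}, x_{(n+1-r)})$ covers the median with probability $1 - 2\,P(\mathrm{Bin}(n, 1/2) \le r-1)$.

```python
import math

def median_ci_ranks(n, conf=0.95):
    """Largest r with coverage of (x_(r), x_(n+1-r)) at least conf (1-based ranks).
    For very small n no pair reaches conf; r then stays at 1."""
    alpha = 1 - conf
    cdf, r = 0.0, 1
    for k in range(n + 1):
        cdf += math.comb(n, k) / 2 ** n  # cdf = P(X <= k) for X ~ Bin(n, 1/2)
        if cdf > alpha / 2:
            break
        r = k + 1
    return r, n + 1 - r

# For n = 20 this gives ranks (6, 15), a well-known textbook value.
print(median_ci_ranks(20))
```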
15,109
Why did statisticians define random matrices?
It depends which field you're in but, one of the big initial pushes for the study of random matrices came out of atomic physics, and was pioneered by Wigner. You can find a brief overview here. Specifically, it was the eigenvalues (which are energy levels in atomic physics) of random matrices that generated tons of in...
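As a small illustration of the eigenvalue story (a NumPy sketch; the matrix size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
a = rng.normal(size=(n, n))
h = (a + a.T) / np.sqrt(2)      # random real symmetric ("GOE-like") matrix

eigs = np.linalg.eigvalsh(h)    # real spectrum, the "energy levels"
# Empirically the eigenvalue density approaches Wigner's semicircle law,
# supported (for unit-variance entries) on roughly [-2*sqrt(n), 2*sqrt(n)].
print(eigs.min(), eigs.max())
```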
15,110
Why did statisticians define random matrices?
You seem to be comfortable with applications of random vectors. For instance, I deal with this kind of random vectors every day: interest rates of different tenors. Federal Reserve Bank has H15 series, look at Treasury bills 4-week, 3-month, 6-month and 1-year. You can think of these 4 rates as a vector with 4 elements...
15,111
Why did statisticians define random matrices?
In theoretical physics, random matrices play an important role in understanding universal features of the energy spectra of systems with particular symmetries. My background in theoretical physics may cause me to present a slightly biased point of view here, but I would even go so far as to suggest that the popularity of random ...
15,112
Why did statisticians define random matrices?
A linear map is a map between vector spaces. Suppose you have a linear map and have chosen bases for its domain and range spaces. Then you can write a matrix which encodes the linear map. If you want to consider random linear maps between those two spaces, you should come up with a theory of random matrices. Random...
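The correspondence is easy to see concretely: once you sample a matrix, that fixed realization acts as an ordinary linear map. A small sketch (dimensions and vectors invented):

```python
import random

random.seed(3)
# A random 2x3 matrix = a random linear map from R^3 to R^2 (in standard bases)
A = [[random.gauss(0, 1) for _ in range(3)] for _ in range(2)]

def apply(mat, v):
    return [sum(a * x for a, x in zip(row, v)) for row in mat]

u, v = [1.0, 2.0, 3.0], [-1.0, 0.5, 4.0]
# Each realization is linear: A(2u + 3v) == 2*Au + 3*Av
lhs = apply(A, [2 * a + 3 * b for a, b in zip(u, v)])
rhs = [2 * x + 3 * y for x, y in zip(apply(A, u), apply(A, v))]
print(lhs, rhs)
```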
15,113
Why did statisticians define random matrices?
Compressive sensing as an application in image processing relies on random matrices as combined measurements of a 2D signal. Specific properties of these matrices, namely coherence, are defined for these matrices and play a role in the theory. Grossly simplified, it turns out that minimizing the L1 norm of a certain pr...
15,114
How do I interpret a probit model in Stata?
In general, you cannot interpret the coefficients from the output of a probit regression (not in any standard way, at least). You need to interpret the marginal effects of the regressors, that is, how much the (conditional) probability of the outcome variable changes when you change the value of a regressor, holding al...
15,115
How do I interpret a probit model in Stata?
Also, and more simply, the coefficient in a probit regression can be interpreted as "a one-unit increase in age corresponds to a $\beta_{age}$ increase in the z-score for the probability of being in a union" (see link).
. webuse union
. keep union age grade
. probit union age grade
Iteration 0: log likelihood = -13864....
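That reading can be made concrete with invented coefficients (the actual probit output above is truncated, so these numbers are not from it):

```python
from scipy.stats import norm

# Hypothetical probit coefficients, for illustration only
b0, b_age, b_grade = -1.5, 0.02, 0.05
age, grade = 30, 12

z = b0 + b_age * age + b_grade * grade   # the probit index ("z-score")
z_next = b0 + b_age * (age + 1) + b_grade * grade

# One more year of age moves the z-score by exactly b_age;
# the probability change then goes through the normal CDF:
print(z_next - z, norm.cdf(z_next) - norm.cdf(z))
```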
15,116
Does Bayesian statistics make meta-analysis obsolete?
What you are describing is called Bayesian updating. If you can assume that subsequent trials are exchangeable, then it does not matter whether you update your prior sequentially, all at once, or in a different order (see e.g. here or here). Notice that if previous experiments influence your future experiments, then also in the...
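For a conjugate illustration of the updating claim, take a Beta-Binomial model with invented trial counts: sequential, reversed, and batch updates give the identical posterior.

```python
a0, b0 = 1, 1            # Beta(1, 1) prior on a success probability
trial1 = (7, 3)          # (successes, failures) -- invented counts
trial2 = (12, 8)

def update(a, b, trial):
    s, f = trial
    return a + s, b + f  # Beta posterior: just add the counts

seq = update(*update(a0, b0, trial1), trial2)
rev = update(*update(a0, b0, trial2), trial1)
batch = (a0 + 7 + 12, b0 + 3 + 8)
print(seq, rev, batch)
```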
15,117
Does Bayesian statistics make meta-analysis obsolete?
I'm sure many people would argue as to what the purpose of a meta-analysis is, but perhaps at a meta-meta level the point of such analysis is to study the studies rather than obtain a pooled parameter estimate. We are interested in whether effects are consistent among each other, of the same direction, have CI bounds t...
15,118
Does Bayesian statistics make meta-analysis obsolete?
When one wants to do meta-analysis as opposed to fully prospective research, I view Bayesian methods as allowing one to get more accurate meta-analysis. For example, Bayesian biostatistician David Spiegelhalter showed years ago that the most commonly used method for meta-analysis, the DerSimonian and Laird method, is ...
15,119
Does Bayesian statistics make meta-analysis obsolete?
One important clarification about this question. You certainly can do a meta-analysis in a Bayesian setting. But simply using a Bayesian perspective does not allow you to forget about all the things you should be concerned about in a meta-analysis! Most directly to the point is that good methods for meta-analyses ...
15,120
Does Bayesian statistics make meta-analysis obsolete?
People have tried to analyse what happens when you perform meta-analysis cumulatively although their main concern is to establish whether it is worth collecting more data or conversely whether enough is already enough. For instance Wetterslev and colleagues in J Clin Epid here. The same authors have a number of public...
15,121
Are MCMC without memory?
The defining characteristic of a Markov chain is that the conditional distribution of its present value conditional on past values depends only on the previous value. So every Markov chain is "without memory" to the extent that only the previous value affects the present conditional probability, and all previous state...
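A minimal Metropolis sketch (targeting N(0,1); proposal width and seed arbitrary) makes the memorylessness explicit: the step function receives only the current state, never the history.

```python
import math
import random

random.seed(4)

def step(x):
    """One Metropolis move; depends only on the current state x."""
    prop = x + random.uniform(-2, 2)
    # accept with the ratio of (unnormalized) N(0, 1) target densities
    if random.random() < math.exp((x * x - prop * prop) / 2):
        return prop
    return x

x, draws = 0.0, []
for _ in range(50000):
    x = step(x)
    draws.append(x)
```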
15,122
Are MCMC without memory?
While we have the correct answer, I would like to expand just a little bit on the intuitive semantics of the statement. Imagine that we redefine our indices such that you generate vector $x_{i+1}$ from vector $x_{i}$. Now, moment $i$ is metaphorically seen as "the present", and all vectors coming "earlier than" $x_{i}$...
15,123
Are MCMC without memory?
You wake up. You have no idea how you got where you are. You look around at your surroundings and make a decision on what to do next based solely on the information you have available at that point in time. That is essentially the same situation as what is happening in MCMC. It is using the current information that...
15,124
Why f beta score define beta like that?
Letting $\beta$ be the weight in the first definition you provide and $\tilde\beta$ the weight in the second, the two definitions are equivalent when you set $\tilde\beta = \beta^2$, so these two definitions represent only notational differences in the definition of the $F_\beta$ score. I have seen it defined both the ...
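The equivalence is one line of code to check (the function and weight names here are mine):

```python
def f_with_beta_squared(p, r, beta):
    # F_beta as usually written, with beta**2 in the weights
    return (1 + beta**2) * p * r / (beta**2 * p + r)

def f_with_plain_weight(p, r, w):
    # the same family parameterized by a plain weight w ("beta-tilde")
    return (1 + w) * p * r / (w * p + r)

p, r, beta = 0.8, 0.4, 2.0
print(f_with_beta_squared(p, r, beta), f_with_plain_weight(p, r, beta**2))
```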
15,125
Why f beta score define beta like that?
The reason for defining the F-beta score with $\beta^{2}$ is exactly the quote you provide (i.e. wanting to attach $\beta$ times as much importance to recall as precision) given a particular definition for what it means to attach $\beta$ times as much importance to recall than precision. The particular way of defining ...
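Van Rijsbergen's condition, that $\partial F/\partial R = \partial F/\partial P$ exactly where $R/P = \beta$, can be checked numerically; this finite-difference sketch is my own illustration of that property:

```python
def f_beta(p, r, beta):
    return (1 + beta**2) * p * r / (beta**2 * p + r)

def grads(p, r, beta, h=1e-6):
    # central finite differences for dF/dP and dF/dR
    dp = (f_beta(p + h, r, beta) - f_beta(p - h, r, beta)) / (2 * h)
    dr = (f_beta(p, r + h, beta) - f_beta(p, r - h, beta)) / (2 * h)
    return dp, dr

beta, p = 2.0, 0.3
r = beta * p           # the point where recall/precision == beta
dp, dr = grads(p, r, beta)
print(dp, dr)          # equal here, by construction of F_beta
```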
15,126
Why f beta score define beta like that?
To point something out quickly. It means that as the beta value increases, you value precision more. I actually think it's the opposite - since higher is better in F-β scoring, you want the denominator to be small. Therefore, if you decrease β, then the model is punished less for having a good precision score. If you...
15,127
Why f beta score define beta like that?
TLDR; Contrary to the literature which all traces back to an arbitrary proposed definition, using a $\beta$ term like OP suggests is actually more intuitive than the $\beta^2$ term. A Person's answer does well to show why $\beta^{2}$ appears, given Van Rijsbergen's chosen way to define the relative importance of precis...
15,128
Why f beta score define beta like that?
The reason that β^2 is multiplied with precision is just the way that F-scores are defined. It means that as the beta value increases, you value recall more (in the limit, the F-β score tends to recall alone). If β^2 were instead multiplied with recall, that would also work; it would just mean that as the beta value increases you value precision more.
15,129
Why f beta score define beta like that?
A beta value greater than 1 means we want our model to pay more attention to recall as compared to precision. On the other hand, a value of less than 1 puts more emphasis on precision.
15,130
Why do we care more about test error than expected test error in Machine Learning?
Why do we care more about $\operatorname{Err}_{\mathcal{T}}$ than Err? I can only guess, but I think it is a reasonable guess. The former concerns the error for the training set we have right now. It answers "If I were to use this dataset to train this model, what kind of error would I expect?". It is easy to think ...
15,131
Why do we care more about test error than expected test error in Machine Learning?
+1 to Demetri Pananos's answer. It may well be that we apply the same model $f$ to two different training datasets $\mathcal{T}$ and $\mathcal{T}'$. And $\mathrm{Err}_{\mathcal{T}}$ may be quite different than $\mathrm{Err}_{\mathcal{T}'}$ - either much lower, or much higher. This may be of vastly larger importance whe...
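This variability is easy to see in a sketch: fit the same trivial "model" (predict the training mean) on two different training sets drawn from the same population, and the two conditional errors $\mathrm{Err}_{\mathcal{T}}$ and $\mathrm{Err}_{\mathcal{T}'}$ on a common test set come out different. The setup below is entirely made up for illustration:

```python
import random

random.seed(0)

def draw(n):
    """Sample n points from the same population (true mean 0, sd 1)."""
    return [random.gauss(0, 1) for _ in range(n)]

test = draw(1000)  # common test set from the population

def err_T(train):
    """Err_T: test error of the model fitted to this particular training set."""
    mu = sum(train) / len(train)             # the fitted 'model' is a constant
    return sum((mu - y) ** 2 for y in test) / len(test)

e1 = err_T(draw(10))   # conditional error given training set T
e2 = err_T(draw(10))   # conditional error given training set T'
print(e1, e2)
assert e1 != e2        # same model class, different conditional errors
```

Averaging `err_T` over many redrawn training sets would estimate the unconditional $\mathrm{Err}$, which is a property of the procedure rather than of any one dataset.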
15,132
Why do we care more about test error than expected test error in Machine Learning?
Computational learning theory is often concerned with putting bounds on $\mathrm{Err}$, e.g. via the VC dimension (which doesn't depend on the training set). The Support Vector Machine is an approximate implementation of one such bound (although IMHO the thing that makes it work well is the regularisation, rather than the hi...
15,133
How to assess whether a coin tossed 900 times and comes up heads 490 times is biased?
Here the natural null-hypothesis $H_0$ is that the coin is unbiased, that is, that the probability $p$ of a head is equal to $1/2$. The most reasonable alternate hypothesis $H_1$ is that $p\ne 1/2$, though one could make a case for the one-sided alternate hypothesis $p>1/2$. We need to choose the significance level o...
15,134
How to assess whether a coin tossed 900 times and comes up heads 490 times is biased?
If the coin is unbiased then the probability of 'heads' is $\frac{1}{2}$. Therefore, the number of heads thrown in 900 tries, $X$, has a ${\rm Binomial}(900,\frac{1}{2})$ distribution under the null hypothesis of a fair coin. So, the $p$-value - the probability of seeing a result this extreme or more extreme given that...
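The exact binomial $p$-value described above can be computed with the Python standard library alone, since `math.comb` handles the large integers without any normal approximation:

```python
from fractions import Fraction
from math import comb

n, k = 900, 490

# P(X >= 490) under Binomial(900, 1/2): sum the exact tail probabilities.
tail = sum(comb(n, j) for j in range(k, n + 1))
p_upper = Fraction(tail, 2**n)

# Two-sided p-value: by symmetry of Binomial(n, 1/2), double the upper tail.
p_value = float(2 * p_upper)
print(p_value)
assert p_value < 0.05  # reject fairness at the 5% level
```

The result is a little under 0.01, so 490 heads in 900 tosses would be quite surprising for a fair coin.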
15,135
How to assess whether a coin tossed 900 times and comes up heads 490 times is biased?
The example from the Wikipedia page on the Bayes factor seems quite relevant to the question. Suppose we have two models: M1, where the coin is exactly unbiased (q=0.5), and M2, where the probability of a head is unknown, so we use a flat prior distribution on [0,1]. We then compute the Bayes factor $K = \frac{p(x=490|M_0)}{p(x=490...
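Following that recipe, the Bayes factor can be computed exactly with the standard library. One useful fact (an assumption of the uniform-prior setup, not stated above): integrating the binomial likelihood over a flat prior on q makes every head count equally likely, so the marginal likelihood under the "unknown bias" model is 1/(n+1).

```python
from fractions import Fraction
from math import comb

n, k = 900, 490

# Marginal likelihood under M1 (fair coin, q = 1/2 exactly):
p_m1 = Fraction(comb(n, k), 2**n)

# Marginal likelihood under M2 (q unknown, uniform prior on [0, 1]):
# integrating the binomial likelihood over q gives 1/(n + 1) for every k.
p_m2 = Fraction(1, n + 1)

K = float(p_m1 / p_m2)
print(K)  # K < 1 here, so the data slightly favour the "unknown bias" model
```

With these numbers K comes out around 0.7, i.e. only weak evidence either way by the usual interpretation scales.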
15,136
How to assess whether a coin tossed 900 times and comes up heads 490 times is biased?
Null hypothesis H0: P = 0.5 (so P = Q = 0.5); alternative H1: P > 0.5, where P is the probability of a head. We use z = (p − P)/sqrt(PQ/N), where p = 490/900 ≈ 0.544. Then z = (0.544 − 0.5)/sqrt((0.5 × 0.5)/900) ≈ 0.0444/0.0167 ≈ 2.67. Hence at the 5% level of significance (1.645 < 2.67), H0 is rejected and the coin is biased.
15,137
How to assess whether a coin tossed 900 times and comes up heads 490 times is biased?
Your question could be addressed in a few different ways. The traditional test of hypothesis is designed to rule out possibilities, not necessarily prove them. In this case we can use $H_0: p=0.5$ as the null hypothesis and see if the data (the 490 out of 900 heads) can be used to reject this null hypothesis by comput...
15,138
How to assess whether a coin tossed 900 times and comes up heads 490 times is biased?
And an R illustration: Not bothering to approximate by the normal, we can look at a random variable distributed binomial with n=900 and p=0.5 under the null hypothesis (i.e. if the coin were unbiased then p=probability of heads(or tails) = 0.5). If we would like to test the alternative that Ha: p<>0.5 at alpha 0.05 we...
15,139
How to assess whether a coin tossed 900 times and comes up heads 490 times is biased?
To clarify the Bayesian approach: You start by knowing nothing, except that P(Heads) is in [0,1]. So start with a maximum entropy prior -> uniform(0,1). This can be represented as a beta distribution -> beta(1,1). Each time you flip the coin do a Bayesian update of the coin's P(Heads) by multiplying each point in the ...
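Because the beta prior is conjugate to the binomial likelihood, the flip-by-flip update collapses to simple counting: after 490 heads and 410 tails the Beta(1, 1) prior becomes Beta(491, 411). A minimal sketch:

```python
# Beta(a, b) prior; each head adds 1 to a, each tail adds 1 to b.
a, b = 1, 1                   # uniform prior, Beta(1, 1)
heads, tails = 490, 410
a, b = a + heads, b + tails   # posterior is Beta(491, 411)

posterior_mean = a / (a + b)
posterior_sd = (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5

print(posterior_mean, posterior_sd)
assert abs(posterior_mean - 491 / 902) < 1e-12
assert posterior_mean - 2 * posterior_sd > 0.5  # 0.5 sits over 2 sd below the mean
```

So the posterior puts essentially all its mass above P(Heads) = 0.53 or so, which is the Bayesian counterpart of the frequentist rejection in the other answers.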
15,140
Among Matlab and Python, which language is good for statistical analysis?
As a diehard Matlab user for the last 10+ years, I recommend you learn Python. Once you are sufficiently skilled in a language, when you work in a language you are learning, it will seem like you are not being productive enough, and you will fall back to using your default best language. At the very least, I would sugg...
15,141
Among Matlab and Python, which language is good for statistical analysis?
Let's break it down into three areas (off the top of my head) where programming meets statistics: data crunching, numerical routines (optimization and such) and statistical libraries (modeling, etc). On the first, the biggest difference is that Python is a general purpose programming language. Matlab is great as long...
15,142
Among Matlab and Python, which language is good for statistical analysis?
I also have been an avid Matlab user for 10+ years. For many of those years I had no reason to work beyond the toolbox I had created for my job. Although many functions were created for a toolbox, I often needed to create algorithms for quick turnaround analysis. Since these algorithms often utilize matrix math, Mat...
15,143
Optimizing OLS with Newton's Method
If used for OLS regression, Newton's method converges in a single step, and is equivalent to using the standard, closed form solution for the coefficients. On each iteration, Newton's method constructs a quadratic approximation of the loss function around the current parameters, based on the gradient and Hessian. The p...
15,144
Optimizing OLS with Newton's Method
It takes one iteration, basically because Newton's method works by solving an approximating quadratic equation in one step. Since the squared error loss is quadratic, the approximation is exact. Newton's method does $$\beta \gets \beta-\frac{f'(\beta)}{f''(\beta)}$$ and we have $$f(\beta)=\|y-x\beta\|^2$$ $$f'(\beta)=-...
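The derivation above can be checked with a minimal one-dimensional sketch (single coefficient, no intercept, made-up data): one Newton update from any starting point lands exactly on the closed-form least-squares solution $\hat\beta = \sum x_i y_i / \sum x_i^2$.

```python
# Toy data: y roughly proportional to x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

def grad(beta):
    """f'(beta) = -2 * sum x_i (y_i - x_i beta)"""
    return -2 * sum(x * (y - x * beta) for x, y in zip(xs, ys))

def hess(beta):
    """f''(beta) = 2 * sum x_i^2 (a constant, since the loss is quadratic)"""
    return 2 * sum(x * x for x in xs)

beta0 = -17.3                               # arbitrary starting point
beta1 = beta0 - grad(beta0) / hess(beta0)   # one Newton step

# Closed-form OLS solution for the no-intercept model:
beta_ols = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
assert abs(beta1 - beta_ols) < 1e-9         # converged in a single step
```

Because the gradient is linear in beta, the quadratic model Newton builds is not an approximation but the loss itself, which is why the starting point is irrelevant.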
15,145
What does interpolating the training set actually mean?
Your question already got two nice answers, but I feel that some more context is needed. First, we are talking here about overparametrized models and the double descent phenomenon. By overparametrized models we mean models that have far more parameters than datapoints. For example, Neal (2019) and Neal et al (2018) trai...
15,146
What does interpolating the training set actually mean?
In layman's terms, an interpolator will literally 'join the dots'. Here's a simple graphical summary of what interpolation can do and why it can be awful. I'd like to stress that interpolation does play a useful role in statistics/ml but should be used carefully. The black dots are training data and the red crossed ar...
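The "join the dots" behaviour can be sketched with a piecewise-linear interpolator on made-up data: it achieves exactly zero training error, noise and all, while a test point between the dots can be badly predicted.

```python
import bisect

# Noisy training data around the true function f(x) = x.
train_x = [0.0, 1.0, 2.0, 3.0]
train_y = [0.0, 1.8, 1.4, 3.1]   # noise included

def interpolate(x):
    """Piecewise-linear 'join the dots' predictor."""
    i = bisect.bisect_right(train_x, x)
    i = min(max(i, 1), len(train_x) - 1)
    x0, x1 = train_x[i - 1], train_x[i]
    y0, y1 = train_y[i - 1], train_y[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Zero training error: the interpolator reproduces every point, noise and all.
train_err = sum((interpolate(x) - y) ** 2 for x, y in zip(train_x, train_y))
assert train_err < 1e-12

# But at a test point the noisy wiggles hurt: the true value f(1.5) = 1.5.
assert abs(interpolate(1.5) - 1.5) > 0.05
```

The surprising part of the double descent literature is that some heavily overparametrized interpolators generalize well anyway; the sketch above only shows why that is not automatic.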
15,147
What does interpolating the training set actually mean?
Apart from the literal meaning of interpolation, this is related to the observation that deep learning models can totally memorize the training data. Hence, both interpolation and memorisation in this paper/context mean zero training loss while still not overfitting on the test set. Hence the curious phenomenon, that normally we wo...
15,148
What does interpolating the training set actually mean?
I would add that the quote actually contains the definition, "fitting all the training examples... including noisy ones". So the training loss is zero. The comment "including noisy ones" implies that the data is generated by a process, say $y = f(x) + \epsilon$, where $\epsilon$ represents noise. By fitting the model such that...
15,149
Two envelope problem revisited
1. UNNECESSARY PROBABILITIES. The next two sections of this note analyze the "guess which is larger" and "two envelope" problems using standard tools of decision theory (2). This approach, although straightforward, appears to be new. In particular, it identifies a set of decision procedures for the two envelope prob...
15,150
Two envelope problem revisited
The issue in general with the two envelope problem is that the problem as presented on Wikipedia allows the size of the values in the envelopes to change after the first choice has been made. The problem has been formalized incorrectly. However, a real-world formulation of the problem is this: you have two identical en...
15,151
Two envelope problem revisited
My interpretation of the question I am assuming that the setting in problem 3 is as follows: the organizer first selects amount $X$ and puts $X$ in the first envelope. Then, the organizer flips a fair coin and based on that puts either $0.5X$ or $2X$ to the second envelope. The player knows all this, but not $X$ nor th...
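Under this interpretation the asymmetry can be checked by exhaustive enumeration, no randomness needed: switching away from the first envelope gains 25% in expectation, while switching away from a randomly chosen envelope gains nothing. The amount X = 10 below is arbitrary; any value gives the same conclusion.

```python
X = 10.0   # amount the organizer puts in envelope 1 (any value works)

# The coin flip decides envelope 2: either 2X or X/2, equally likely.
worlds = [(X, 2 * X), (X, X / 2)]

# Strategy A: always hold envelope 1, then switch.
keep_1   = sum(e1 for e1, e2 in worlds) / 2
switch_1 = sum(e2 for e1, e2 in worlds) / 2
assert keep_1 == X
assert switch_1 == 1.25 * X          # switching from envelope 1 gains 25%

# Strategy B: pick an envelope uniformly at random, then switch.
picks = [(e1, e2) for e1, e2 in worlds] + [(e2, e1) for e1, e2 in worlds]
keep_r   = sum(held for held, other in picks) / 4
switch_r = sum(other for held, other in picks) / 4
assert keep_r == switch_r            # no gain from switching a random pick
```

This is exactly the distinction between the coin-flip setup and the classic paradox: the 25% gain only exists when you know you hold the "base" envelope.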
15,152
Two envelope problem revisited
Problem 1: Agreed, play the game. The key here is that you know the actual probabilities of winning 5 vs 20 since the outcome is dependent upon the flip of a fair coin. Problem 2: The problem is the same as problem 1 because you are told that there is an equal probability that either 5 or 20 is in the other envelope. ...
15,153
Two envelope problem revisited
This is a potential explanation that I have. I think it is wrong but I'm not sure. I will post it to be voted on and commented on. Hopefully someone will offer a better explanation. So the only thing that changed between problem 2 and problem 3 is that the amount in the envelope you hold became random. If you al...
15,154
Two envelope problem revisited
Overview I believe that the way you have broken out the problem is completely correct. You need to distinguish the "coin flip" scenario from the situation where the money is added to the envelope before the envelope is chosen. Not distinguishing those scenarios lies at the root of many people's confusion. Problem 1 I...
15,155
Two envelope problem revisited
Problem 2A: 100 note cards are in an opaque jar. "\$10" is written on one side of each card; the opposite side has either "\$5" or "\$20" written on it. You get to pick a card and look at one side only. You then get to choose one side (the revealed, or the hidden), and you win the amount on that side. If you see "\$5,"...
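The expected-value arithmetic behind the note-card game can be sketched in a few lines (Python, added for illustration; it assumes, as the answer states, that the hidden side is \$5 or \$20 with equal probability when "\$10" is showing):

```python
# Expected value of gambling on the hidden side versus keeping the
# revealed $10, with the hidden side $5 or $20 at 50/50 odds.
keep = 10.0
gamble = 0.5 * 5 + 0.5 * 20

print(gamble)  # 12.5 -- switching beats keeping the revealed $10
```

So in this fully specified version there is no paradox: the gamble has a higher expectation, full stop.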
15,156
Test whether variables follow the same distribution
Let's find out whether this is a good test or not. There's a lot more to it than just claiming it's bad or showing in one instance that it doesn't work well. Most tests work poorly in some circumstances, so often we are faced with identifying the circumstances in which any proposed test might possibly be a good choic...
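A quick simulation in this spirit (a Python sketch, not from the original answer) shows the core difficulty: the correlation of sorted samples is high whether or not the distributions match, so the statistic has little room to discriminate.

```python
import math
import random

def pearson(x, y):
    """Plain-Python Pearson correlation (no external libraries)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

random.seed(0)
n = 100
# Null case: two samples from the SAME distribution, sorted
same_a = sorted(random.gauss(0, 1) for _ in range(n))
same_b = sorted(random.gauss(0, 1) for _ in range(n))
# Alternative case: clearly DIFFERENT distributions, sorted
unif = sorted(random.random() for _ in range(n))
norm = sorted(random.gauss(0, 1) for _ in range(n))

print(round(pearson(same_a, same_b), 3))
print(round(pearson(unif, norm), 3))
# Both correlations come out very high, so the statistic barely
# separates the two cases -- exactly the power problem at issue.
```

Any serious evaluation would repeat this over many replications and distribution pairs to estimate size and power, as the answer suggests.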
15,157
Test whether variables follow the same distribution
No, correlation is not a good test of this.

x <- 1:100             # Uniform
y <- sort(rnorm(100))  # Normal
cor(x,y)               # .98

I don't know of a good test that compares whether, e.g. two distributions are both normal, but possibly with different mean and s.d. Indirectly, you could test the normality of each, separately, and if both seem...
15,158
Test whether variables follow the same distribution
If there are a sufficiently large number of variables, then this may show more correlation with size-ordered values. However, it doesn't seem to be a particularly useful method, not least because it provides little means to estimate confidence that they might use the same model. A problem that you are liable to experie...
15,159
Auto.arima vs autobox do they differ?
michael/wayne AUTOBOX would definitely deliver/identify a different model if one or more of the following conditions is met:

1) there are pulses in the data
2) there are one or more level/step shifts in the data
3) there are seasonal pulses in the data
4) there are one or more local time trends in the data that are no...
15,160
Auto.arima vs autobox do they differ?
They represent two different approaches to two similar but different problems. I wrote auto.arima and @IrishStat is the author of Autobox. auto.arima() fits (seasonal) ARIMA models including drift terms. Autobox fits transfer function models to handle level shifts and outliers. An ARIMA model is a special case of a tr...
15,161
Auto.arima vs autobox do they differ?
EDIT: Per your comment, I believe that if you turn off many of autobox's options, you'd probably get a similar answer to auto.arima. But if you do not, and in the presence of outliers there will definitely be a difference: auto.arima doesn't care about outliers, while autobox will detect them and handle them appropriat...
15,162
How do I remove all but one specific duplicate record in an R data frame? [closed]
One way is to reverse-sort the data and use duplicated to drop all the duplicates. For me, this method is conceptually simpler than those that use apply. I think it should be very fast as well.

# Some data to start with:
z <- data.frame(id=c(1,1,2,2,3,4), var=c(2,4,1,3,5,2))
# id var
#  1   2
#  1   4
#  2   1
#  2   3
...
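The same reverse-sort idea can be sketched in plain Python (added for illustration; the answer itself uses R): sort so the largest var comes first within each id, then keep the first row seen per id.

```python
# Keep the row with the maximum var per id via sorting, mirroring
# the reverse-sort + duplicated trick from the R answer.
rows = [(1, 2), (1, 4), (2, 1), (2, 3), (3, 5), (4, 2)]  # (id, var)

seen = set()
kept = []
# Sort ascending by id, descending by var...
for rid, var in sorted(rows, key=lambda r: (r[0], -r[1])):
    # ...so the first row encountered per id is that id's maximum.
    if rid not in seen:
        seen.add(rid)
        kept.append((rid, var))

print(kept)  # [(1, 4), (2, 3), (3, 5), (4, 2)]
```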
15,163
How do I remove all but one specific duplicate record in an R data frame? [closed]
You actually want to select the maximum element from the elements with the same id. For that you can use ddply from package plyr:

> dt <- data.frame(id=c(1,1,2,2,3,4), var=c(2,4,1,3,4,2))
> ddply(dt, .(id), summarise, var_1=max(var))
  id var_1
1  1     4
2  2     3
3  3     4
4  4     2

unique and duplicated are for removing duplic...
15,164
How do I remove all but one specific duplicate record in an R data frame? [closed]
The base-R solution would involve split, like this:

z <- data.frame(id=c(1,1,2,2,3,4), var=c(2,4,1,3,4,2))
do.call(rbind, lapply(split(z, z$id), function(chunk) chunk[which.max(chunk$var),]))

split breaks the data frame into a list of chunks, from each of which we keep the single row with the maximum value, and then do.call(rbin...
15,165
How do I remove all but one specific duplicate record in an R data frame? [closed]
I prefer using ave:

dt <- data.frame(id=c(1,1,2,2,3,4), var=c(2,4,3,3,4,2))
## use unique if you want to exclude duplicate maxima
unique(subset(dt, var==ave(var, id, FUN=max)))
15,166
How do I remove all but one specific duplicate record in an R data frame? [closed]
Yet another way to do this with base R:

dt <- data.frame(id=c(1,1,2,2,3,4), var=c(2,4,1,3,4,2))
data.frame(id=sort(unique(dt$id)), max=tapply(dt$var, dt$id, max))
  id max
1  1   4
2  2   3
3  3   4
4  4   2

I prefer mpiktas' plyr solution though.
15,167
How do I remove all but one specific duplicate record in an R data frame? [closed]
If, as in the example, the column var is already in ascending order, we do not need to sort the data frame. We just use the function duplicated, passing the argument fromLast = TRUE so duplication is considered from the reverse side, keeping the last elements:

z <- data.frame(id=c(1,1,2,2,3,4), var=c(2,4,1,3,5,2))
z[!dup...
15,168
Why do US and UK Schools Teach Different methods of Calculating the Standard Deviation?
The first formula is the population standard deviation and the second formula is the sample standard deviation. The second formula is also related to the unbiased estimator of the variance - see wikipedia for further details. I suppose (here) in the UK they don't make the distinction between sample and population a...
15,169
Why do US and UK Schools Teach Different methods of Calculating the Standard Deviation?
Because nobody has yet answered the final question--namely, to quantify the differences between the two formulas--let's take care of that. For many reasons, it is appropriate to compare standard deviations in terms of their ratios rather than their differences. The ratio is $$s_n / s = \sqrt{\frac{N-1}{N}} = \sqrt{1 -...
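The ratio $s_n/s = \sqrt{(N-1)/N}$ is easy to tabulate, which makes the size of the discrepancy concrete (a small Python sketch, added for illustration):

```python
import math

def ratio(n):
    """s_n / s = sqrt((n-1)/n): the factor relating the two formulas."""
    return math.sqrt((n - 1) / n)

for n in (2, 10, 100, 1000):
    print(n, round(ratio(n), 5))
```

The gap shrinks roughly like $1/(2N)$: at $N=2$ the two standard deviations differ by about 29%, but by $N=100$ the difference is only about half a percent.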
15,170
Why do US and UK Schools Teach Different methods of Calculating the Standard Deviation?
This is Bessel's Correction. The US version shows the formula for the sample standard deviation, whereas the UK version above is the standard deviation of the sample.
15,171
Why do US and UK Schools Teach Different methods of Calculating the Standard Deviation?
I am not sure this is purely a US vs. British issue. The rest of this page is excerpted from a faq I wrote (http://www.graphpad.com/faq/viewfaq.cfm?faq=1383).

How to compute the SD with n-1 in the denominator:
1. Compute the square of the difference between each value and the sample mean.
2. Add those values up.
3. Divide the s...
15,172
Why do US and UK Schools Teach Different methods of Calculating the Standard Deviation?
Since N is the number of points in the data set, one could argue that by calculating the mean one has reduced the degrees of freedom in the data set by one (since one has introduced a dependency into the data set), so one should use N-1 when estimating the standard deviation from a data set for which one had to estimate the...
15,173
Neural networks vs everything else
Each machine learning algorithm has a different inductive bias, so it's not always appropriate to use neural networks. A linear trend will always be learned best by simple linear regression rather than an ensemble of nonlinear networks. If you take a look at the winners of past Kaggle competitions, excepting any challen...
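The linear-trend point is easy to make concrete: ordinary least squares recovers a noiseless linear relationship exactly (a tiny Python illustration, added here; no libraries needed).

```python
# Fit y = slope*x + intercept by least squares on a noiseless trend.
xs = list(range(10))
ys = [3 * x + 2 for x in xs]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

print(slope, intercept)  # 3.0 2.0 -- the trend is recovered exactly
```

A nonlinear model can only approximate this with more data and more tuning; its inductive bias does not match the problem.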
15,174
Neural networks vs everything else
I would add that there is no such thing as a machine learning panacea. By the no free lunch theorem:

If an algorithm performs well on a certain class of problems then it necessarily pays for that with degraded performance on the set of all remaining problems
15,175
Area under the ROC curve or area under the PR curve for imbalanced data?
The question is quite vague so I am going to assume you want to choose an appropriate performance measure to compare different models. For a good overview of the key differences between ROC and PR curves, you can refer to the following paper: The Relationship Between Precision-Recall and ROC Curves by Davis and Goadric...
15,176
Area under the ROC curve or area under the PR curve for imbalanced data?
ROC curves plot TPR on the y-axis and FPR on the x-axis, but it depends on what you want to portray. Unless there is some reason to plot it differently in your area of study, TPR/FPR ROC curves are the standard for showing operating tradeoffs and I believe they would be most well received. Precision and Recall alone c...
15,177
Area under the ROC curve or area under the PR curve for imbalanced data?
I consider the largest difference between ROC and PR AUC to be the fact that ROC measures how well your model can "calculate" the positive class AND the negative class, whereas PR AUC really only looks at your positive class. So in a balanced class situation, and where you care about both the negative and positive class...
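This positive-class-only property falls straight out of the definitions, since precision and recall never touch the true negatives. A small Python sketch (hypothetical counts, chosen only to illustrate which rates use TN):

```python
def rates(tp, fp, fn, tn):
    """TPR/FPR feed the ROC curve; precision/recall feed the PR curve."""
    tpr = tp / (tp + fn)          # recall
    fpr = fp / (fp + tn)
    precision = tp / (tp + fp)
    return tpr, fpr, precision

# Same classifier mistakes, but the negative class grows 100-fold:
print(rates(80, 20, 20, 100))
print(rates(80, 20, 20, 10000))
# TPR and precision are untouched by the extra true negatives, while
# FPR collapses -- so the ROC view can look rosier under heavy
# imbalance even though the positive-class (PR) picture is unchanged.
```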
15,178
Sidak or Bonferroni?
If you run $k$ independent statistical tests using $\alpha$ as your significance level, and the null obtains in every case, whether or not you will find 'significance' is simply a draw from a random variable. Specifically, it is taken from a binomial distribution with $p=\alpha$ and $n=k$. For example, if you plan to...
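The binomial picture can be checked in one line (a Python sketch, added for illustration): the chance of at least one 'significant' result among $k$ independent tests when every null is true is $1-(1-\alpha)^k$.

```python
# Familywise error rate for k independent tests under the global null.
alpha, k = 0.05, 20
p_any = 1 - (1 - alpha) ** k

print(round(p_any, 4))  # ~0.6415: better-than-even odds of a false hit
```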
15,179
Sidak or Bonferroni?
Denote with $\alpha^*$ the corrected significance level. Bonferroni works like this: divide the significance level $\alpha$ by the number $n$ of tests, i.e. $\alpha^*=\alpha/n$. Sidak works like this (if the tests are independent): $\alpha^*=1 - (1 - \alpha)^{1/n}$. Because $\alpha/n < 1 - (1 - \alpha)^{1/n}$, th...
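Plugging in the numbers makes the ordering visible (a Python sketch, added for illustration):

```python
# Corrected per-test significance levels for n tests at familywise alpha.
alpha, n = 0.05, 2
bonferroni = alpha / n                  # 0.025
sidak = 1 - (1 - alpha) ** (1 / n)      # ~0.0253

print(round(bonferroni, 4), round(sidak, 4))
# Bonferroni's level is the (slightly) smaller of the two, i.e. it is
# marginally more conservative than Sidak.
```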
15,180
Sidak or Bonferroni?
Sidak and Bonferroni are so similar that you will probably get the same result regardless of which procedure you use. Bonferroni is only marginally more conservative than Sidak. For instance, for 2 comparisons and a familywise alpha of .05, Sidak would conduct each test at .0253 and Bonferroni would conduct each test a...
15,181
Sidak or Bonferroni?
The Sidak correction assumes the individual tests are statistically independent. The Bonferroni correction doesn't assume this.
15,182
"It was the correct play even though I lost"
I do not believe that this is a question of Bayesian vs. frequentist frameworks. It is a question of having the correct (predictive) distribution and minimizing the expected loss with respect to this distribution and a specified loss function. Whether the predictive distribution is delivered by a Bayesian or by a frequ...
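The point about minimizing expected loss under a predictive distribution can be sketched as follows (the win probabilities and the 0-1 loss below are made-up numbers for illustration):

```python
# Pick the action minimizing expected loss under a predictive distribution.
# A play can be "correct" (lowest expected loss) and still lose this time.
p_win = {"play_A": 0.7, "play_B": 0.4}   # assumed predictive win probabilities
loss = {"win": 0.0, "lose": 1.0}         # a simple 0-1 loss function

def expected_loss(action):
    p = p_win[action]
    return p * loss["win"] + (1 - p) * loss["lose"]

best = min(p_win, key=expected_loss)
print(best)  # play_A, regardless of how any single realization turns out
```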
"It was the correct play even though I lost"
I do not believe that this is a question of Bayesian vs. frequentist frameworks. It is a question of having the correct (predictive) distribution and minimizing the expected loss with respect to this
"It was the correct play even though I lost" I do not believe that this is a question of Bayesian vs. frequentist frameworks. It is a question of having the correct (predictive) distribution and minimizing the expected loss with respect to this distribution and a specified loss function. Whether the predictive distribu...
"It was the correct play even though I lost" I do not believe that this is a question of Bayesian vs. frequentist frameworks. It is a question of having the correct (predictive) distribution and minimizing the expected loss with respect to this
15,183
"It was the correct play even though I lost"
"The Correct Play is the one that should have won" is a mantra in professional poker. Hearthstone players are probably borrowing it. From the top result of "Poker correct play" I found it expressed as: If you’ve won money, it doesn’t mean you played the hand well. If you’ve lost money, it doesn’t mean that you played t...
"It was the correct play even though I lost"
"The Correct Play is the one that should have won" is a mantra in professional poker. Hearthstone players are probably borrowing it. From the top result of "Poker correct play" I found it expressed as
"It was the correct play even though I lost" "The Correct Play is the one that should have won" is a mantra in professional poker. Hearthstone players are probably borrowing it. From the top result of "Poker correct play" I found it expressed as: If you’ve won money, it doesn’t mean you played the hand well. If you’ve ...
"It was the correct play even though I lost" "The Correct Play is the one that should have won" is a mantra in professional poker. Hearthstone players are probably borrowing it. From the top result of "Poker correct play" I found it expressed as
15,184
"It was the correct play even though I lost"
I don't think either that this is a question about frequentist vs bayesian. Some argue, in fact, that the frequentist approach to the case of once-only experiments is not solid enough: what interest do I have in what happens to an experiment if I repeat it indefinitely, if I actually don't have the possi...
"It was the correct play even though I lost"
I don't think either that this is a question about frequentist vs bayesian. There is someone, in fact, that argue that frequentist approach to the case of once-only experiments is not solid enough: wh
"It was the correct play even though I lost" I don't think either that this is a question about frequentist vs bayesian. There is someone, in fact, that argue that frequentist approach to the case of once-only experiments is not solid enough: what interest do I have on what happens to an experiment if I repeat it indef...
"It was the correct play even though I lost" I don't think either that this is a question about frequentist vs bayesian. There is someone, in fact, that argue that frequentist approach to the case of once-only experiments is not solid enough: wh
15,185
"It was the correct play even though I lost"
As others are saying, the problem has nothing to do with frequentist VS bayesian. The problem is that at the point of making the decision you don't have any information about whether it will be a win or a loss. If you introduce that information into your framework, then you are leaving yourself open to hindsight bias (...
"It was the correct play even though I lost"
As others are saying, the problem has nothing to do with frequentist VS bayesian. The problem is that at the point of making the decision you don't have any information about whether it will be a win
"It was the correct play even though I lost" As others are saying, the problem has nothing to do with frequentist VS bayesian. The problem is that at the point of making the decision you don't have any information about whether it will be a win or a loss. If you introduce that information into your framework, then you ...
"It was the correct play even though I lost" As others are saying, the problem has nothing to do with frequentist VS bayesian. The problem is that at the point of making the decision you don't have any information about whether it will be a win
15,186
"It was the correct play even though I lost"
"The correct play" is the play from the strategy that you believe works out the best for you, calculated through some kind of loss function. If you have that strategy and stick with it, the math says you will do well. If you get into "but but but..." then you no longer follow the winning strategy you've developed. Your...
"It was the correct play even though I lost"
"The correct play" is the play from the strategy that you believe works out the best for you, calculated through some kind of loss function. If you have that strategy and stick with it, the math says
"It was the correct play even though I lost" "The correct play" is the play from the strategy that you believe works out the best for you, calculated through some kind of loss function. If you have that strategy and stick with it, the math says you will do well. If you get into "but but but..." then you no longer follo...
"It was the correct play even though I lost" "The correct play" is the play from the strategy that you believe works out the best for you, calculated through some kind of loss function. If you have that strategy and stick with it, the math says
15,187
How to equalize the chance of throwing the highest dice? (Riddle)
Multiply by $\left(\frac{2(7)}{3+7}\right)^{1/3} = 1.1187$ More generally, suppose that player $A$ rolls $n$ times and player $B$ rolls $m$ times (without loss of generality, we assume $m \geq n$). As others have already noted, the (unscaled) score of player $A$ is $$X \sim Beta(n, 1)$$ and the score of player $B$ is...
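A quick Monte Carlo check of this multiplier for the $n=3$ vs $m=7$ case (the seed and trial count are arbitrary choices):

```python
import random

random.seed(0)
c = (2 * 7 / (3 + 7)) ** (1 / 3)  # the claimed multiplier, about 1.1187
trials = 200_000
# Player A keeps the best of 3 uniform rolls (scaled by c),
# player B keeps the best of 7; count how often A wins.
wins = sum(
    c * max(random.random() for _ in range(3))
    > max(random.random() for _ in range(7))
    for _ in range(trials)
)
print(wins / trials)  # should land close to 0.5
```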
15,188
How to equalize the chance of throwing the highest dice? (Riddle)
I don't believe that a linear scaling factor will equalize the odds, or at least I cannot determine one. However, there is a power factor that can. If you raise player-A's score to the $\frac{3}{7}$ power you should have a fair game. Obviously, since scores are between 0 and 1, raising it to a power between 0 and 1 (not...
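The power-transform claim is also easy to check by simulation: raising the best of 3 uniforms to the $3/7$ power turns its CDF from $x^3$ into $t^7$, i.e. both players' scores become $\mathsf{Beta}(7,1)$, so each should win half the time (seed and trial count below are arbitrary choices):

```python
import random

random.seed(1)
trials = 200_000
# Best of 3, raised to the 3/7 power, against the best of 7
wins = sum(
    max(random.random() for _ in range(3)) ** (3 / 7)
    > max(random.random() for _ in range(7))
    for _ in range(trials)
)
print(wins / trials)  # should land close to 0.5; ties have probability zero
```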
15,189
How to equalize the chance of throwing the highest dice? (Riddle)
I'd like to try to put pieces of comments and answers together into a simulation, and into a plan for an analytic solution. As @whuber says in his Comment, the maximum $X_1$ of three independent standard uniform random variables has $X_1 \sim \mathsf{Beta}(3,1)$ and the maximum $X_2$ of seven independent standard unifo...
15,190
How to equalize the chance of throwing the highest dice? (Riddle)
I did not solve the problem analytically but I performed a simulation with 100 different $a/b$ ratios varying from 0.01 to 1. $a$ is the number of dice of player $A$ and $b$ is the number of dice of player $B$. For each ratio I simulated 1000 games and computed the multiplicative constant. This is what I got: For the dice ...
15,191
Why Normality assumption in linear regression
We do choose other error distributions. You can in many cases do so fairly easily; if you are using maximum likelihood estimation, this will change the loss function. This is certainly done in practice. Laplace (double exponential errors) correspond to least absolute deviations regression/$L_1$ regression (which numero...
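As a tiny illustration of the Laplace/LAD point in the intercept-only case (the data below are synthetic): under Gaussian errors the ML fit is the sample mean, under Laplace errors it is the sample median, and the two react very differently to an outlier.

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.append(rng.normal(10.0, 1.0, 99), 200.0)  # 99 clean points, one gross outlier

mu_gaussian = y.mean()     # minimizes the sum of squared errors
mu_laplace = np.median(y)  # minimizes the sum of absolute errors

print(mu_gaussian, mu_laplace)  # the mean is dragged up; the median barely moves
```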
15,192
Why Normality assumption in linear regression
The normal/Gaussian assumption is often used because it is the most computationally convenient choice. Computing the maximum likelihood estimate of the regression coefficients is a quadratic minimization problem, which can be solved using pure linear algebra. Other choices of noise distributions yield more complicated ...
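The "pure linear algebra" point: under Gaussian noise the ML coefficients solve the normal equations $X^\top X w = X^\top y$ in closed form. A sketch on synthetic data (the true coefficients and noise level are made up):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one input
w_true = np.array([1.0, 2.0])
y = X @ w_true + rng.normal(scale=0.1, size=n)

# Closed-form solution of the normal equations; no iterative optimizer needed
w_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(w_hat)  # close to [1, 2]
```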
15,193
Why Normality assumption in linear regression
When working with those hypotheses, squared-errors based regression and maximum likelihood provide you the same solution. You are also capable of getting simple F-tests for coefficient significance, as well as confidence intervals for your predictions. In conclusion, the reason why we often choose the normal distribution is...
15,194
Why Normality assumption in linear regression
Glen_b has explained nicely that OLS regression can be generalized (maximizing likelihood instead of minimizing sum of squares) and we do choose other distributions. However, why is the normal distribution chosen so often? The reason is that the normal distribution occurs in many places naturally. It is a bit the same...
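One way the normal distribution "occurs naturally" is as the limit of sums of independent non-normal terms; a quick sketch (the sample sizes are arbitrary) summing uniforms and comparing the first two moments with the normal approximation $N(15,\, 30/12)$:

```python
import numpy as np

rng = np.random.default_rng(0)
sums = rng.uniform(size=(50_000, 30)).sum(axis=1)  # 50k sums of 30 uniforms each

# A sum of 30 Uniform(0,1) variables has mean 15 and variance 30/12 = 2.5
print(sums.mean(), sums.var())  # near 15 and near 2.5
```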
15,195
Why Normality assumption in linear regression
Why don't we choose other distributions? We do. Regression means modeling a continuous value given a set of inputs. Consider training examples consisting of a target scalar $y_i \in \mathbb R$ and an input vector $x_i \in \mathbb R^n$. Let the prediction of the target given $x_i$ be $$\hat y_i = w^\intercal x_i.$$ T...
15,196
"-iles" terminology for the top half a percent
Historically, and to the present, the upper or third quartile (for example) is the value exceeded by just 25% of values. (I only ever see informal use of "top" for this meaning.) By extension, the interval or bin between the upper or third quartile and the maximum is often also called the upper quartile, and sometimes...
"-iles" terminology for the top half a percent
Historically, and to the present, the upper or third quartile (for example) is the value exceeded by just 25% of values. (I only ever see informal use of "top" for this meaning.) By extension, the in
"-iles" terminology for the top half a percent Historically, and to the present, the upper or third quartile (for example) is the value exceeded by just 25% of values. (I only ever see informal use of "top" for this meaning.) By extension, the interval or bin between the upper or third quartile and the maximum is ofte...
"-iles" terminology for the top half a percent Historically, and to the present, the upper or third quartile (for example) is the value exceeded by just 25% of values. (I only ever see informal use of "top" for this meaning.) By extension, the in
15,197
"-iles" terminology for the top half a percent
The general term for these segments is 'quantile', i.e. the top 0.005 quantile is the data segment you are looking for. Quantiles are in a range of [0, 1]. We have separate names for the notable/frequently used quantiles (terciles, quartiles, percentiles, etc.), but we don't have one for the rest. Technically I guess y...
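In code, the "top 0.005 quantile" is the segment above the 0.995 quantile; a sketch with synthetic data (the normal sample is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=100_000)       # illustrative data
cutoff = np.quantile(data, 0.995)     # the 0.995 quantile
top = data[data > cutoff]             # the top half a percent

print(cutoff, len(top) / len(data))   # cutoff near 2.58; about 0.5% of the data
```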
"-iles" terminology for the top half a percent
The general term for these segments is 'quantile', i.e. the top 0.005 quantile is the data segment you are looking for. Quantiles are in a range of [0, 1]. We have separate names for the notable/frequ
"-iles" terminology for the top half a percent The general term for these segments is 'quantile', i.e. the top 0.005 quantile is the data segment you are looking for. Quantiles are in a range of [0, 1]. We have separate names for the notable/frequently used quantiles (terciles, quartiles, percentiles, etc.), but we don...
"-iles" terminology for the top half a percent The general term for these segments is 'quantile', i.e. the top 0.005 quantile is the data segment you are looking for. Quantiles are in a range of [0, 1]. We have separate names for the notable/frequ
15,198
"-iles" terminology for the top half a percent
It's called the top half-percentile or upper half-percentile. Google "top half-percentile" or "upper half-percentile" to find these terms used in practice, most often in economics.
"-iles" terminology for the top half a percent
It's called the top half-percentile or upper half-percentile. Google "top half-percentile" or "upper half-percentile" to find these terms used in practice, most often in economics.
"-iles" terminology for the top half a percent It's called the top half-percentile or upper half-percentile. Google "top half-percentile" or "upper half-percentile" to find these terms used in practice, most often in economics.
"-iles" terminology for the top half a percent It's called the top half-percentile or upper half-percentile. Google "top half-percentile" or "upper half-percentile" to find these terms used in practice, most often in economics.
15,199
"-iles" terminology for the top half a percent
There is percent (%) and permille (‰) so you could say the top five permille. However, the only occurrences of the latter's use I can find are by one set of authors in two articles at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4228404/ and https://openi.nlm.nih.gov/detailedresult.php?img=PMC4228404_1476-069X-12-92-1&r...
"-iles" terminology for the top half a percent
There is percent (%) and permille (‰) so you could say the top five permille. However, the only occurrences of the latter's use I can find are by one set of authors in two articles at http://www.ncbi.
"-iles" terminology for the top half a percent There is percent (%) and permille (‰) so you could say the top five permille. However, the only occurrences of the latter's use I can find are by one set of authors in two articles at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4228404/ and https://openi.nlm.nih.gov/detail...
"-iles" terminology for the top half a percent There is percent (%) and permille (‰) so you could say the top five permille. However, the only occurrences of the latter's use I can find are by one set of authors in two articles at http://www.ncbi.
15,200
A 6th response option ("I don't know") was added to a 5-point Likert scale. Is the data lost?
Why try to force a calibration on something which is not true? As Maarten said, this is not a loss of data but a gain of information. If the magical pill you are looking for exists, it would mean that there are some assumptions about your population that are made, for example, a bias in favor of one particular label ev...