8,401
Expectation of 500 coin flips after 500 realizations
The different schools of probability can be a bit confusing, so let's explore this on the computer as an experiment. Your confusion is this: if I have (say) 300 tails in the first 500 flips, should I expect 200 tails in the next 500 flips? If I have (say) 200 tails in the first 200 flips, should I expect (only) 300 tails in t...
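The experiment suggested above can be sketched in a few lines of Python (a minimal illustration of my own, not code from the original answer): we condition on 300 tails having already occurred in the first 500 flips, and simulate only the next 500.

```python
import random

random.seed(0)

# The past flips don't change the future: given 300 tails in the first
# 500 flips, the next 500 flips still average ~250 tails, so the total
# averages ~550 tails -- not 500.
trials = 10_000
next_tails = [sum(random.random() < 0.5 for _ in range(500))
              for _ in range(trials)]

avg_next = sum(next_tails) / trials
print(f"average tails in the next 500 flips: {avg_next:.1f}")        # ~250
print(f"expected total, given 300 so far:    {300 + avg_next:.1f}")  # ~550
```

The coin has no memory, so the conditional expectation of the next 500 flips is unaffected by the first 500.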
8,402
Expectation of 500 coin flips after 500 realizations
Whether you flipped a coin many times in the past is irrelevant. Every time you throw a coin the expectation is that there is a 50% chance it will be heads. If you are going to throw it 500 times it should be heads about 250 times. But there is no guarantee. All 500 times could be heads, or 0 times. Taken altogether, af...
8,403
What does 'highly non linear' mean?
I don't think there's a formal definition. It's my impression that it simply means that not only is it non-linear, but attempting to model it with a linear approximation won't yield reasonable results and may even cause instability in the fitting method. Someone may also use it to simply mean that small input changes c...
8,404
What does 'highly non linear' mean?
In a formal sense, I believe one could say that the second derivative differs substantially from zero. If 0 were a "reasonable" approximation to the second derivative over the domain of interest, it is close to linear, but if it's not, the nonlinear effects become very important to capture. I've rarely heard terms lik...
8,405
What does 'highly non linear' mean?
The important aspect missing from the other excellent answers is the domain. E.g., $f(x)=x^2$ is highly non-linear on $[-10;10]$ but not on either half of the domain (i.e., on both $[-10;0]$ and $[0;10]$ one can possibly use a linear approximation of $f$ without an immediate disaster). Another example is $f(x)=x^3-...
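This domain effect can be checked numerically. A quick sketch, assuming a least-squares straight-line fit stands in for "the linear approximation" (my choice, not the answer's):

```python
import numpy as np

def max_linear_fit_error(f, lo, hi, n=1000):
    """Max error of the least-squares straight-line fit to f on [lo, hi]."""
    x = np.linspace(lo, hi, n)
    a, b = np.polyfit(x, f(x), 1)   # degree-1 least-squares fit
    return float(np.max(np.abs(f(x) - (a * x + b))))

f = lambda x: x ** 2
err_full = max_linear_fit_error(f, -10, 10)  # whole domain: large (~67)
err_half = max_linear_fit_error(f, 0, 10)    # one half: much smaller (~17)
print(err_full, err_half)
```

Halving the domain cuts the worst-case error of the best line by roughly a factor of four, which is why $x^2$ is "highly" nonlinear on $[-10;10]$ but tolerable on either half.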
8,406
What does 'highly non linear' mean?
As others mentioned, I don't think there's a formal definition. I would define it as a function which cannot be approximated linearly in the typical range of disturbances to the argument. For instance, you have $y=f(x)$, and $\sigma^2=var[x]$. Then if the approximation $f(x+\sigma)\approx f(x)+f'(x)\sigma$ breaks down,...
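The breakdown of that approximation is easy to watch numerically. A minimal sketch using $f=\exp$ at $x=0$ (my choice of example function, so $f(x)=f'(x)=1$):

```python
import math

# How good is f(x + sigma) ~ f(x) + f'(x) * sigma as sigma grows?
x = 0.0
rel_err = {}
for sigma in (0.01, 0.1, 1.0, 3.0):
    exact = math.exp(x + sigma)
    linear = math.exp(x) + math.exp(x) * sigma
    rel_err[sigma] = abs(exact - linear) / exact
    print(f"sigma={sigma}: relative error {rel_err[sigma]:.4f}")
```

For small disturbances the linear approximation is excellent; once $\sigma$ is of order 1 or larger the relative error becomes severe, which is exactly the "highly nonlinear" regime described above.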
8,407
What does 'highly non linear' mean?
Informally ... "highly non linear" means "even a blind man can see it's not a straight line!" ;) Personally I take it as a danger sign that it will somehow "blow up in your face" when used with real-world examples. The Tower of Hanoi could be called an example of highly non linear ... the legend being when the monks f...
8,408
What does 'highly non linear' mean?
As a professional mathematician I can confirm that "highly nonlinear" is not a precisely defined mathematical term. :) And neither is any "highly anything" I can think of. Nonlinear is precise and the opposite of linear (obviously). But linear occurs in two different meanings: $f(x) = ax+b$ is called a linear function only function...
8,409
What does 'highly non linear' mean?
For smooth functions, we usually say that something is "highly" nonlinear if the magnitude of the second derivative (or perhaps the curvature) is high. A linear function has zero second derivative and zero curvature, so it represents the extreme of low curvature.
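This criterion can be turned into a rough numeric check, estimating $f''$ by central differences (a sketch under my own choice of test functions and intervals):

```python
import numpy as np

def max_abs_f2(f, lo, hi, n=1001):
    """Estimate max |f''| on [lo, hi] via central differences."""
    x = np.linspace(lo, hi, n)
    h = x[1] - x[0]
    d2 = (f(x[2:]) - 2 * f(x[1:-1]) + f(x[:-2])) / h ** 2
    return float(np.max(np.abs(d2)))

print(max_abs_f2(lambda x: 2 * x + 1, 0, 1))  # ~0: linear, zero curvature
print(max_abs_f2(np.sin, 0, 2 * np.pi))       # ~1: mildly nonlinear
print(max_abs_f2(np.exp, 0, 5))               # ~148: "highly" nonlinear here
```

The linear function sits at the extreme of zero second derivative, while the exponential's curvature grows without bound, matching the informal grading above.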
8,410
What does 'highly non linear' mean?
As other answers said, there isn't a formal definition of the term from the mathematical point of view. But it has to do with the existence of a non-zero derivative (the order) of the function. A function that can be defined as being linear is some function $f(x)$ that can be described by a polynomial $P_n(x)$, even ...
8,411
Why does basic hypothesis testing focus on the mean and not on the median?
Because Alan Turing was born after Ronald Fisher. In the old days, before computers, all this stuff had to be done by hand or, at best, with what we would now call calculators. Tests for comparing means can be done this way - it's laborious, but possible. Tests for quantiles (such as the median) would be pretty much i...
8,412
Why does basic hypothesis testing focus on the mean and not on the median?
I would like to add a third reason to the correct reasons given by Harrell and Flom. The reason is that we use Euclidean distance (or L2) and not Manhattan distance (or L1) as our standard measure of closeness or error. If one has a number of data points $x_1, \ldots x_n$ and one wants a single number $\theta$ to ...
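This distinction is easy to demonstrate by brute force: the $\theta$ minimizing squared (L2) error is the sample mean, while the $\theta$ minimizing absolute (L1) error is the sample median. A sketch (the exponential sample is just an arbitrary skewed example of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(size=1001)   # a skewed sample, so mean != median

# Scan candidate values of theta and evaluate both loss functions.
thetas = np.linspace(x.min(), x.max(), 10_000)
sse = [np.sum((x - t) ** 2) for t in thetas]    # squared (L2) loss
sae = [np.sum(np.abs(x - t)) for t in thetas]   # absolute (L1) loss

best_l2 = thetas[np.argmin(sse)]
best_l1 = thetas[np.argmin(sae)]
print(best_l2, np.mean(x))    # L2 minimizer lands on the sample mean
print(best_l1, np.median(x))  # L1 minimizer lands on the sample median
```

So choosing Euclidean distance as the measure of error is, implicitly, choosing the mean as the summary of interest.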
8,413
Why does basic hypothesis testing focus on the mean and not on the median?
Often the mean is chosen over the median not because it's more representative, robust, or meaningful but because people confuse estimator with estimand. Put another way, some choose the population mean as the quantity of interest because with a normal distribution the sample mean is more precise than the sample median...
8,414
Why is a comma a bad record separator/delimiter in CSV files?
The CSV format specification is defined in RFC 4180. This specification was published because there is no formal specification in existence, which allows for a wide variety of interpretations of CSV files. Unfortunately, since 2005 (the date the RFC was published), nothing has changed. We still have a wide variety of implemen...
8,415
Why is a comma a bad record separator/delimiter in CSV files?
Technically, a comma is as good as any other character to use as a separator. The name of the format directly reflects that values are comma-separated (Comma-Separated Values). The description of the CSV format uses the comma as a separator. Any field containing a comma should be double-quoted. So that does not cause a probl...
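Python's csv module illustrates this quoting behavior: a field containing a comma is automatically double-quoted on output and parsed back intact (the sample address is made up):

```python
import csv
import io

rows = [["id", "address"],
        ["1", "221B Baker Street, London, UK"]]   # commas inside a field

buf = io.StringIO()
csv.writer(buf).writerows(rows)   # QUOTE_MINIMAL quotes the comma field
text = buf.getvalue()
print(text)

parsed = list(csv.reader(io.StringIO(text)))
print(parsed == rows)   # True: quoting makes the embedded commas safe
```

The writer only quotes fields that need it, which is exactly the behavior RFC 4180 describes.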
8,416
Why is a comma a bad record separator/delimiter in CSV files?
In addition to being a digit separator in numbers, the comma also forms part of addresses (such as a customer address) in many countries. While some countries have short, well-defined addresses, many others have long, winding addresses, sometimes including two commas in the same line. Good CSV files enclose all such data in ...
8,417
Why is a comma a bad record separator/delimiter in CSV files?
While @Tim's answer is correct, I would like to add that "csv" as a whole has no common standard; in particular, the escaping rules are not defined at all, leading to "formats" which are readable in one program but not another. This is exacerbated by the fact that every "programmer" under the sun just thinks "oooh csv-...
8,418
Why is a comma a bad record separator/delimiter in CSV files?
If you can ditch the comma delimiter and use a tab character you will have much better success. You can leave the file named .CSV and importing into most programs is usually not a problem. Just specify TAB delimited rather than comma when you import your file. If there are commas in your data you WILL have a problem w...
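A minimal sketch of reading tab-delimited data with Python's csv module (the sample data is invented for illustration): commas in the values then need no quoting at all.

```python
import csv
import io

# Tab-delimited text: the commas in the data are just ordinary characters.
data = "name\tmotto\nAcme, Inc.\tWe deliver, fast\n"
rows = list(csv.reader(io.StringIO(data), delimiter="\t"))
print(rows)   # [['name', 'motto'], ['Acme, Inc.', 'We deliver, fast']]
```

The same caveat applies in reverse, of course: a literal tab inside a field would then need quoting instead.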
8,419
Why is a comma a bad record separator/delimiter in CSV files?
ASCII provides us with four "separator" characters, as shown below in a snippet from the ascii(7) *nix man page:

Oct  Dec  Hex  Char
---------------------------
034  28   1C   FS  (file separator)
035  29   1D   GS  (group separator)
036  30   1E   RS  (record separator)
037  31   1F   US  (...
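These separators can be used directly from any language; a small Python sketch (the record contents are invented):

```python
US, RS = "\x1f", "\x1e"   # ASCII unit (field) and record separators

records = [["Acme, Inc.", 'she said "hi"'], ["Beta Ltd", "line two"]]

# Join fields with US and records with RS: no quoting or escaping is
# needed, because these control codes never occur in ordinary text.
blob = RS.join(US.join(fields) for fields in records)

decoded = [rec.split(US) for rec in blob.split(RS)]
print(decoded == records)   # True
```

The trade-off is that the control characters are invisible in most editors, which is part of why comma-separated text won out in practice.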
8,420
Why is a comma a bad record separator/delimiter in CSV files?
The problem is not the comma; the problem is quoting. Regardless of which record and field delimiters you use, you need to be prepared for meeting them in the content. So you need a quoting mechanism. AND THEN you need a way for the quoting character(s) to appear too. Following the RFC 4180 standard makes everything...
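Python's csv module implements RFC 4180's answer to that last point, doubling any quote character that appears inside a quoted field. A small round-trip sketch (the example strings are mine):

```python
import csv
import io

row = ["say", 'she said "hello, world"']   # a quote *and* a comma inside

buf = io.StringIO()
csv.writer(buf).writerow(row)
line = buf.getvalue().strip()
print(line)   # say,"she said ""hello, world""" -- the quotes are doubled

parsed = next(csv.reader(io.StringIO(buf.getvalue())))
print(parsed == row)   # True: the quoting scheme round-trips cleanly
```

With a consistent quoting mechanism like this, the choice of delimiter character genuinely stops mattering.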
8,421
Problems with pie charts
I wouldn't say there's an increasing interest or debate about the use of pie charts. They are just found everywhere on the web and in so-called "predictive analytic" solutions. I guess you know Tufte's work (he also discussed the use of multiple pie charts), but funnier is the fact that the second chapter of Wilkin...
8,422
Problems with pie charts
My personal problem with pie charts is while they may be useful to show differences like this: way too many people use it to show that:
8,423
Problems with pie charts
Pie charts, like pie, may be delicious but they are not nutritious. In addition to points made already, one is that rotating a pie chart changes perception of the size of the angles, as does changing the color. If a pie chart has only a few categories, make a table. If it has a LOT of categories, then the slices will ...
8,424
Problems with pie charts
I think you've answered your own question for the 2nd bullet point. If you want to take up valuable real estate, so be it! However, the first bullet is more important. With a bar chart the observer needs to estimate relative proportion based upon only 1 axis. With a pie chart, judging along at least 2 axes is involve...
8,425
Problems with pie charts
I can think of almost no case in which a pie chart is better than a bar chart or stacked bar if you want to convey information. I do have a theory or two on how pie charts got to be so popular. My first thought is related to PC commercials. Early PCs had text screens (24 x 80 characters), often green like old mainfram...
8,426
Problems with pie charts
Your waffle chart needs the red and blue values switched. As to the question of pie vs waffle, I lean toward waffle. With waffle charts you can still get the information across at small sizes even if the blocks blend together, the color still represents the regions.
8,427
How can we explain the "bad reputation" of higher-order polynomials?
High-degree polynomials do not overfit the data. This is a common misconception which is nonetheless found in many textbooks. In general, in order to specify a statistical model, it is necessary to specify both a hypothesis class and a fitting procedure. In order to define the variance of the model ("variance" here in t...
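One hedged illustration of the role of the fitting procedure: a degree-15 polynomial, fit by least squares in a well-conditioned Chebyshev basis on plentiful data, generalizes fine (the target function, sample size, and noise level here are my choices, not the answer's):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 200)
y = np.sin(3 * x) + rng.normal(scale=0.1, size=x.size)

# Degree 15, but fit by least squares on 200 points: no wild
# oscillations appear, and out-of-sample error stays near the
# noise floor rather than blowing up.
coefs = np.polynomial.chebyshev.chebfit(x, y, 15)
x_test = np.linspace(-1, 1, 1000)
pred = np.polynomial.chebyshev.chebval(x_test, coefs)
rmse = float(np.sqrt(np.mean((pred - np.sin(3 * x_test)) ** 2)))
print(f"degree-15 fit, test RMSE: {rmse:.3f}")
```

The textbook disaster pictures typically come from interpolating exactly through a handful of points (degree equal to the number of points minus one), which is a choice of fitting procedure, not a property of high-degree polynomials per se.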
8,428
How can we explain the "bad reputation" of higher-order polynomials?
Before I attempt to answer this, I'd like to just point out that what you are observing here (overfitting in a linear regression) is just a specific example of a more general phenomenon, bias-variance trade-off, which is also observed in more "modern" machine learning contexts as well as the "classical" setting of regr...
8,429
How can we explain the "bad reputation" of higher-order polynomials?
It isn't something special about higher-order polynomials: the same effect happens for other sets of functions with many degrees of freedom. For example, let's call a function "special" if its graph consists of a horizontal segment, followed by a segment which slopes upwards at 45 degrees, followed by a straight line s...
8,430
How can we explain the "bad reputation" of higher-order polynomials?
You have more problems than just Runge's phenomenon. Below is an example for fitting a tenth-degree polynomial to 21 data points that follow the curve $$y = \sin(6\pi x^2)$$ Runge's phenomenon: The black broken line is the least-squares fit to these 21 points if there is no noise. You see some larger error towards the...
8,431
How can we explain the "bad reputation" of higher-order polynomials?
It's much worse than just overfitting. The problems with polynomials don't become clear in examples with only 10 or 20 parameters, so I'll examine a function that we want to fit with 200,000 parameters, where 200,000 parameters really is the correct number - we aren't overfitting. Our function is a 10 second audio clip...
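The numerical side of this answer's complaint can be seen without any data at all: the condition number of the Vandermonde (monomial design) matrix explodes with the degree, so least-squares coefficients in the raw monomial basis become numerically meaningless long before 200,000 parameters. A small sketch (point count and degrees are my own illustrative choices):

```python
import numpy as np

def vandermonde_cond(degree, n_points=50):
    # Design matrix with columns x^degree, ..., x, 1 for equally
    # spaced points on [0, 1]
    x = np.linspace(0, 1, n_points)
    V = np.vander(x, degree + 1)
    return np.linalg.cond(V)

# Conditioning deteriorates roughly exponentially with the degree
conds = [vandermonde_cond(d) for d in (5, 10, 20)]
```

This is why serious polynomial work uses orthogonal bases (Chebyshev, Legendre) rather than raw powers of $x$.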
8,432
How can we explain the "bad reputation" of higher-order polynomials?
The ringing is an artifact of using uniformly spaced points, because the Lagrange polynomials for this spacing are not tightly concentrated around the points they are trying to fit. E.g. for 11 evenly-spaced points on the interval [0,1], here is the degree-10 polynomial that is used to fit the value at x=0.4 (is zero a...
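The node-placement point is easy to check numerically on the classic Runge function $1/(1+25x^2)$: interpolating at 11 equispaced nodes rings badly near the interval ends, while the same-degree interpolant at Chebyshev nodes does not. A short sketch (illustrative, using plain least-squares interpolation via `np.polyfit`):

```python
import numpy as np

def runge(x):
    # Runge's classic example function on [-1, 1]
    return 1.0 / (1.0 + 25.0 * x ** 2)

def max_interp_error(nodes):
    # Degree-(n-1) polynomial interpolating Runge's function at the nodes,
    # with the worst-case error measured on a dense grid
    coefs = np.polyfit(nodes, runge(nodes), len(nodes) - 1)
    grid = np.linspace(-1, 1, 1000)
    return np.max(np.abs(np.polyval(coefs, grid) - runge(grid)))

n = 11
uniform = np.linspace(-1, 1, n)
chebyshev = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))

err_uniform = max_interp_error(uniform)   # rings near the endpoints
err_cheb = max_interp_error(chebyshev)    # stays small everywhere
```

Chebyshev nodes cluster toward the endpoints, which is exactly what keeps the Lagrange basis polynomials concentrated around their own node.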
8,433
How can we explain the "bad reputation" of higher-order polynomials?
Is there any mathematical justification as to why (higher degree) polynomial functions overfit the data? Sure. As others mentioned, particularly stachyra and fblundun, it's about the complexity of the hypothesis class relative to the amount of data you have. A highly complex model will always find a way to explain a s...
8,434
How can we explain the "bad reputation" of higher-order polynomials?
Using more terms means more degrees of freedom, hence more overfitting, but the real question is -- why are high order terms like $x^5$ so bad? If you had enough data to estimate one parameter $a$, and no special knowledge about the domain, a practitioner would always start with $ax$ rather than $a x^5$ in their model ...
8,435
How can we explain the "bad reputation" of higher-order polynomials?
The blog post that you are discussing here and the chart have little to do with polynomial regressions per se. The author simply used polynomials to demonstrate the overfitting idea: i.e. fitting to the noise. When you leave no degrees of freedom, your fit become very rigid, i.e. very sensitive to both errors in y's an...
8,436
Is logistic regression a specific case of a neural network?
You have to be very specific about what you mean. We can show mathematically that a certain neural network architecture trained with a certain loss coincides exactly with logistic regression at the optimal parameters. Other neural networks will not. A binary logistic regression makes predictions $\hat{y}$ using this eq...
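The correspondence the answer describes can be sketched directly: a "network" with no hidden layer, a single sigmoid output unit, and cross-entropy loss, trained by gradient descent, is exactly a logistic regression fit by maximum likelihood. A minimal numpy version (toy data of my own; one feature for simplicity):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary data: the class depends on the sign of a noisy copy of x
X = rng.normal(size=(200, 1))
y = (X[:, 0] + rng.normal(0, 0.5, 200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One weight, one bias, sigmoid activation: the whole "network".
# The cross-entropy gradient simplifies to mean((p - y) * x).
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    p = sigmoid(X[:, 0] * w + b)
    w -= lr * np.mean((p - y) * X[:, 0])
    b -= lr * np.mean(p - y)

# The fitted weight points in the data-generating direction, and
# training accuracy is well above chance.
acc = np.mean((sigmoid(X[:, 0] * w + b) > 0.5) == (y == 1))
```

With any hidden layer, a non-sigmoid output, or a different loss (e.g. squared error), the optimum no longer coincides with the logistic-regression MLE, which is the caveat the answer is making.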
8,437
Is logistic regression a specific case of a neural network?
Architecture-wise, yes, it's a special case of neural net. A logistic regression model can be constructed via neural network libraries. In the end, both have neurons having the same computations if the same activation and loss is chosen. This makes it a special NN, but since logistic regression is the simplest model, i...
8,438
Is logistic regression a specific case of a neural network?
If you have logistic activation function in the output layer and you are trying to maximise the log-likelihood of observations belonging to their corresponding classes (e.g. via its negative as the cost function), then yes, each output-layer neuron can be said to be an implementation of a logistic model over its inputs...
8,439
Why does finding small effects in large studies indicate publication bias?
The answers here are good, +1 to all. I just wanted to show how this effect might look in funnel plot terms in an extreme case. Below I simulate a small effect as $N(.01, .1)$ and draw samples between 2 and 2000 observations in size. The grey points in the plot would not be published under a strict $p < .05$ regime. T...
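A simulation in the spirit of the answer (numbers of studies and the significance filter are my own illustrative choices; the effect distribution $N(.01, .1)$ and the sample-size range follow the answer): if only studies with $p < .05$ are "published", the small-$n$ published studies necessarily report inflated effects.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

true_mean, sd = 0.01, 0.1   # tiny true effect, as in the answer
published = []

for _ in range(2000):
    n = int(rng.integers(5, 2001))
    sample = rng.normal(true_mean, sd, n)
    se = sample.std(ddof=1) / math.sqrt(n)
    z = sample.mean() / se
    # Two-sided p-value from a normal approximation
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    if p < 0.05:                    # the publication filter
        published.append((n, sample.mean()))

ns = np.array([n for n, _ in published])
effects = np.array([m for _, m in published])

# Among published studies, small samples show much larger effects,
# because only lucky large deviations clear the significance bar.
mean_small = np.abs(effects[ns < 100]).mean()
mean_large = np.abs(effects[ns >= 1000]).mean()
```

Plotting `effects` against `1 / np.sqrt(ns)` reproduces the asymmetric funnel the answer describes.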
8,440
Why does finding small effects in large studies indicate publication bias?
First, we need to think about what "publication bias" is, and how it will affect what actually makes it into the literature. A fairly simple model for publication bias is that we collect some data and if $p < 0.05$, we publish. Otherwise, we don't. So how does this affect what we see in the literature? Well, for one, it ...
8,441
Why does finding small effects in large studies indicate publication bias?
Read this statement a different way: If there is no publication bias, effect size should be independent of study size. That is, if you are studying one phenomenon, the effect size is a property of the phenomenon, not the sample/study. Estimates of effect size could (and will) vary across studies, but if there is a syst...
8,442
Should I teach Bayesian or frequentist statistics first?
Both Bayesian statistics and frequentist statistics are based on probability theory, but I'd say that the former relies more heavily on the theory from the start. On the other hand, surely the concept of a credible interval is more intuitive than that of a confidence interval, once the student has a good understanding...
8,443
Should I teach Bayesian or frequentist statistics first?
Bayesian and frequentist ask different questions. Bayesian asks what parameter values are credible, given the observed data. Frequentist asks about the probability of imaginary simulated data if some hypothetical parameter values were true. Frequentist decisions are motivated by controlling errors, Bayesian decisions a...
8,444
Should I teach Bayesian or frequentist statistics first?
This question risks being opinion-based, so I'll try to be really brief with my opinion, then give you a book suggestion. Sometimes it's worth taking a particular approach because it's the approach that a particularly good book takes. I would agree that Bayesian statistics are more intuitive. The Confidence Interval ve...
8,445
Should I teach Bayesian or frequentist statistics first?
I have been taught the frequentist approach first, then the Bayesian one. I am not a professional statistician. I have to admit I didn't find my prior knowledge of the frequentist approach to be decisively useful in understanding the Bayesian approach. I would dare to say it depends on what concrete applications you w...
8,446
Should I teach Bayesian or frequentist statistics first?
The Bayesian framework is tightly coupled to general critical thinking skills. It's what you need in the following situations: You think about applying for a competitive job. What are your chances of getting in? What payoff do you expect from applying? A headline tells you mobile phones cause cancer in humans in the l...
8,447
Should I teach Bayesian or frequentist statistics first?
I would stay away from Bayesian, follow the giants. The Soviets had an excellent book series for secondary school students, roughly translated into English as "'Quant' little library." Kolmogorov contributed a book with co-authors titled "Introduction to a probability theory." I'm not sure it has ever been translated into ...
8,448
Should I teach Bayesian or frequentist statistics first?
No one has mentioned likelihood, which is foundational to Bayesian statistics. An argument in favor of teaching Bayes first is that the flow from probability, to likelihood, to Bayes, is pretty seamless. Bayes can be motivated from likelihood by noting that (i) the likelihood function looks (and acts) like a probabilit...
8,449
Should I teach Bayesian or frequentist statistics first?
Are you teaching for fun and insight or for practical use? If it's about teaching and understanding, I'd go Bayes. If for practical purposes, I'd definitely go Frequentist. In many fields -and I suppose most fields- of natural sciences, people are used to publish their papers with a p-value. Your "boys" will have to r...
8,450
What is "reduced-rank regression" all about?
1. What is reduced-rank regression (RRR)? Consider multivariate multiple linear regression, i.e. regression with $p$ independent variables and $q$ dependent variables. Let $\mathbf X$ and $\mathbf Y$ be centered predictor ($n \times p$) and response ($n\times q$) datasets. Then usual ordinary least squares (OLS) regres...
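A numerical sketch of the setup in this answer (dimensions and noise level are my own illustrative choices): fit the full-rank OLS coefficient matrix, then obtain the rank-$r$ RRR solution by projecting onto the top $r$ right singular vectors of the fitted values $\mathbf X \hat{\mathbf B}_{OLS}$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, q, r = 100, 6, 5, 2

# Responses generated from a genuinely rank-2 coefficient matrix plus noise
X = rng.normal(size=(n, p))
B_true = rng.normal(size=(p, r)) @ rng.normal(size=(r, q))
Y = X @ B_true + 0.1 * rng.normal(size=(n, q))

# Full-rank OLS solution for all q responses at once
B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Reduced-rank solution: project onto the top-r right singular
# vectors of the OLS fitted values
Y_hat = X @ B_ols
_, _, Vt = np.linalg.svd(Y_hat, full_matrices=False)
V_r = Vt[:r].T
B_rrr = B_ols @ V_r @ V_r.T     # p x q matrix of rank r

err = np.linalg.norm(B_rrr - B_true)
```

Because the true coefficient matrix really is rank 2 here, the rank constraint costs essentially nothing in fit while cutting the effective number of parameters from $pq$ to $r(p + q - r)$.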
8,451
What is "reduced-rank regression" all about?
Reduced Rank Regression is a model where there is not a single Y outcome, but multiple Y outcomes. Of course, you can just fit a separate multivariate linear regression for each response, but this seems inefficient when the functional relationship between the predictors and each response is clearly similar. See this ka...
8,452
Geometric interpretation of multiple correlation coefficient $R$ and coefficient of determination $R^2$
If there is a constant term in the model then $\mathbf{1_n}$ lies in the column space of $\mathbf{X}$ (as does $\bar{Y}\mathbf{1_n}$, which will come in useful later). The fitted $\mathbf{\hat{Y}}$ is the orthogonal projection of the observed $\mathbf{Y}$ onto the flat formed by that column space. This means the vector...
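The geometric identity in this answer — with an intercept in the model, the multiple correlation $R$ is the cosine of the angle between the centered $\mathbf Y$ and the centered $\hat{\mathbf Y}$ — can be verified numerically on any toy regression (data here is an arbitrary example of my own):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 60

# Regression with an intercept column and two predictors
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

# R^2 the usual way: 1 - SS_res / SS_tot
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# R as the cosine of the angle between the centered observed and
# centered fitted vectors
yc = y - y.mean()
yhc = y_hat - y_hat.mean()
cos_angle = yc @ yhc / (np.linalg.norm(yc) * np.linalg.norm(yhc))
# cos_angle ** 2 equals r2 up to floating-point error
```

The identity depends on $\mathbf{1_n}$ being in the column space of $\mathbf X$; drop the intercept and the two quantities come apart.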
8,453
How can the regression error term ever be correlated with the explanatory variables?
You are conflating two types of "error" term. Wikipedia actually has an article devoted to this distinction between errors and residuals. In an OLS regression, the residuals (your estimates of the error or disturbance term) $\hat \varepsilon$ are indeed guaranteed to be uncorrelated with the predictor variables, assumi...
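The errors-vs-residuals distinction shows up clearly in a simulation (an omitted-variable-style example of my own construction): the true disturbance is correlated with the predictor, yet the OLS residuals are orthogonal to it by construction, and the slope estimate absorbs the difference as bias.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500

x = rng.normal(size=n)
# The true disturbance is correlated with x (e.g. an omitted variable)
u = 0.5 * x + rng.normal(size=n)
y = 1.0 + 2.0 * x + u       # true slope is 2

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta

# Sample residuals are orthogonal to x by construction...
corr_resid = np.corrcoef(x, residuals)[0, 1]
# ...but the unobservable errors u genuinely correlate with x,
# which is why beta[1] is biased upward (toward 2.5 here).
corr_error = np.corrcoef(x, u)[0, 1]
```

So "residuals uncorrelated with predictors" is an algebraic fact about the fit, while "errors uncorrelated with predictors" is a substantive assumption about the world that the fit can never check.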
8,454
How can the regression error term ever be correlated with the explanatory variables?
Simple example: Let $x_{i,1}$ be the number of burgers I buy on visit $i$ Let $x_{i,2}$ be the number of buns I buy. Let $b_1$ be the price of a burger Let $b_2$ be the price of a bun. Independent of my burger and bun purchases, let me spend a random amount $a + \epsilon_i$ where $a$ is a scalar and $\epsilon_i$ is a ...
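This example is easy to simulate (my sketch; the prices and purchase distributions are invented): regressing spending on burgers alone pushes the bun term into the error, which then correlates with the regressor and biases the slope upward; including both regressors recovers the true price:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
b1, b2 = 4.0, 1.0                         # hypothetical burger and bun prices

burgers = rng.poisson(3, size=n)
buns = burgers + rng.poisson(1, size=n)   # buns strongly track burgers
spend = b1 * burgers + b2 * buns + rng.normal(0, 1, size=n)

# Regress spend on burgers only: the omitted bun term sits in the error,
# which is therefore correlated with the regressor -> biased slope (~5, not 4).
X = np.column_stack([np.ones(n), burgers])
slope_short = np.linalg.lstsq(X, spend, rcond=None)[0][1]

# With both regressors included, the burger coefficient is consistent again.
X_full = np.column_stack([np.ones(n), burgers, buns])
slope_full = np.linalg.lstsq(X_full, spend, rcond=None)[0][1]
```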
8,455
How can the regression error term ever be correlated with the explanatory variables?
Suppose that we're building a regression of the weight of an animal on its height. Clearly, the weight of a dolphin would be measured differently (in different procedure and using different instruments) from the weight of an elephant or a snake. This means that the model errors will be dependent on the height, i.e. exp...
8,456
Produce a list of variable name in a for loop, then assign values to them
You are looking for assign(). for(i in 1:3){ assign(paste("a", i, sep = ""), i) } gives > ls() [1] "a1" "a2" "a3" and > a1 [1] 1 > a2 [1] 2 > a3 [1] 3 Update I agree that using loops is (very often) bad R coding style (see discussion above). Using list2env() (thanks to @mbq for mentioning i...
8,457
Produce a list of variable name in a for loop, then assign values to them
If the values are in a vector, the loop is not necessary: vals <- rnorm(3) n <- length(vals) lhs <- paste("a", 1:n, sep="") rhs <- paste("vals[",1:n,"]", sep="") eq <- paste(paste(lhs, rhs, sep="<-"), collapse=";") eval(parse(text=eq)) As a side note, this is the reason why I love R.
8,458
Are there default functions for discrete uniform distributions in R?
As nico wrote, they're not implemented in R. Assuming we work in 1..k, those functions should look like: For random generation: rdu<-function(n,k) sample(1:k,n,replace=T) PDF: ddu<-function(x,k) ifelse(x>=1 & x<=k & round(x)==x,1/k,0) CDF: pdu<-function(x,k) ifelse(x<1,0,ifelse(x<=k,floor(x)/k,1))
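For comparison, rough Python equivalents of these three functions (my sketch, using the same 1..k convention; not from the answer):

```python
import math
import random

def rdu(n, k, rng=random):
    """n draws from the discrete uniform distribution on 1..k."""
    return [rng.randint(1, k) for _ in range(n)]

def ddu(x, k):
    """PMF: mass 1/k on each integer in 1..k, zero elsewhere."""
    return 1.0 / k if 1 <= x <= k and float(x).is_integer() else 0.0

def pdu(x, k):
    """CDF: steps of height 1/k at each integer in 1..k."""
    if x < 1:
        return 0.0
    return min(math.floor(x), k) / k
```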
8,459
Are there default functions for discrete uniform distributions in R?
Here is the code for the discrete uniform distribution in the range [min, max], adapted from mbq's post: dunifdisc<-function(x, min=0, max=1) ifelse(x>=min & x<=max & round(x)==x, 1/(max-min+1), 0) punifdisc<-function(q, min=0, max=1) ifelse(q<min, 0, ifelse(q>=max, 1, (floor(q)-min+1)/(max-min+1))) qunifdisc<-function...
8,460
Are there default functions for discrete uniform distributions in R?
The CRAN Task View: Probability Distributions page says: The discrete uniform distribution can be easily obtained with the basic functions. I guess something along the lines of this should do: a <- round(runif(1000, min=0, max=100)) EDIT As csgillespie pointed out, this is not correct... a <- ceiling(runif(1000, min=...
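The reason round() is wrong here (my note, not from the answer): it gives the two endpoints only half-width bins, so they receive half the probability of the interior integers. A deterministic check of the bin widths:

```python
def round_bin_width(j, lo=0.0, hi=100.0):
    # Width of the part of [lo, hi] that round() sends to integer j.
    # (Ties at the .5 boundaries have measure zero and are ignored.)
    a, b = max(lo, j - 0.5), min(hi, j + 0.5)
    return max(0.0, b - a)

# Interior integers get a full unit of mass, the endpoints only half,
# so round(runif(n, 0, 100)) is NOT uniform on 0..100.
widths = [round_bin_width(j) for j in range(101)]
```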
8,461
Are there default functions for discrete uniform distributions in R?
This function might be what you are looking for: https://purrr.tidyverse.org/reference/rdunif.html rdunif(n, b, a = 1)
8,462
Why is Average Treatment Effect different from Average Treatment effect on the Treated?
The Average Treatment Effect (ATE) and the Average Treatment Effect on Treated (ATT) are commonly defined across the different groups of individuals. In addition, ATE and ATT are often different because they might measure outcomes ($Y$) that are not affected by the treatment $D$ in the same manner. First, some addit...
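A toy numerical illustration (my addition; the potential outcomes are invented): when individual effects differ between the treated and untreated groups, ATE and ATT come apart:

```python
# Hypothetical potential outcomes for six units; D marks who was actually treated.
Y1 = [5, 7, 6, 9, 4, 8]   # outcome if treated
Y0 = [3, 6, 6, 4, 4, 5]   # outcome if untreated
D  = [1, 1, 1, 0, 0, 0]

effects = [y1 - y0 for y1, y0 in zip(Y1, Y0)]        # [2, 1, 0, 5, 0, 3]
ATE = sum(effects) / len(effects)                    # average over everyone
ATT = sum(e for e, d in zip(effects, D) if d) / sum(D)  # average over the treated only
```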
8,463
Why is Average Treatment Effect different from Average Treatment effect on the Treated?
ATE is the average treatment effect, and ATT is the average treatment effect on the treated. The ATT is the effect of the treatment actually applied. Medical studies typically use the ATT as the designated quantity of interest because they often only care about the causal effect of drugs for patients that receive or w...
8,464
Why is Average Treatment Effect different from Average Treatment effect on the Treated?
There seems to be something obvious not discussed above. If the treatment effect is constant over individuals $Y_i^1-Y_i^0=c$ for every $i$ then ATT and ATE as defined above should be equal. This may be seen as there being no "effect modifiers". My intuition is that balancing effect modifiers between groups would also ...
8,465
How to include an interaction term in GAM?
The "a" in "gam" stands for "additive", which means no interactions, so if you fit interactions you are really not fitting a gam model any more. That said, there are ways to get some interaction-like terms within the additive terms in a gam; you are already using one of those by using the by argument to s. You could tr...
8,466
How to include an interaction term in GAM?
For two continuous variables then you can do what you want (whether this is an interaction or not I'll leave others to discuss as per comments to @Greg's Answer) using: mod1 <- gam(Temp ~ Loc + s(Doy, bs = "cc", k = 5) + s(Doy, bs = "cc", by = Loc, k = 5, m = 1) + s(T...
8,467
Generating correlated binomial random variables
Binomial variables are usually created by summing independent Bernoulli variables. Let's see whether we can start with a pair of correlated Bernoulli variables $(X,Y)$ and do the same thing. Suppose $X$ is a Bernoulli$(p)$ variable (that is, $\Pr(X=1)=p$ and $\Pr(X=0)=1-p$) and $Y$ is a Bernoulli$(q)$ variable. To pi...
8,468
Generating correlated binomial random variables
A Python (python3) implementation of @whuber's solution: import numpy as np def bernoulli_sample(n=100, p=0.5, q=0.5, rho=0): p1 = rho * np.sqrt(p * q * (1 - p) * (1 - q)) + (1 - p) * (1 - q) p2 = 1 - p - p1 p3 = 1 - q - p1 p4 = p1 + p + q - 1 samples = np.random.choice([0, 1, 2, 3], size=n, repla...
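The code above is cut off mid-line; here is a self-contained sketch of the same construction (the four cell probabilities follow whuber's answer; the decoding of the four cells back into the $(X,Y)$ pair is my reconstruction and should be double-checked):

```python
import numpy as np

def bernoulli_pairs(n, p=0.5, q=0.5, rho=0.0, seed=None):
    rng = np.random.default_rng(seed)
    # Cell probabilities of the joint distribution of (X, Y).
    # Not every rho is feasible for given p, q: all four must be nonnegative.
    p00 = rho * np.sqrt(p * q * (1 - p) * (1 - q)) + (1 - p) * (1 - q)
    p01 = 1 - p - p00          # X=0, Y=1
    p10 = 1 - q - p00          # X=1, Y=0
    p11 = p00 + p + q - 1      # X=1, Y=1
    cells = rng.choice(4, size=n, p=[p00, p01, p10, p11])
    x = (cells >= 2).astype(int)   # cells 2 and 3 have X=1
    y = (cells % 2).astype(int)    # cells 1 and 3 have Y=1
    return x, y

x, y = bernoulli_pairs(200_000, p=0.6, q=0.4, rho=0.3, seed=0)
# Summing k such pairs componentwise gives correlated Binomial(k, p), Binomial(k, q) draws.
```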
8,469
Generating correlated binomial random variables
Using the method described by whuber in his excellent answer, I have programmed a function that generates pairs of correlated binomial random variables using the standard syntax for distributions in R. You can call this function to generate any desired number of correlated Bernoulli random variables, with specified pr...
8,470
Generating correlated binomial random variables
Here is another R implementation that uses the linear transformation mentioned in the original post. set.seed(64378) corr_binomial <- function(n, k, p, rho) { z <- rbinom(n, k , p) x_raw <- rbinom(n, k , p) x <- z*rho + x_raw*(1 - rho^2)^0.5 observed_corr <- round(cor(x, z), 3) scatter_z_x <- plot(z, x) abline(lm(x ~ ...
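One caveat worth flagging (my note, not from the answer): the blend x = ρ·z + √(1−ρ²)·x_raw does hit the target correlation, but the blended values are generally not integers, so x is no longer binomial even though it correlates with z as intended. A quick NumPy check:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, p, rho = 100_000, 10, 0.5, 0.6
z = rng.binomial(k, p, size=n)
x_raw = rng.binomial(k, p, size=n)
x = z * rho + x_raw * np.sqrt(1 - rho ** 2)

corr = np.corrcoef(z, x)[0, 1]                    # close to rho, as intended
frac_integer = np.mean(np.isclose(x, np.round(x)))  # most blended values are not integers
```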
8,471
Why use gradient descent with neural networks?
Because we can't. The optimization surface $S(\mathbf{w})$ as a function of the weights $\mathbf{w}$ is nonlinear and no closed form solution exists for $\frac{d S(\mathbf{w})}{d\mathbf{w}}=0$. Gradient descent, by definition, descends. If you reach a stationary point after descending, it has to be a (local) minimum or...
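To make this concrete, here is a toy sketch of mine (the quartic stands in for a nonconvex network loss): plain gradient descent on S(w) = w⁴ − 3w² + w settles in whichever basin the starting point belongs to, which here is a local rather than global minimum:

```python
# Gradient descent on a 1-D nonconvex surface with two minima.
def S(w):
    return w ** 4 - 3 * w ** 2 + w

def dS(w):
    return 4 * w ** 3 - 6 * w + 1

w, lr = 2.0, 0.01
for _ in range(2000):
    w -= lr * dS(w)

# Descent stops where the gradient vanishes: the local minimum near w ~ 1.13.
# The global minimum near w ~ -1.30 is in a different basin and is never reached.
```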
8,472
Why use gradient descent with neural networks?
Regarding Marc Claesen's answer, I believe that gradient descent could stop at a local maximum in situations where you initialize to a local maximum or you just happen to end up there due to bad luck or a mistuned rate parameter. The local maximum would have zero gradient and the algorithm would think it had converged...
8,473
Why use gradient descent with neural networks?
In Newton-type methods, at each step one solves $\frac{d(\text{error})}{dw}=0$ for a linearized or approximate version of the problem. Then the problem is linearized about the new point, and the process repeats until convergence. Some people have done it for neural nets, but it has the following drawbacks, One needs t...
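For contrast with gradient descent, a 1-D toy sketch of mine (same kind of quartic surface as a stand-in for a loss): a plain Newton iteration solves d(error)/dw = 0 and happily converges to the nearest stationary point, which here is a local maximum, illustrating why raw Newton steps need modification:

```python
def dE(w):            # gradient of the toy surface E(w) = w^4 - 3w^2 + w
    return 4 * w ** 3 - 6 * w + 1

def d2E(w):           # second derivative (a scalar "Hessian")
    return 12 * w ** 2 - 6

w = 0.1               # start near the local MAXIMUM between the two minima
for _ in range(50):
    w -= dE(w) / d2E(w)

# Newton has converged to a stationary point with d2E(w) < 0: a maximum, not a minimum.
```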
8,474
Metrics for evaluating ranking algorithms
I am actually looking for the same answer, however I should be able to at least partially answer your question. All of the metrics that you have mentioned have different traits and, unfortunately, the one you should pick depends on what you actually would like to measure. Here are some things that it would be worth to ...
8,475
Metrics for evaluating ranking algorithms
In many cases where you apply ranking algorithms (e.g. Google search, Amazon product recommendation) you have hundreds or thousands of results. The user only wants to look at the top ~20 or so. So the rest is completely irrelevant. To phrase it clearly: Only the top $k$ elements are relevant If this is true for your ...
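If only the top k matter, precision@k is the simplest metric that captures it. A small sketch of mine (toy 0/1 relevance lists, in ranked order):

```python
def precision_at_k(relevances, k):
    """relevances: 0/1 list in ranked order; fraction of the top k that are relevant."""
    return sum(relevances[:k]) / k

# Two rankings of the same six items: only the head of the list matters.
good_head = [1, 1, 1, 0, 0, 0]
bad_head  = [0, 0, 0, 1, 1, 1]
```

At k equal to the full list length the two rankings are indistinguishable, which is exactly why a top-k metric is needed here.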
8,476
Metrics for evaluating ranking algorithms
I recently had to choose a metric for evaluating multilabel ranking algorithms and got to this subject, which was really helpful. Here are some additions to stpk's answer, which were helpful for making a choice. MAP can be adapted to multilabel problems, at the cost of an approximation MAP does not need to be computed...
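For reference, here is a minimal average-precision implementation (my sketch of the standard binary-relevance definition, not code from the answer; MAP is simply its mean over queries):

```python
def average_precision(relevances):
    """relevances: 0/1 list in ranked order.
    Mean of precision@i taken over the positions i of the relevant items."""
    hits, precisions = 0, []
    for i, rel in enumerate(relevances, start=1):
        if rel:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / len(precisions) if precisions else 0.0

ap = average_precision([1, 0, 1])   # (1/1 + 2/3) / 2
```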
8,477
White Noise in Statistics
TL;DR The answer is NO, it doesn't have to be normal; YES, it can be other distributions. Colors of the noise Let's talk about colors of the noise. The noise that an infant makes during air travel is not white. It has color. The noise that an airplane engine makes is also not white, but it's not as colored as the ...
8,478
White Noise in Statistics
White noise simply means that the sequence of samples are uncorrelated with zero mean and finite variance. There is no restriction on the distribution from which the samples are drawn. Now if the samples happen to be drawn from a Normal distribution, you have a special type of white noise called Gaussian white noise.
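A quick check of that definition (my sketch): i.i.d. uniform draws are nowhere near normal, yet they qualify as white noise — zero mean, finite variance, and negligible autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
# Uniform on [-1, 1]: zero mean, finite variance, decidedly non-normal --
# yet as an i.i.d. sequence it is white noise.
u = rng.uniform(-1, 1, size=n)

lag1_autocorr = np.corrcoef(u[:-1], u[1:])[0, 1]
mean = u.mean()
```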
8,479
Detecting significant predictors out of many independent variables
I would recommend trying a glm with lasso regularization. This adds a penalty to the model for the number of variables, and as you increase the penalty, the number of variables in the model will decrease. You should use cross-validation to select the value of the penalty parameter. If you have R, I suggest using the glmne...
8,480
Detecting significant predictors out of many independent variables
To expand on Zach's answer (+1), if you use the LASSO method in linear regression, you are trying to minimize the sum of a quadratic function and an absolute value function, i.e.: $$\min_{\beta} \; \; (Y-X\beta)^{T}(Y-X\beta) + \sum_i |\beta_i| $$ The first part is quadratic in $\beta$ (gold below), and the second is a squa...
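In the special case of an orthonormal design ($X^TX=I$) this objective separates coordinate-wise, and its solution is soft-thresholding of the OLS coefficients (with a penalty $\lambda\sum_i|\beta_i|$ the threshold is $\lambda/2$). A small sketch of mine, with t standing in for that threshold — this is exactly why the lasso zeroes out weak predictors:

```python
import numpy as np

def soft_threshold(b, t):
    # Coordinate-wise lasso solution under an orthonormal design:
    # shrink each OLS coefficient toward zero and clip the small ones to exactly 0.
    return np.sign(b) * np.maximum(np.abs(b) - t, 0.0)

ols_coefs = np.array([3.0, -1.5, 0.4, -0.1])
light = soft_threshold(ols_coefs, 0.2)   # small penalty: weakest coefficient already zeroed
heavy = soft_threshold(ols_coefs, 1.0)   # large penalty: only the strong two survive
```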
Detecting significant predictors out of many independent variables
To expand on Zach's answer (+1), if you use the LASSO method in linear regression, you are trying to minimize the sum of a quadratic function and an absolute value function, i.e.: $$\min_{\beta} \; \; (Y-X
Detecting significant predictors out of many independent variables To expand on Zach's answer (+1), if you use the LASSO method in linear regression, you are trying to minimize the sum of a quadratic function and an absolute value function, i.e.: $$\min_{\beta} \; \; (Y-X\beta)^{T}(Y-X\beta) + \sum_i |\beta_i| $$ The first ...
Detecting significant predictors out of many independent variables To expand on Zach's answer (+1), if you use the LASSO method in linear regression, you are trying to minimize the sum of a quadratic function and an absolute value function, i.e.: $$\min_{\beta} \; \; (Y-X
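The kink of the absolute-value term at zero is what produces exact zeros. In one dimension this can be checked directly (a minimal sketch of my own, not from the answer): the minimizer of $\tfrac12(b-z)^2+\lambda|b|$ has a closed form that is exactly 0 whenever $|z|\le\lambda$.

```python
import numpy as np

def argmin_1d(z, lam):
    # closed-form minimizer of f(b) = 0.5*(b - z)**2 + lam*abs(b)
    # ("soft thresholding": exactly 0 whenever |z| <= lam)
    return np.sign(z) * max(abs(z) - lam, 0.0)

# brute-force check of the closed form on a fine grid
grid = np.linspace(-4.0, 4.0, 800001)
for z, lam in [(0.7, 1.0), (2.5, 1.0), (-3.0, 0.5)]:
    f = 0.5 * (grid - z) ** 2 + lam * np.abs(grid)
    print(z, lam, argmin_1d(z, lam), grid[np.argmin(f)])
```

The quadratic term alone would put the minimum at $b=z$; adding the absolute-value term drags it to the kink at 0 whenever the pull $|z|$ is weaker than the penalty $\lambda$.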
8,481
Detecting significant predictors out of many independent variables
What is your prior belief on how many predictors are likely to be important? Is it likely that most of them have an exactly zero effect, or that everything affects the outcome, with some variables mattering less than others? And how is the health status related to the predictive task? If you believe that only a few variables are i...
Detecting significant predictors out of many independent variables
What is your prior belief on how many predictors are likely to be important? Is it likely that most of them have an exactly zero effect, or that everything affects the outcome, some variables only les
Detecting significant predictors out of many independent variables What is your prior belief on how many predictors are likely to be important? Is it likely that most of them have an exactly zero effect, or that everything affects the outcome, with some variables mattering less than others? And how is the health status related t...
Detecting significant predictors out of many independent variables What is your prior belief on how many predictors are likely to be important? Is it likely that most of them have an exactly zero effect, or that everything affects the outcome, some variables only les
8,482
Detecting significant predictors out of many independent variables
Whatever you do, it is worthwhile getting bootstrap confidence intervals on the ranks of importance of the predictors to show that you can really do this with your dataset. I am doubtful that any of the methods can reliably find the "true" predictors.
Detecting significant predictors out of many independent variables
Whatever you do, it is worthwhile getting bootstrap confidence intervals on the ranks of importance of the predictors to show that you can really do this with your dataset. I am doubtful that any of
Detecting significant predictors out of many independent variables Whatever you do, it is worthwhile getting bootstrap confidence intervals on the ranks of importance of the predictors to show that you can really do this with your dataset. I am doubtful that any of the methods can reliably find the "true" predictors.
Detecting significant predictors out of many independent variables Whatever you do, it is worthwhile getting bootstrap confidence intervals on the ranks of importance of the predictors to show that you can really do this with your dataset. I am doubtful that any of
8,483
Detecting significant predictors out of many independent variables
I remember Lasso regression doesn't perform very well when $n \leq p$, but I'm not sure. I think in this case Elastic Net is more appropriate for variable selection.
Detecting significant predictors out of many independent variables
I remember Lasso regression doesn't perform very well when $n \leq p$, but I'm not sure. I think in this case Elastic Net is more appropriate for variable selection.
Detecting significant predictors out of many independent variables I remember Lasso regression doesn't perform very well when $n \leq p$, but I'm not sure. I think in this case Elastic Net is more appropriate for variable selection.
Detecting significant predictors out of many independent variables I remember Lasso regression doesn't perform very well when $n \leq p$, but I'm not sure. I think in this case Elastic Net is more appropriate for variable selection.
8,484
How does quantile regression "work"?
I recommend Koenker & Hallock (2001, Journal of Economic Perspectives) and Koenker's textbook Quantile Regression. The starting point is the observation that the median of a data set minimizes the sum of absolute errors. That is, the 50% quantile is a solution to a particular optimization problem (to find the value th...
How does quantile regression "work"?
I recommend Koenker & Hallock (2001, Journal of Economic Perspectives) and Koenker's textbook Quantile Regression. The starting point is the observation that the median of a data set minimizes the su
How does quantile regression "work"? I recommend Koenker & Hallock (2001, Journal of Economic Perspectives) and Koenker's textbook Quantile Regression. The starting point is the observation that the median of a data set minimizes the sum of absolute errors. That is, the 50% quantile is a solution to a particular optim...
How does quantile regression "work"? I recommend Koenker & Hallock (2001, Journal of Economic Perspectives) and Koenker's textbook Quantile Regression. The starting point is the observation that the median of a data set minimizes the su
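The claim that the median minimizes the sum of absolute errors is easy to verify numerically (my own Python sketch; the cited references give the formal argument):

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.exponential(size=501)                  # a skewed sample, odd n

# brute force: evaluate the sum of absolute errors over a fine grid of candidates
grid = np.linspace(y.min(), y.max(), 20001)
sae = np.abs(y[:, None] - grid[None, :]).sum(axis=0)
print(grid[np.argmin(sae)], np.median(y))      # agree up to the grid spacing
```

Replacing the absolute error with the squared error in the same experiment would move the minimizer to the sample mean instead.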
8,485
How does quantile regression "work"?
The basic idea of quantile regression comes from the fact that the analyst is interested in the distribution of the data rather than just its mean. Let's start with the mean. Mean regression fits a line of the form $y=X\beta$ to the mean of the data. In other words, $E(Y|X=x)=x\beta$. A general approach to estimate this line is ...
How does quantile regression "work"?
The basic idea of quantile regression comes from the fact that the analyst is interested in the distribution of the data rather than just its mean. Let's start with the mean. Mean regression fits a line of the
How does quantile regression "work"? The basic idea of quantile regression comes from the fact that the analyst is interested in the distribution of the data rather than just its mean. Let's start with the mean. Mean regression fits a line of the form $y=X\beta$ to the mean of the data. In other words, $E(Y|X=x)=x\beta$. A gener...
How does quantile regression "work"? The basic idea of quantile regression comes from the fact that the analyst is interested in the distribution of the data rather than just its mean. Let's start with the mean. Mean regression fits a line of the
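The same loss-function view extends beyond the median: minimizing Koenker's check function $\rho_\tau(u)=u(\tau-\mathbf{1}\{u<0\})$ over a constant recovers the $\tau$-th sample quantile. A small numerical check of my own (not from the answer):

```python
import numpy as np

def check_loss(u, tau):
    # Koenker's check function rho_tau(u) = u * (tau - 1{u < 0})
    return u * (tau - (u < 0))

rng = np.random.default_rng(3)
y = rng.normal(size=2001)

# brute-force minimization over a grid of constant fits c
grid = np.linspace(-3.0, 3.0, 6001)
for tau in (0.1, 0.5, 0.9):
    loss = check_loss(y[:, None] - grid[None, :], tau).sum(axis=0)
    print(tau, grid[np.argmin(loss)], np.quantile(y, tau))  # nearly identical
```

Quantile regression replaces the constant $c$ with a linear predictor $x\beta$ and minimizes the same loss over $\beta$.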
8,486
Fit a sinusoidal term to data
If you just want a good estimate of $\omega$ and don't care much about its standard error: ssp <- spectrum(y) per <- 1/ssp$freq[ssp$spec==max(ssp$spec)] reslm <- lm(y ~ sin(2*pi/per*t)+cos(2*pi/per*t)) summary(reslm) rg <- diff(range(y)) plot(y~t,ylim=c(min(y)-0.1*rg,max(y)+0.1*rg)) lines(fitted(reslm)~t,col=4,lty=2...
Fit a sinusoidal term to data
If you just want a good estimate of $\omega$ and don't care much about its standard error: ssp <- spectrum(y) per <- 1/ssp$freq[ssp$spec==max(ssp$spec)] reslm <- lm(y ~ sin(2*pi/per*t)+cos(2*pi/per*
Fit a sinusoidal term to data If you just want a good estimate of $\omega$ and don't care much about its standard error: ssp <- spectrum(y) per <- 1/ssp$freq[ssp$spec==max(ssp$spec)] reslm <- lm(y ~ sin(2*pi/per*t)+cos(2*pi/per*t)) summary(reslm) rg <- diff(range(y)) plot(y~t,ylim=c(min(y)-0.1*rg,max(y)+0.1*rg)) lin...
Fit a sinusoidal term to data If you just want a good estimate of $\omega$ and don't care much about its standard error: ssp <- spectrum(y) per <- 1/ssp$freq[ssp$spec==max(ssp$spec)] reslm <- lm(y ~ sin(2*pi/per*t)+cos(2*pi/per*
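The two-step logic of this answer (periodogram to find the dominant frequency, then ordinary least squares on sine and cosine terms) ports directly to other environments. A self-contained Python/NumPy sketch with simulated data (the data and names here are my own, not from the answer):

```python
import numpy as np

# simulated signal: y = C + A*sin(2*pi*t/period + phase) + noise
rng = np.random.default_rng(4)
t = np.arange(200.0)
y = 1.0 + 2.0 * np.sin(2 * np.pi * t / 20.0 + 0.6) + rng.normal(scale=0.3, size=t.size)

# step 1: dominant frequency from the periodogram (the role of R's spectrum())
spec = np.abs(np.fft.rfft(y - y.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=1.0)
per = 1.0 / freqs[np.argmax(spec[1:]) + 1]      # skip the zero-frequency bin

# step 2: with the period fixed, the model is linear in the remaining parameters
Z = np.column_stack([np.ones_like(t),
                     np.sin(2 * np.pi * t / per),
                     np.cos(2 * np.pi * t / per)])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
print(per, np.hypot(coef[1], coef[2]))          # period near 20, amplitude near 2
```

As in the R version, the only nonlinear parameter ($\omega$, equivalently the period) is pinned down by the spectrum, after which everything else is an ordinary linear regression.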
8,487
Fit a sinusoidal term to data
As @Stefan suggested, different starting values do seem to improve the fit dramatically. I eyeballed the data to suggest that omega should be about $2 \pi / 20$, since the peaks looked like they were about 20 units apart. When I put that into nls's start list, I got a curve that was much more reasonable, although it s...
Fit a sinusoidal term to data
As @Stefan suggested, different starting values do seem to improve the fit dramatically. I eyeballed the data to suggest that omega should be about $2 \pi / 20$, since the peaks looked like they were
Fit a sinusoidal term to data As @Stefan suggested, different starting values do seem to improve the fit dramatically. I eyeballed the data to suggest that omega should be about $2 \pi / 20$, since the peaks looked like they were about 20 units apart. When I put that into nls's start list, I got a curve that was much ...
Fit a sinusoidal term to data As @Stefan suggested, different starting values do seem to improve the fit dramatically. I eyeballed the data to suggest that omega should be about $2 \pi / 20$, since the peaks looked like they were
8,488
Fit a sinusoidal term to data
As an alternative to what has already been said, it may be worth noting that an AR(2) model from the class of ARIMA models can be used to generate forecasts with a sine wave pattern. An AR(2) model can be written as follows: \begin{equation} y_{t} = C + \phi_{1}y_{t-1} + \phi_{2}y_{t-2} + a_{t} \end{equation} where $C...
Fit a sinusoidal term to data
As an alternative to what has already been said, it may be worth noting that an AR(2) model from the class of ARIMA models can be used to generate forecasts with a sine wave pattern. An AR(2) model c
Fit a sinusoidal term to data As an alternative to what has already been said, it may be worth noting that an AR(2) model from the class of ARIMA models can be used to generate forecasts with a sine wave pattern. An AR(2) model can be written as follows: \begin{equation} y_{t} = C + \phi_{1}y_{t-1} + \phi_{2}y_{t-2} +...
Fit a sinusoidal term to data As an alternative to what has already been said, it may be worth noting that an AR(2) model from the class of ARIMA models can be used to generate forecasts with a sine wave pattern. An AR(2) model c
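The connection between an AR(2) and a sine wave is clearest in the noiseless limit (a sketch of my own, not from the answer): with $\phi_1 = 2\cos\omega$ and $\phi_2 = -1$ the characteristic roots are complex and lie on the unit circle, and the deterministic recursion reproduces a pure cosine exactly, via the identity $\cos(\omega t) = 2\cos(\omega)\cos(\omega(t-1)) - \cos(\omega(t-2))$.

```python
import numpy as np

w = 2 * np.pi / 24                       # angular frequency: period of 24 steps
phi1, phi2 = 2 * np.cos(w), -1.0         # AR(2) coefficients with unit-circle roots

y = np.zeros(100)
y[0], y[1] = np.cos(0.0), np.cos(w)      # two starting values seed the recursion
for t in range(2, 100):
    y[t] = phi1 * y[t - 1] + phi2 * y[t - 2]

# the recursion tracks cos(w*t) to floating-point accuracy
print(np.max(np.abs(y - np.cos(w * np.arange(100)))))
```

With $|\phi_2| < 1$ (roots strictly inside the unit circle) and a noise term $a_t$, the oscillation instead decays toward the mean, which is the damped sine-wave forecast pattern the answer describes.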
8,489
Fit a sinusoidal term to data
This isn't so different than some of the other solutions but we can simplify the use of nls. y and t and the parameters are as defined in the question. We use: the plinear algorithm of nls to avoid estimating A and C as plinear only requires initial values for parameters that do not enter linearly. In this case the...
Fit a sinusoidal term to data
This isn't so different than some of the other solutions but we can simplify the use of nls. y and t and the parameters are as defined in the question. We use: the plinear algorithm of nls to avoid
Fit a sinusoidal term to data This isn't so different than some of the other solutions but we can simplify the use of nls. y and t and the parameters are as defined in the question. We use: the plinear algorithm of nls to avoid estimating A and C as plinear only requires initial values for parameters that do not ent...
Fit a sinusoidal term to data This isn't so different than some of the other solutions but we can simplify the use of nls. y and t and the parameters are as defined in the question. We use: the plinear algorithm of nls to avoid
8,490
Fit a sinusoidal term to data
The current methods to fit a sine curve to a given data set require a first guess of the parameters, followed by an iterative process. This is a non-linear regression problem. A different method consists in transforming the non-linear regression to a linear regression thanks to a convenient integral equation. Then, the...
Fit a sinusoidal term to data
The current methods to fit a sine curve to a given data set require a first guess of the parameters, followed by an iterative process. This is a non-linear regression problem. A different method consi
Fit a sinusoidal term to data The current methods to fit a sine curve to a given data set require a first guess of the parameters, followed by an iterative process. This is a non-linear regression problem. A different method consists in transforming the non-linear regression to a linear regression thanks to a convenien...
Fit a sinusoidal term to data The current methods to fit a sine curve to a given data set require a first guess of the parameters, followed by an iterative process. This is a non-linear regression problem. A different method consi
8,491
Fit a sinusoidal term to data
Another option would be the functions sinusoid and mvrm from package BNSP. data <- data.frame(y, t) model <- y ~ sinusoid(t, harmonics = 2, amplitude = 1, period = 24) m1 <- mvrm(formula = model, data = data, sweeps = 10000, burn = 5000, thin = 2, seed = 1, StorageDir = getwd()) plotOptionsM <- list(geom_point(data =...
Fit a sinusoidal term to data
Another option would be the functions sinusoid and mvrm from package BNSP. data <- data.frame(y, t) model <- y ~ sinusoid(t, harmonics = 2, amplitude = 1, period = 24) m1 <- mvrm(formula = model, dat
Fit a sinusoidal term to data Another option would be the functions sinusoid and mvrm from package BNSP. data <- data.frame(y, t) model <- y ~ sinusoid(t, harmonics = 2, amplitude = 1, period = 24) m1 <- mvrm(formula = model, data = data, sweeps = 10000, burn = 5000, thin = 2, seed = 1, StorageDir = getwd()) plotOpti...
Fit a sinusoidal term to data Another option would be the functions sinusoid and mvrm from package BNSP. data <- data.frame(y, t) model <- y ~ sinusoid(t, harmonics = 2, amplitude = 1, period = 24) m1 <- mvrm(formula = model, dat
8,492
Fit a sinusoidal term to data
If you know the lowest and highest point of your cosine-looking data, you can use this simple function to compute all cosine coefficients: getMyCosine <- function(lowest_point=c(pi,-1), highest_point=c(0,1)){ cosine <- list( T = pi / abs(highest_point[1] - lowest_point[1]), b = - highest_point[1], k = (hi...
Fit a sinusoidal term to data
If you know the lowest and highest point of your cosine-looking data, you can use this simple function to compute all cosine coefficients: getMyCosine <- function(lowest_point=c(pi,-1), highest_point=
Fit a sinusoidal term to data If you know the lowest and highest point of your cosine-looking data, you can use this simple function to compute all cosine coefficients: getMyCosine <- function(lowest_point=c(pi,-1), highest_point=c(0,1)){ cosine <- list( T = pi / abs(highest_point[1] - lowest_point[1]), b = -...
Fit a sinusoidal term to data If you know the lowest and highest point of your cosine-looking data, you can use this simple function to compute all cosine coefficients: getMyCosine <- function(lowest_point=c(pi,-1), highest_point=
8,493
Fit a sinusoidal term to data
Another option is using the generic function optim or nls. I've tried both; neither of them is completely robust. The following function takes the data in y and calculates the parameters. calc.period <- function(y,t) { fs <- 1/(t[2]-t[1]) ssp <- spectrum(y, plot=FALSE) fN <- ssp$freq[which.max(ssp$spec)] ...
Fit a sinusoidal term to data
Another option is using the generic function optim or nls. I've tried both; neither of them is completely robust. The following function takes the data in y and calculates the parameters. calc.period <-
Fit a sinusoidal term to data Another option is using the generic function optim or nls. I've tried both; neither of them is completely robust. The following function takes the data in y and calculates the parameters. calc.period <- function(y,t) { fs <- 1/(t[2]-t[1]) ssp <- spectrum(y, plot=FALSE) fN <- ...
Fit a sinusoidal term to data Another option is using the generic function optim or nls. I've tried both; neither of them is completely robust. The following function takes the data in y and calculates the parameters. calc.period <-
8,494
I've heard that ratios or inverses of random variables often are problematic, in not having expectations. Why is that?
I would like to offer a very simple, intuitive explanation. It amounts to looking at a picture: the rest of this post explains the picture and draws conclusions from it. Here is what it comes down to: when there is a "probability mass" concentrated near $X=0$, there will be too much probability near $1/X\approx \pm \in...
I've heard that ratios or inverses of random variables often are problematic, in not having expectat
I would like to offer a very simple, intuitive explanation. It amounts to looking at a picture: the rest of this post explains the picture and draws conclusions from it. Here is what it comes down to:
I've heard that ratios or inverses of random variables often are problematic, in not having expectations. Why is that? I would like to offer a very simple, intuitive explanation. It amounts to looking at a picture: the rest of this post explains the picture and draws conclusions from it. Here is what it comes down to: ...
I've heard that ratios or inverses of random variables often are problematic, in not having expectat I would like to offer a very simple, intuitive explanation. It amounts to looking at a picture: the rest of this post explains the picture and draws conclusions from it. Here is what it comes down to:
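The "too much probability mass near zero" intuition can be made quantitative (my own numerical sketch, not from the answer): for $U \sim \text{Uniform}(0,1)$, $E[1/U] = \int_0^1 x^{-1}\,dx$, and the partial integral from $\varepsilon$ to 1 equals $-\ln\varepsilon$, which grows without bound as $\varepsilon \to 0$.

```python
import numpy as np

# Trapezoid-rule evaluation of the partial expectation of 1/U on (eps, 1);
# a log-spaced grid keeps the rule accurate near the singularity at 0.
for eps in (1e-2, 1e-4, 1e-6, 1e-8):
    x = np.geomspace(eps, 1.0, 100001)
    partial = np.sum((1.0 / x[:-1] + 1.0 / x[1:]) * np.diff(x) / 2.0)
    print(eps, partial, -np.log(eps))   # the two columns agree; no finite limit
```

Each factor-of-100 reduction in $\varepsilon$ adds the same increment $\ln 100$ to the partial expectation, which is exactly the "equal contributions from each neighborhood of zero" picture in the answer.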
8,495
I've heard that ratios or inverses of random variables often are problematic, in not having expectations. Why is that?
Ratios and inverses are mostly meaningful with nonnegative random variables, so I will assume $X \ge 0$ almost surely. Then, if $X$ is a discrete variable which takes on the value zero with positive probability, we will be dividing by zero with positive probability, which explains why the expectation of $1/X$ will ...
I've heard that ratios or inverses of random variables often are problematic, in not having expectat
Ratios and inverses are mostly meaningful with nonnegative random variables, so I will assume $X \ge 0$ almost surely. Then, if $X$ is a discrete variable which takes on the value zero with positive p
I've heard that ratios or inverses of random variables often are problematic, in not having expectations. Why is that? Ratios and inverses are mostly meaningful with nonnegative random variables, so I will assume $X \ge 0$ almost surely. Then, if $X$ is a discrete variable which takes on the value zero with positive pr...
I've heard that ratios or inverses of random variables often are problematic, in not having expectat Ratios and inverses are mostly meaningful with nonnegative random variables, so I will assume $X \ge 0$ almost surely. Then, if $X$ is a discrete variable which takes on the value zero with positive p
8,496
I've heard that ratios or inverses of random variables often are problematic, in not having expectations. Why is that?
Let's offer a "dissenting" view: Ratios and inverses of random variables can be fine in the following sense: It may be the case that in many cases they do not possess moments But it is also the case that in many cases they result in recognizable, "named" and exhaustively studied distributions. ...and there is distribu...
I've heard that ratios or inverses of random variables often are problematic, in not having expectat
Let's offer a "dissenting" view: Ratios and inverses of random variables can be fine in the following sense: It may be the case that in many cases they do not possess moments But it is also the case
I've heard that ratios or inverses of random variables often are problematic, in not having expectations. Why is that? Let's offer a "dissenting" view: Ratios and inverses of random variables can be fine in the following sense: It may be the case that in many cases they do not possess moments But it is also the case t...
I've heard that ratios or inverses of random variables often are problematic, in not having expectat Let's offer a "dissenting" view: Ratios and inverses of random variables can be fine in the following sense: It may be the case that in many cases they do not possess moments But it is also the case
8,497
From uniform distribution to exponential distribution and vice-versa
It is not the case that exponentiating a uniform random variable gives an exponential, nor does taking the log of an exponential random variable yield a uniform. Let $U$ be uniform on $(0,1)$ and let $X=\exp(U)$. $F_X(x) = P(X \leq x) = P(\exp(U)\leq x) = P(U\leq \ln x) = \ln x\,,\quad 1<x<e$ So $f_X(x) = \frac{d}{dx} ...
From uniform distribution to exponential distribution and vice-versa
It is not the case that exponentiating a uniform random variable gives an exponential, nor does taking the log of an exponential random variable yield a uniform. Let $U$ be uniform on $(0,1)$ and let
From uniform distribution to exponential distribution and vice-versa It is not the case that exponentiating a uniform random variable gives an exponential, nor does taking the log of an exponential random variable yield a uniform. Let $U$ be uniform on $(0,1)$ and let $X=\exp(U)$. $F_X(x) = P(X \leq x) = P(\exp(U)\leq ...
From uniform distribution to exponential distribution and vice-versa It is not the case that exponentiating a uniform random variable gives an exponential, nor does taking the log of an exponential random variable yield a uniform. Let $U$ be uniform on $(0,1)$ and let
8,498
From uniform distribution to exponential distribution and vice-versa
You almost have it back to front. You asked: "If $X$ has a uniform distribution, does it mean that $e^X$ follows an exponential distribution?" "Similarly, if $Y$ follows an exponential distribution, does it mean $\ln(Y)$ follows a uniform distribution?" In fact if $X$ is uniform on $[0,1]$ then $-\log_e(X)$ foll...
From uniform distribution to exponential distribution and vice-versa
You almost have it back to front. You asked: "If $X$ has a uniform distribution, does it mean that $e^X$ follows an exponential distribution?" "Similarly, if $Y$ follows an exponential distribution
From uniform distribution to exponential distribution and vice-versa You almost have it back to front. You asked: "If $X$ has a uniform distribution, does it mean that $e^X$ follows an exponential distribution?" "Similarly, if $Y$ follows an exponential distribution, does it mean $\ln(Y)$ follows a uniform distribut...
From uniform distribution to exponential distribution and vice-versa You almost have it back to front. You asked: "If $X$ has a uniform distribution, does it mean that $e^X$ follows an exponential distribution?" "Similarly, if $Y$ follows an exponential distribution
8,499
From uniform distribution to exponential distribution and vice-versa
Inspired by @Henry's fantastic answer: If $X \sim \operatorname{Exp}(8), U \sim \operatorname{Unif}(0, 1)$ Then we know the CDF of $X$ is $F_X(x) = 1 - e^{-8x}$. Set $1 - e^{-8x} = u$ and solve for $x$; we get $x = \frac{\ln(1-u)}{-8}$ Therefore, $X = \frac{\ln(1-U)}{-8} \sim \operatorname{Exp}(8)$ Since $1-U \sim \operatorname{...
From uniform distribution to exponential distribution and vice-versa
Inspired by @Henry's fantastic answer: If $X \sim \operatorname{Exp}(8), U \sim \operatorname{Unif}(0, 1)$ Then we know the CDF of $X$ is $F_X(x) = 1 - e^{-8x}$. Set $1 - e^{-8x} = u$ and solve for $x$; we get
From uniform distribution to exponential distribution and vice-versa Inspired by @Henry's fantastic answer: If $X \sim \operatorname{Exp}(8), U \sim \operatorname{Unif}(0, 1)$ Then we know the CDF of $X$ is $F_X(x) = 1 - e^{-8x}$. Set $1 - e^{-8x} = u$ and solve for $x$; we get $x = \frac{\ln(1-u)}{-8}$ Therefore, $X = \frac{\ln...
From uniform distribution to exponential distribution and vice-versa Inspired by @Henry's fantastic answer: If $X \sim \operatorname{Exp}(8), U \sim \operatorname{Unif}(0, 1)$ Then we know the CDF of $X$ is $F_X(x) = 1 - e^{-8x}$. Set $1 - e^{-8x} = u$ and solve for $x$; we get
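The derivation above is exactly the inverse-transform sampling recipe, and it is easy to check by simulation (my own Python sketch):

```python
import numpy as np

rng = np.random.default_rng(5)
u = rng.uniform(size=200_000)

x = np.log(1 - u) / -8.0    # X = ln(1-U)/(-8), as derived from the CDF
x2 = -np.log(u) / 8.0       # equivalent, since 1-U is also Uniform(0,1)

# Both should behave like Exp(rate=8): mean 1/8, median ln(2)/8
print(x.mean(), x2.mean(), np.median(x))
```

The second form, $-\ln(U)/\lambda$, is the one usually quoted, and the simulation shows it draws from the same distribution as the first.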
8,500
Why will ridge regression not shrink some coefficients to zero like lasso?
This is regarding the variance. OLS provides what is called the Best Linear Unbiased Estimator (BLUE). That means that if you take any other unbiased estimator, it is bound to have a higher variance than the OLS solution. So why on earth should we consider anything other than that? Now the trick with regularization, such...
Why will ridge regression not shrink some coefficients to zero like lasso?
This is regarding the variance. OLS provides what is called the Best Linear Unbiased Estimator (BLUE). That means that if you take any other unbiased estimator, it is bound to have a higher variance th
Why will ridge regression not shrink some coefficients to zero like lasso? This is regarding the variance. OLS provides what is called the Best Linear Unbiased Estimator (BLUE). That means that if you take any other unbiased estimator, it is bound to have a higher variance than the OLS solution. So why on earth should w...
Why will ridge regression not shrink some coefficients to zero like lasso? This is regarding the variance. OLS provides what is called the Best Linear Unbiased Estimator (BLUE). That means that if you take any other unbiased estimator, it is bound to have a higher variance th
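One concrete way to see the contrast (a minimal sketch of my own): ridge has the closed form $\hat\beta = (X^TX + \lambda I)^{-1}X^Ty$, which shrinks the coefficient vector smoothly toward zero as $\lambda$ grows but, with probability one, never sets an individual coefficient exactly to zero.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 5))
beta = np.array([2.0, -1.0, 0.5, 0.0, 0.0])    # last two effects are truly zero
y = X @ beta + rng.normal(scale=0.5, size=100)

for lam in (0.1, 10.0, 1000.0):
    # closed-form ridge solution: (X'X + lam*I)^{-1} X'y
    b = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)
    print(lam, np.round(b, 4))                  # shrinks smoothly, no exact zeros
```

The squared penalty has no kink at zero, so the shrinkage is proportional rather than thresholding; the lasso's absolute-value penalty is what turns small coefficients into exact zeros.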