Path to mathematical statistics without analysis background: ideal textbook for self study
On the grounds that you want something (a) well-motivated, (b) less dense, and (c) introductory (undergraduate or early graduate level), you might want to consider a text like "Mathematical Statistics and Its Applications" by Larsen and Marx. The "and its applications" is important because the authors give a practical motivation to the theory that you may have found missing in Casella and Berger. This is still a "mathematical statistics" book, though, not an applied practitioner's guide on how to apply statistical methods that are otherwise treated as a "black box". There are exercises in Minitab, which I am sure you could translate into another statistical language of your choice. It covers only a small fraction of what C&B do, and it may not be "pure" enough for your tastes; perhaps you will find the applications a sort of contamination rather than motivation! But C&B is quite a heavy book to hit if it's the first that you take on. Larsen and Marx is (in my opinion) quite clearly written, covers simpler material, and is very well typeset. All of that should make it easier to get through. Perhaps after working through a book pitched at this level, it would be easier to mount a second assault on C&B or similar.

The reviews on Amazon are pretty mixed; it's interesting that people who taught courses using the book were generally pretty favorable (one criticism is that it is not as mathematically rigorous as it might have been), while students on courses where the book was a set text were more negative.

If you would prefer a text that was more mathematical in nature, then I think you might need to work on your background knowledge first. I can't see how it is possible to understand a rigorous proof of the Central Limit Theorem without a good background in analysis, for instance. There are some "intermediate" texts, of which Larsen and Marx is one, which are not so rigorous as to be incomprehensible to someone without an analysis background (so you get a "sketch proof" of the CLT rather than a formal one, for example), but which are still "mathematical statistics" rather than "applied statistics". I suspect your basic choice lies between the more mathematical approach and reaching into statistics via this sort of intermediate-level book. But if you want to take things higher, then at some point you are going to need some more mathematics.

MIT runs an introductory statistics course for (undergraduate) economics, with a set text of "Probability and Statistics for Engineers and Scientists" by Sheldon Ross, and recommended texts of Larsen and Marx or alternatively DeGroot and Schervish, "Probability and Statistics". The MIT course authors compare them as: "Larsen and Marx's book is a bit more chatty than Ross', while DeGroot and Schervish's is a very good book but somewhat more difficult." If you want something antithetical to the dry style of C&B, then the chattier style of L&M might suit you. But those other suggestions for texts of a similar difficulty level might also interest you.
For me, Hogg & Craig has always worked as my second reference and back-up for those moments when Casella & Berger didn't make much sense to me. While both are excellent and share more or less the same scope, I found the former easier to read (it has more textual explanations on how the formulae work) and the latter a bit more dry with the mathematics (maybe too economical with the derivations). I totally suggest you give this book a try and see if it fits your needs!
I agree that it might be easier to answer this question with a little bit more about what you're looking for. However, after C&B I would recommend Grimmett and Stirzaker's "Probability and Random Processes" and Wasserman's "All of Statistics". G&S has a nice accompaniment with worked problems, so plenty of excitement there. Best of luck!
The following are both a step down from Casella-Berger in terms of the level of detail they go into, but are rigorous enough that they are used as introductory graduate textbooks. They're both well presented and fairly recent. Plus they're different enough from each other in layout and content that you could read them in parallel without too much duplication:

- Rice, "Mathematical Statistics and Data Analysis": has a very clear writing style
- DeGroot-Schervish, "Probability and Statistics": has lots of examples and covers a wide range of topics
Given that the OP has had some course in statistics and probability, maybe something like the second edition of Bickel & Doksum's "Mathematical Statistics: Basic Ideas and Selected Topics" (https://www.amazon.com/Mathematical-Statistics-Basic-Selected-Topics/dp/0132306379) would fit; there is also a volume 2! This book is maybe not very rigorous, but it includes many very modern ideas, especially from nonparametric statistics.
Good practice for statistical analysis in a business environment
My advice in two words (TL;DR mode): reproducible research. For more details (largely not to repeat myself), let me refer you to my relevant answers elsewhere on Stack Exchange. These answers represent my thoughts (and some experience) on the topics:

- data cleaning: https://datascience.stackexchange.com/a/722/2452
- reproducible research: https://datascience.stackexchange.com/a/759/2452
- reports vs. dashboards: https://datascience.stackexchange.com/a/907/2452
- data analysis workflows and EDA: https://datascience.stackexchange.com/a/1006/2452
- big data and R: https://datascience.stackexchange.com/a/780/2452

A final note (sorry if you find it obvious): regardless of the type of your business environment (which is unclear, by the way), I would recommend starting from the business side of things and creating a data analysis architecture, which (like everything IT-related) should be aligned with the business architecture, including business processes, organizational units, culture and people. I hope that this is helpful.

UPDATE: In regard to creating a new or improving an existing data analysis architecture (also referred to as data architecture, in enterprise architecture terminology), I thought that these two sets of presentation slides might be useful as well: this and this.
In banking the modelling must comply with model risk management guidelines, such as OCC 2011-12. I think it's an interesting document even if you're not in banking. MathWorks has this article on modeling standards.

Since modeling involves writing software in one form or another, I use elements of software development methodology, particularly when it comes to testing and unit testing. I also employ software configuration management tools such as SVN. There's a lot that modeling teams can learn from programmers in terms of managing complex software projects, such as issue tracking systems and CMS.

One of the most important things is the methodology and process: the model development life cycle. Create a guideline for how to develop the models and test them; list the standard tools and tests, etc. For instance, pick one or two goodness-of-fit tests, and use them everywhere.

Create templates of everything: modeling scripts, white papers, presentations, etc. For instance, I have templates in LaTeX for all documentation, so our white papers look very similar and everyone knows where to look for information. We have standard sections, such as descriptive statistics, and standard columns in them, such as kurtosis and first and last observation date.

Keep a lab journal. This is one thing that hard-science people should have learned during their PhD: to keep a diary of all the research, ideas and especially decisions. When you decide to use ARIMA instead of GARCH, record it in the lab journal and describe why you made the decision. Down the road people tend to forget the rationale behind decisions, so it's important to record them. Unfortunately, folks from social sciences backgrounds often have no habit of keeping lab journals, and that's a problem.
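To make the unit-testing point concrete, here is a minimal sketch in R using the testthat package; the standardise helper is a hypothetical stand-in for whatever modeling utilities your team maintains:

```r
# Hypothetical helper that standardises a numeric series before modelling.
standardise <- function(x) {
  stopifnot(is.numeric(x), sd(x) > 0)  # fail fast on bad inputs
  (x - mean(x)) / sd(x)
}

# Unit tests with the 'testthat' package: rerun them whenever the
# modelling code changes so regressions are caught early.
library(testthat)

test_that("standardise centres and scales", {
  z <- standardise(c(1, 2, 3, 4, 5))
  expect_equal(mean(z), 0)
  expect_equal(sd(z), 1)
})

test_that("standardise rejects constant input", {
  expect_error(standardise(rep(1, 10)))
})
```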
Another aspect of good practice is discipline at the initial commissioning stage. This might include basic things like agreeing in writing what is required by the commissioner (to avoid misunderstandings and subsequent disputes) and clarifying who in the business has authority to commission work (a first step towards ensuring that the function is addressing real business needs and not just indulging anyone who has a bright idea).

Discipline in commissioning should also promote constructive dialogue prior to agreement on the work to be undertaken. Those commissioning may have a vague idea of what they need but have difficulty in formulating it precisely, or if they do offer a precise formulation it may not be what is most relevant to their business needs (for example, they might ask for an investigation of the reasons for a short-term fall in sales, when what they are really interested in are the longer-term factors driving sales). Statisticians and researchers may be good at formulating precise questions or plans of work, but less able to identify what will be useful to the business.

There is, I suggest, a parallel with good practice in academic research, which makes a distinction between research questions identifying fairly broad topics of interest and research hypotheses and aims within such topics which are specific enough to lead to well-defined research studies. Thus it may be helpful to think of the commissioners as generating the equivalent of the research questions, and the statisticians and researchers as helping them to identify more specific work programmes relevant to those questions.
I think you have got part of your answer in the question: a "good structure" is key. I am an engineer and have been working in roles that emphasise a similar application, where you are introduced to problems to provide assistance with analysing and improving the outcomes but are in an advisory rather than implementer role.

The best approaches that I have seen are ones that are neither too prescriptive nor too loose, ensuring the right amount of evidence that the work was done with diligence, which is what I think you are after. Six Sigma (which is a bit of a dirty term in some places I have worked) and other methodologies provide a framework for approaching, solving and embedding a solution. Because they are based on a framework, they can be audited. The key is to ensure that everyone is trained in the methodology AND has a good template that is auditable.

For example, you probably want the solutions to be of a standard; this is not defined by the program used but rather by whether you can audit the steps of analysis used at a later date and be satisfied that the task was completed to a standard. Providing milestones, e.g. checkpoints where you can audit, will be easier than trying to audit at the end of the project. Returning to Six Sigma, one approach might be to audit at the Define stage, after Measure and Analyse, and finally at the conclusion (after Improve and Control). Six Sigma is certainly not the best in all situations, but I can recommend it as a potential starting point.
Reporting degrees of freedom for Welch t-test
I have not studied actual practice, so this reply cannot address that aspect of the question. As a general principle, I would expect the treatment of significant digits in reporting the degrees of freedom (df) to be based on judgment related to significant figures. The principle is to be consistent: use the precision in one quantity that is appropriate for the precision used in another one that is related to it.

Specifically, when reporting values $x$ and $y=f(x)$, where $x$ is given to the nearest multiple of a small value $h$ (such as $h=\frac{1}{2}\times 10^{-6}$ for six places after the decimal point), the precision in $y$ as mediated by the function $f$ is $$\sup_{-h \le k \le h} |f(x+k) - f(x)| \approx h \left| \frac{d}{dx} f(x) \right|.$$ The approximation applies when $f$ is continuously differentiable on the interval $[x-h, x+h]$.

In the present application, $y$ is the $p$-value, $x$ is the degrees of freedom $\nu$, and $$y = f(x) = f(\nu) = F_\nu(t),$$ where $t$ is the Welch-Satterthwaite statistic and $F_\nu$ is the CDF of the Student $t$ distribution with $\nu$ degrees of freedom.

For relatively high df $\nu$, often a change in the first decimal place would not change the p-value at all (to the level of precision reported), so rounding to an integer is fine ($h=1/2$ but $h|\frac{d}{dx}f(x)|$ is very small). For very low df and extreme values of the statistic $t$, the magnitude of the derivative $|\frac{\partial}{\partial\nu}F_\nu(t)|$ can exceed $0.01$, suggesting in such cases that $\nu$ should be reported to only one less decimal place than $p$ itself.

See for yourself with this labeled contour plot of the magnitude of the derivative for the lowest (reasonable) df and ranges of $|t|$ that would be of interest (because they can lead to low p-values). The labels show the base-10 logarithm of the derivative. Thus, at points between $-k$ and $-(k+1)$ on this plot, changing the reported df in the $j^\text{th}$ place after the decimal point will likely change the reported p-value only in the $(j+k)^\text{th}$ and later places. For example, suppose you are rounding the p-value to $10^{-6}$ (six decimal places). Consider the statistics $\nu=2.5$ and $t=8$. These are located near the $-3$ log contour. Therefore, $\nu$ should be reported to $6+(-3)=3$ decimal places. The light blue areas, for the largest $k$, are the ones of concern, because they show where small changes in $\nu$ have the greatest effects on the p-value.

Contrast this with the situation for higher df (from $4$ to $30$ shown): the influence of $\nu$ on the precision of $p$ quickly wanes as $\nu$ increases.
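If you want to check where your own numbers land on these contours, the derivative is easy to approximate numerically. A small sketch in R (the step size eps is an arbitrary choice):

```r
# Sensitivity of the Student t CDF to the degrees of freedom,
# |d/d(nu) F_nu(t)|, approximated by a central difference.
df_sensitivity <- function(t, nu, eps = 1e-4) {
  abs(pt(t, df = nu + eps) - pt(t, df = nu - eps)) / (2 * eps)
}

# The example from the text: nu = 2.5 and t = 8. The base-10 log of the
# sensitivity is the contour value, which tells you how many decimal
# places of nu matter for the reported p-value.
log10(df_sensitivity(8, 2.5))
```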
"It is conventional to round down to the nearest integer before consulting standard t tables."

The reason that was a convention is that tables don't have noninteger df. There's no reason to do it otherwise.

"...which makes sense as this adjustment is conservative."

Well, the statistic doesn't actually have a t-distribution, because the squared denominator doesn't actually have a scaled chi-squared distribution. It's an approximation that may or may not be conservative in some particular instance; rounding df down may not be certain to be conservative when we consider the exact distribution of the statistic in a particular instance.

"(by interpolation or by actually crunching the numbers for the t-distribution with that df?)"

p-values from t-distributions (applying the cdf to a t-statistic) can be computed by a variety of pretty accurate approximations, so they're effectively calculated rather than interpolated.

"I can't see it being appropriate to quote to more than two decimal places."

I agree.

"Are there any guidelines on how much accuracy to use?"

One possibility might be to investigate how accurate the Welch-Satterthwaite approximation for the p-value is in that general region of variance ratios, and not quote substantially more relative accuracy than that would suggest was in the df (keeping in mind that the df on the chi-squared in the square of the denominator are just giving an approximation to something that isn't chi-squared anyway).
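For instance, in R the Welch p-value with fractional df is a direct computation rather than a table lookup; a quick sketch with made-up data:

```r
# Welch t-test "by hand": the fractional df go straight into pt(),
# with no rounding or interpolation. Data are made up for illustration.
x <- c(5.1, 4.9, 6.0, 5.4, 5.8)
y <- c(4.2, 4.0, 5.1, 3.9)

sx2 <- var(x) / length(x)
sy2 <- var(y) / length(y)
t_stat <- (mean(x) - mean(y)) / sqrt(sx2 + sy2)

# Welch-Satterthwaite degrees of freedom (noninteger in general)
nu <- (sx2 + sy2)^2 /
      (sx2^2 / (length(x) - 1) + sy2^2 / (length(y) - 1))

p <- 2 * pt(-abs(t_stat), df = nu)  # two-sided p-value
c(df = nu, p = p)                   # agrees with t.test(x, y)
```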
How to do multivariate machine learning? (predicting multiple dependent variables)
Based on your description, it appears a multinomial logistic regression is appropriate. Assuming your outcome is a factor with 7 levels (one of the 7 buying options), then you can quickly predict membership using a multinomial logistic regression model (see ?multinom in the nnet package in R). If your outcome cannot be combined into a factor with 7 levels, then a cluster analysis would be needed to group the items together before fitting the multinomial logistic regression.
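A minimal sketch of that approach (the data frame and its columns are invented for illustration):

```r
library(nnet)

# Toy data: 'choice' is a factor with 7 levels (the 7 buying options),
# predicted from two made-up shopper characteristics.
set.seed(1)
shoppers <- data.frame(
  choice = factor(sample(paste0("option", 1:7), 200, replace = TRUE)),
  age    = rnorm(200, 40, 10),
  income = rnorm(200, 50, 15)
)

fit <- multinom(choice ~ age + income, data = shoppers, trace = FALSE)
head(predict(fit, type = "probs"))  # predicted probability of each option
```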
You could build a random forest where each of your classes is a group of items (e.g. "green apples with farmed strawberries, with 2% milk"). Then, based on the characteristics of the shopper or whatever your predictors are, you can provide a predicted probability of purchase for each group of items. I would use R's randomForest package (https://cran.r-project.org/web/packages/randomForest/index.html) to do this.
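A sketch of what that might look like (class labels and predictors invented for illustration):

```r
library(randomForest)

# Toy data: each class is a group of items bought together.
set.seed(1)
baskets <- data.frame(
  basket = factor(sample(c("apples+strawberries+milk",
                           "wine+brie+grapes",
                           "beer+chips"), 300, replace = TRUE)),
  age    = rnorm(300, 40, 10),
  visits = rpois(300, 5)
)

fit <- randomForest(basket ~ age + visits, data = baskets, ntree = 500)
head(predict(fit, type = "prob"))  # out-of-bag probability of each basket
```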
One option is to obtain frequencies of all the combinations of product purchases; select the few most common combinations; then build a regression model to predict each individual's chosen combination (see the sketch below). E.g., with a binary logistic regression you could conceivably predict purchase of a) White Wine, Brie, Strawberries and Grapes vs. b) Red Wine, Cheddar and Gouda. With more than 2 such combinations, or if you want to include the category of "none of the above," multinomial logistic regression would probably be the method of choice. Note that including just the common combos means you will have more workable numbers of each but that you will be excluding the others, at least from this procedure. I could imagine 7 items creating dozens of combos each chosen by at least a few people. This is possibly too many categories for your sample size. Moreover, if a combo were chosen by just a few people, your model would have very little information to work with.

Another option is to use cluster analysis to arrive at a few sets of items that tend to be purchased together. With 7 items, you'll probably end up with fewer than 4 clusters, which might make your task easier. If you try cluster analysis and find the results unworkable, there is no reason why you have to use them: just go back to the frequency-based approach described above.

In this case, if I read you right, you're looking for the most descriptive and interesting array of categories, and in establishing that, you don't need to worry about degrees of freedom or multiple comparisons or any such concerns that might apply if you were trying out multiple methods in performing some inferential test.
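A sketch of the frequency-counting step described above, with invented item names and 0/1 purchase indicators:

```r
# One row per shopper, one 0/1 column per item (names invented).
set.seed(1)
purchases <- as.data.frame(matrix(rbinom(200 * 7, 1, 0.3), ncol = 7))
names(purchases) <- c("wine", "brie", "cheddar", "gouda",
                      "strawberries", "grapes", "milk")

# Collapse each row to a combination label; empty label = bought nothing.
combo <- apply(purchases, 1, function(r)
  paste(names(purchases)[r == 1], collapse = "+"))

# The most common combinations, candidates for the outcome categories.
head(sort(table(combo), decreasing = TRUE))
```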
I am assuming that you want to analyze a situation similar to the following: $Y_i = f(X)$, where $f(\cdot)$ is a non-linear link, $X$ is a vector of covariates, and $Y_i$ is the $i$-th dependent variable, which is ordinal in nature (if it is categorical, $Y_i$ can't have more than two categories); say in your model $i = 1, 2, \ldots, 5$ and the $Y_i$s are correlated. If so, you can certainly employ a multivariate probit model. R, Mplus and SAS can estimate MVP.

In contrast, if you have $Y = f(X)$, and $Y$ (notice there is only one $Y$) is categorical and, for example, has $N$ categories so that the choices made over the $N$ categories are exclusive and exhaustive, you need to fit a multinomial logit model. There is something called multinomial probit as well, similar to multinomial logit.

Hope this helps. Thanks, Sanjoy
What is contingent in a contingency table?
Wikipedia claims that the term was introduced by Pearson in "On the Theory of Contingency and Its Relation to Association and Normal Correlation". Pearson does indeed seem to have coined the term. He says (referring to two-way tables):

"I term any measure of the total deviation of the classification from independent probability a measure of its contingency. Clearly the greater the contingency, the greater must be the amount of association or of correlation between the two attributes, for such association or correlation is solely a measure from another standpoint of the degree of deviation from independence of occurrence." (Pearson, On the Theory of Contingency and Its Relation to Association and Normal Correlation, 1904, pp. 5-6.)

Pearson explains in the introduction that he and others had previously considered categorical variables as ordered in all circumstances, and had analysed them as such. For example, in order to analyse eye colour, "one arranged eye colours in what appeared to correspond to varying amounts of orange pigment [...]". The point of the paper is to develop methods for analysing categorical variables without putting some artificial ordering on the categories. The first use of the term contingency table appears on page 34 of the same paper:

"This result enables us to start from the mathematical theory of independent probability as developed in the elementary textbooks, and build up from it a generalized theory of association, or, as I term it, contingency. We reach the notion of a pure contingency table, in which the order of the sub-groups is of no importance whatever."

Thus, contingency is supposed to mean "non-independence". The word contingency is used because two events are contingent if the outcome of one is contingent upon, i.e. dependent upon, i.e. not independent of, the outcome of the other. In other words, it's related to definition 4 from this Merriam-Webster page.
Are PCA solutions unique?
Something that hasn't been noted yet is that simply reversing the sign of a PC produces another, equally valid solution. That is, if $\mathbf{w}$ is the $n$th principal component, then $-\mathbf{w}$ is also a solution for the $n$th principal component. This has caused confusion before, especially when different software outputs PCs with opposite signs. See this question.
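A small numerical illustration with prcomp (data simulated for the purpose):

```r
# Sign indeterminacy in practice: flipping a loading vector changes
# nothing about the variance it captures.
set.seed(1)
X <- matrix(rnorm(100 * 3), ncol = 3)
w <- prcomp(X)$rotation[, 1]  # first principal component loadings

var(X %*% w)     # variance along w ...
var(X %*% (-w))  # ... is identical along -w
```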
No, the answer is not unique. There are many ways to show this. One possibility is to notice that the spectral decomposition of a square $p \times p$ matrix $X$ is the solution to the maximization of a convex function of $w$. Consider the first eigenvector/eigenvalue pair: $$\lambda_1=\underset{w\in\mathbb{R}^{p}:\|w\|=1}{\max} w'Xw,$$ where $\lambda_1$ is the first eigenvalue and the maximizer $w^*$ is the first eigenvector. The solutions to such problems (i.e. the values of $w$ attaining that maximum) are, in general, not unique. However, the algorithms for computing these solutions are deterministic, meaning that save for numerical corner cases, the solutions you get should be the same. Examples of such numerical corner cases: cases where several eigenvalues are (numerically) the same, and cases where $X$ is rank-deficient.
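A quick numerical check of this characterization (simulated matrix; not part of the original answer):

```r
# The top eigenvector attains the maximum of w'Xw over unit vectors;
# random unit directions never do better.
set.seed(1)
A <- crossprod(matrix(rnorm(50 * 3), ncol = 3))  # symmetric PSD matrix
e <- eigen(A)
quad <- function(w) drop(t(w) %*% A %*% w)

quad(e$vectors[, 1])  # equals e$values[1], the largest eigenvalue
max(replicate(1000, {
  w <- rnorm(3)
  quad(w / sqrt(sum(w^2)))  # random unit vector: always <= lambda_1
}))
```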
It depends: if the eigenvalues of the covariance matrix are distinct, then the PCA is unique; otherwise it is not.

"The fact that the variances of the principal components are given by $\lambda_i$ has an important implication for the uniqueness of PCA. If two of the eigenvalues are equal, then the variances of those principal components are equal. Then, the principal components are not well-defined anymore, because we can make a rotation of those principal components without affecting their variances. This is because if $z_i$ and $z_{i+1}$ have the same variance, then linear combinations such as $\sqrt{1/2}\,z_i + \sqrt{1/2}\,z_{i+1}$ and $\sqrt{1/2}\,z_i - \sqrt{1/2}\,z_{i+1}$ have the same variance as well; all the constraints (unit variance and orthogonality) are still fulfilled, so these are equally valid principal components. In fact, in linear algebra, it is well-known that the eigenvalue decomposition is uniquely defined only when the eigenvalues are all distinct."

Source: Principal component analysis; Aapo Hyvärinen; based on material from the book Natural Image Statistics, 2009; https://www.mv.helsinki.fi/home/amoaning/movies/uml/pca_handout.pdf

Or:

"PCA is unique up to signs, if the eigenvalues of the covariance matrix are different from each other. Is PCA unique or not, that is, is there only one PCA solution? Multiple solutions may fulfill the PCA criteria. We consider the decomposition $X = YU^T$, where $U$ is orthogonal, $Y^TY = D_m$ with $D_m$ an $m$-dimensional diagonal matrix, and the eigenvalues of $D_m$ are sorted increasingly."

Source: Machine Learning: Unsupervised Techniques; 2014; Sepp Hochreiter; Institute of Bioinformatics, Johannes Kepler University Linz
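A tiny demonstration of the repeated-eigenvalue case (my own illustration, not from either source):

```r
# With a repeated eigenvalue, the loadings are only defined up to
# rotation: for the identity covariance, any orthonormal basis works.
S <- diag(2)       # covariance matrix with both eigenvalues equal to 1
theta <- pi / 6    # an arbitrary rotation angle
R <- matrix(c(cos(theta), sin(theta),
              -sin(theta), cos(theta)), 2, 2)

# The rotated basis still satisfies all PCA constraints: orthonormal
# directions with exactly the same variances along each of them.
t(R) %*% S %*% R   # still diag(1, 1)
```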
18,720
When should one consider using GMM?
The implications of economic theories are often naturally formulated in terms of conditional moment restrictions (see e.g. the original asset pricing application of LP Hansen), which nest a variety of unconditional restrictions and thus lead to overidentification. Rather than arbitrarily picking "which squares to minimize" to satisfy a subset of those restrictions exactly using whatever-LS, GMM provides a way of efficiently combining all of them. MLE requires a complete specification: all of the moments of all the random variables included in the model should be matched. If those additional restrictions are satisfied in the population, you naturally get a more efficient estimator and, perhaps, a better-behaved objective function to optimize. In the context of simulation estimation, however, nonlinearity of likelihood functions introduces an additional source of bias, complicating the comparison with SMM.
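To make the "combining overidentifying restrictions" point concrete, here is a minimal two-step GMM sketch in R under assumed data (an exponential sample with mean $\theta$, so $E[X]=\theta$ and $E[X^2]=2\theta^2$ give two moment conditions for one parameter); this is my toy illustration, not Hansen's application.

set.seed(1)
x <- rexp(500, rate = 1/2)                       # true mean theta = 2
g <- function(th) cbind(x - th, x^2 - 2 * th^2)  # stacked moment conditions
obj <- function(th, W) { gb <- colMeans(g(th)); drop(t(gb) %*% W %*% gb) }
th1 <- optimize(obj, c(0.1, 10), W = diag(2))$minimum  # step 1: identity weights
W2  <- solve(cov(g(th1)))                              # step 2: efficient weights
th2 <- optimize(obj, c(0.1, 10), W = W2)$minimum       # efficient GMM estimate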
18,721
When should one consider using GMM?
GMM is practically the only estimation method you can use when you run into endogeneity problems. Since these are more or less unique to econometrics, this explains GMM's attraction. Note that this applies if you subsume IV methods into GMM, which is a perfectly sensible thing to do.
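A toy R sketch of the endogeneity point (simulated data of my own): OLS is biased because the regressor is correlated with the error, while the IV estimator, which is GMM with the moment condition $E[z(y - X\beta)]=0$, recovers the truth.

set.seed(2)
n <- 1000
z <- rnorm(n)                        # instrument
u <- rnorm(n)                        # structural error
x <- z + u + rnorm(n)                # endogenous regressor (correlated with u)
y <- 1 + 2 * x + u
Z <- cbind(1, z); X <- cbind(1, x)
solve(t(Z) %*% X, t(Z) %*% y)        # IV/GMM estimate: close to (1, 2)
coef(lm(y ~ x))                      # OLS: biased slope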
18,722
When should one consider using GMM?
One partial answer seems to be that: "In models for which there are more moment conditions than model parameters, GMM estimation provides a straightforward way to test the specification of the proposed model. This is an important feature that is unique to GMM estimation." This seems like it would be important but insufficient to wholly explain the popularity of GMM in metrics.
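A self-contained R sketch of that specification test (Hansen's J statistic), using a hypothetical overidentified setup of two moments for one parameter:

set.seed(3)
x <- rexp(400, rate = 1/2)                       # exponential sample, mean theta = 2
g <- function(th) cbind(x - th, x^2 - 2 * th^2)  # 2 moments, 1 parameter
obj <- function(th, W) { gb <- colMeans(g(th)); drop(t(gb) %*% W %*% gb) }
th <- optimize(obj, c(0.1, 10), W = diag(2))$minimum
th <- optimize(obj, c(0.1, 10), W = solve(cov(g(th))))$minimum  # efficient GMM
gb <- colMeans(g(th))
J  <- length(x) * drop(t(gb) %*% solve(cov(g(th))) %*% gb)      # J statistic
pchisq(J, df = 1, lower.tail = FALSE)   # df = moments - parameters = 2 - 1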
18,723
Prediction and Tolerance Intervals
Your definitions appear to be correct. The book to consult about these matters is Statistical Intervals (Gerald Hahn & William Meeker), 1991. I quote: A prediction interval for a single future observation is an interval that will, with a specified degree of confidence, contain the next (or some other prespecified) randomly selected observation from a population. [A] tolerance interval is an interval that one can claim to contain at least a specified proportion, p, of the population with a specified degree of confidence, $100(1-\alpha)\%$. Here are restatements in standard mathematical terminology. Let the data $\mathbf{x}=(x_1,\ldots,x_n)$ be considered a realization of independent random variables $\mathbf{X}=(X_1,\ldots,X_n)$ with common cumulative distribution function $F_\theta$. ($\theta$ appears as a reminder that $F$ may be unknown but is assumed to lie in a given set of distributions $\{F_\theta \mid \theta \in \Theta\}$.) Let $X_0$ be another random variable with the same distribution $F_\theta$ and independent of the first $n$ variables. A prediction interval (for a single future observation), given by endpoints $[l(\mathbf{x}), u(\mathbf{x})]$, has the defining property that $$ \inf_\theta\{{\Pr}_\theta(X_0 \in [l(\mathbf{X}), u(\mathbf{X})])\}= 100(1-\alpha)\%.$$ Specifically, ${\Pr}_\theta$ refers to the $n+1$ variate distribution of $(X_0, X_1, \ldots, X_n)$ determined by the law $F_\theta$. Note the absence of any conditional probabilities: this is a full joint probability. Note, too, the absence of any reference to a temporal sequence: $X_0$ very well may be observed in time before the other values. It does not matter. I'm not sure which aspect(s) of this may be "counterintuitive." If we conceive of selecting a statistical procedure as an activity to be pursued before collecting data, then this is a natural and reasonable formulation of a planned two-step process, because both the data ($X_i, i=1,\ldots,n$) and the "future value" $X_0$ need to be modeled as random. A tolerance interval, given by endpoints $[L(\mathbf{x}), U(\mathbf{x})]$, has the defining property that $$ \inf_\theta\{{\Pr}_\theta\left(F_\theta(U(\mathbf{X})) - F_\theta(L(\mathbf{X})) \ge p\right)\} = 100(1-\alpha)\%.$$ Note the absence of any reference to $X_0$: it plays no role. When $\{F_\theta\}$ is the set of Normal distributions, there exist prediction intervals of the form $$l(\mathbf{x}) = \bar{x} - k(\alpha, n) s, \quad u(\mathbf{x}) = \bar{x} + k(\alpha, n) s$$ ($\bar{x}$ is the sample mean and $s$ is the sample standard deviation). Values of the function $k$, which Hahn & Meeker tabulate, do not depend on the data $\mathbf{x}$. There are other prediction interval procedures, even in the Normal case: these are not the only ones. Similarly, there exist tolerance intervals of the form $$L(\mathbf{x}) = \bar{x} - K(\alpha, n, p) s, \quad U(\mathbf{x}) = \bar{x} + K(\alpha, n, p) s.$$ There are other tolerance interval procedures: these are not the only ones. Noting the similarity among these pairs of formulas, we may solve the equation $$k(\alpha, n) = K(\alpha', n, p).$$ This allows one to reinterpret a prediction interval as a tolerance interval (in many different possible ways by varying $\alpha'$ and $p$) or to reinterpret a tolerance interval as a prediction interval (only now $\alpha$ usually is uniquely determined by $\alpha'$ and $p$). This may be one origin of the confusion.
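For the Normal case, the prediction-interval factor has the well-known closed form $k(\alpha, n) = t_{1-\alpha/2,\,n-1}\sqrt{1 + 1/n}$; a minimal R sketch with made-up sample values (a standard result, though not spelled out above):

n <- 20; alpha <- 0.05
x <- rnorm(n, mean = 10, sd = 2)                  # assumed sample
k <- qt(1 - alpha/2, df = n - 1) * sqrt(1 + 1/n)  # prediction factor k(alpha, n)
mean(x) + c(-1, 1) * k * sd(x)                    # 95% prediction interval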
18,724
Prediction and Tolerance Intervals
As I understand things, for Normal tolerance limits, the value of $K(\alpha, n, p)$ comes from a noncentral $t$ percentile. Clearly, to W Huber's point, there are some statisticians who are unfamiliar with the idea of tolerance limits versus prediction limits; the idea of tolerance seems to arise mostly in engineering design and manufacturing, as opposed to clinical biostatistics. Perhaps the reason for lack of familiarity with tolerance intervals, and the confusion with prediction intervals, is the context in which one receives his or her statistical training.
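For the one-sided Normal tolerance bound, that noncentral $t$ relationship is $K(\alpha, n, p) = t_{1-\alpha,\,n-1}(\delta)/\sqrt{n}$ with noncentrality $\delta = \sqrt{n}\,z_p$; a short R sketch (this is the standard one-sided factor; the two-sided case needs a different computation):

n <- 20; alpha <- 0.05; p <- 0.90
delta <- sqrt(n) * qnorm(p)                            # noncentrality parameter
K <- qt(1 - alpha, df = n - 1, ncp = delta) / sqrt(n)  # one-sided K(alpha, n, p)
K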
18,725
How to choose between the different Adjusted $R^2$ formulas?
Without wanting to take credit for @ttnphns' answer, I wanted to move the answer out of the comments (particularly considering that the link to the article had died). Matt Krause's answer provides a useful discussion of the distinction between $R^2$ and $R^2_{adj}$, but it does not discuss the decision of which $R^2_{adj}$ formula to use in any given case. As I discuss in this answer, Yin and Fan (2001) provide a good overview of the many different formulas for estimating population variance explained $\rho^2$, all of which could potentially be labelled a type of adjusted $R^2$. They perform simulations to assess which of a wide range of adjusted $R^2$ formulas provide the best unbiased estimate for different sample sizes, $\rho^2$, and predictor intercorrelations. They suggest that the Pratt formula may be a good option, but I don't think the study was definitive on the matter. Update: Raju et al (1997) note that adjusted $R^2$ formulas differ based on whether they are designed to estimate $\rho^2$ assuming fixed-x or random-x predictors. Specifically, the Ezekiel formula is designed to estimate $\rho^2$ in the fixed-x context, and the Olkin-Pratt and Pratt formulas are designed to estimate $\rho^2$ in the random-x context. There's not much difference between the Olkin-Pratt and Pratt formulas. Fixed-x assumptions align with planned experiments; random-x assumptions align with when you assume that the values of the predictor variables are a sample of possible values, as is typically the case in observational studies. See this answer for further discussion. There's also not much difference between the two types of formulas as sample size gets moderately large (see here for a discussion of the size of the difference). Summary of rules of thumb: If you assume that your observations for predictor variables are a random sample from a population, and you want to estimate $\rho^2$ for the full population of both predictors and criterion (i.e., random-x assumption), then use the Olkin-Pratt formula (or the Pratt formula). If you assume that your observations are fixed, or you don't want to generalise beyond your observed levels of the predictors, then estimate $\rho^2$ with the Ezekiel formula. If you want to know about out-of-sample prediction using the sample regression equation, then you would want to look into some form of cross-validation procedure. References: Raju, N. S., Bilgic, R., Edwards, J. E., & Fleer, P. F. (1997). Methodology review: Estimation of population validity and cross-validity, and the use of equal weights in prediction. Applied Psychological Measurement, 21(4), 291-305. Yin, P., & Fan, X. (2001). Estimating $R^2$ shrinkage in multiple regression: A comparison of different analytical methods. The Journal of Experimental Education, 69(2), 203-224. PDF
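For reference, minimal R implementations of two of the formulas discussed (the Ezekiel formula, and the Pratt formula as I understand it to be reported in Yin and Fan, 2001; check the sources before relying on the exact constants):

adj_r2 <- function(R2, n, p) {           # n observations, p predictors
  c(ezekiel = 1 - (1 - R2) * (n - 1) / (n - p - 1),
    pratt   = 1 - ((n - 3) * (1 - R2) / (n - p - 1)) *
                  (1 + 2 * (1 - R2) / (n - p - 2.3)))
}
adj_r2(R2 = 0.30, n = 50, p = 4)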
18,726
How to choose between the different Adjusted $R^2$ formulas?
The choice of $R^2$ or adjusted $R^2$ depends on what you're trying to do. In a regression context, regular $R^2$ is used as a measure of goodness of fit for your model. However, imagine you're comparing several models which have different numbers of parameters. All things being equal, the model with more parameters will more closely fit your observations. In the limit, you could have a model with one parameter for each data point; this would give you a perfect fit on your observations, but it would be useless for new predictions since it'd capture both the underlying 'signal' AND any associated noise. Adjusted $R^2$ is an attempt to solve this problem by adjusting the $R^2$ value according to the number of parameters in the model. They therefore have slightly different purposes. $R^2$ describes how well different data sets fit a model. You might write something like "The model described above accurately predicts the performance of Part A ($r^2$=0.9), but not Widget B ($r^2$=0.05), under standard test conditions." Adjusted $R^2$ describes how well different models fit the same data (or similar data). For example, "Results from the short and long-form questionnaires predicted customers' annual spending equally well (adjusted $R^2$ = 0.8 for both)."
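A quick simulated illustration of that overfitting point (my own sketch): adding irrelevant predictors always raises $R^2$ but is penalized by adjusted $R^2$.

set.seed(4)
n <- 100
y <- rnorm(n); x <- rnorm(n)
junk <- matrix(rnorm(n * 10), n)                 # 10 irrelevant predictors
f1 <- summary(lm(y ~ x)); f2 <- summary(lm(y ~ x + junk))
c(f1$r.squared, f2$r.squared)                    # R^2 rises with junk
c(f1$adj.r.squared, f2$adj.r.squared)            # adjusted R^2 does not reward it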
18,727
What language to use for genetic programming
Your pollutant problem probably doesn't need much of a language at all. It looks like a symbolic regression rather than a control problem, in which case you could just use standard tree GP, with features and a few useful constants as the terminal set and relevant operators in the function set. The GP system will weed out irrelevant features and there are techniques to handle very large datasets. Generally, specify the smallest function set that you estimate could solve the problem, and expand it with care if necessary. You'll need to choose between tree and linear GP early on. Lisp is tree, Slash/A is linear. Read up on both to understand the pros & cons, but from what you wrote I'd suggest a simple tree GP system. It's not too hard to write your own, but there are existing Python implementations. These ones below are for evolutionary algorithms in Python in general but not all do GP and some are inactive: PyGressionGP (GP for symbolic regression in Python) -- http://code.google.com/p/pygressiongp/ PyGene -- https://github.com/blaa/PyGene A Simple Genetic Programming in Python -- http://zhanggw.wordpress.com/2009/11/08/a-simple-genetic-programming-in-python-4/ Pyevolve -- https://github.com/perone/Pyevolve -- also see blog -- http://blog.christianperone.com -- and this post -- http://blog.christianperone.com/?p=549 esec (Evolutionary Computation in Python) -- http://code.google.com/p/esec/ Peach -- http://code.google.com/p/peach/ PyBrain (does a lot, not just NN) -- http://pybrain.org/ dione -- http://dione.sourceforge.net/ PyGEP (Genetic Expression Programming) -- http://code.google.com/p/pygep/ deap (Distributed Evolutionary Algorithms) -- http://code.google.com/p/deap/ Also, see the (free) introductory book on GP by well-known GP authors Poli, Langdon and McPhee: A Field Guide to Genetic Programming -- http://www.gp-field-guide.org.uk/
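To show what "tree GP with a terminal set and function set" amounts to, here is a toy sketch (in R rather than one of the Python packages above, to match the code used elsewhere in this document; the sets and names are made up for illustration): random expression trees are generated from the two sets and evaluated, and a GP system would evolve them by mutation and crossover.

funcs <- c("+", "-", "*")                        # function set
terms <- c("x1", "x2", "1")                      # terminal set
rand_tree <- function(depth = 3) {
  if (depth == 0 || runif(1) < 0.3) return(sample(terms, 1))
  list(op = sample(funcs, 1),
       l = rand_tree(depth - 1), r = rand_tree(depth - 1))
}
eval_tree <- function(t, env) {
  if (is.character(t)) return(if (t == "1") 1 else env[[t]])
  do.call(t$op, list(eval_tree(t$l, env), eval_tree(t$r, env)))
}
tr <- rand_tree()
eval_tree(tr, list(x1 = 2, x2 = 0.5))  # fitness = fit of such trees to the data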
18,728
What language to use for genetic programming
If you are going to evolve a program, you are likely to manipulate a syntax tree anyway; that way, whatever program you evolve will automatically be syntactically correct. There are two things you will want to keep in mind when selecting a language. First, avoid low-level constructs that may cause the evolved program to crash on some data, for example pointer arithmetic. If you are going to use C or C++ as the language for your evolved programs, you may want to restrict it to a version without pointer arithmetic. I would vote against assembly language for similar reasons, although virtual machines like the JVM and the CLR should provide you with something of a safety net. Second, pick a language suitable for large data sets; if I understand your assignment correctly, the output programs will themselves have to manipulate large data sets. You will probably want to use a target language that you are already familiar with. I am not familiar with Python myself, but AFAIK it satisfies the criteria above, so it should be a good choice for your target language.
What language to use for genetic programming
If you are going to evolve a program, you are likely to manipulate a syntax tree anyway; that way whatever program you evolve will automatically be syntactically correct. There are two things you wil
What language to use for genetic programming If you are going to evolve a program, you are likely to manipulate a syntax tree anyway; that way whatever program you evolve will automatically be syntactically correct. There are two things you will want to keep in mind when selecting a language. Avoid low-level constructs that may cause the evolved program to crash on some data. For example, pointer arithmetic. If you are going to use C or C++ as the language for your evolved programs, you may want to restrict it to a version without pointer arithmetic. I would vote against assembly language for similar reasons, although virtual machines like the JVM and the CLR should provide you with something of a safety net. Suitable for large data sets; if I understand your assignment correctly, the output programs will themselves have to manipulate large data sets. You will probably want to use a target language that you are already familiar with. I am not familiar with Python myself, but AFAIK it satisfies the criteria above, so it should be a good choice for your target language.
What language to use for genetic programming If you are going to evolve a program, you are likely to manipulate a syntax tree anyway; that way whatever program you evolve will automatically be syntactically correct. There are two things you wil
18,729
What is a kernel and what sets it apart from other functions
For $x, y$ in $S$, certain functions $K(x,y)$ can be expressed as an inner product (usually in a different space). $K$ is often referred to as a kernel or a kernel function. The word kernel is used in different ways throughout mathematics, but this is the most common usage in machine learning. The kernel trick is a way of mapping observations from a general set $S$ into an inner product space $V$ (equipped with its natural norm), without ever having to compute the mapping explicitly, in the hope that the observations will gain meaningful linear structure in $V$. This is important in terms of efficiency (computing dot products in a very high dimensional space very quickly) and practicality (we can convert linear ML algorithms into non-linear ML algorithms). For a function $K$ to be considered a valid kernel it has to satisfy Mercer's conditions. In practical terms this means that the kernel matrix (obtained by computing the kernel product of every pair of datapoints you have) must always be positive semi-definite. This ensures that the training objective function is convex, a very important property.
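A small R sketch of that positive semi-definiteness check (my illustration, using the common Gaussian/RBF kernel): build the kernel matrix on some data and confirm its eigenvalues are non-negative.

set.seed(5)
X <- matrix(rnorm(20), 10, 2)                    # 10 points in R^2
rbf <- function(a, b, sigma = 1) exp(-sum((a - b)^2) / (2 * sigma^2))
K <- outer(1:10, 1:10,
           Vectorize(function(i, j) rbf(X[i, ], X[j, ])))
min(eigen(K, symmetric = TRUE)$values)           # >= 0 (up to rounding): PSD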
18,730
What is a kernel and what sets it apart from other functions
From Williams, Christopher KI, and Carl Edward Rasmussen. "Gaussian processes for machine learning." The MIT Press 2, no. 3 (2006), page 80: kernel = a function of two arguments mapping a pair of inputs $x \in X$, $x' \in X$ into $\mathbb{R}$. Also, kernel = kernel function. Kernels used in machine learning algorithms typically satisfy more properties, such as being positive semidefinite.
18,731
What is a kernel and what sets it apart from other functions
Going to try for a less technical explanation. First, start with the dot product between two vectors. This tells you how "similar" the vectors are. If the vectors represent points in your data set, the dot product tells you whether they are similar or not. But in some (many) cases, the dot product is not the best metric of similarity. For example: maybe points that have low dot products are similar for some other reason, or you may have data items that are not well represented as points. So, instead of using the dot product, you use a "kernel", which is just a function that takes two points and gives you a measure of their similarity. I'm not 100% sure of what technical conditions a function must meet to technically be a kernel, but this is the idea. One very nice thing is that the kernel can help you put your domain knowledge into the problem, in the sense that you can say two points are similar because of xyz reason, which comes from your knowing about the domain.
18,732
How to choose significance level for a large data set?
In The insignificance of significance testing, Johnson (1999) noted that p-values are arbitrary, in that you can make them as small as you wish by gathering enough data, assuming the null hypothesis is false, which it almost always is. In the real world, there are unlikely to be semi-partial correlations that are exactly zero, which is the null hypothesis in testing significance of a regression coefficient. P-value significance cutoffs are even more arbitrary. The value of .05 as the cutoff between significance and nonsignificance is used by convention, not on principle. So the answer to your first question is no, there is no principled way to decide on an appropriate significance threshold. So what can you do, given your large data set? It depends on your reason(s) for exploring the statistical significance of your regression coefficients. Are you trying to model a complex multi-factorial system and develop a useful theory that reasonably fits or predicts reality? Then maybe you could think about developing a more elaborate model and taking a modeling perspective on it, as described in Rodgers (2010), The Epistemology of Mathematical And Statistical Modeling. One advantage of having a lot of data is being able to explore very rich models, ones with multiple levels and interesting interactions (assuming you have the variables to do so). If, on the other hand, you want to make some judgement as to whether to treat a particular coefficient as statistically significant or not, you might want to take Good's (1982) suggestion as summarized in Woolley (2003): Calculate the q-value as $p\cdot\sqrt{(n/100)}$ which standardizes p-values to a sample size of 100. A p-value of exactly .001 converts to a p-value of .045 -- statistically significant still. So if it's significant using some arbitrary threshold or another, what of it? If this is an observational study you have a lot more work to justify that it's actually meaningful in the way you think and not just a spurious relationship that shows up because you have misspecified your model. Note that a small effect is not so clinically interesting if it represents pre-existing differences across people selecting into different levels of treatment rather than a treatment effect. You do need to consider whether the relationship you're seeing is practically significant, as commenters have noted. Converting the figures you quote from $r$ to $r^2$ for variance explained ($r$ is correlation, square it to get variance explained) gives just 3 and 6% variance explained, respectively, which doesn't seem like much.
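Good's standardization is trivial to compute; a one-line R sketch (taking $n = 200{,}000$ as an assumed sample size, which reproduces the .001-to-.045 conversion quoted above):

p <- 0.001; n <- 2e5
p * sqrt(n / 100)   # q-value standardized to a sample of 100: about 0.045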
18,733
How to choose significance level for a large data set?
I guess an easy way to check would be to randomly draw two similarly large samples from what you know is a single distribution and compare them. If you do that several times and your observed p-value looks like the ones you get under this no-effect setup, it would suggest that there's no real effect. If, on the other hand, it doesn't, then there probably is one.
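A sketch of that check in R (my own toy version): repeatedly draw two large samples from one distribution and look at the spread of p-values you get when there is no effect.

set.seed(6)
pop <- rnorm(1e6)                          # a single known distribution
pvals <- replicate(50, {
  t.test(sample(pop, 5e4), sample(pop, 5e4))$p.value
})
summary(pvals)   # roughly uniform on (0, 1) when there is no real effect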
How to choose significance level for a large data set?
I guess an easy way to check would be randomly sampling a similarly large number from what you know is one distribution twice and comparing the two results. If you do that several times and observe si
How to choose significance level for a large data set? I guess an easy way to check would be randomly sampling a similarly large number from what you know is one distribution twice and comparing the two results. If you do that several times and observe similar p-values, it would suggest that there's no real effect. If on the other hand you don't, then there probably is.
How to choose significance level for a large data set? I guess an easy way to check would be randomly sampling a similarly large number from what you know is one distribution twice and comparing the two results. If you do that several times and observe si
18,734
How to setup and interpret ANOVA contrasts with the car package in R?
Your example leads to unequal cell sizes, which means that the different "types of sum of squares" matter, and the test for main effects is not as simple as you state it. Anova() uses type II sum of squares. See this question for a start. There are different ways to test the contrasts. Note that SS types don't matter as we are ultimately testing in the associated one-factorial design. I suggest using the following steps: # turn your 2x2 design into the corresponding 4x1 design using interaction() > d$ab <- interaction(d$a, d$b) # creates new factor coding the 2*2 conditions > levels(d$ab) # this is the order of the 4 conditions [1] "a1.b1" "a2.b1" "a1.b2" "a2.b2" > aovRes <- aov(y ~ ab, data=d) # oneway ANOVA using aov() with new factor # specify the contrasts you want to test as a matrix (see above for order of cells) > cntrMat <- rbind("contr 01"=c(1, -1, 0, 0), # coefficients for testing a within b1 + "contr 02"=c(0, 0, 1, -1), # coefficients for testing a within b2 + "contr 03"=c(1, -1, -1, 1)) # coefficients for interaction # test contrasts without adjusting alpha, two-sided hypotheses > library(multcomp) # for glht() > summary(glht(aovRes, linfct=mcp(ab=cntrMat), alternative="two.sided"), + test=adjusted("none")) Simultaneous Tests for General Linear Hypotheses Multiple Comparisons of Means: User-defined Contrasts Fit: aov(formula = y ~ ab, data = d) Linear Hypotheses: Estimate Std. Error t value Pr(>|t|) contr 01 == 0 -0.7704 0.7875 -0.978 0.330 contr 02 == 0 -1.0463 0.9067 -1.154 0.251 contr 03 == 0 0.2759 1.2009 0.230 0.819 (Adjusted p values reported -- none method) Now manually check the result for the first contrast. > P <- 2 # number of levels factor a > Q <- 2 # number of levels factor b > Njk <- table(d$ab) # cell sizes > Mjk <- tapply(d$y, d$ab, mean) # cell means > dfSSE <- sum(Njk) - P*Q # degrees of freedom error SS > SSE <- sum((d$y - ave(d$y, d$ab, FUN=mean))^2) # error SS > MSE <- SSE / dfSSE # mean error SS > (psiHat <- sum(cntrMat[1, ] * Mjk)) # contrast estimate [1] -0.7703638 > lenSq <- sum(cntrMat[1, ]^2 / Njk) # squared length of contrast > (SE <- sqrt(lenSq*MSE)) # standard error [1] 0.7874602 > (tStat <- psiHat / SE) # t-statistic [1] -0.9782893 > (pVal <- 2 * (1-pt(abs(tStat), dfSSE))) # p-value [1] 0.3303902
18,735
Methods for merging / reducing categories in ordinal or nominal data?
This is a response to your second question. I suspect the correct approach to these kinds of decisions will be determined largely by disciplinary norms and the expectations of the intended audience of your work. As a social scientist, I often work with survey (or survey-like) data, and I always try to balance substantive and data-driven logics when I collapse ordinal scales or categorical variables. In other words, I'll do my best to consider what combinations of items "hang together" in terms of their substance as well as the distribution of responses before I collapse the items. Here's a recent example of a specific (ordinal) survey question that involved a five-point frequency scale: How often do you attend the meetings of a club or organization in your community? Never / A few times a year / Once a month / A few times a month / Once a week or more. I don't have the data available to me at the moment, but the results were strongly skewed towards the "never" end of the scale. As a result, my co-author and I chose to pool responses into two groups: "Once a month or more" and "Less than once a month." The resulting (binary) variable was more evenly distributed and reflected a meaningful distinction in practical terms: since many clubs and organizations don't meet more than once a month, there are good reasons to believe that people who attend meetings at least that often are "active" members of such groups, whereas those who attend less frequently (or never) are "inactive." So in my experience, these decisions are at least as much art as science. That said, I also usually try to do this before fitting any models, since I work in a discipline where anything else is viewed (negatively) as data mining and highly un-scientific (fun times!). With that in mind, it might help if you could say a little bit more about what sort of audience you have in mind for this work. It would also be in your best interests to review a few prominent methodology textbooks in your field, as they can often clarify what passes for "normal" behavior among a given research community.
18,736
Methods for merging / reducing categories in ordinal or nominal data?
The kinds of approaches ashaw discusses can lead to a relatively more systematic methodology. But I also think that by systematic you mean algorithmic. Here data mining tools may fill a gap. For one, there's the chi-squared automatic interaction detection (CHAID) procedure built into SPSS's Decision Tree module; it can, according to rules set by the user, collapse ordinal or nominal categories of predictor variables when they show similar values on the outcome variable (whether it's continuous or nominal). These rules might depend on the size of the groups being collapsed or being created by collapsing, or on the p-values of related statistical tests. I believe some classification and regression tree (CART) programs can do the same things. Other respondents should be able to speak about similar functions performed by neural network or other applications provided through various data mining packages.
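A small R sketch of the CART-style version of this idea (using the rpart package, with data simulated for illustration): the tree's first split groups factor levels with similar outcome means, which suggests a merge.

library(rpart)
set.seed(7)
f <- factor(sample(letters[1:6], 500, replace = TRUE))
y <- rnorm(500, mean = ifelse(f %in% c("a", "b"), 1, 0))  # a and b behave alike
fit <- rpart(y ~ f, control = rpart.control(minbucket = 20))
fit   # the split separates {a, b} from the rest: candidate merged categories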
18,737
How to deal with non-binary categorical variables in logistic regression (SPSS)
The UCLA website has a bunch of great tutorials for every procedure broken down by the software type that you're familiar with. Check out Annotated SPSS Output: Logistic Regression -- the SES variable they mention is categorical (and not binary). SPSS will automatically create the indicator variables for you. There's also a page dedicated to Categorical Predictors in Regression with SPSS which has specific information on how to change the default codings and a page specific to Logistic Regression.
18,738
How to deal with non-binary categorical variables in logistic regression (SPSS)
Logistic regression is a pretty flexible method. It can readily use categorical variables as independent variables, and most software that implements logistic regression lets you do so directly. As an example, say one of your categorical variables is temperature, defined in three categories: cold/mild/hot. As you suggest, you could represent it as three separate dummy variables, each with a value of 1 or 0. But the software should let you use a single categorical variable instead, with the text values cold/mild/hot, and the logit regression would then derive a coefficient (or constant) for each of the three temperature conditions. If one is not significant, the software or the user can readily take it out (after observing the t statistic and p-value). The main benefit of keeping the categories within a single categorical variable is model efficiency: a single column in your model can handle as many categories as needed. If instead you create a dummy variable for each category, your model can quickly grow to have numerous columns that are superfluous given the mentioned alternative.
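The same mechanics can be seen in R (a minimal sketch with made-up data, offered for readers working outside SPSS): handing glm() a factor makes the software build the indicator columns itself, with the first level absorbed as the reference, so you never construct the dummies by hand.

set.seed(42)
temperature <- factor(sample(c("cold", "mild", "hot"), 200, replace = TRUE),
                      levels = c("cold", "mild", "hot"))
## true event probabilities 0.2 / 0.5 / 0.8 for cold / mild / hot
p <- c(0.2, 0.5, 0.8)[as.integer(temperature)]
y <- rbinom(200, size = 1, prob = p)

fit <- glm(y ~ temperature, family = binomial)
summary(fit)  # coefficients for 'mild' and 'hot', measured against 'cold'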
18,739
How to deal with non-binary categorical variables in logistic regression (SPSS)
As far as my understanding goes, it is good to use dummy variables for categorical/nominal data, while for ordinal data we can use a coding of 1, 2, 3 for the different levels. For a dummy variable we code 1 if the condition holds for a particular observation and 0 otherwise. Also, the number of dummy variables is one less than the number of levels; for example, a binary variable needs a single dummy. An observation with 0 on all the dummies automatically represents the level that was not given its own dummy (the reference category).
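A quick way to see this coding in R (a generic sketch, not tied to any particular dataset): model.matrix() prints the dummy columns a model would use, and rows that are 0 on every dummy belong to the reference level.

x <- factor(c("low", "medium", "high", "low"),
            levels = c("low", "medium", "high"))
model.matrix(~ x)
## columns: (Intercept), xmedium, xhigh -- two dummies for three levels;
## rows with xmedium = xhigh = 0 are the 'low' (reference) observations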
18,740
How can I fit a spline to data that contains values and 1st/2nd derivatives?
We will describe how a spline can be fitted through Kalman Filtering (KF) techniques in connection with a State-Space Model (SSM). The fact that some spline models can be represented by a SSM and computed with the KF was revealed by C.F. Ansley and R. Kohn in the 1980s. The estimated function and its derivatives are the expectations of the state conditional on the observations. These estimates are computed by using a fixed interval smoothing, a routine task when using a SSM.

For the sake of simplicity, assume that the observations are made at times $t_1 < t_2 < \dots < t_n$ and that observation number $k$ at $t_k$ involves only one derivative with order $d_k$ in $\{0,\,1,\,2\}$. The observation part of the model is written as $$ \tag{O1} y(t_k) = f^{[d_k]}(t_k) + \varepsilon(t_k) $$ where $f(t)$ denotes the unobserved true function and $\varepsilon(t_k)$ is a Gaussian error with variance $H(t_k)$ depending on the derivation order $d_k$. The (continuous-time) transition equation takes the general form $$ \tag{T1} \frac{\text{d}}{\text{d}t}\boldsymbol{\alpha}(t) = \mathbf{A} \boldsymbol{\alpha}(t) + \boldsymbol{\eta}(t) $$ where $\boldsymbol{\alpha}(t)$ is the unobserved state vector and $\boldsymbol{\eta}(t)$ is a Gaussian white noise with covariance $\mathbf{Q}$, assumed to be independent of the observation noise random variables $\varepsilon(t_k)$.

In order to describe a spline, we consider a state obtained by stacking $f$ and its first $m-1$ derivatives, i.e. $\boldsymbol{\alpha}(t) := [f(t),\, f^{[1]}(t), \, \dots,\, f^{[m-1]}(t)]^\top$. The transition is $$ \begin{bmatrix} f^{[1]}(t) \\ f^{[2]}(t) \\ \vdots \\ f^{[m-1]}(t) \\ f^{[m]}(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & &\\ 0 & 0 & 1 & & \\ \vdots & & & \ddots &\\ & & & & 1\\ 0 & \dots & & & 0 \end{bmatrix} \begin{bmatrix} f(t) \\ f^{[1]}(t) \\ \vdots \\ f^{[m-2]}(t)\\ f^{[m-1]}(t) \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ \vdots\\ 0 \\ \eta(t) \end{bmatrix} $$ and we then get a polynomial spline with order $2m$ (and degree $2m-1$). While $m=2$ corresponds to the usual cubic spline, a higher order is required to use derivatives with order $>1$.

In order to stick to a classical SSM formalism we can rewrite (O1) as $$ \tag{O2} y(t_k) = \mathbf{Z}(t_k) \boldsymbol{\alpha}(t_k) + \varepsilon(t_k), $$ where the observation matrix $\mathbf{Z}(t_k)$ picks the suitable derivative in $\boldsymbol{\alpha}(t_k)$ and the variance $H(t_k)$ of $\varepsilon(t_k)$ is chosen depending on $d_k$. So $\mathbf{Z}(t_k) = \mathbf{Z}^\star_{d_k + 1}$ where $\mathbf{Z}^\star_1 := [1,\,0,\,\dots,\,0]$, $\mathbf{Z}^\star_2 := [0,\,1,\,\dots,\,0]$ and $\mathbf{Z}^\star_3 := [0,\,0,\,1,\,0,\,\dots]$. Similarly $H(t_k) = H^\star_{d_k+1}$ for three variances $H^\star_1$, $H^\star_2$, and $H^\star_3$.

Although the transition is in continuous time, the KF is actually a standard discrete-time one. Indeed, we will in practice focus on times $t$ where we have an observation, or where we want to estimate the derivatives. We can take the set $\{t_k\}$ to be the union of these two sets of times and assume that the observation at $t_k$ can be missing: this allows us to estimate the $m$ derivatives at any time $t_k$ regardless of the existence of an observation. It remains to derive the discrete SSM. We will use indices for discrete times, writing $\boldsymbol{\alpha}_k$ for $\boldsymbol{\alpha}(t_k)$ and so on.
The discrete-time SSM takes the form \begin{align*} \tag{DT} \boldsymbol{\alpha}_{k+1} &= \mathbf{T}_k \,\boldsymbol{\alpha}_{k} + \boldsymbol{\eta}^\star_{k}\\ y_k &= \mathbf{Z}_k\boldsymbol{\alpha}_k + \varepsilon_k \end{align*} where the matrices $\mathbf{T}_k$ and $\mathbf{Q}_k^\star := \text{Var}(\boldsymbol{\eta}_k^\star)$ are derived from (T1) and (O2), while the variance of $\varepsilon_k$ is given by $H_k=H^\star_{d_k+1}$ provided that $y_k$ is not missing. Using some algebra we can find the transition matrix for the discrete-time SSM $$ \mathbf{T}_k = \exp\left\{ \delta_k \mathbf{A} \right\} = \begin{bmatrix} 1 & \frac{\delta_k^1}{1!} & \frac{\delta_k^2}{2!} & \dots & \frac{\delta_k^{m-1}}{(m-1)!}\\ 0 & 1 & \frac{\delta_k^1}{1!} & & \\ \vdots & & & \ddots &\\ & & & & \frac{\delta_k^1}{1!}\\ 0 & \dots & & & 1 \end{bmatrix} $$ where $\delta_k:= t_{k+1} - t_{k}$ for $k<n$. Similarly the covariance matrix $\mathbf{Q}^\star_k = \text{Var} (\boldsymbol{\eta}_k^\star)$ for the discrete-time SSM can be given as $$ \mathbf{Q}^\star_k= \sigma_\eta^2 \, \left[\frac{\delta_k^{2m-i-j+1}}{(m-i)!(m-j)! (2m-i-j+1)}\right]_{i,j} $$ where the indices $i$ and $j$ are between $1$ and $m$.

Now, to carry out the computation in R, we need a package devoted to the KF that accepts time-varying models; the CRAN package KFAS seems a good option. We can write R functions computing the matrices $\mathbf{T}_k$ and $\mathbf{Q}^\star_k$ from the vector of times $t_k$ in order to encode the SSM (DT). In the notation used by the package, a matrix $\mathbf{R}_k$ multiplies the noise $\boldsymbol{\eta}^\star_k$ in the transition equation of (DT): we take it here to be the identity $\mathbf{I}_m$. Also note that a diffuse initial covariance must be used here.

EDIT The $\mathbf{Q}^\star$ as initially written was wrong. Fixed (also in the R code and image).

C.F. Ansley and R. Kohn (1986), "On the Equivalence of Two Stochastic Approaches to Spline Smoothing", J. Appl. Probab., 23, pp. 391–405.
R. Kohn and C.F. Ansley (1987), "A New Algorithm for Spline Smoothing Based on Smoothing a Stochastic Process", SIAM J. Sci. and Stat. Comput., 8(1), pp. 33–48.
J. Helske (2017), "KFAS: Exponential Family State Space Models in R", J. Stat. Soft., 78(10), pp. 1–39.

smoothWithDer <- function(t, y, d, m = 3,
                          Hstar = c(3, 0.2, 0.1)^2, sigma2eta = 1.0^2) {
    ## define the SSM matrices, depending on 'delta_k' or on 'd_k'
    Tfun <- function(delta) {
        mat <- matrix(0, nrow = m, ncol = m)
        for (i in 0:(m - 1)) {
            mat[col(mat) == row(mat) + i] <- delta^i / gamma(i + 1)
        }
        mat
    }
    Qfun <- function(delta) {
        im <- (m - 1):0
        x <- delta^im / gamma(im + 1)
        mat <- outer(X = x, Y = x, FUN = "*")
        im2 <- outer(im, im, FUN = "+")
        sigma2eta * mat * delta / (im2 + 1)
    }
    Zfun <- function(d) {
        Z <- matrix(0.0, nrow = 1, ncol = m)
        Z[1, d + 1] <- 1.0
        Z
    }
    Hfun <- function(d) ifelse(d >= 0, Hstar[d + 1], 0.0)
    Rfun <- function() diag(x = 1.0, nrow = m)

    ## define arrays by stacking the SSM matrices. We need one more
    ## 'delta' at the end of the series
    n <- length(t)
    delta <- diff(t)
    delta <- c(delta, mean(delta))
    Ta <- Qa <- array(0.0, dim = c(m, m, n))
    Za <- array(0.0, dim = c(1, m, n))
    Ha <- array(0.0, dim = c(1, 1, n))
    Ra <- array(0.0, dim = c(m, m, n))
    for (k in 1:n) {
        Ta[ , , k] <- Tfun(delta[k])
        Qa[ , , k] <- Qfun(delta[k])
        Za[ , , k] <- Zfun(d[k])
        Ha[ , , k] <- Hfun(d[k])
        Ra[ , , k] <- Rfun()
    }
    require(KFAS)
    ## define the SSM and perform Kalman Filtering and smoothing
    mod <- SSModel(y ~ SSMcustom(Z = Za, T = Ta, R = Ra, Q = Qa, n = n,
                                 P1 = matrix(0, nrow = m, ncol = m),
                                 P1inf = diag(1.0, nrow = m),
                                 state_names = paste0("d", 0:(m - 1))) - 1)
    out <- KFS(mod, smoothing = "state")
    list(t = t, filtered = out$att, smoothed = out$alphahat)
}

## An example function as in the OP
f <- function(t, d = rep(0, length = length(t))) {
    f <- rep(NA, length(t))
    if (any(ind <- (d == 0))) f[ind] <- 2.0 + t[ind] - 0.5 * t[ind]^2
    if (any(ind <- (d == 1))) f[ind] <- 1.0 - t[ind]
    if (any(ind <- (d == 2))) f[ind] <- -1.0
    f
}

set.seed(123)
n <- 100
t <- seq(from = 0, to = 10, length = n)
Hstar <- c(3, 0.4, 0.2)^2
sigma2eta <- 1.0
fTrue <- cbind(d0 = f(t), d1 = f(t, d = 1), d2 = f(t, d = 2))

## use a derivative index of -1 to indicate non-observed values, where
## 'y' will be NA

## [RUN #0] no derivative, m = 2 (cubic spline)
d0 <- sample(c(-1, 0), size = n, replace = TRUE, prob = c(0.7, 0.3))
ft0 <- f(t, d0)
## add noise picking the right sd
y0 <- ft0 + rnorm(n = n, sd = c(0.0, sqrt(Hstar))[d0 + 2])
res0 <- smoothWithDer(t, y0, d0, m = 2, Hstar = Hstar)

## [RUN #1] only first order derivative: we can take m = 2 (cubic spline)
d1 <- sample(c(-1, 0:1), size = n, replace = TRUE, prob = c(0.7, 0.15, 0.15))
ft1 <- f(t, d1)
y1 <- ft1 + rnorm(n = n, sd = c(0.0, sqrt(Hstar))[d1 + 2])
res1 <- smoothWithDer(t, y1, d1, m = 2, Hstar = Hstar)

## [RUN #2] first and second order derivative: we can take m = 3
## (quintic spline)
d2 <- sample(c(-1, 0:2), size = n, replace = TRUE, prob = c(0.7, 0.1, 0.1, 0.1))
ft2 <- f(t, d2)
y2 <- ft2 + rnorm(n = n, sd = c(0.0, sqrt(Hstar))[d2 + 2])
res2 <- smoothWithDer(t, y2, d2, m = 3, Hstar = Hstar)

## plots: a ggplot with facets would be better here
for (run in 0:2) {
    resrun <- get(paste0("res", run))
    drun <- get(paste0("d", run))
    yrun <- get(paste0("y", run))
    matplot(t, resrun$smoothed, pch = 16, cex = 0.7, ylab = "", xlab = "")
    matlines(t, fTrue, lwd = 2, lty = 1)
    for (dv in 0:2) {
        points(t[drun == dv], yrun[drun == dv],
               cex = 1.2, pch = 22, lwd = 2, bg = "white", col = dv + 1)
    }
    title(main = sprintf("run %d. Dots = smoothed, lines = true, square = obs",
                         run))
    legend("bottomleft", col = 1:3, legend = c("d0", "d1", "d2"), lty = 1)
}
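As a quick numerical check (my addition, assuming only the Matrix package): because the companion matrix $\mathbf{A}$ of the state ODE is nilpotent, the matrix exponential $\exp(\delta \mathbf{A})$ reduces to the finite sum implemented in Tfun, and the two can be compared directly for, say, $m = 3$.

library(Matrix)
m <- 3
delta <- 0.37
A <- matrix(0, m, m)
A[cbind(1:(m - 1), 2:m)] <- 1        # ones on the superdiagonal
as.matrix(expm(Matrix(delta * A)))   # identical to the T matrix built by Tfun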
18,741
How can I fit a spline to data that contains values and 1st/2nd derivatives?
You can do spectacularly well with a standard least-squares routine, provided you have a reasonable idea of the relative sizes of the random errors made for each derivative. There is no restriction on the number of measurements you make for each $x$ value--you can even simultaneously measure different derivatives at each one. The only limitation in the use of Ordinary Least Squares (OLS) is the usual one: you assume the measurements are independent.

The basic idea can be most clearly expressed by abstracting the problem. Your model uses a set of $p$ functions $f_j:\mathbb{R}\to\mathbb{R},$ $j=1, 2, \ldots, p$ (such as any spline basis) as a basis for predicting the values $y_i = f(x_i)$ of an unknown function $f$ at points $(x_1, x_2, \ldots, x_n).$ This means you seek to estimate coefficients $\beta_j$ for which each of the linear combinations $\sum_j \beta_j f_j(x_i)$ acceptably approximates $y_i.$ Let's call this (vector) space of linear combinations $\mathbb F.$

What is special about this problem is that you don't necessarily observe the $y_i.$ Instead, there is a defined set of linear functionals $\mathcal{L}_i$ associated with the data. Recall that a functional is a "function of a function:" each $\mathcal{L}_i$ assigns a number $\mathcal{L}_i[f]$ to any function $f\in \mathbb F.$ The model posits that $$y_i = \mathcal{L}_i [f] + \sigma_i \varepsilon_i\tag{1}$$ where the $\mathcal{L}_i$ are given functionals, the $\sigma_i \gt 0$ are known scale factors, and the $\varepsilon_i$ are independent and identically distributed random variables.

Two additional assumptions make OLS applicable and statistically meaningful:

1. The common distribution of the $\varepsilon_i$ has a finite variance.
2. Every $\mathcal{L}_i$ is a linear functional. A functional $\mathcal L$ is linear when for any elements $f_j\in\mathbb{F}$ and corresponding numbers $\alpha_j,$ $$\mathcal{L}\left[\sum_j \alpha_j f_j\right] = \sum_j \alpha_j \mathcal{L}\left[f_j\right].$$

Assumption (2) permits the model $(1)$ to be expressed more explicitly as $$y_i = \beta_1 \mathcal{L}_i[f_1] + \cdots + \beta_p \mathcal{L}_i[f_p] + \sigma_i \varepsilon_i.$$

The whole point of this reduction is that because you have stipulated all the functionals $\mathcal{L}_i,$ all the basis functions $f_j,$ and the standard deviations $\sigma_i,$ the values $\mathcal{L}_i[f_j]$ are all numbers--these are just the usual "variables" or "features" of a regression problem--and the $\sigma_i$ are merely (relative) weights. Thus, in the optimal sense of the Gauss-Markov Theorem, OLS is a great procedure to use. The functionals involved in the question are the following:

- Evaluate $f$ at a specified point $x:$ $\mathcal{L}[f] = f(x).$ This is what we usually do. This is linear because, by definition, linear combinations of functions are evaluated pointwise.
- Evaluate the derivative $f^\prime$ at a specified point $x:$ $\mathcal{L}[f] = f^\prime(x).$ This is linear because differentiation is linear.
- Evaluate the second derivative $f^{\prime \prime}$ at a specified point $x:$ $\mathcal{L}[f] = f^{\prime \prime}(x).$ This, too, is linear.

Okay, how well does this approach work? As usual, we will study the residuals $\hat y_i - y_i$ comparing the fitted values $\hat y_i$ to the observed values. Since positions, velocities, and accelerations are all in different units, they ought to be plotted on separate axes. The top row uses curves to graph $\hat y$ and its first two derivatives. The relevant data points are plotted over the curves: observed values at the left, observed derivatives in the middle, and observed second derivatives at the right. The bottom row plots the corresponding residuals. As usual, we are looking for a lack of any appreciable relationship: we hope the residual values (their y-coordinates) vary randomly from left to right, showing independence and no trends.

The $n=23$ data values were generated exactly as in the question (after setting the random number seed to 17 using set.seed(17) for reproducibility). I explored fits using the B-spline spaces $\mathbb F$ generated by the R function bs, also as in the question, for degrees 1 through 6. This figure shows the results for degree 2, which is the lowest degree (that is, the simplest model) exhibiting a low AIC and good residual behavior, as well as the model indicated by an ANOVA of all six (nested) models. The fit is $$\hat y = -27.48993 + 2.54078 f_1 + 2.97679 f_2$$ where $f_1$ and $f_2$ are the B-spline basis functions created by bs.

The residuals behave well. The fits are good. Moreover, this approach found the correct model: the data indeed were generated from a quadratic function (degree 2). Furthermore, the standard deviations of the residuals are about the right sizes: 0.11, 0.20, and 0.61 compared to 0.1, 0.3, and 0.6 used to generate the original errors. That's pretty amazing given that these curves obviously extrapolate the observations (which do not go beyond $x=5$) and use such a small dataset ($n=23$).

Finally, residuals to the fits for higher-degree splines are qualitatively the same; they make only slight improvements at the cost of using less plausible models. For sufficiently high degrees, they begin to oscillate wildly for small values of $x$ between the observed values, for instance. To illustrate this (bad) behavior, here's the degree-9 fit.

Finally, here is an example where multiple observations of various linear functionals of the basis were made. The code for generating these observations was changed from that in the question to

mult <- 2
x_f   <- rep(runif(5, 0, 5), mult)       # Two observations per point
x_df  <- rep(runif(8, 3, 8), mult)       # Two derivatives per point
x_ddf <- c(x_df, rep(runif(10, 4, 9)))   # Derivative and acceleration per point

The R code for carrying out these calculations is rather general. In particular, it uses numerical differentiation to find the derivatives, so that it is not dependent on the type of spline used. It handles the differing values of $\sigma_i$ by weighting the observations proportionally to $1/\sigma_i^2.$ It automatically constructs and fits a set of models in a loop. The linear functionals $\mathcal{L}_i$ and the standard deviations $\sigma_i$ are hard-coded. There are three of each, selected according to the value of the type variable in the dataset. As examples of how you can use the fits, the coda prints summaries, a list of their AICs, and an ANOVA of them all.

#
# Estimate spline derivatives at points of `x`.
#
d <- function(x, s, order = 1) {
  h <- diff(range(x, na.rm = TRUE))
  dh <- h * 1e-4
  lags <- seq(-order, order, length.out = order + 1) * dh / 2
  b <- choose(order, 0:order) * (-1)^(order:0)
  y <- b %*% matrix(predict(s, c(outer(lags, x, `+`))), nrow = length(lags))
  y <- matrix(y / (dh^order), nrow = length(x))
}
#
# Fit and plot models by degree.
#
data$order <- c(f = 0, df = 1, ddf = 2)[data$type]
k <- max(data$order)
x <- data$x
w <- (c(0.1, 0.3, 0.6)^(-2))[data$order + 1] # As specified in the question
fits <- lapply(1:6, function(deg) {
  #
  # Construct a model matrix.
  #
  s <- bs(x, degree = deg, intercept = TRUE)
  X.l <- lapply(seq.int(k + 1) - 1, function(i) {
    X <- subset(data, order == i)
    Y <- as.data.frame(d(X$x, s, order = i))
    cbind(X, Y)
  })
  X <- do.call("rbind", X.l)
  #
  # Fit WLS models.
  #
  f <- as.formula(paste("y ~ -1 +", paste0("V", 0:deg + 1, collapse = "+")))
  fit <- lm(f, X, weights = w)
  msr <- tapply(residuals(fit), data$order, function(r) {
    k <- length(r) - 1 - deg
    ifelse(k >= 1, sum(r^2) / k, 1)
  })
  #
  # Compute predicted values along the graphs.
  #
  X.new <- data.frame(x = seq(min(X$x), max(X$x), length.out = 101))
  X.new$y.hat <- predict(s, X.new$x) %*% coefficients(fit)
  X.new$Dy.hat <- d(X.new$x, s, 1) %*% coefficients(fit)
  X.new$DDy.hat <- d(X.new$x, s, 2) %*% coefficients(fit)
  X$Residual <- residuals(fit)
  #
  # Return the model.
  #
  fit$msr <- msr
  fit
})
lapply(fits, function(f) sqrt(f$msr))
lapply(fits, summary)
lapply(fits, AIC)
do.call("anova", fits)
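To see the design-matrix construction in its barest form, here is a sketch of the same idea under my own simulated data, independent of the answer's code: with the monomial basis $1, x, x^2$, the three functionals have closed-form rows, so the weighted regression can be set up by hand.

set.seed(17)
n <- 30
x <- runif(n, 0, 5)
d <- sample(0:2, n, replace = TRUE)   # 0 = value, 1 = f', 2 = f''
s <- c(0.1, 0.3, 0.6)[d + 1]          # error sd for each functional type

## true function f(x) = 2 + x - x^2/2 and its derivatives
tr <- function(x, d) switch(d + 1, 2 + x - 0.5 * x^2, 1 - x, -1)
y <- mapply(tr, x, d) + rnorm(n, sd = s)

## rows of L_i[f_j] for the basis (1, x, x^2)
X <- t(mapply(function(x, d)
  switch(d + 1, c(1, x, x^2), c(0, 1, 2 * x), c(0, 0, 2)), x, d))
fit <- lm(y ~ X - 1, weights = 1 / s^2)
coef(fit)   # should be close to (2, 1, -0.5)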
18,742
How can I fit a spline to data that contains values and 1st/2nd derivatives?
First of all, I want to thank you for posing this question. It's a REALLY interesting question. I love splines and the cool things you can do with them. And this gave me an excuse to do some research. :-)

BLUF: The short answer is no. I don't know of any functionality in R that will do this for you automatically. The long answer is... much more complicated. The fact that the derivatives and function values aren't sampled at the same places makes this more difficult. And the fact that you don't have a function value near the right end of the interval might make it impossible.

Let's start with the cubic spline. Given points $(x_j, y_j)$ and the corresponding second derivatives $z_j$, the cubic spline passing through them is: $$ S_j(x) = Ay_j + By_{j+1} + Cz_j + Dz_{j+1} $$ where $$ \begin{array}{} h_j & = & x_{j+1} - x_j \\ A & = & \frac{x_{j+1} - x}{h_j} \\ B & = & 1 - A \\ C & = & \frac{1}{6}(A^3 - A)h_j ^2 \\ D & = & \frac{1}{6}(B^3 - B)h_j ^2 \end{array} $$ It's pretty straightforward to verify that $S_j(x_j) = y_j$, $S_j(x_{j+1}) = y_{j+1}$, $S''_j(x_j) = z_j$ and $S''_j(x_{j+1}) = z_{j+1}$. This guarantees that the spline and its second derivative are continuous. However, at this point, we don't have a continuous first derivative. In order to force the first derivative to be continuous, we need the following constraint: $$ \frac{6}{h_{j-1}}y_{j-1} - \left( \frac{6}{h_{j-1}} + \frac{6}{h_j} \right) y_j + \frac{6}{h_j}y_{j+1} = h_{j-1} z_{j-1} + 2(h_{j-1} + h_j) z_j + h_j z_{j + 1} \tag{1}\label{1} $$

In the classic cubic spline setup, you assume you have the points $(x_j, y_j)$ and use equation \eqref{1} (along with two additional boundary constraints) to solve for the $z_j$; a sketch of that computation follows below. Once you know the $z_j$, the spline is fully specified and you can use it to interpolate at any arbitrary point. As an added bonus, equation \eqref{1} turns into a tridiagonal system which can be solved in linear time!

OK, now suppose that, instead of knowing the $y_j$, you know the $z_j$. Can you use equation \eqref{1} to solve for the $y_j$? From a pure algebra standpoint, it seems feasible. There are $N$ equations and $N$ unknowns, so... why not? But it turns out you can't; the matrix will be singular. And that should come as no surprise. How could you possibly interpolate the function values given JUST the second derivatives? At the very least, you would need an initial value, just like a differential equation.

What about your situation? Some of your points have function values and some of your points have derivatives. For the time being, let's ignore the first derivatives (they're kind of a mess to deal with in the cubic spline basis). Formally, let $(x_i, y_i), i \in \mathcal{I}$ be the set of points with function values and $(x_j, z_j), j \in \mathcal{J}$ be the set of points with second derivatives. We still have $N$ equations with $N$ unknowns. It's just that some of the unknowns are $y_j$ and some are $z_j$. It turns out that you will get a solution if $0$, $1$ or $2 \in \mathcal{I}$ AND $N - 3$, $N - 2$ or $N - 1 \in \mathcal{I}$. In other words, one of the first three points has to be a function value AND one of the last three points has to be a function value. Other than that constraint, you're free to throw in as many derivatives as you want.

How about those first derivatives? It's certainly possible to include first derivatives in your spline. But, like I said, it gets a lot messier. The first derivative of the spline is given by: $$ S'_j(x) = \frac{y_{j+1} - y_j}{h_j} - \frac{3A^2 - 1}{6} h_j z_j + \frac{3B^2 - 1}{6} h_j z_{j+1} $$ Of course, we're only really interested in the derivative at the knots, so we can simplify this a little bit by evaluating it at $x_j$: $$ S'_j(x_j) = \frac{y_{j+1} - y_j}{h_j} - \frac{1}{3} h_j z_j - \frac{1}{6} h_j z_{j+1} $$ You can add these constraints to the matrix you get from equation \eqref{1} and the resulting spline will have the specified first derivatives. In addition, this will help with the singular matrix problem. You'll get a solution if you have EITHER a function value or a first derivative in the first three and last three points.

So I put that all together in some code and here's the picture I got. As you can see, the results aren't great. That's because this is a regular spline which must honor ALL of the data. Since the data is stochastic, we really need to use a regression spline. That's a topic for another post. But if you work through the math, you'll end up optimizing a quadratic objective function subject to linear equality constraints - and there's a closed form solution!
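To make the classical direction of equation (1) concrete, here is a short R sketch of my own, with natural boundary conditions ($z_1 = z_N = 0$) standing in for the "two additional boundary constraints": build the tridiagonal system and solve it for the interior $z_j$.

## Solve equation (1) for the second derivatives z of a natural cubic
## spline through (x, y); natural conditions give z[1] = z[N] = 0.
naturalSplineZ <- function(x, y) {
  N <- length(x)
  h <- diff(x)
  A <- matrix(0, N - 2, N - 2)
  rhs <- numeric(N - 2)
  for (j in 2:(N - 1)) {
    i <- j - 1                        # position of z_j among the unknowns
    A[i, i] <- 2 * (h[j - 1] + h[j])
    if (i > 1)     A[i, i - 1] <- h[j - 1]
    if (i < N - 2) A[i, i + 1] <- h[j]
    rhs[i] <- 6 * ((y[j + 1] - y[j]) / h[j] - (y[j] - y[j - 1]) / h[j - 1])
  }
  c(0, solve(A, rhs), 0)
}

x <- c(0, 1, 2.5, 4, 5)
y <- sin(x)
naturalSplineZ(x, y)   # second derivatives at the five knots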
18,743
Showing the Equivalence Between the $ {L}_{2} $ Norm Regularized Regression and $ {L}_{2} $ Norm Constrained Regression Using KKT
The more technical answer is because the constrained optimization problem can be written in terms of Lagrange multipliers. In particular, the Lagrangian associated with the constrained optimization problem is given by $$\mathcal L(\beta) = \underset{\beta}{\mathrm{argmin}}\,\left\{\sum_{i=1}^N \left(y_i - \sum_{j=1}^p x_{ij} \beta_j\right)^2\right\} + \mu \left\{(1-\alpha) \sum_{j=1}^p |\beta_j| + \alpha \sum_{j=1}^p \beta_j^2\right\}$$ where $\mu$ is a multiplier chosen to satisfy the constraints of the problem. The first order conditions (which are sufficient since you are working with nice proper convex functions) for this optimization problem can thus be obtained by differentiating the Lagrangian with respect to $\beta$ and setting the derivatives equal to 0 (it's a bit more nuanced since the LASSO part has undifferentiable points, but there are methods from convex analysis to generalize the derivative to make the first order condition still work). It is clear that these first order conditions are identical to the first order conditions of the unconstrained problem you wrote down. However, I think it's useful to see why in general, with these optimization problems, it is often possible to think about the problem either through the lens of a constrained optimization problem or through the lens of an unconstrained problem. More concretely, suppose we have an unconstrained optimization problem of the following form: $$\max_x f(x) + \lambda g(x)$$ We can always try to solve this optimization directly, but sometimes, it might make sense to break this problem into subcomponents. In particular, it is not hard to see that $$\max_x f(x) + \lambda g(x) = \max_t \left(\max_x f(x)\ \mathrm{ s.t }\ g(x) = t\right) + \lambda t$$ So for a fixed value of $\lambda$ (and assuming the functions to be optimized actually achieve their optima), we can associate with it a value $t^*$ that solves the outer optimization problem. This gives us a sort of mapping from unconstrained optimization problems to constrained problems. In your particular setting, since everything is nicely behaved for elastic net regression, this mapping should in fact be one to one, so it will be useful to be able to switch between these two contexts depending on which is more useful to a particular application. In general, this relationship between constrained and unconstrained problems may be less well behaved, but it may still be useful to think about to what extent you can move between the constrained and unconstrained problem. Edit: As requested, I will include a more concrete analysis for ridge regression, since it captures the main ideas while avoiding having to deal with the technicalities associated with the non-differentiability of the LASSO penalty. Recall, we are solving optimization problem (in matrix notation): $$\underset{\beta}{\mathrm{argmin}} \left\{\sum_{i=1}^N y_i - x_i^T \beta\right\}\quad\mathrm{s.t.}\, ||\beta||^2 \leq M$$ Let $\beta^{OLS}$ be the OLS solution (i.e. when there is no constraint). Then I will focus on the case where $M < \left|\left|\beta^{OLS}\right|\right|$ (provided this exists) since otherwise, the constraint is uninteresting since it does not bind. 
The Lagrangian for this problem can be written $$\mathcal L(\beta) = \sum_{i=1}^N \left(y_i - x_i^T \beta\right)^2 + \mu\left(||\beta||^2 - M\right)$$ Then differentiating, we get the first order conditions: $$0 = -2 \sum_{i=1}^N y_i x_i + 2\left(\sum_{i=1}^N x_i x_i^T + \mu I\right) \beta$$ which is just a system of linear equations and hence can be solved: $$\hat\beta = \left(\sum_{i=1}^N x_i x_i^T + \mu I\right)^{-1}\left(\sum_{i=1}^N y_i x_i\right)$$ for some choice of multiplier $\mu$. The multiplier is then simply chosen to make the constraint true, i.e. we need $$\left(\left(\sum_{i=1}^N x_i x_i^T + \mu I\right)^{-1}\left(\sum_{i=1}^N y_i x_i\right)\right)^T\left(\left(\sum_{i=1}^N x_i x_i^T + \mu I\right)^{-1}\left(\sum_{i=1}^N y_i x_i\right)\right) = M$$ and such a $\mu$ exists since the LHS is monotonically decreasing in $\mu$. This equation gives an explicit mapping from multipliers $\mu \in (0,\infty)$ to constraints, $M \in \left(0, \left|\left|\beta^{OLS}\right|\right|^2\right)$ with $$\lim_{\mu\to 0} M(\mu) = \left|\left|\beta^{OLS}\right|\right|^2$$ when the RHS exists and $$\lim_{\mu \to \infty} M(\mu) = 0$$ This mapping actually corresponds to something quite intuitive. The envelope theorem tells us that $\mu(M)$ corresponds to the marginal decrease in error we get from a small relaxation of the constraint $M$. This explains why $\mu \to 0$ corresponds to $M \to \left|\left|\beta^{OLS}\right|\right|^2$. Once the constraint is not binding, there is no value in relaxing it any more, which is why the multiplier vanishes.
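To make the $\mu \mapsto M(\mu)$ mapping concrete, here is a minimal numerical sketch (my own illustration, not part of the original argument) on simulated data: it computes $\hat\beta(\mu)$ from the first order conditions above and reports the implied constraint level $M(\mu) = ||\hat\beta(\mu)||^2$, which decreases monotonically from $||\beta^{OLS}||^2$ towards 0 as $\mu$ grows. All variable names and data are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

def ridge_beta(mu):
    # beta(mu) = (X'X + mu I)^{-1} X'y, i.e. the first order condition solved for beta
    return np.linalg.solve(X.T @ X + mu * np.eye(p), X.T @ y)

beta_ols = ridge_beta(0.0)
print("||beta_OLS||^2 =", beta_ols @ beta_ols)

for mu in [0.0, 0.1, 1.0, 10.0, 100.0]:
    b = ridge_beta(mu)
    # M(mu) is the squared norm of the solution; it is decreasing in mu
    print(f"mu = {mu:7.1f}  ->  M(mu) = ||beta(mu)||^2 = {b @ b:.4f}")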
Showing the Equivalence Between the $ {L}_{2} $ Norm Regularized Regression and $ {L}_{2} $ Norm Con
The more technical answer is because the constrained optimization problem can be written in terms of Lagrange multipliers. In particular, the Lagrangian associated with the constrained optimization pr
Showing the Equivalence Between the $ {L}_{2} $ Norm Regularized Regression and $ {L}_{2} $ Norm Constrained Regression Using KKT The more technical answer is because the constrained optimization problem can be written in terms of Lagrange multipliers. In particular, the Lagrangian associated with the constrained optimization problem is given by $$\mathcal L(\beta) = \underset{\beta}{\mathrm{argmin}}\,\left\{\sum_{i=1}^N \left(y_i - \sum_{j=1}^p x_{ij} \beta_j\right)^2\right\} + \mu \left\{(1-\alpha) \sum_{j=1}^p |\beta_j| + \alpha \sum_{j=1}^p \beta_j^2\right\}$$ where $\mu$ is a multiplier chosen to satisfy the constraints of the problem. The first order conditions (which are sufficient since you are working with nice proper convex functions) for this optimization problem can thus be obtained by differentiating the Lagrangian with respect to $\beta$ and setting the derivatives equal to 0 (it's a bit more nuanced since the LASSO part has undifferentiable points, but there are methods from convex analysis to generalize the derivative to make the first order condition still work). It is clear that these first order conditions are identical to the first order conditions of the unconstrained problem you wrote down. However, I think it's useful to see why in general, with these optimization problems, it is often possible to think about the problem either through the lens of a constrained optimization problem or through the lens of an unconstrained problem. More concretely, suppose we have an unconstrained optimization problem of the following form: $$\max_x f(x) + \lambda g(x)$$ We can always try to solve this optimization directly, but sometimes, it might make sense to break this problem into subcomponents. In particular, it is not hard to see that $$\max_x f(x) + \lambda g(x) = \max_t \left(\max_x f(x)\ \mathrm{ s.t }\ g(x) = t\right) + \lambda t$$ So for a fixed value of $\lambda$ (and assuming the functions to be optimized actually achieve their optima), we can associate with it a value $t^*$ that solves the outer optimization problem. This gives us a sort of mapping from unconstrained optimization problems to constrained problems. In your particular setting, since everything is nicely behaved for elastic net regression, this mapping should in fact be one to one, so it will be useful to be able to switch between these two contexts depending on which is more useful to a particular application. In general, this relationship between constrained and unconstrained problems may be less well behaved, but it may still be useful to think about to what extent you can move between the constrained and unconstrained problem. Edit: As requested, I will include a more concrete analysis for ridge regression, since it captures the main ideas while avoiding having to deal with the technicalities associated with the non-differentiability of the LASSO penalty. Recall, we are solving optimization problem (in matrix notation): $$\underset{\beta}{\mathrm{argmin}} \left\{\sum_{i=1}^N y_i - x_i^T \beta\right\}\quad\mathrm{s.t.}\, ||\beta||^2 \leq M$$ Let $\beta^{OLS}$ be the OLS solution (i.e. when there is no constraint). Then I will focus on the case where $M < \left|\left|\beta^{OLS}\right|\right|$ (provided this exists) since otherwise, the constraint is uninteresting since it does not bind. 
The Lagrangian for this problem can be written $$\mathcal L(\beta) = \underset{\beta}{\mathrm{argmin}} \left\{\sum_{i=1}^N y_i - x_i^T \beta\right\} - \mu\cdot||\beta||^2 \leq M$$ Then differentiating, we get first order conditions: $$0 = -2 \left(\sum_{i=1}^N y_i x_i + \left(\sum_{i=1}^N x_i x_i^T + \mu I\right) \beta\right)$$ which is just a system of linear equations and hence can be solved: $$\hat\beta = \left(\sum_{i=1}^N x_i x_i^T + \mu I\right)^{-1}\left(\sum_{i=1}^N y_i x_i\right)$$ for some choice of multiplier $\mu$. The multiplier is then simply chosen to make the constraint true, i.e. we need $$\left(\left(\sum_{i=1}^N x_i x_i^T + \mu I\right)^{-1}\left(\sum_{i=1}^N y_i x_i\right)\right)^T\left(\left(\sum_{i=1}^N x_i x_i^T + \mu I\right)^{-1}\left(\sum_{i=1}^N y_i x_i\right)\right) = M$$ which exists since the LHS is monotonic in $\mu$. This equation gives an explicit mapping from multipliers $\mu \in (0,\infty)$ to constraints, $M \in \left(0, \left|\left|\beta^{OLS}\right|\right|\right)$ with $$\lim_{\mu\to 0} M(\mu) = \left|\left|\beta^{OLS}\right|\right|$$ when the RHS exists and $$\lim_{\mu \to \infty} M(\mu) = 0$$ This mapping actually corresponds to something quite intuitive. The envelope theorem tells us that $\mu(M)$ corresponds to the marginal decrease in error we get from a small relaxation of the constraint $M$. This explains why when $\mu \to 0$ corresponds to $M \to \left|\right|\beta^{OLS}\left|\right|$. Once the constraint is not binding, there is no value in relaxing it any more, which is why the multiplier vanishes.
Showing the Equivalence Between the $ {L}_{2} $ Norm Regularized Regression and $ {L}_{2} $ Norm Con The more technical answer is because the constrained optimization problem can be written in terms of Lagrange multipliers. In particular, the Lagrangian associated with the constrained optimization pr
18,744
Showing the Equivalence Between the $ {L}_{2} $ Norm Regularized Regression and $ {L}_{2} $ Norm Constrained Regression Using KKT
There is a great analysis by stats_model in his answer. I tried answering a similar question at The Proof of Equivalent Formulas of Ridge Regression. I will take a more hands-on approach for this case. Let's try to see the mapping between $ t $ and $ \lambda $ in the 2 models. As I wrote, and as can be seen from stats_model's analysis, the mapping depends on the data. Hence we'll choose a specific realization of the problem. Yet the code and sketching the solution will add intuition to what's going on. We'll compare the following 2 models: $$ \text{The Regularized Model: } \arg \min_{x} \frac{1}{2} {\left\| A x - y \right\|}_{2}^{2} + \lambda {\left\| x \right\|}_{2}^{2} $$ $$\text{The Constrained Model: } \begin{align*} \arg \min_{x} \quad & \frac{1}{2} {\left\| A x - y \right\|}_{2}^{2} \\ \text{subject to} \quad & {\left\| x \right\|}_{2}^{2} \leq t \end{align*}$$ Let's take $ \hat{x} $ to be the solution of the regularized model and $ \tilde{x} $ to be the solution of the constrained model. We're looking at the mapping from $ t $ to $ \lambda $ such that $ \hat{x} = \tilde{x} $. Looking at my solution to Solver for Norm Constraint Least Squares one could see that solving the Constrained Model involves solving the Regularized Model and finding the $ \lambda $ that matches the $ t $ (the actual code is presented in Least Squares with Euclidean ( $ {L}_{2} $ ) Norm Constraint). So we'll run the same solver and for each $ t $ we'll display the optimal $ \lambda $. The solver basically solves: $$\begin{align*} \arg_{\lambda} \quad & \lambda \\ \text{subject to} \quad & {\left\| {\left( {A}^{T} A + 2 \lambda I \right)}^{-1} {A}^{T} b \right\|}_{2}^{2} - t = 0 \end{align*}$$ So here is our matrix:
mA =
    -0.0716    0.2384   -0.6963   -0.0359
     0.5794   -0.9141    0.3674    1.6489
    -0.1485   -0.0049    0.3248   -1.7484
     0.5391   -0.4839   -0.5446   -0.8117
     0.0023    0.0434    0.5681    0.7776
     0.6104   -0.9808    0.6951   -1.1300
And here is our vector:
vB = 0.7087 -1.2776 0.0753 1.1536 1.2268 1.5418
This is the mapping: As can be seen above, for a high enough value of $ t $ the parameter $ \lambda = 0 $, as expected. Zooming in to the [0, 10] range: The full code is available on my StackExchange Cross Validated Q401212 GitHub Repository.
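The actual solver referenced above is the Matlab code in the linked repository; as a rough, hypothetical Python re-sketch of the same idea (not the original code), the snippet below finds, for a given $ t $, the $ \lambda $ solving $ {\left\| {\left( {A}^{T} A + 2 \lambda I \right)}^{-1} {A}^{T} b \right\|}_{2}^{2} = t $ by root finding, returning $ \lambda = 0 $ when the constraint does not bind. The matrix and vector are the ones printed above; all names are illustrative.

import numpy as np
from scipy.optimize import brentq

# Data as printed above (6x4 matrix A and length-6 vector b)
A = np.array([
    [-0.0716,  0.2384, -0.6963, -0.0359],
    [ 0.5794, -0.9141,  0.3674,  1.6489],
    [-0.1485, -0.0049,  0.3248, -1.7484],
    [ 0.5391, -0.4839, -0.5446, -0.8117],
    [ 0.0023,  0.0434,  0.5681,  0.7776],
    [ 0.6104, -0.9808,  0.6951, -1.1300],
])
b = np.array([0.7087, -1.2776, 0.0753, 1.1536, 1.2268, 1.5418])

def x_of_lambda(lam):
    # Solution of the regularized model 0.5*||Ax - b||^2 + lam*||x||^2
    return np.linalg.solve(A.T @ A + 2.0 * lam * np.eye(A.shape[1]), A.T @ b)

def lambda_for_t(t):
    # Find lam >= 0 with ||x(lam)||^2 = t; lam = 0 if the unconstrained solution already satisfies it
    g = lambda lam: x_of_lambda(lam) @ x_of_lambda(lam) - t
    if g(0.0) <= 0.0:
        return 0.0
    return brentq(g, 0.0, 1e6)   # the squared norm is decreasing in lam, so a root exists

for t in [0.05, 0.2, 0.5, 1.0, 5.0]:
    print(f"t = {t:5.2f}  ->  lambda = {lambda_for_t(t):.4f}")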
Showing the Equivalence Between the $ {L}_{2} $ Norm Regularized Regression and $ {L}_{2} $ Norm Con
There is a great analysis by stats_model in his answer. I tried answering similar question at The Proof of Equivalent Formulas of Ridge Regression. I will take more Hand On approach for this case. Let
Showing the Equivalence Between the $ {L}_{2} $ Norm Regularized Regression and $ {L}_{2} $ Norm Constrained Regression Using KKT There is a great analysis by stats_model in his answer. I tried answering similar question at The Proof of Equivalent Formulas of Ridge Regression. I will take more Hand On approach for this case. Let's try to see the mapping between $ t $ and $ \lambda $ in the 2 models. As I wrote and can be seen from stats_model in his analysis the mapping depends on the data. Hence we'll chose a specific realization of the problem. Yet the code and sketching the solution will add intuition to what's going on. We'll compare the following 2 models: $$ \text{The Regularized Model: } \arg \min_{x} \frac{1}{2} {\left\| A x - y \right\|}_{2}^{2} + \lambda {\left\| x \right\|}_{2}^{2} $$ $$\text{The Constrained Model: } \begin{align*} \arg \min_{x} \quad & \frac{1}{2} {\left\| A x - y \right\|}_{2}^{2} \\ \text{subject to} \quad & {\left\| x \right\|}_{2}^{2} \leq t \end{align*}$$ Let's assume that $ \hat{x} $ to be the solution of the regularized model and $ \tilde{x} $ to be the solution of the constrained model. We're looking at the mapping from $ t $ to $ \lambda $ such that $ \hat{x} = \tilde{x} $. Looking on my solution to Solver for Norm Constraint Least Squares one could see that solving the Constrained Model involves solving the Regularized Model and finding the $ \lambda $ that matches the $ t $ (The actual code is presented in Least Squares with Euclidean ( $ {L}_{2} $ ) Norm Constraint). So we'll run the same solver and for each $ t $ we'll display the optimal $ \lambda $. The solver basically solves: $$\begin{align*} \arg_{\lambda} \quad & \lambda \\ \text{subject to} \quad & {\left\| {\left( {A}^{T} A + 2 \lambda I \right)}^{-1} {A}^{T} b \right\|}_{2}^{2} - t = 0 \end{align*}$$ So here is our Matrix: mA = -0.0716 0.2384 -0.6963 -0.0359 0.5794 -0.9141 0.3674 1.6489 -0.1485 -0.0049 0.3248 -1.7484 0.5391 -0.4839 -0.5446 -0.8117 0.0023 0.0434 0.5681 0.7776 0.6104 -0.9808 0.6951 -1.1300 And here is our vector: vB = 0.7087 -1.2776 0.0753 1.1536 1.2268 1.5418 This is the mapping: As can be seen above, for high enough value of $ t $ the parameter $ \lambda = 0 $ as expected. Zooming in to the [0, 10] range: The full code is available on my StackExchange Cross Validated Q401212 GitHub Repository.
Showing the Equivalence Between the $ {L}_{2} $ Norm Regularized Regression and $ {L}_{2} $ Norm Con There is a great analysis by stats_model in his answer. I tried answering similar question at The Proof of Equivalent Formulas of Ridge Regression. I will take more Hand On approach for this case. Let
18,745
Showing the Equivalence Between the $ {L}_{2} $ Norm Regularized Regression and $ {L}_{2} $ Norm Constrained Regression Using KKT
I cannot find anyone who addresses this problem for LASSO / $\ell_1$ regression, and since the regularization term is not differentiable, it's a bit harder. I posted an answer here: https://math.stackexchange.com/a/4682686/192065 It shows equivalence between: the $\ell_1$-constrained optimization, the usual LASSO regularization problem, and the Lagrangian with $g(x) = \|x\|_1 - r$.
Showing the Equivalence Between the $ {L}_{2} $ Norm Regularized Regression and $ {L}_{2} $ Norm Con
I cannot find anyone that addresses this problem for LASSO / $\ell(1)$ regression, and since the regularization term is not differentiable, it's a bit harder. I posted an answer here: https://math.sta
Showing the Equivalence Between the $ {L}_{2} $ Norm Regularized Regression and $ {L}_{2} $ Norm Constrained Regression Using KKT I cannot find anyone that addresses this problem for LASSO / $\ell(1)$ regression, and since the regularization term is not differentiable, it's a bit harder. I posted an answer here: https://math.stackexchange.com/a/4682686/192065 It shows equivalence for: $\ell(1)$ constrained optimization the usual LASSO regularization problem the Lagrangian with $g(x) = \|x\|_1 - r$
Showing the Equivalence Between the $ {L}_{2} $ Norm Regularized Regression and $ {L}_{2} $ Norm Con I cannot find anyone that addresses this problem for LASSO / $\ell(1)$ regression, and since the regularization term is not differentiable, it's a bit harder. I posted an answer here: https://math.sta
18,746
Product Demand Forecasting for Thousands of Products Across Multiple Stores
I wouldn't recommend the approach used by Neal et al. Their data is unique for two reasons: They are working with food data, which is usually denser and more stable than other retail product sales data. A given location will be selling dozens of milk cartons or egg packs per week and will have been selling those same products for decades, compared to fashion or car parts where it is not unusual to have sales of one single item every 3 or 4 weeks, and data available for only a year or two. They are forecasting for warehouses, not stores. A single warehouse covers multiple stores, so their data is even more dense than average. In fact a warehouse is typically used as a natural aggregation/grouping level for stores, so they are already essentially performing a grouping of store data. Because of the nature of their data, they can get away with modeling individual time series directly. But most retailers' data would be too sparse at the individual sku/store level for them to pull that off. As zbicyclist said, this problem is usually approached using hierarchical or multi-echelon forecasting. Commercial demand forecasting packages all use some form of hierarchical forecasting. The idea is to group products and stores into similar product groups and regions, for which aggregate forecasts are generated and used to determine overall seasonality and trend; these are then spread down and reconciled, using a Top-Down approach, with the baseline forecasts generated for each individual sku/store combination. Besides the challenge zbicyclist mentioned, a bigger problem is that finding the optimal groupings of products and stores is a non-trivial task, which requires a combination of domain expertise and empirical analysis. Products and stores are usually grouped together in elaborate hierarchies (by department, supplier, brand, etc. for products; by region, climate, warehouse, etc. for location) which are then fed to the forecasting algorithm along with the historical sales data itself. Addressing meraxes' comments: How about the methods used in the Corporación Favorita Grocery Sales Forecasting Kaggle Competition, where they allow the models to learn from the sales histories of several (possibly unrelated) products, without doing any explicit grouping? Is this still a valid approach? They're doing the grouping implicitly by using store, item, family, class, cluster as categorical features. I've just read through a bit of Rob Hyndman's section on hierarchical forecasting. It seems to me that doing a Top-Down approach provides reliable forecasts for aggregate levels; however, it has the huge disadvantage of losing information due to aggregation, which may affect forecasts for the bottom-level nodes. It may also be "unable to capture and take advantage of individual series characteristics such as time dynamics, special events". Three points regarding this: The disadvantage he points to depends on the grouping of the data. If you aggregate all the products and stores, then yes this would be a problem. For example aggregating all the stores from all regions would muddy out any region-specific seasonalities. But you should be aggregating up only to the relevant grouping, and as I pointed out, this will require some analysis and experimentation to find. In the specific case of retail demand, we are not worried about "losing information due to aggregation" because frequently the time series at the bottom nodes (i.e. 
SKU/Store) contain very little information, which is why we aggregate them up to the higher levels in the first place. For SKU/store specific events, the way we approach it on my team is to remove the event specific effects prior to generating a forecast, and then adding them back later, after the forecast is generated. See here for details.
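As a very simplified, hypothetical sketch of the top-down idea described above (made-up data, not any particular package or dataset), the snippet below aggregates a few sparse SKU/store series to a group total, forecasts the total with a naive seasonal average, and then disaggregates the forecast back to SKU/store level using historical proportions.

import numpy as np

rng = np.random.default_rng(1)
n_series, n_weeks = 5, 104                 # 5 sparse SKU/store series, 2 years of weekly data
season = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(n_weeks) / 52)
# Sparse bottom-level demand: Poisson counts around a small, seasonal mean
bottom = rng.poisson(lam=0.8 * season, size=(n_series, n_weeks))

# 1. Aggregate to the group level, where the seasonal signal is much clearer
total = bottom.sum(axis=0)

# 2. Forecast the aggregate naively: average the same calendar weeks of the two observed years
h = 13
agg_forecast = 0.5 * (total[:h] + total[52:52 + h])

# 3. Disaggregate top-down using each SKU/store's historical share of the total
props = bottom.sum(axis=1) / bottom.sum()
bottom_forecast = np.outer(props, agg_forecast)

print("proportions:", np.round(props, 3))
print("bottom-level forecasts (first 3 weeks):")
print(np.round(bottom_forecast[:, :3], 2))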
Product Demand Forecasting for Thousands of Products Across Multiple Stores
I wouldn't recommend the approach used by Neal et al.. Their data is unique for two reasons: They are working with food data, which is usually denser and more stable than other retail product sales d
Product Demand Forecasting for Thousands of Products Across Multiple Stores I wouldn't recommend the approach used by Neal et al.. Their data is unique for two reasons: They are working with food data, which is usually denser and more stable than other retail product sales data. A given location will be selling dozens of milk cartons or egg packs per week and will have been selling those same products for decades, compared to fashion or car parts where it is not unusual to have sales of one single item every 3 or 4 weeks, and data available for only a year or two. They are forecasting for warehouses not stores. A single warehouse covers multiple stores, so their data is even more dense than average. In fact a warehouse is typically used as a natural aggregation/grouping level for stores, so they are already essentially performing a grouping of store data. Because of the nature of their data, they can get away with modeling individual time series directly. But most retailers' data would be too sparse at the individual sku/store level for them to pull that off. As zbicyclist said, this problem is usually approached using hierarchical or multi-echelon forecasting. Commercial demand forecasting packages all use some form of hierarchical forecasting The idea is to group products and stores into similar product and regions, for which aggregate forecasts are generated and used to determine overall seasonality and trend, which are then spread down reconciled using a Top-Down approach with the baseline forecasts generated for each individual sku/store combination. Besides the challenge zbicyclist mentioned, a bigger problem is that finding the optimal groupings of products and stores is a non-trivial task, which requires a combination of domain expertise and empirical analysis. Products and stores are usually grouped together in elaborate hierarchies (By department, supplier, brand, etc..for products, by region, climate, warehouse, etc...for location) which are then fed to the forecasting algorithm along with historical sales data itself. Addressing meraxes comments How about the methods used in the CorporaciΓ³n Favorita Grocery Sales Forecasting Kaggle Competition, where they allow the models to learn from the sales histories of several (possibly unrelated) products, without doing any explicit grouping? Is this still a valid approach? They're doing the grouping implicitly by using store, item, famlily, class, cluster as categorical features. I've just read through a bit of Rob Hyndman's section on hierarchical forecasting. It seems to me that doing a Top-Down approach provides reliable forecasts for aggregate levels; however, it has the huge disadvantage of losing of information due to aggregation which may affect forecasts for the bottom-level nodes. It may also be "unable to capture and take advantage of individual series characteristics such as time dynamics, special events". Three points regarding this: The disadvantage he points to depends on the grouping of the data. If you you aggregate all the products and stores, then yes this would be a problem. For example aggregating all the stores from all regions would muddy out any region specific seasonalities. But you should be aggregating up only to the relevant grouping, and as I pointed out, this will require some analysis and experimentation to find. In the specific case of retail demand, we are not worried about "loosing information due to aggregation" because frequently the times series at the bottom nodes (i.e. 
SKU/Store) contain very little information, which is why we aggregate them up to the higher levels in the first place. For SKU/store specific events, the way we approach it on my team is to remove the event specific effects prior to generating a forecast, and then adding them back later, after the forecast is generated. See here for details.
Product Demand Forecasting for Thousands of Products Across Multiple Stores I wouldn't recommend the approach used by Neal et al.. Their data is unique for two reasons: They are working with food data, which is usually denser and more stable than other retail product sales d
18,747
Why don't we use non-constant learning rates for gradient decent for things other then neural networks?
Disclaimer: I don't have much experience with optimization outside of neural networks, so my answer will be clearly biased, but there are several things that play a role: (Deep) neural networks have a lot of parameters. This has several implications: Firstly, it kind of rules out higher-order methods simply because computing the Hessian and higher derivatives becomes infeasible. In other domains, this may be a valid approach, better than any tweaks to SGD. Secondly, although SGD is wonderful, it tends to be impractically slow. These improved SGD variants mainly enable faster training, while potentially losing some of the nice properties of SGD. In other domains, the SGD training time may not be the bottleneck, so improvements gained by speeding it up may be simply negligible. Training (deep) neural networks is non-convex optimization and I am not aware of significant convex relaxation results in the field. Unlike other fields, neural networks are not focused on provably globally optimal solutions, which leads to investing more effort into improving the properties of the loss surface and its traversal during the optimization. In other fields, employing convex relaxation and obtaining globally optimal solutions may be at the center of interest instead of the optimization algorithm, because once the problem is defined as a convex problem, the choice of the optimization algorithm cannot improve the quality of the solution. I suppose this answer does not cover all possible aspects and I am myself curious about other opinions.
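As a small, self-contained illustration of what a non-constant learning rate buys (my own toy example, not tied to any particular library), the sketch below runs stochastic gradient descent on a one-dimensional quadratic with noisy gradients, once with a constant step size and once with a decaying schedule; the decaying schedule typically ends closer to the minimum because it averages out the gradient noise.

import numpy as np

rng = np.random.default_rng(0)

def sgd(lr_schedule, steps=2000):
    # Minimize f(x) = 0.5 * x^2 using noisy gradient estimates g = x + noise
    x = 5.0
    for t in range(steps):
        g = x + rng.normal(scale=1.0)
        x -= lr_schedule(t) * g
    return x

constant = lambda t: 0.1
decaying = lambda t: 0.1 / (1.0 + 0.01 * t)   # simple 1/t-style decay

print("final |x| with constant step:", abs(sgd(constant)))
print("final |x| with decaying step:", abs(sgd(decaying)))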
Why don't we use non-constant learning rates for gradient decent for things other then neural networ
Disclaimer: I don't have so much experience with optimization outside of neural networks, so my answer will be clearly biased, but there are several things that play role: (Deep) neural networks have
Why don't we use non-constant learning rates for gradient decent for things other then neural networks? Disclaimer: I don't have so much experience with optimization outside of neural networks, so my answer will be clearly biased, but there are several things that play role: (Deep) neural networks have a lot of parameters. This has several implications: Firstly, it kind-of rules out higher order methods simply because computing Hessian and higher derivatives becomes infeasible. In other domains, this may be a valid approach better than any tweaks to SGD. Secondly, although SGD is wonderful, it tends to be impractically slow. These improved SGD variants mainly enable faster training, while potentially losing some of the nice properties of SGD. In other domains, the SGD training time may not be the bottleneck, so improvements gained by speeding it up may be simply negligible. Training (deep) neural networks is non-convex optimization and I am not aware of significant convex relaxation results in the field. Unlike other fields, neural networks are not focused on provably globally optimal solutions, which leads to investing more efforts into improving the properties of loss surface and its traversal during the optimization. In other fields, employing convex relaxation and obtaining globally optimal solutions may be in the center of the interest instead of the optimization algorithm, because once the problem is defined as a convex problem, the choice of the optimization algorithm cannot improve the quality of the solution. I suppose this answer does not cover all possible aspects and I am myself curious about other opinions.
Why don't we use non-constant learning rates for gradient decent for things other then neural networ Disclaimer: I don't have so much experience with optimization outside of neural networks, so my answer will be clearly biased, but there are several things that play role: (Deep) neural networks have
18,748
Estimate the Kullback–Leibler (KL) divergence with Monte Carlo
I assume you can evaluate $f$ and $g$ up to a normalizing constant. Denote $f(x) = f_u(x)/c_f$ and $g(x) = g_u(x)/c_g$. A consistent estimator that may be used is $$ \widehat{D_{KL}}(f || g) = \left[n^{-1} \sum_j f_u(x_j)/\pi_f(x_j)\right]^{-1}\frac{1}{N}\sum_i^N \left[\log\left(\frac{f_u(z_i)}{g_u(z_i)}\right)\frac{f_u(z_i)}{\pi_r(z_i)}\right] - \log (\hat{r}) $$ where $$ \hat{r} = \frac{n^{-1}\sum_j f_u(x_j)/\pi_f(x_j)}{n^{-1}\sum_j g_u(y_j)/\pi_g(y_j)} \tag{1} $$ is an importance sampling estimator for the ratio $c_f/c_g$. Here you use $\pi_f$ and $\pi_g$ as instrumental densities for $f_u$ and $g_u$ respectively, and $\pi_r$ to target the log ratio of unnormalized densities. So let $\{x_i\} \sim \pi_f$, $\{y_i\} \sim \pi_g$, and $\{z_i\} \sim \pi_r$. The numerator of (1) converges to $c_f$. The denominator converges to $c_g$. The ratio is consistent by the continuous mapping theorem. The log of the ratio is consistent by continuous mapping again. Regarding the other part of the estimator, $$ \frac{1}{N}\sum_i^N \left[\log\left(\frac{f_u(z_i)}{g_u(z_i)}\right)\frac{f_u(z_i)}{\pi_r(z_i)}\right] \overset{\text{a.s.}}{\to} c_f\, E_f\left[ \log\left(\frac{f_u(x)}{g_u(x)}\right) \right] $$ by the law of large numbers. My motivation is the following: \begin{align*} D_{KL}(f || g) &= \int_{-\infty}^{\infty} f(x) \log\left(\frac{f(x)}{g(x)}\right) dx \\ &= \int_{-\infty}^{\infty} f(x)\left\{ \log \left[\frac{f_u(x)}{g_u(x)} \right] + \log \left[\frac{c_g}{c_f} \right]\right\} dx \\ &= E_f\left[\log \frac{f_u(x)}{g_u(x)} \right] + \log \left[\frac{c_g}{c_f} \right] \\ &= c_f^{-1} E_{\pi_r}\left[\log \frac{f_u(x)}{g_u(x)}\frac{f_u(x)}{\pi_r(x)} \right] + \log \left[\frac{c_g}{c_f} \right]. \end{align*} So I just break it up into tractable pieces. For more ideas on how to simulate the likelihood ratio, I found a paper that has a few: https://projecteuclid.org/download/pdf_1/euclid.aos/1031594732
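A minimal numerical sketch of this estimator (my own illustration, with all choices hypothetical): $f$ and $g$ are Gaussians known only up to their normalizing constants, a single wide Gaussian is used for all three instrumental densities $\pi_f = \pi_g = \pi_r$, and the result is compared with the closed-form KL divergence.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Unnormalized targets: f = N(0, 1), g = N(1, 1.5^2), each missing its normalizing constant
f_u = lambda x: np.exp(-0.5 * x**2)
g_u = lambda x: np.exp(-0.5 * ((x - 1.0) / 1.5)**2)

# One wide instrumental density N(0, 3^2) used as pi_f, pi_g and pi_r
pi_pdf = lambda x: norm.pdf(x, 0.0, 3.0)
n = N = 200_000
x = rng.normal(0.0, 3.0, n)
y = rng.normal(0.0, 3.0, n)
z = rng.normal(0.0, 3.0, N)

c_f_hat = np.mean(f_u(x) / pi_pdf(x))     # estimates c_f
c_g_hat = np.mean(g_u(y) / pi_pdf(y))     # estimates c_g
r_hat = c_f_hat / c_g_hat                  # estimates c_f / c_g

# Estimates c_f * E_f[ log(f_u / g_u) ]
main = np.mean(np.log(f_u(z) / g_u(z)) * f_u(z) / pi_pdf(z))
kl_hat = main / c_f_hat - np.log(r_hat)

# Closed-form KL between N(0,1) and N(1, 1.5^2) for comparison
mu1, s1, mu2, s2 = 0.0, 1.0, 1.0, 1.5
kl_true = np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5
print("estimate:", kl_hat, "  closed form:", kl_true)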
Estimate the Kullback–Leibler (KL) divergence with Monte Carlo
I assume you can evaluate $f$ and $g$ up to a normalizing constant. Denote $f(x) = f_u(x)/c_f$ and $g(x) = g_u(x)/c_g$. A consistent estimator that may be used is $$ \widehat{D_{KL}}(f || g) = \left[n
Estimate the Kullback–Leibler (KL) divergence with Monte Carlo I assume you can evaluate $f$ and $g$ up to a normalizing constant. Denote $f(x) = f_u(x)/c_f$ and $g(x) = g_u(x)/c_g$. A consistent estimator that may be used is $$ \widehat{D_{KL}}(f || g) = \left[n^{-1} \sum_j f_u(x_j)/\pi_f(x_j)\right]^{-1}\frac{1}{N}\sum_i^N \left[\log\left(\frac{f_u(z_i)}{g_u(z_i)}\right)\frac{f_u(z_i)}{\pi_r(z_i)}\right] - \log (\hat{r}) $$ where $$ \hat{r} = \frac{1/n}{1/n}\frac{\sum_j f_u(x_j)/\pi_f(x_j)}{\sum_j g_u(y_j)/\pi_g(y_j)} \tag{1}. $$ is an importance sampling estimator for the ratio $c_f/c_g$. Here you use $\pi_f$ and $\pi_g$ as instrumental densities for $f_u$ and $g_u$ respectively, and $\pi_r$ to target the log ratio of unnormalized densities. So let $\{x_i\} \sim \pi_f$, $\{y_i\} \sim \pi_g$, and $\{z_i\} \sim \pi_r$. The numerator of (1) converges to $c_f$. The denominator converges to $c_g$. The ratio is consistent by the continuous mapping theorem. The log of the ratio is consistent by continuous mapping again. Regarding the other part of the estimator, $$ \frac{1}{N}\sum_i^N \left[\log\left(\frac{f_u(z_i)}{g_u(z_i)}\right)\frac{f_u(z_i)}{\pi_r(z_i)}\right] \overset{\text{as}}{\to} c_f E\left[ \log\left(\frac{f_u(z_i)}{g_u(z_i)}\right) \right] $$ by the law of large numbers. My motivation is the following: \begin{align*} D_{KL}(f || g) &= \int_{-\infty}^{\infty} f(x) \log\left(\frac{f(x)}{g(x)}\right) dx \\ &= \int_{-\infty}^{\infty} f(x)\left\{ \log \left[\frac{f_u(x)}{g_u(x)} \right] + \log \left[\frac{c_g}{c_f} \right]\right\} dx \\ &= E_f\left[\log \frac{f_u(x)}{g_u(x)} \right] + \log \left[\frac{c_g}{c_f} \right] \\ &= c_f^{-1} E_{\pi_r}\left[\log \frac{f_u(x)}{g_u(x)}\frac{f_u(x)}{\pi_r(x)} \right] + \log \left[\frac{c_g}{c_f} \right]. \end{align*} So I just break it up into tractable pieces. For more ideas on how to simulate the likelhood ratio, I found a paper that has a few: https://projecteuclid.org/download/pdf_1/euclid.aos/1031594732
Estimate the Kullback–Leibler (KL) divergence with Monte Carlo I assume you can evaluate $f$ and $g$ up to a normalizing constant. Denote $f(x) = f_u(x)/c_f$ and $g(x) = g_u(x)/c_g$. A consistent estimator that may be used is $$ \widehat{D_{KL}}(f || g) = \left[n
18,749
Estimate the Kullback–Leibler (KL) divergence with Monte Carlo
Here I assume that you can only sample from the models; an unnormalized density function is not available. You write that $$D_{KL}(f || g) = \int_{-\infty}^{\infty} f(x) \log\left(\underbrace{\frac{f(x)}{g(x)}}_{=: r}\right) dx,$$ where I have defined the ratio of probabilities to be $r$. Alex Smola writes, although in a different context, that you can estimate these ratios "easily" by just training a classifier. Let us assume you have obtained a classifier $p(f|x)$, which can tell you the probability that an observation $x$ has been generated by $f$. Note that $p(g|x) = 1 - p(f|x)$. Then: $$r = \frac{p(x|f)}{p(x|g)} = \frac{p(f|x)\, p(x)\, p(g)}{p(g|x)\, p(x)\, p(f)} = \frac{p(f|x)}{p(g|x)},$$ where the first step is due to Bayes and the last follows from the assumption that $p(g) = p(f)$. Getting such a classifier can be quite easy for two reasons. First, you can do stochastic updates. That means that if you are using a gradient-based optimizer, as is typical for logistic regression or neural networks, you can just draw samples from each of $f$ and $g$ and make an update. Second, as you have virtually unlimited data–you can just sample $f$ and $g$ to death–you don't have to worry about overfitting or the like.
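As a hypothetical sketch of this classifier trick (using scikit-learn's logistic regression, which the answer itself does not prescribe), one can fit a classifier on samples from $f$ and $g$ with equal class priors and average the estimated log-odds over fresh $f$-samples, since $\mathrm{KL}(f\|g) = E_f[\log r]$. The features $(x, x^2)$ are chosen because the true log ratio between two Gaussians is quadratic in $x$; all names and distributions are illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000

# f = N(0, 1), g = N(1, 1.5^2); equal class priors so p(f) = p(g)
xf = rng.normal(0.0, 1.0, n)
xg = rng.normal(1.0, 1.5, n)

# Features (x, x^2): the true log-ratio of two Gaussians is linear in these features
X = np.concatenate([xf, xg])[:, None]
X = np.hstack([X, X**2])
lab = np.concatenate([np.ones(n), np.zeros(n)])   # 1 = sample came from f

clf = LogisticRegression(C=1e6, max_iter=1000).fit(X, lab)

# With equal priors, log r(x) = log p(f|x) - log p(g|x), i.e. the classifier's log-odds
new_f = rng.normal(0.0, 1.0, n)
F = np.hstack([new_f[:, None], new_f[:, None]**2])
kl_hat = clf.decision_function(F).mean()

kl_true = np.log(1.5 / 1.0) + (1.0 + 1.0) / (2 * 1.5**2) - 0.5
print("classifier estimate:", kl_hat, "  closed form:", kl_true)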
Estimate the Kullback–Leibler (KL) divergence with Monte Carlo
Here I assume that you can only sample from the models; an unnormalized density function is not available. You write that $$D_{KL}(f || g) = \int_{-\infty}^{\infty} f(x) \log\left(\underbrace{\frac{
Estimate the Kullback–Leibler (KL) divergence with Monte Carlo Here I assume that you can only sample from the models; an unnormalized density function is not available. You write that $$D_{KL}(f || g) = \int_{-\infty}^{\infty} f(x) \log\left(\underbrace{\frac{f(x)}{g(x)}}_{=: r}\right) dx,$$ where I have defined the ratio of probabilities to be $r$. Alex Smola writes, although in a different context that you can estimate these ratios "easily" by just training a classifier. Let us assume you have obtained a classifier $p(f|x)$, which can tell you the probability that an observation $x$ has been generated by $f$. Note that $p(g|x) = 1 - p(f|x)$. Then: $$r = \frac{p(x|f)}{p(x|g)} \\ = \frac{p(f|x) {p(x) p(g)}}{p(g|x)p(x) p(f)} \\ = \frac{p(f|x)}{p(g|x)},$$ where the first step is due to Bayes and the last follows from the assumption that $p(g) = p(f)$. Getting such a classifier can be quite easy for two reasons. First, you can do stochastic updates. That means that if you are using a gradient-based optimizer, as is typical for logistic regression or neural networks, you can just draw a samples from each $f$ and $g$ and make an update. Second, as you have virtually unlimited data–you can just sample $f$ and $g$ to death–you don't have to worry about overfitting or the like.
Estimate the Kullback–Leibler (KL) divergence with Monte Carlo Here I assume that you can only sample from the models; an unnormalized density function is not available. You write that $$D_{KL}(f || g) = \int_{-\infty}^{\infty} f(x) \log\left(\underbrace{\frac{
18,750
Estimate the Kullback–Leibler (KL) divergence with Monte Carlo
Besides the probabilistic classifier method mentioned by @bayerj, you can also use the lower bound of the KL divergence derived in [1, 2]: $$ \mathrm{KL}[f \Vert g] \ge \sup_{T} \left\{ \mathbb{E}_{x\sim f}\left[ T(x) \right] - \mathbb{E}_{x\sim g} \left[ \exp \left( T(x) - 1 \right)\right] \right\}, $$ where $T:\mathcal{X}\to\mathbb{R}$ is an arbitrary function. Under some mild conditions, the bound is tight for: $$T(x) = 1 + \ln \left[ \frac{f(x)}{g(x)} \right]$$ To estimate the KL divergence between $f$ and $g$, we maximize the lower bound w.r.t. the function $T(x)$. References: [1] Nguyen, X., Wainwright, M.J. and Jordan, M.I., 2010. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 56(11), pp. 5847-5861. [2] Nowozin, S., Cseke, B. and Tomioka, R., 2016. f-GAN: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems (pp. 271-279).
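As a quick numerical check (mine, not from the cited papers), for two Gaussians the optimal $T$ is available in closed form, so one can plug it into the bound and verify by Monte Carlo that it reproduces the known KL divergence; in practice $T$ would instead be parameterized (e.g. by a neural network) and optimized.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu1, s1, mu2, s2 = 0.0, 1.0, 1.0, 1.5   # f = N(mu1, s1^2), g = N(mu2, s2^2)
n = 500_000

# Optimal critic: T*(x) = 1 + log f(x)/g(x)
T = lambda x: 1.0 + norm.logpdf(x, mu1, s1) - norm.logpdf(x, mu2, s2)

xf = rng.normal(mu1, s1, n)
xg = rng.normal(mu2, s2, n)
# E_f[T*] - E_g[exp(T* - 1)] should equal KL(f || g), since E_g[f/g] = 1
bound = T(xf).mean() - np.exp(T(xg) - 1.0).mean()

kl_true = np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5
print("bound at T*:", bound, "  closed-form KL:", kl_true)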
Estimate the Kullback–Leibler (KL) divergence with Monte Carlo
Besides the probabilistic classifier method mentioned by @bayerj, you can also use the lower bound of the KL divergence derived in [1-2]: $$ \mathrm{KL}[f \Vert g] \ge \sup_{T} \left\{ \mathbb{E}_{x\s
Estimate the Kullback–Leibler (KL) divergence with Monte Carlo Besides the probabilistic classifier method mentioned by @bayerj, you can also use the lower bound of the KL divergence derived in [1-2]: $$ \mathrm{KL}[f \Vert g] \ge \sup_{T} \left\{ \mathbb{E}_{x\sim f}\left[ T(x) \right] - \mathbb{E}_{x\sim g} \left[ \exp \left( T(x) - 1 \right)\right] \right\}, $$ where $T:\mathcal{X}\to\mathbb{R}$ is an arbitrary function. Under some mild conditions, the bound is tight for: $$T(x) = 1 + \ln \left[ \frac{f(x)}{g(x)} \right]$$ To estimate KL divergence between $f$ and $g$, we maximize the lower bound w.r.t. to the function $T(x)$. References: [1] Nguyen, X., Wainwright, M.J. and Jordan, M.I., 2010. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 56(11), pp.5847-5861. [2] Nowozin, S., Cseke, B. and Tomioka, R., 2016. f-gan: Training generative neural samplers using variational divergence minimization. In Advances in neural information processing systems (pp. 271-279).
Estimate the Kullback–Leibler (KL) divergence with Monte Carlo Besides the probabilistic classifier method mentioned by @bayerj, you can also use the lower bound of the KL divergence derived in [1-2]: $$ \mathrm{KL}[f \Vert g] \ge \sup_{T} \left\{ \mathbb{E}_{x\s
18,751
Estimate the Kullback–Leibler (KL) divergence with Monte Carlo
Here is a python implementation for KL estimation for gaussian samples vs. closed-form calculation:

import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import norm

# Monte Carlo estimate of KL(f || g): average of log f - log g over samples p drawn from f
def kl_divergence(p, mu1, sigma1, mu2, sigma2):
    return (1.0 / len(p)) * sum(np.log(norm.pdf(p[i], mu1, sigma1)) - np.log(norm.pdf(p[i], mu2, sigma2)) for i in range(len(p)))

# Closed-form KL divergence between two univariate Gaussians
def kl_divergence_cf(mu1, sigma1, mu2, sigma2):
    return (np.log(sigma2 / sigma1) + (np.power(sigma1, 2) + (np.power((mu1 - mu2), 2))) / (2 * np.power(sigma2, 2)) - 0.5)

# (Unused helper; note it relies on the globals mu2, sigma2 defined below)
def kl_divergence_js(p, q):
    return (1.0 / len(p)) * sum(np.log(p[i]) - np.log(norm.pdf(q[i], mu2, sigma2)) for i in range(len(p)))

######## Sampling ########
mu1, sigma1 = 2, 1    # mean and standard deviation of f
p = np.random.normal(mu1, sigma1, 10000)
mu2, sigma2 = 3, 0.5  # mean and standard deviation of g
q = np.random.normal(mu2, sigma2, 10000)

######## Closed Form - KL Divergence ########
print("Closed Form kl_div : " + str(kl_divergence_cf(mu1, sigma1, mu2, sigma2)))

######## MC Sampling - KL Divergence ########
print("Monte Carlo Estimation kl_div : " + str(kl_divergence(p.tolist(), mu1, sigma1, mu2, sigma2)))

######## Plotting ########
count, bins, ignored = plt.hist(p, 30, density=True)
plt.plot(bins, 1 / (sigma1 * np.sqrt(2 * np.pi)) * np.exp(-(bins - mu1)**2 / (2 * sigma1**2)), linewidth=2, color='r')
count, bins, ignored = plt.hist(q, 30, density=True)
plt.plot(bins, 1 / (sigma2 * np.sqrt(2 * np.pi)) * np.exp(-(bins - mu2)**2 / (2 * sigma2**2)), linewidth=2, color='r')
plt.show()
Estimate the Kullback–Leibler (KL) divergence with Monte Carlo
Here is a python implementation for KL estimation for gaussian samples vs. close form calculation: import matplotlib.pyplot as plt import numpy as np from scipy.stats import norm def kl_divergence(p,
Estimate the Kullback–Leibler (KL) divergence with Monte Carlo Here is a python implementation for KL estimation for gaussian samples vs. close form calculation: import matplotlib.pyplot as plt import numpy as np from scipy.stats import norm def kl_divergence(p, mu1, sigma1, mu2, sigma2): return (1.0 / len(p)) * sum(np.log(norm.pdf(p[i], mu1, sigma1)) - np.log(norm.pdf(p[i],mu2, sigma2)) for i in range(len(p))) def kl_divergence_cf(mu1, sigma1, mu2, sigma2): return (np.log(sigma2/sigma1) + (np.power(sigma1, 2) + (np.power((mu1-mu2), 2)))/(2*np.power(sigma2, 2)) - 0.5) def kl_divergence_js(p, q): return (1.0 / len(p)) * sum(np.log(p[i]) - np.log(norm.pdf(q[i], mu2, sigma2)) for i in range(len(p))) ######## Sampling ######## mu1, sigma1 = 2, 1 # mean and standard deviation p = np.random.normal(mu1, sigma1, 10000) mu2, sigma2 = 3, 0.5 # mean and standard deviation q = np.random.normal(mu2, sigma2, 10000) ######## Close Form - KL Divergence ######## print("Close Form kl_div : " + str(kl_divergence_cf(mu1, sigma1, mu2, sigma2))) ######## MC Sampling - KL Divergence ######## print("Monte Carlo Estimation kl_div : " + str(kl_divergence(p.tolist(), mu1, sigma1, mu2, sigma2))) ######## Plotting ######## count, bins, ignored = plt.hist(p, 30, density = True) plt.plot(bins, 1/(sigma1 * np.sqrt(2 * np.pi)) * np.exp(- (bins - mu1)**2 / (2 * sigma1**2) ), linewidth = 2, color = 'r') count, bins, ignored = plt.hist(q, 30, density=True) plt.plot(bins, 1/(sigma2 * np.sqrt(2 * np.pi)) * np.exp( - (bins - mu2)**2 / (2 * sigma2**2) ), linewidth = 2, color = 'r') plt.show()
Estimate the Kullback–Leibler (KL) divergence with Monte Carlo Here is a python implementation for KL estimation for gaussian samples vs. close form calculation: import matplotlib.pyplot as plt import numpy as np from scipy.stats import norm def kl_divergence(p,
18,752
Why should boostrap sample size equal the original sample size? [duplicate]
This is from Efron and Tibshirani's An Introduction to the Bootstrap (first sentence of Chapter 2): The bootstrap is a computer-based method for assigning measures of accuracy to statistical estimates. This suggests that we should in some way respect the correct sample size $n$: the accuracy of statistical estimates depends on the sample size, and your statistical estimate will come from a sample of size $n$. Let's look at how to estimate the standard error of the mean with the bootstrap, and at how you're fooling yourself if you draw bootstrap samples of any size other than $n$. We understand the behavior of the mean very well. As you'll no doubt remember from intro to stats, the standard error of the mean depends on the sample size, $n$, in the following manner: $SEM = s/\sqrt{n}$, where $s^2$ is the sample variance. The bootstrap principle is that a bootstrap sample relates to your sample as your sample relates to the population. In other words you're assuming your sample is a pretty good approximation to the population and that you can use it as a proxy. Let $x^{*b}$ denote the $b$-th bootstrap sample, and let $\hat \mu^*_b$ be the mean of this bootstrap sample. The bootstrap estimate of standard error is: $$ SE_\mathrm{boot} = \sqrt{\frac{\sum_{b=1}^B (\hat \mu^*_b - \bar \mu^*)^2}{(B-1)}} $$ where $B$ is the number of bootstrap samples you've drawn (the more the merrier), and $\bar \mu^* = \sum_b \hat \mu^*_b/B$ is the average of the bootstrapped means. This is a long way of saying that the bootstrap estimate of standard error is simply the sample standard deviation of the bootstrapped statistics. You're using the spread in the bootstrapped means to say something about the accuracy of the sample mean. Now, we're bootstrapping, so we're treating the original sample as a population: it is a discrete distribution with mass $1/n$ at each data point $x_i$. We can draw as many samples from this as we want, and in principle we can make them as large or small as we want. If we draw an $n^*$-sized bootstrap sample and estimate its mean $\hat \mu^*$, we know that, approximately, $\hat \mu^* \sim N(\hat \mu, s/\sqrt{n^*})$. For $n^*=n$ the standard deviation of your bootstrapped mean is exactly the central-limit-theorem-dictated $SEM$ for the original sample. This isn't true for any other $n^*$. So in this example, if $n^* = n$, the sample standard deviation of $\{\hat \mu^*_b\}$ is a good representation of the correct standard error of the mean. If you draw larger bootstrap samples you get really good estimates of the sample mean, but their spread no longer directly relates to the standard error you're trying to estimate because you can make their distribution arbitrarily tight.
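A small simulation (my own illustration, with made-up data) makes the point concrete: bootstrap samples of size $n$ reproduce the plug-in $SEM = s/\sqrt{n}$, while bootstrap samples of size $10n$ give a spread that is far too tight.

import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.exponential(scale=2.0, size=n)            # one observed sample of size n
sem_plugin = x.std(ddof=1) / np.sqrt(n)

def boot_se(n_star, B=5000):
    # Spread of bootstrap means when resampling n_star points (with replacement) from x
    means = np.array([rng.choice(x, size=n_star, replace=True).mean() for _ in range(B)])
    return means.std(ddof=1)

print("plug-in SEM            :", sem_plugin)
print("bootstrap SE, n* = n   :", boot_se(n))
print("bootstrap SE, n* = 10n :", boot_se(10 * n))   # too tight: understates the SEM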
Why should boostrap sample size equal the original sample size? [duplicate]
This is from Efron and Tibshirani's An Introduction to the Bootstrap (first sentence of Chapter 2): The bootstrap is a computer-based method for assigning measures of accuracy to statistical estima
Why should boostrap sample size equal the original sample size? [duplicate] This is from Efron and Tibshirani's An Introduction to the Bootstrap (first sentence of Chapter 2): The bootstrap is a computer-based method for assigning measures of accuracy to statistical estimates. This suggests that we should in some way respect the correct sample size $n$: The accuracy of statistical estimates depends on the sample size, and your statistical estimate will come from a sample of size $n$. How to estimate the standard error of the mean with the bootstrap, and how you're fooling yourself if you draw bootstrap samples of any other size than $n$. We understand the behavior of the mean very well. As you'll no doubt remember from intro to stats, the standard error of the mean depends on the sample size, $n$, in the following manner: $SEM = s/\sqrt{n}$, where $s^2$ is the sample variance. The bootstrap principle is that a bootstrap sample relates to your sample as your sample relates to the population. In other words you're assuming your sample is a pretty good approximation to the population and that you can use it as a proxy. Let $x^{*b}$ denote the $b$-th bootstrap sample, and let $\hat \mu^*_b$ be the mean of this bootstrap sample. The bootstrap estimate of standard error is: $$ SE_\mathrm{boot} = \sqrt{\frac{\sum_{b=1}^B (\hat \mu^*_b - \bar \mu^*)^2}{(B-1)}} $$ where $B$ is the number of bootstrap samples you've drawn (the more the merrier,) and $\bar \mu^* = \sum \hat \mu^*_b/B$ is the average of the bootstrapped means. This is a long way of saying that the bootstrap estimate of standard error is simply the sample standard deviation of the bootstrapped statistics. You're using the spread in the bootstrapped means to say something about the accuracy of the sample mean. Now, we're bootstrapping, so we're treating the original sample as a population: it is a discrete distribution with mass $1/n$ at each data point $x_i$. We can draw as many samples from this as we want, and in principle we can make them as large or small as we want. If we draw an $n^*$-sized bootstrap sample and estimate its mean $\hat \mu^*$, we know that $\hat \mu^* \sim N(\hat \mu, s/\sqrt{n^*})$. For $n^*=n$ the standard deviation of your bootstrapped mean is exactly the central limit theorem-dictated $SEM$ for the original sample. This isn't true for any other $n^*$. So in this example, if $n^* = n$, the sample standard deviation of $\{\hat \mu^*_b\}$ is a good representation of the correct standard error of the mean. If you draw larger bootstrap samples you get really good estimates of the sample mean, but their spread no longer directly relates to the standard error you're trying to estimate because you can make their distribution arbitrarily tight.
Why should boostrap sample size equal the original sample size? [duplicate] This is from Efron and Tibshirani's An Introduction to the Bootstrap (first sentence of Chapter 2): The bootstrap is a computer-based method for assigning measures of accuracy to statistical estima
18,753
What are the differences between stochastic and fixed regressors in linear regression model?
My suggestion is to take the habit of calling the "fixed" regressors "deterministic". This accomplishes two things: first, it clears the not-infrequent misunderstanding that "fixed" means "invariant". Second, it clearly contrasts with "stochastic", and tells us that the regressors are decided upon (hence the "design matrix" terminology that comes from fields where the regressors are f... deterministic). If regressors are deterministic, they have no distribution in the usual sense, so they have no moments in the usual sense, meaning in practice that $E(x^r) = x^r$. The only stochastic element in the sample rests in the error term (and so in the dependent variable). This has the basic implication that a sample with even one varying deterministic regressor is no longer an identically distributed sample: $$E(y_i) = bE(x_i) + E(u_i) \implies E(y_i) = bx_i$$ and since the deterministic $x_i$'s are varying, it follows that the dependent variable does not have the same expected value for all $i$'s. In other words, there is not one distribution; each $y_i$ has its own (possibly belonging to the same family, but with different parameters). So you see, it is not about conditional moments; the implications of deterministic regressors relate to the unconditional moments. For example, averaging the dependent variable here does not give us anything meaningful, except for descriptive statistics for the sample. Reverse that to see the implication: if the $y_i$'s are draws from a population of identical random variables, in what sense, and with what validity, are we going to link them with deterministic regressors? We can always regress a series of numbers on a matrix of other numbers: if we use ordinary least squares, we will be estimating the related orthogonal projection. But this is devoid of any statistical meaning. Note also that $E(y_i \mid x_i) = E(y_i)$. Does this mean that $y_i$ is "mean-independent" of $x_i$? No, this would be the interpretation if $x_i$ were stochastic. Here, it tells us that there is no distinction between unconditional and conditional moments when deterministic regressors are involved. We can certainly predict with deterministic regressors. $b$ is a common characteristic of all $y_i$'s, and we can recover it using deterministic regressors. Then we can take a regressor with a value out-of-sample, and predict the value of the corresponding $y$.
What are the differences between stochastic and fixed regressors in linear regression model?
My suggestion is to take the habit of calling the "fixed" regressors "deterministic". This accomplishes two things: first, it clears the not-infrequent misunderstanding that "fixed" means "invariant".
What are the differences between stochastic and fixed regressors in linear regression model? My suggestion is to take the habit of calling the "fixed" regressors "deterministic". This accomplishes two things: first, it clears the not-infrequent misunderstanding that "fixed" means "invariant". Second, it clearly contrasts with "stochastic", and tells us that the regressors are decided upon (hence the "design matrix" terminology that comes from fields where the regressors are f... deterministic). If regressors are deterministic, they have no distribution in the usual sense, so they have no moments in the usual sense, meaning in practice that $E(x^r) = x^r$. The only stochastic element in the sample, rests in the error term (and so in the dependent variable). This has the basic implication that a sample with even one and varying deterministic regressor is no longer an identically distributed sample: $$E(y_i) = bE(x_i) + E(u_i) \implies E(y_i) = bx_i$$ and since the deterministic $x_i$'s are varying, it follows that the dependent variable does not have the same expected value for all $i$'s. In other words, there is not one distribution, each $y_i$ has its own (possibly belonging to the same family, but with different parameters). So you see it is not about conditional moments, the implications of deterministic regressors relate to the unconditional moments. For example, averaging the dependent variable here does not give us anything meaningful, except for descriptive statistics for the sample. Reverse that to see the implication: if the $y_i$'s are draws from a population of identical random variables, in what sense, and with what validity are we going to link them with deterministic regressors? We can always regress a series of numbers on a matrix of other numbers: if we use ordinary least-squares, we will be estimating the related orthogonal projection. But this is devoid of any statistical meaning. Note also that $E(y_i \mid x_i) = E(y_i)$. Does this mean that $y_i$ is "mean-independent" from $x_i$? No, this would be the interpretation if $x_i$ was stochastic. Here, it tells us that there is no distinction between unconditional and conditional moments, when deterministic regressors are involved. We can certainly predict with deterministic regressors. $b$ is a common characteristic of all $y_i$'s, and we can recover it using deterministic regressors. Then we can take a regressor with a value out-of-sample, and predict the value of the corresponding $y$.
What are the differences between stochastic and fixed regressors in linear regression model? My suggestion is to take the habit of calling the "fixed" regressors "deterministic". This accomplishes two things: first, it clears the not-infrequent misunderstanding that "fixed" means "invariant".
18,754
What are the differences between stochastic and fixed regressors in linear regression model?
First, what is regression at all? See Definition and delimitation of regression model: there is some disagreement about this very broad concept, but mostly it is about modeling the conditional distribution (or some aspect of it) of $Y$ given some predictors $x$. So, given that we are going to condition on $x$, why should it matter at all if $x$ was random or deterministic at the beginning? See the similar question What is the difference between conditioning on regressors vs. treating them as fixed?. I guess then that this random regressor thing seems such a mess because it really is a many-headed monster (somewhat like socialism: you cut off one head and others grow out). So we must look at what the reasons could be for modelling the regressors as random. I will try a short list, surely not exhaustive: Measurement errors in the regressors $x$. This could well occur even with designed experiments with deterministic regressors, so it seems to me a separate problem. See the tags errors-in-variables or measurement-error. Problems with data collection causing problems for inference, like regressors correlated with the error term, separate regressions with correlated error terms, and many other problems studied in econometrics and causal-inference, which cannot be modeled with deterministic regressors. Models with lagged values of the response as a predictor. This is often done with regressors treated as deterministic, which seems strange to me. Then $Y$ is treated as random in one part of the model, and as deterministic in another part ... It seems to me that many of these cases are best treated on their own, and not under the very broad label of random regressors.
What are the differences between stochastic and fixed regressors in linear regression model?
First, what is regression at all? See Definition and delimitation of regression model there is some disagreement about this very broad concept, but mostly it is about modeling the conditional distrib
What are the differences between stochastic and fixed regressors in linear regression model? First, what is regression at all? See Definition and delimitation of regression model there is some disagreement about this very broad concept, but mostly it is about modeling the conditional distribution (or some aspect of it) of $Y$ given some predictors $x$. So, given that we are going to condition on $x$, why should it matter at all if $x$ was random or deterministic at the beginning? See the similar question What is the difference between conditioning on regressors vs. treating them as fixed?. I guess then that this random regressor thing seems such a mess because it really is a many-headed monster (somewhat like socialism, you cut one head and some other grow out.) So we must look at what could the reasons be for modelling the regressors as random. I try a short list, most surely not exhaustive: Measurement errors in the regressors $x$. This could well occur even with designed experiments with deterministic regressors, so seems to me a separate problem. See the tags errors-in-variables or measurement-error. Problems with data collection causing problems for inference, like regressors correlated with the error term, separate regressions with correlated error terms, and many other problems studied in econometrics and causal-inference, which cannot be modeled with deterministic regressors. Models with lagged values of the response as a predictor. This is often done with regressors treated as deterministic, which seems strange to me. Then $Y$ is treated as random in one part of the model, and as deterministic in another part ... It seems to me that this many cases is best treated on its own, and not under the very broad labeling as random regressors.
What are the differences between stochastic and fixed regressors in linear regression model? First, what is regression at all? See Definition and delimitation of regression model there is some disagreement about this very broad concept, but mostly it is about modeling the conditional distrib
18,755
What are the differences between stochastic and fixed regressors in linear regression model?
I don't think you describe the fixed regression correctly. The fixed in this context means that you can pick any level you decide. Suppose you're studying Web site outages as a function of parameters of the Web server and the load. Consider two different approaches: a. you do it in the load testing lab at your firm (in vitro) b. you do it on the live production server (in vivo) A. In the load testing lab you can set any level of the load as well as any desired parameters of the Web server. You can load it with 1,000 simultaneous clients, a worker pool size of 100, and 100GB of memory; or you could just have 10 simultaneous clients, 10 threads and 1GB, etc. In this case your fixed design matrix will have four columns: the intercept and three variables. It is fixed because there's nothing random about the variable levels. You know the exact values of each variable, and you chose them as you wished. B. On the live production server, you can probably control only some parameters, and certainly can't control the load: clients will come and go as they wish. So, at least the load will be stochastic. Even the parameters are not completely fixed: after all, you want the server still running and serving clients while you're testing it. Maybe you can play with the memory and thread pool settings in some ranges, though. So, in the best case you can set only two variables out of the three bona fide regressors. You have a random design matrix in this case. You can only observe the load, which is a regressor here. This is a random variable. Needless to say, the analysis is much easier and more robust when you have a fixed design matrix.
What are the differences between stochastic and fixed regressors in linear regression model?
I don't think you describe the fixed regression correctly. The fixed in this context means that you can pick any level you decide. Suppose, you're studying Web site outages as a function of parameter
What are the differences between stochastic and fixed regressors in linear regression model? I don't think you describe the fixed regression correctly. The fixed in this context means that you can pick any level you decide. Suppose, you're studying Web site outages as a function of parameters of the Web server and the load. Consider two different approaches: a. you do it in the load testing lab at your firm (in vitro) b. you do it on the live production server (in vivo) A. In the load testing lab you can set any level of the load as well as any desired parameters of the Web server. You can load it with 1,000 simultaneous client and the worker pool size 100, and memory 100GB; or you could just have 10 simultaneous clients, 10 threads and 1GB etc. In this case your fixed design matrix will have four columns: the intercept and three variables. It is fixed because there's nothing random about the variable levels. You know the exact values of each variable, and you chose them as you wished. B. On the live production server, you can probably control only some parameters, and certainly can't control the load: clients will come and go as they wish. So, at least the load will be stochastic. Even the parameters are not completely fixed: after all you want the server still running and serving clients while you're testing it. Maybe you can play with the memory and thread pool settings in some ranges though. So, in the best case you can set only two variables out of three bona fide regressors. You have the random design matrix in this case. You can only observe the load, which is the regressor here. This is a random variable. Needless to say that analysis is much easier and more robust when you have a fixed design matrix.
What are the differences between stochastic and fixed regressors in linear regression model? I don't think you describe the fixed regression correctly. The fixed in this context means that you can pick any level you decide. Suppose, you're studying Web site outages as a function of parameter
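To make the contrast in the answer above concrete, here is a small R sketch (my own illustration with made-up variable names and levels, not part of the original answer): in the lab setting every column of the design matrix is chosen by the experimenter, while in the production setting the load column can only be observed.

# Fixed design (load-testing lab): the experimenter picks every level.
lab <- expand.grid(clients = c(10, 100, 1000),
                   threads = c(10, 50, 100),
                   mem_gb  = c(1, 10, 100))
X_fixed <- model.matrix(~ clients + threads + mem_gb, data = lab)
head(X_fixed)   # intercept plus three deterministic columns

# Random design (live production server): the load is merely observed.
set.seed(1)
n <- 27
prod <- data.frame(clients = rpois(n, lambda = 300),              # stochastic regressor
                   threads = rep(c(10, 50, 100), length.out = n), # still under our control
                   mem_gb  = rep(c(1, 10, 100), each = 9))        # still under our control
X_random <- model.matrix(~ clients + threads + mem_gb, data = prod)
head(X_random)  # the clients column changes from one realisation of the data to the next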
18,756
What are the differences between stochastic and fixed regressors in linear regression model?
Many interesting things have already been said, but let me add a few points. First of all, the asker is right that "random (or stochastic) vs. fixed regressors" is a badly treated topic in the literature and among specialists in general. I too ran into the situation the asker describes, some time ago. It seems to me that the problem comes from two main points. The first is the meaning of "regression", as kjetil b halvorsen suggests. In general, several problems can come from unclear definitions. Today I am convinced that regression should be understood as a synonym of the conditional expectation function (see here: Regression and the CEF ; Regression's population parameters ). Therefore something like "deterministic regressors" is an ambiguous object, because we cannot merge random and non-random variables in a joint probability distribution. Sometimes "non-stochastic regressors" is simply bad terminology for: we condition on $X$, so we can treat it as known, i.e. as a constant (non-stochastic). Indeed, all the quantities we are interested in (moments, distributions, estimates, etc.) are derived conditioning on $X$, no matter where $X$ comes from. So we can forget about "non-stochastic regressors" and consider the, usually unknown, joint distribution of ($y$,$X$) as the only true starting point, and go on from there. Just a note about variables like constants, dummies, time trends, etc.: they are frequently used in regression, and it seems to me that we can use them (condition on them) even if they cannot properly be included in a joint probability distribution. The second problem comes from the debate between regression and causality. Sometimes "non-stochastic regressors" stands for something like "fixed in repeated samples" or "fixed by the experimenter". It seems to me that Aksakal's answer goes in this direction. The experimental paradigm is common in econometrics. It brings us to the "as if" condition that permits a causal interpretation of regression. This approach can work, but today I am convinced that the structural causal paradigm is the better one. I summarize my point of view here (Under which assumptions a regression can be interpreted causally?). Now, it seems to me that an ambiguous concept like "regressors fixed by the experimenter" is useless and dangerous. Regression is regression; we cannot force this concept. If we want to deal with interventions, we need another object: structural equations. In any case, the "non-stochastic regressors" concept is surely not enough for proper causal inference (see here: non stochastic regressors and causation). Finally, in some presentations most of the concepts that emerged in this discussion are used, yet they remain deeply unclear. An authoritative example, it seems to me, is in Greene's widespread manual (8th edition, 2018, p. 25):
What are the differences between stochastic and fixed regressors in linear regression model?
Many interesting things are already said but let me add some. First of all the asker is right that "random vs stochastic or fixed regressors" is badly treated point in literature and among specialists
What are the differences between stochastic and fixed regressors in linear regression model? Many interesting things are already said but let me add some. First of all the asker is right that "random vs stochastic or fixed regressors" is badly treated point in literature and among specialists in general. Me too encountered situation like the asker told us, time ago. It seems me that the problem come from two main points. The first one is the meaning of β€œregression” as kjetil b halvorsen suggest us. Indeed, in general, several problems can come from unclear definitions. Today I convinced that regression must be intended as synonym of conditional expectation function (see here: Regression and the CEF ; Regression's population parameters ). Therefore something like β€œdeterministic regressors” is an ambigous object because we cannot merge random and non random variables in a joint probability distribution. Sometimes β€œnon stochastic regressors” is a bad terminology that stand for: we conditioning for $X$, so we can consider it as known, so as a constant (non stochastic). Indeed all quantities we are interested in (moments, distributions, estimates, ecc) are derived conditioning on $X$, no matters where $X$ come from. So we can forget the β€œnon stochastic regressors”, we can consider the, usually unknown, joint distribution ($y$,$X$) as the only true starting point and go ahead. Just a note about variable like: constant, dummy, time trend, ecc. They are frequently used in regression and it seems me that we can use them (condition on them), even if they cannot be properly included in a joint probability distribution. The second problem come from the debate between regression and causality. Indeed sometimes β€œnon stochastic regressors” stand for something like β€œfixed in repeated samples” or β€œfixed by experimenter”. It seems me that Aksakal answer go in this direction. The experimental paradigm is common in econometrics. It bring us to the as if condition that permit us to achieve causal interpretation for regression. This way can work, but today I convinced that the structural causal paradigm is the best. I summarize my point of view here (Under which assumptions a regression can be interpreted causally?). Now, it seems me that an ambiguous concepts like β€œregressors fixed by experimenter” is useless and dangerous. Regression is regression, we cannot force this concept. If we want deal with interventions, we need another object. We need structural equations. In any case "non stochastic regressors" concept is surely not enough for proper causal inference (see here: non stochastic regressors and causation) Finally, in some presentation most of the concept emerged in this discussion are used. However them remain deeply unclear. It seems me that an authoritative example is in the Greene widespread manual (8-th edition 2018 edition pag 25):
What are the differences between stochastic and fixed regressors in linear regression model? Many interesting things are already said but let me add some. First of all the asker is right that "random vs stochastic or fixed regressors" is badly treated point in literature and among specialists
18,757
What are the differences between stochastic and fixed regressors in linear regression model?
I've upvoted a couple answers that already provide many of the ingredients to the answer. I'll provide what I view as a more direct answer. Suppose you find a dataset with observations on 2 fields: x (fertilizer) and y (yield) but you don't know exactly how this dataset was obtained. You think of Fisher's experiments and realize that this is probably experimental data where x (amount of fertilizer) was set by the experimenter and after some time the corresponding y (crop yield) was measured. You want to fit the model $y=\beta_0+\beta_1x+\epsilon$. What would it mean for you to treat x as non-random/fixed? To treat x as non-random means to assume that the x was set by the experimenter in the precise sense that: $E(\epsilon|x)=E(\epsilon)$ $Var(\epsilon|x)=Var(\epsilon)$ This is what most textbooks mean by a non-random regressor. Not only is x under the experimenter's control but it has been set in a particular way. For example, if the experimenter randomly pulled fertilizer amounts out of a hat, that would meet the above conditions. On the other hand, if the experimenter set fertilizer amounts as a function of plot quality, this would not meet the above conditions. In this setting we would assume $E(\epsilon)=0$ and $Var(\epsilon)=\sigma^2$. What would it mean for you to treat x as random? To treat x as random means to assume that this is observational data where the x was merely observed and not set, which really says that we do not know the probability distribution of x. In this setting we would assume $E(\epsilon|x)=0$ and $Var(\epsilon|x)=\sigma^2$. Is there any other thing we could assume? We could assume that x was set by the experimenter in a way that violated one of the above 2 conditions. This is still a non-random regressor in the dictionary sense as Var(x)=0 but this is in conflict with what textbooks mean by "non-random regressor". If the experimenter set fertilizer amounts as a function of plot quality then $E(\epsilon|x)\not=E(\epsilon)$ and even if we further assume that $E(\epsilon)=0$, note that $E(y|x)=\beta_0+\beta_1x+E(\epsilon|x)\not= \beta_0+\beta_1x+E(\epsilon)=\beta_0+\beta_1x$.
What are the differences between stochastic and fixed regressors in linear regression model?
I've upvoted a couple answers that already provide many of the ingredients to the answer. I'll provide what I view as a more direct answer. Suppose you find a dataset with observations on 2 fields: x
What are the differences between stochastic and fixed regressors in linear regression model? I've upvoted a couple answers that already provide many of the ingredients to the answer. I'll provide what I view as a more direct answer. Suppose you find a dataset with observations on 2 fields: x (fertilizer) and y (yield) but you don't know exactly how this dataset was obtained. You think of Fisher's experiments and realize that this is probably experimental data where x (amount of fertilizer) was set by the experimenter and after some time the corresponding y (crop yield) was measured. You want to fit the model $y=\beta_0+\beta_1x+\epsilon$. What would it mean for you to treat x as non-random/fixed? To treat x as non-random means to assume that the x was set by the experimenter in the precise sense that: $E(\epsilon|x)=E(\epsilon)$ $Var(\epsilon|x)=Var(\epsilon)$ This is what most textbooks mean by a non-random regressor. Not only is x under the experimenter's control but it has been set in a particular way. For example, if the experimenter randomly pulled fertilizer amounts out of a hat, that would meet the above conditions. On the other hand, if the experimenter set fertilizer amounts as a function of plot quality, this would not meet the above conditions. In this setting we would assume $E(\epsilon)=0$ and $Var(\epsilon)=\sigma^2$. What would it mean for you to treat x as random? To treat x as random means to assume that this is observational data where the x was merely observed and not set, which really says that we do not know the probability distribution of x. In this setting we would assume $E(\epsilon|x)=0$ and $Var(\epsilon|x)=\sigma^2$. Is there any other thing we could assume? We could assume that x was set by the experimenter in a way that violated one of the above 2 conditions. This is still a non-random regressor in the dictionary sense as Var(x)=0 but this is in conflict with what textbooks mean by "non-random regressor". If the experimenter set fertilizer amounts as a function of plot quality then $E(\epsilon|x)\not=E(\epsilon)$ and even if we further assume that $E(\epsilon)=0$, note that $E(y|x)=\beta_0+\beta_1x+E(\epsilon|x)\not= \beta_0+\beta_1x+E(\epsilon)=\beta_0+\beta_1x$.
What are the differences between stochastic and fixed regressors in linear regression model? I've upvoted a couple answers that already provide many of the ingredients to the answer. I'll provide what I view as a more direct answer. Suppose you find a dataset with observations on 2 fields: x
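Below is a small simulation (my own sketch with invented numbers, not part of the original answer) illustrating the point of the answer above: when fertilizer is pulled "out of a hat", $E(\epsilon \mid x)=E(\epsilon)$ holds and OLS recovers the slope; when fertilizer is set as a function of plot quality, which also affects yield, the condition fails and the estimate is biased.

set.seed(1)
n <- 10000
quality <- rnorm(n)                      # unobserved plot quality, part of the error term

# Design 1: fertilizer amounts drawn at random (independent of plot quality)
x1 <- runif(n, 0, 10)
y1 <- 2 + 3 * x1 + quality + rnorm(n)
coef(lm(y1 ~ x1))["x1"]                  # close to the true slope 3

# Design 2: fertilizer set as a function of plot quality
x2 <- 5 + 2 * quality + runif(n, -1, 1)
y2 <- 2 + 3 * x2 + quality + rnorm(n)
coef(lm(y2 ~ x2))["x2"]                  # biased upward, because E(eps | x) != 0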
18,758
Interpreting model averaging results in R
See Grueber et al. 2011, "Multimodel inference in ecology and evolution: challenges and solutions", Journal of Evolutionary Biology 24:699-711. It really depends on your goals whether you want to use the full or the conditional averages. In my field we would use a criterion such as AICc to determine which models are most supported, then use those as your conditional subset. This information would then be reported. For example, your first four models are all within 2 AICc units of each other, so they would all be included in your subset. The others are way out there (much higher AICc), so including information from them would actually reduce the quality of your beta estimates.
Interpreting model averaging results in R
See Grueber et al. 2011, "Multimodel inference in ecology and evolution: challenges and solutions" Evolutionary Biology 24:699-711. It really depends on goals as to whether you want to use full or co
Interpreting model averaging results in R See Grueber et al. 2011, "Multimodel inference in ecology and evolution: challenges and solutions" Evolutionary Biology 24:699-711. It really depends on goals as to whether you want to use full or conditional data. In my field we would use criteria, such as AICC to determine which models are most supported, then use those as your conditional subset. This information would then be reported. For example, your first four models are all within 2 AIC units of each other, so they all would be included in your subset. The others are way out there (higher AIC) so including information from them would actually reduce the quality of your beta estimates.
Interpreting model averaging results in R See Grueber et al. 2011, "Multimodel inference in ecology and evolution: challenges and solutions" Evolutionary Biology 24:699-711. It really depends on goals as to whether you want to use full or co
18,759
Interpreting model averaging results in R
I think the premise about the difference between what exactly the full and conditional averages are is wrong. One is an average that includes zeroes (full) and one does not include zeroes (conditional). from the help file for the model.avg() command: Note The β€˜subset’ (or β€˜conditional’) average only averages over the models where the parameter appears. An alternative, the β€˜full’ average assumes that a variable is included in every model, but in some models the corresponding coefficient (and its respective variance) is set to zero. Unlike the β€˜subset average’, it does not have a tendency of biasing the value away from zero. The β€˜full’ average is a type of shrinkage estimator and for variables with a weak relationship to the response they are smaller than β€˜subset’ estimators. If you want to only use a subset of models (based on delta AIC for example), use the subset argument in model.avg(). You will still get conditional and full estimates, as long as some of the included models are missing some variables that others have.
Interpreting model averaging results in R
I think the premise about the difference between what exactly the full and conditional averages are is wrong. One is an average that includes zeroes (full) and one does not include zeroes (conditional
Interpreting model averaging results in R I think the premise about the difference between what exactly the full and conditional averages are is wrong. One is an average that includes zeroes (full) and one does not include zeroes (conditional). from the help file for the model.avg() command: Note The β€˜subset’ (or β€˜conditional’) average only averages over the models where the parameter appears. An alternative, the β€˜full’ average assumes that a variable is included in every model, but in some models the corresponding coefficient (and its respective variance) is set to zero. Unlike the β€˜subset average’, it does not have a tendency of biasing the value away from zero. The β€˜full’ average is a type of shrinkage estimator and for variables with a weak relationship to the response they are smaller than β€˜subset’ estimators. If you want to only use a subset of models (based on delta AIC for example), use the subset argument in model.avg(). You will still get conditional and full estimates, as long as some of the included models are missing some variables that others have.
Interpreting model averaging results in R I think the premise about the difference between what exactly the full and conditional averages are is wrong. One is an average that includes zeroes (full) and one does not include zeroes (conditional
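For reference, here is a hedged sketch of how the two answers above combine in practice with the MuMIn package (a toy model on simulated data; argument names as in current MuMIn versions, so check the package documentation): dredge() ranks the candidate models by AICc, the subset argument restricts averaging to models within 2 units of the best, and summary() then reports both the full and the conditional averages.

library(MuMIn)
options(na.action = "na.fail")           # required by dredge()

set.seed(1)
dat <- data.frame(x1 = rnorm(50), x2 = rnorm(50), x3 = rnorm(50))
dat$y <- 1 + 0.5 * dat$x1 + rnorm(50)
global <- lm(y ~ x1 + x2 + x3, data = dat)

ms  <- dredge(global)                    # all-subsets model set, ranked by AICc
avg <- model.avg(ms, subset = delta < 2) # average only the well-supported models
# (if only one model falls below the cutoff, model.avg() needs a wider subset)
summary(avg)                             # shows the 'full' and 'conditional' coefficient tables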
18,760
Textbooks/readings on what to do when you can't create an ideal experiment?
There are two fields where randomized experiments are almost always impossible: the social sciences and economics. In these instances you can only do "quasi experiments". Try searching with the keywords quasi experiments, observational studies and social sciences; you will get some good textbooks. I can recommend two excellent books on this subject; the second book, by Shadish and Cook, is a classic: Counterfactuals and Causal Inference: Methods and Principles for Social Research by Morgan and Winship Experimental and Quasi-Experimental Designs for Generalized Causal Inference by William R. Shadish and Thomas D. Cook A classic paper that uses a technique called "propensity score matching" in a non-experimental setting for causal inference, by Dehejia and Wahba, is highly recommended as well. Additional recommendations: Design of Observational Studies by Paul R. Rosenbaum. Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction by Imbens and Rubin. If you are looking at time series quasi experiments, the above books have some chapters devoted to them, but a dedicated book is Design and Analysis of Time-Series Experiments by Gene V. Glass, and I would also check his article on interrupted time series. Trivia: Gene V Glass coined the term "Meta Analysis".
Textbooks/readings on what to do when you can't create an ideal experiment?
There are two fields where randomized experiments are almost always impossible: they are social sciences and economics. In these instances you can only do "quasi experiments". Try searching with keywo
Textbooks/readings on what to do when you can't create an ideal experiment? There are two fields where randomized experiments are almost always impossible: they are social sciences and economics. In these instances you can only do "quasi experiments". Try searching with keywords quasi experiments, observational studies and social sciences; you will get some good text books. I can recommend two excellent books on this subject: the second book by Shadish and Cook is a classic: Counterfactuals and Causal Inference: Methods and Principles for Social Research By Morgan and Winship Experimental and Quasi-Experimental Designs for Generalized Causal Inference by by William R. Shadish and Thomas D. Cook A classic paper that uses a technique called "propensity score matching" in non experimental setting for causal inference by Dehejia and Wahba is highly recommended as well. Additional recommendations: Design of Observational Studies by Paul R. Rosenbaum. Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction by Imbens and Rubin. IF you are looking at time series quasi experiments, the above books have some chapters devoted to them, but a dedicated book is by Gene v. Glass Design and Analysis of Time-Series Experiments and I would check his article Interrupted time series. Trivia: Gene V Glass coined the term "Meta Analysis".
Textbooks/readings on what to do when you can't create an ideal experiment? There are two fields where randomized experiments are almost always impossible: they are social sciences and economics. In these instances you can only do "quasi experiments". Try searching with keywo
18,761
Textbooks/readings on what to do when you can't create an ideal experiment?
This is where quasi-experimental designs can be useful. In many situations in practice, experimental designs are not practical because, although you have a treatment, you are not able to perform random assignment to groups, or maybe you only have one group. In your education example, you may not have control over who receives the treatment because you intend to perform the intervention on all the kids in one school. However, you may be able to compare their scores to scores from previous years, or randomize classrooms so that some classes receive the intervention before others, or compare multiple schools, including those that did not receive the intervention. It might make sense to do an interrupted time series design where you have just one group, but take measurements repeatedly, and administer the treatment in the middle of your study duration. This way, you can see if the slope of the dependent variable over time changed right after the treatment, relative to the overall slope across the entire study. The number of measurements can be as low as 3, but the more the better. So, my suggestion is to read up on quasi-experimental study designs.
Textbooks/readings on what to do when you can't create an ideal experiment?
This is where quasiexperimental designs can be useful. In many situations in practice, experimental designs are not practical because, although you have a treatment, you are not able to perform random
Textbooks/readings on what to do when you can't create an ideal experiment? This is where quasiexperimental designs can be useful. In many situations in practice, experimental designs are not practical because, although you have a treatment, you are not able to perform random assignment to groups or maybe you only have one group. In your education example, you may not have control over who receives the treatment because you intend to perform the intervention to all the kids in one school. However, you maybe able to compare their scores to scores from previous years, or randomize classrooms so that some classes receive the intervention before others, or compare multiple schools including those that did not receive the intervention. It might make sense to do an interrupted time series design where you have just one group, but take measurements constantly, and administer the treatment in the middle of your study duration. This way, you can see if the slope of the dependent variable over time changed right after the treatment, relative to the overall slope across the entire study. The number of measurements can be as low as 3, but more the better. So, my suggestion is to read up on quasiexperimental study designs.
Textbooks/readings on what to do when you can't create an ideal experiment? This is where quasiexperimental designs can be useful. In many situations in practice, experimental designs are not practical because, although you have a treatment, you are not able to perform random
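As a rough sketch of the single-group interrupted time series design described in the answer above (simulated data and made-up effect sizes, offered only as an illustration), one common analysis is a segmented regression with a level-change and a slope-change term at the intervention:

set.seed(2)
time  <- 1:40
treat <- as.integer(time > 20)          # 0 before the intervention, 1 after
after <- pmax(0, time - 20)             # time elapsed since the intervention
score <- 10 + 0.2 * time + 2 * treat + 0.5 * after + rnorm(40)

fit <- lm(score ~ time + treat + after)
summary(fit)  # 'time' = pre-existing trend, 'treat' = immediate level change,
              # 'after' = change in slope following the treatment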
18,762
Textbooks/readings on what to do when you can't create an ideal experiment?
The most thorough, general, and precise treatment of causality is Judea Pearl 2009, "Causality", 2nd ed., Cambridge University Press. In particular, it makes clear that causality is not really a statistical issue - even unlimited data does not solve it. It introduces a precise language for expressing the qualitative and theoretical knowledge needed for causal inference when something about the data is suboptimal. You will see that failed randomization is just one issue among many. It also subsumes all other mathematical frameworks, e.g. those by Imbens, Rubin, and Rosenbaum. I cannot overstate how accessible, elegant and powerful his approach is. I strongly recommend it. However, you should read it in a non-linear fashion (chapters 5 and 11 are more accessible, and then you can work backwards through chapters 1, 3, and 7 for understanding the general theory). Once you have understood the basics, you can easily look into more recent advancements, for example on when it is possible to "transport" causal findings from one context to another, which is not necessarily possible even with randomization (Pearl, Judea, and Elias Bareinboim 2014, "External validity: From do-calculus to transportability across populations." Statistical Science).
Textbooks/readings on what to do when you can't create an ideal experiment?
The most thorough, general, and precise treatment of causality is Judea Pearl 2009, "Causality", 2nd ed., Cambridge University Press. Especially, it makes clear that causality is not really a statist
Textbooks/readings on what to do when you can't create an ideal experiment? The most thorough, general, and precise treatment of causality is Judea Pearl 2009, "Causality", 2nd ed., Cambridge University Press. Especially, it makes clear that causality is not really a statistical issue - even unlimited data does not solve it. It introduces a precise language to express qualitative and theoretical knowledge needed for causal inference when something about the data is suboptimal. You will see that failed randomization is just one issue among many. It also subsume all other mathematical frameworks, e.g. those by Imbens, Rubin, and Rosenbaum. I can not overstate how accessible, elegant and powerful his approach is. I strongly recommend it. However, you should read it in a non-linear fashion (chapters 5 and 11 are more accessible, and then you can work backwards through chapters 1, 3, and 7 for understanding the general theory). When you have understand the basics, you can easily look into more recent advancements, for example on when it is possible to "transport" causal findings from one context to another, which is not necessarily possible even with randomization (Pearl, Judea, and Elias Bareinboim 2014, "External validity: From do-calculus to transportability across populations." Statistical Science).
Textbooks/readings on what to do when you can't create an ideal experiment? The most thorough, general, and precise treatment of causality is Judea Pearl 2009, "Causality", 2nd ed., Cambridge University Press. Especially, it makes clear that causality is not really a statist
18,763
Textbooks/readings on what to do when you can't create an ideal experiment?
Perhaps these are what you're looking for... Statistics for Experimenters Design and Analysis of Experiments Design and Analysis of Experiments with R (not related to the previous title) Process Improvement using Data (free online or as PDF, chapter 5 covers DoE)
Textbooks/readings on what to do when you can't create an ideal experiment?
Perhaps these are what you're looking for... Statistics for Experimenters Design and Analysis of Experiments Design and Analysis of Experiments with R (not related to the previous title) Process Impro
Textbooks/readings on what to do when you can't create an ideal experiment? Perhaps these are what you're looking for... Statistics for Experimenters Design and Analysis of Experiments Design and Analysis of Experiments with R (not related to the previous title) Process Improvement using Data (free online or as PDF, chapter 5 covers DoE)
Textbooks/readings on what to do when you can't create an ideal experiment? Perhaps these are what you're looking for... Statistics for Experimenters Design and Analysis of Experiments Design and Analysis of Experiments with R (not related to the previous title) Process Impro
18,764
Simulating a Brownian Excursion using a Brownian Bridge?
A Brownian excursion can be constructed from a bridge using the following construction by Vervaat: https://projecteuclid.org/download/pdf_1/euclid.aop/1176995155 A quick approximation in R, using @whuber's BB code, is n <- 1001 times <- seq(0, 1, length.out=n) set.seed(17) dW <- rnorm(n)/sqrt(n) W <- cumsum(dW) # plot(times,W,type="l") # original BM B <- W - times * W[n] # The Brownian bridge from (0,0) to (1,target) # plot(times,B,type="l") # Vervaat construction Bmin <- min(B) tmin <- which(B == Bmin) newtimes <- (times[tmin] + times) %% 1 J<-floor(newtimes * n) BE <- B[J] - Bmin plot(1:length(BE)/n,BE,type="l") Here is another plot (from set.seed(21)). A key observation about an excursion is that the conditioning actually manifests as a "repulsion" from 0, and you are unlikely to see an excursion come close to $0$ on the interior of $(0,1)$. Aside: The distribution of the absolute value of a Brownian bridge $(|BB_t|)_{0 \le t \le 1}$ and the excursion, $(BB_t)_{0 \le t \le 1}$ conditioned to be positive, are not the same. Intuitively, the excursion is repelled from the origin, because Brownian paths coming too close to the origin are likely to go negative soon after and thus are penalised by the conditioning. This can even be illustrated with a simple random walk bridge and excursion on $6$ steps, which is a natural discrete analogue of BM (and converges to BM as the number of steps becomes large and you rescale). Indeed, take a symmetric SRW starting from $0$. First, let us consider the "bridge" conditioning and see what happens if we just take the absolute value. Consider all simple paths $s$ of length $6$ that start and end at $0$. The number of such paths is ${6 \choose 3} = 20$. There are $2\times {4 \choose 2} = 12$ of these for which $|s_2| = 0$. In other words, the probability for the absolute value of our SRW "bridge" (conditioned to end at $0$) to have value 0 at step $2$ is $12/20 = 0.6$. Secondly, we will consider the "excursion" conditioning. The number of non-negative simple paths $s$ of length $6 = 2*3$ that end at $0$ is the Catalan number $C_{m=3} = {2m\choose m}/(m+1) = 5$. Exactly $2$ of these paths have $s_2 = 0$. Thus, the probability for our SRW "excursion" (conditioned to stay non-negative and end at $0$) to have value 0 at step $2$ is $2/5 = 0.4 < 0.6$. In case you still doubt that this phenomenon persists in the limit, you could consider the probability of SRW bridges and excursions of length $4n$ hitting 0 at step $2n$. For the SRW excursion we have $$\mathbb{P}(S_{2n} = 0 | S_{j} \ge 0, j \le 4n, S_{4n} = 0) = C_n^2 / C_{2n} \sim (4^{2n}/\pi n^3)/(4^{2n} / \sqrt{(2n)^3 \pi})$$ using the asymptotics from wikipedia https://en.wikipedia.org/wiki/Catalan_number. I.e. it is like $cn^{-3/2}$ eventually. For abs(SRW bridge): $$\mathbb{P}(|S_{2n}| = 0 | S_{4n} = 0) = {2n \choose n}^2 / {4n \choose 2n} \sim (4^n/\sqrt{\pi n})^2/(4^{2n} / \sqrt{2n\pi})$$ using the asymptotics from wikipedia https://en.wikipedia.org/wiki/Binomial_coefficient. This is like $cn^{-1/2}$. In other words, the asymptotic probability of seeing the SRW bridge conditioned to be positive at $0$ near the middle is much smaller than that for the absolute value of the bridge. Here is an alternative construction based on a 3D Bessel process instead of a Brownian bridge. I use the facts explained in https://projecteuclid.org/download/pdf_1/euclid.ejp/1457125524 Overview: 1) Simulate a 3d Bessel process. This is like a BM conditioned to be positive.
2) Apply an appropriate time-space rescaling in order to obtain a Bessel 3 bridge (Equation (2) in the paper). 3) Use the fact (noted just after Theorem 1 in the paper) that a Bessel 3 bridge actually has the same distribution as a Brownian excursion. A slight drawback is that you need to run the Bessel process for quite a while (T=100 below) on a relatively fine grid in order for the space/time scaling to kick in at the end. ## Another construction of Brownian excursion via Bessel processes set.seed(27092017) ## The Bessel process must run for a long time in order to construct a bridge T <- 100 n <- 100001 d<-3 # dimension for Bessel process dW <- matrix(ncol = n, nrow = d, data=rnorm(d*n)/sqrt(n/T)) dW[,1] <- 0 W <- apply(dW, 1, cumsum) BessD <- apply(W,1,function(x) {sqrt(sum(x^2))}) times <- seq(0, T, length.out=n) # plot(times,BessD, type="l") # Bessel D process times01 <- times[times < 1] rescaletimes <- pmin(times01/(1-times01),T) # plot(times01,rescaletimes,type="l") # compare rescaled times # create new time index rescaletimeindex <- sapply(rescaletimes,function(x){max(which(times<=x))} ) BE <- (1 - times01) * BessD[rescaletimeindex] plot(times01,BE, type="l") Here is the output:
Simulating a Brownian Excursion using a Brownian Bridge?
A Brownian excursion can be constructed from a bridge using the following construction by Vervaat: https://projecteuclid.org/download/pdf_1/euclid.aop/1176995155 A quick approximation in R, using @whu
Simulating a Brownian Excursion using a Brownian Bridge? A Brownian excursion can be constructed from a bridge using the following construction by Vervaat: https://projecteuclid.org/download/pdf_1/euclid.aop/1176995155 A quick approximation in R, using @whuber's BB code, is n <- 1001 times <- seq(0, 1, length.out=n) set.seed(17) dW <- rnorm(n)/sqrt(n) W <- cumsum(dW) # plot(times,W,type="l") # original BM B <- W - times * W[n] # The Brownian bridge from (0,0) to (1,target) # plot(times,B,type="l") # Vervaat construction Bmin <- min(B) tmin <- which(B == Bmin) newtimes <- (times[tmin] + times) %% 1 J<-floor(newtimes * n) BE <- B[J] - Bmin plot(1:length(BE)/n,BE,type="l") Here is another plot (from set.seed(21)). A key observation with an excursion is that the conditioning actually manifests as a "repulsion" from 0, and you are unlikely to see an excursion come close to $0$ on the interior of $(0,1)$. Aside: The distribution of the absolute value of a Brownian bridge $(|BB_t|)_{0 \le t \le 1}$ and the excursion, $(BB_t)_{0 \le t \le 1}$ conditioned to be positive, are not the same. Intuitively, the excursion is repelled from the origin, because Brownian paths coming too close to the origin are likely to go negative soon after and thus are penalised by the conditioning. This can even be illustrated with a simple random walk bridge and excursion on $6$ steps, which is a natural discrete analogue of BM (and converges to BM as steps becomes large and you rescale). Indeed, take an symmetric SRW starting from $0$. First, let us consider the "bridge" conditioning and see what happens if we just take the absolute value. Consider all simple paths $s$ of length $6$ that start and end at $0$. The number of such paths is ${6 \choose 3} = 20$. There are $2\times {4 \choose 2} = 12$ of these for which $|s_2| = 0$. In other words, the probability for the absolute value of our SRW "bridge" (conditioned to end at $0$) to have value 0 at step $2$ is $12/20 = 0.6$. Secondly, we will consider the "excursion" conditioning. The number of non-negative simple paths $s$ of length $6 = 2*3$ that end at $0$ is the Catalan number $C_{m=3} = {2m\choose m}/(m+1) = 5$. Exactly $2$ of these paths have $s_2 = 0$. Thus, the probability for our SRW "excursion" (conditioned to stay positive and end at $0$) to have value 0 at step $2$ is $2/5 = 0.4 < 0.6$. In case you still doubt this phenomenon persists in the limit you could consider the probability for SRW bridges and excursions of length $4n$ hitting 0 at step $2n$. For the SRW excursion: we have $$\mathbb{P}(S_{2n} = 0 | S_{j} \ge 0, j \le 4n, S_{4n} = 0) = C_n^2 / C_{2n} \sim (4^{2n}/\pi n^3)/(4^{2n} / \sqrt{(2n)^3 \pi})$$ using the aysmptotics from wikipedia https://en.wikipedia.org/wiki/Catalan_number. I.e. it is like $cn^{-3/2}$ eventually. For abs(SRW bridge): $$\mathbb{P}(|S_{2n}| = 0 | S_{4n} = 0) = {2n \choose n}^2 / {4n \choose 2n} \sim (4^n/\sqrt{\pi n})^2/(4^{2n} / \sqrt{2n\pi})$$ using the asymptotics from wikipedia https://en.wikipedia.org/wiki/Binomial_coefficient. This is like $cn^{-1/2}$. In other words, the asymptotic probability to see the SRW bridge conditioned to be positive at $0$ near the middle is much smaller than that for the absolute value of the bridge. Here is an alternative construction based on a 3D Bessel process instead of a Brownian bridge. I use the facts explained in https://projecteuclid.org/download/pdf_1/euclid.ejp/1457125524 Overview- 1) Simulate a 3d Bessel process. This is like a BM conditioned to be positive. 
2) Apply an appropriate time-space rescaling in order to obtain a Bessel 3 bridge (Equation (2) in the paper). 3) Use the fact (noted just after Theorem 1 in the paper) that a Bessel 3 bridge actually has the same distribution as a Brownian excursion. A slight drawback is that you need to run the Bessel process for quite a while (T=100 below) on a relatively fine grid in order for the space/time scaling to kick in at the end. ## Another construction of Brownian excursion via Bessel processes set.seed(27092017) ## The Bessel process must run for a long time in order to construct a bridge T <- 100 n <- 100001 d<-3 # dimension for Bessel process dW <- matrix(ncol = n, nrow = d, data=rnorm(d*n)/sqrt(n/T)) dW[,1] <- 0 W <- apply(dW, 1, cumsum) BessD <- apply(W,1,function(x) {sqrt(sum(x^2))}) times <- seq(0, T, length.out=n) # plot(times,BessD, type="l") # Bessel D process times01 <- times[times < 1] rescaletimes <- pmin(times01/(1-times01),T) # plot(times01,rescaletimes,type="l") # compare rescaled times # create new time index rescaletimeindex <- sapply(rescaletimes,function(x){max(which(times<=x))} ) BE <- (1 - times01) * BessD[rescaletimeindex] plot(times01,BE, type="l") Here is the output:
Simulating a Brownian Excursion using a Brownian Bridge? A Brownian excursion can be constructed from a bridge using the following construction by Vervaat: https://projecteuclid.org/download/pdf_1/euclid.aop/1176995155 A quick approximation in R, using @whu
18,765
Simulating a Brownian Excursion using a Brownian Bridge?
The Reflection Principle asserts if the path of a Wiener process $f(t)$ reaches a value $f(s) = a$ at time $t = s$, then the subsequent path after time $s$ has the same distribution as the reflection of the subsequent path about the value $a$ Wikipedia, accessed 9/26/2017. Accordingly we may simulate a Brownian bridge and reflect it about the value $a=0$ simply by taking its absolute value. The Brownian bridge is simulated by subtracting the trend from the start point $(0,0)$ to the end $(T,B(T))$ from the Brownian motion $B$ itself. (Without any loss of generality we may measure time in units that make $T=1$. Thus, at time $t$ simply subtract $B(T)t$ from $B(t)$.) The same procedure may be applied to display a Brownian motion conditional not only on returning to a specified value at time $T\gt 0$ (the value is $0$ for the bridge), but also on remaining between two limits (which necessarily include the starting value of $0$ at time $0$ and the specified ending value). This Brownian motion starts and ends with a value of zero: it is a Brownian Bridge. The red graph is a Brownian excursion developed from the preceding Brownian bridge: all its values are nonnegative. The blue graph has been developed in the same way by reflecting the Brownian bridge between the dotted lines every time it encounters them. The gray graph displays the original Brownian bridge. The calculations are simple and fast: divide the set of times into small intervals, generate independent identically distributed Normal increments for each interval, accumulate them, subtract the trend, and perform any reflections needed. Here is R code. In it, W is the original Brownian motion, B is the Brownian bridge, and B2 is the excursion constrained between two specified values ymin (non-positive) and ymax (non-negative). Its technique for performing reflection using the modulus %% operator and componentwise minimum pmin may be of practical interest. # # Brownian bridge in n steps from t=0 to t=1. # n <- 1001 times <- seq(0, 1, length.out=n) target <- 0 # Constraint at time=1 set.seed(17) dW <- rnorm(n) W <- cumsum(dW) B <- W + times * (target - W[n]) # The Brownian bridge from (0,0) to (1,target) # # The constrained excursion. # ymax <- max(abs(B))/5 # A nice limit for illustration ymin <- -ymax * 2 # Another nice limit yrange2 <- 2*(ymax - ymin) B2 <- (B - ymin) %% yrange2 B2 <- pmin(B2, yrange2-B2) + ymin
Simulating a Brownian Excursion using a Brownian Bridge?
The Reflection Principle asserts if the path of a Wiener process $f(t)$ reaches a value $f(s) = a$ at time $t = s$, then the subsequent path after time $s$ has the same distribution as the reflection
Simulating a Brownian Excursion using a Brownian Bridge? The Reflection Principle asserts if the path of a Wiener process $f(t)$ reaches a value $f(s) = a$ at time $t = s$, then the subsequent path after time $s$ has the same distribution as the reflection of the subsequent path about the value $a$ Wikipedia, accessed 9/26/2017. Accordingly we may simulate a Brownian bridge and reflect it about the value $a=0$ simply by taking its absolute value. The Brownian bridge is simulated by subtracting the trend from the start point $(0,0)$ to the end $(T,B(T))$ from the Brownian motion $B$ itself. (Without any loss of generality we may measure time in units that make $T=1$. Thus, at time $t$ simply subtract $B(T)t$ from $B(t)$.) The same procedure may be applied to display a Brownian motion conditional not only on returning to a specified value at time $T\gt 0$ (the value is $0$ for the bridge), but also on remaining between two limits (which necessarily include the starting value of $0$ at time $0$ and the specified ending value). This Brownian motion starts and ends with a value of zero: it is a Brownian Bridge. The red graph is a Brownian excursion developed from the preceding Brownian bridge: all its values are nonnegative. The blue graph has been developed in the same way by reflecting the Brownian bridge between the dotted lines every time it encounters them. The gray graph displays the original Brownian bridge. The calculations are simple and fast: divide the set of times into small intervals, generate independent identically distributed Normal increments for each interval, accumulate them, subtract the trend, and perform any reflections needed. Here is R code. In it, W is the original Brownian motion, B is the Brownian bridge, and B2 is the excursion constrained between two specified values ymin (non-positive) and ymax (non-negative). Its technique for performing reflection using the modulus %% operator and componentwise minimum pmin may be of practical interest. # # Brownian bridge in n steps from t=0 to t=1. # n <- 1001 times <- seq(0, 1, length.out=n) target <- 0 # Constraint at time=1 set.seed(17) dW <- rnorm(n) W <- cumsum(dW) B <- W + times * (target - W[n]) # The Brownian bridge from (0,0) to (1,target) # # The constrained excursion. # ymax <- max(abs(B))/5 # A nice limit for illustration ymin <- -ymax * 2 # Another nice limit yrange2 <- 2*(ymax - ymin) B2 <- (B - ymin) %% yrange2 B2 <- pmin(B2, yrange2-B2) + ymin
Simulating a Brownian Excursion using a Brownian Bridge? The Reflection Principle asserts if the path of a Wiener process $f(t)$ reaches a value $f(s) = a$ at time $t = s$, then the subsequent path after time $s$ has the same distribution as the reflection
18,766
Simulating a Brownian Excursion using a Brownian Bridge?
You could use a rejection method: simulate Brownian bridges and keep the positive ones. It works. But. It is very slow, as a lot of sample trajectories are rejected. And the larger "frequency" you set, the less likely you are to find trajectories. (rbridge() comes from the e1071 package.) library(e1071) succeeded <- FALSE while(!succeeded) { bridge <- rbridge(end = 1, frequency = 500) succeeded=all(bridge>=0) } plot(bridge) You can speed it up by keeping the negative trajectories as well: succeeded <- FALSE while(!succeeded) { bridge <- rbridge(end = 1, frequency = 500) succeeded=all(bridge>=0)||all(bridge<=0) } bridge = abs(bridge) plot(bridge)
Simulating a Brownian Excursion using a Brownian Bridge?
You could use a rejection method: simulate Brownian bridges and keep the positive ones. It works. But. It is very slow, as a lot of sample trajectories are rejected. And the larger "frequency" you se
Simulating a Brownian Excursion using a Brownian Bridge? You could use a rejection method: simulate Brownian bridges and keep the positive ones. It works. But. It is very slow, as a lot of sample trajectories are rejected. And the larger "frequency" you set, the less likely you are to find trajectories. succeeded <- FALSE while(!succeeded) { bridge <- rbridge(end = 1, frequency = 500) succeeded=all(bridge>=0) } plot(bridge) You can speed it up keeping the negative trajectories as well. while(!succeeded) { bridge <- rbridge(end = 1, frequency = 500) succeeded=all(bridge>=0)||all(bridge<=0) } bridge = abs(bridge) plot(bridge)
Simulating a Brownian Excursion using a Brownian Bridge? You could use a rejection method: simulate Brownian bridges and keep the positive ones. It works. But. It is very slow, as a lot of sample trajectories are rejected. And the larger "frequency" you se
18,767
What are "rotated" and "unrotated" principal components, given that PCA always rotates the coordinates axes?
This is going to be a non-technical answer. You are right: PCA is essentially a rotation of the coordinate axes, chosen such that each successive axis captures as much variance as possible. In some disciplines (such as e.g. psychology), people like to apply PCA in order to interpret the resulting axes. I.e. they want to be able to say that principal axis #1 (which is a certain linear combination of original variables) has some particular meaning. To guess this meaning they would look at the weights in the linear combination. However, these weights are often messy and no clear meaning can be discerned. In these cases, people sometimes choose to tinker a bit with the vanilla PCA solution. They take a certain number of principal axes (that are deemed "significant" by some criterion), and additionally rotate them, trying to achieve some "simple structure" --- that is, linear combinations that would be easier to interpret. There are specific algorithms that look for the simplest possible structure; one of them is called varimax. After varimax rotation, successive components no longer capture as much variance as possible! This feature of PCA gets broken by doing the additional varimax (or any other) rotation. So before applying varimax rotation, you have "unrotated" principal components. And afterwards, you get "rotated" principal components. In other words, this terminology refers to the post-processing of the PCA results and not to the PCA rotation itself. All of this is somewhat complicated by the fact that what gets rotated are loadings and not principal axes as such. However, for the mathematical details I refer you (and any interested reader) to my long answer here: Is PCA followed by a rotation (such as varimax) still PCA?
What are "rotated" and "unrotated" principal components, given that PCA always rotates the coordinat
This is going to be a non-technical answer. You are right: PCA is essentially a rotation of the coordinate axes, chosen such that each successful axis captures as much variance as possible. In some di
What are "rotated" and "unrotated" principal components, given that PCA always rotates the coordinates axes? This is going to be a non-technical answer. You are right: PCA is essentially a rotation of the coordinate axes, chosen such that each successful axis captures as much variance as possible. In some disciplines (such as e.g. psychology), people like to apply PCA in order to interpret the resulting axes. I.e. they want to be able to say that principal axis #1 (which is a certain linear combination of original variables) has some particular meaning. To guess this meaning they would look at the weights in the linear combination. However, these weights are often messy and no clear meaning can be discerned. In these cases, people sometimes choose to tinker a bit with the vanilla PCA solution. They take certain number of principal axes (that are deemed "significant" by some criterion), and additionally rotate them, trying to achieve some "simple structure" --- that is, linear combinations that would be easier to interpret. There are specific algorithms that look for the simplest possible structure; one of them is called varimax. After varimax rotation, successive components do not anymore capture as much variance as possible! This feature of PCA gets broken by doing the additional varimax (or any other) rotation. So before applying varimax rotation, you have "unrotated" principal components. And afterwards, you get "rotated" principal components. In other words, this terminology refers to the post-processing of the PCA results and not to the PCA rotation itself. All of this is somewhat complicated by the fact that what gets rotated are loadings and not principal axes as such. However, for the mathematical details I refer you (and any interested reader) to my long answer here: Is PCA followed by a rotation (such as varimax) still PCA?
What are "rotated" and "unrotated" principal components, given that PCA always rotates the coordinat This is going to be a non-technical answer. You are right: PCA is essentially a rotation of the coordinate axes, chosen such that each successful axis captures as much variance as possible. In some di
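A short base-R illustration of the distinction made in the answer above (one common convention for scaling the loadings; packages differ in the details, so treat this as a sketch rather than a canonical recipe):

pca <- prcomp(USArrests, scale. = TRUE)

# "Unrotated" loadings of the first two principal components
L <- pca$rotation[, 1:2] %*% diag(pca$sdev[1:2])
L             # PC1 captures as much variance as possible, then PC2, ...

# "Rotated" loadings after varimax: often easier to interpret, but the
# successive-maximal-variance property of PCA is lost
rot <- varimax(L)
rot$loadings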
18,768
Does Breiman's random forest use information gain or Gini index?
The randomForest package in R by A. Liaw is a port of the original code, a mix of (translated) C code, some remaining Fortran code and R wrapper code. To decide the overall best split across break points and across the mtry candidate variables, the code uses a scoring function similar to the Gini gain: $GiniGain(N,X)=Gini(N)-\frac{\lvert N_{1} \rvert }{\lvert N \rvert }Gini(N_{1})-\frac{\lvert N_{2} \rvert }{\lvert N \rvert }Gini(N_{2})$, where $X$ is a given feature, $N$ is the node on which the split is to be made, and $N_{1}$ and $N_{2}$ are the two child nodes created by splitting $N$. $\lvert \cdot \rvert$ is the number of elements in a node, and $Gini(N)=1-\sum_{k=1}^{K}p_{k}^2$, where $K$ is the number of classes and $p_k$ the class proportions in the node. However, the scoring function actually applied is not exactly this one, but an equivalent, more computationally efficient version. $Gini(N)$ and $\lvert N \rvert$ are constant for all compared splits and can therefore be omitted. Look at one of the remaining weighted child terms: $\frac{\lvert N_{2} \rvert }{\lvert N \rvert }Gini(N_{2}) \propto |N_2|\, Gini(N_{2}) = |N_2|\Big(1-\sum_{k=1}^{K}p_{2,k}^2\Big) = |N_2| - \sum_{k=1}^{K} \frac{nclass_{2,k}^2}{|N_2|}$, where $nclass_{2,k}$ is the count of target class $k$ in daughter node 2. Since $|N_1|+|N_2|=|N|$ is the same for every candidate split, minimising the weighted sum of child impurities is equivalent to maximising the node-size-weighted sum of squared class prevalences: score $= |N_1| \sum_{k=1}^{K}p_{1,k}^2 + |N_2| \sum_{k=1}^{K}p_{2,k}^2 = \sum_{k=1}^{K}\frac{nclass_{1,k}^2}{|N_1|} + \sum_{k=1}^{K}\frac{nclass_{2,k}^2}{|N_2|} = numerator_1/denominator_1 + numerator_2/denominator_2$, i.e. each child node contributes a running numerator ($\sum_k nclass_k^2$) and denominator ($|N_i|$). The implementation also allows for class-wise up/down weighting of samples. Another important point: when the implementation updates this modified Gini gain as the candidate break point moves, shifting a single sample from one child node to the other is very efficient, because the sample's class count can be subtracted from the numerator/denominator of one node and added to the other's. I wrote a prototype RF some months ago, ignorantly recomputing the Gini gain from scratch for every break point, and that was slower :) If several splits tie for the best score, a random winner is picked. This answer was based on inspecting the source file "randomForest.x.x.tar.gz/src/classTree.c", lines 209-250.
Does Breiman's random forest use information gain or Gini index?
The randomForest package in R by A. Liaw is a port of the original code being a mix of c-code(translated) some remaining fortran code and R wrapper code. To decide the overall best split across break
Does Breiman's random forest use information gain or Gini index? The randomForest package in R by A. Liaw is a port of the original code being a mix of c-code(translated) some remaining fortran code and R wrapper code. To decide the overall best split across break points and across mtry variables, the code uses a scoring function similar to gini-gain: $GiniGain(N,X)=Gini(N)-\frac{\lvert N_{1} \rvert }{\lvert N \rvert }Gini(N_{1})-\frac{\lvert N_{2} \rvert }{\lvert N \rvert }Gini(N_{2})$ Where $X$ is a given feature, $N$ is the node on which the split is to be made, and $N_{1}$ and $N_{2}$ are the two child nodes created by splitting $N$. $\lvert . \rvert $ is the number of elements in a node. And $Gini(N)=1-\sum_{k=1}^{K}p_{k}^2$, where $K$ is the number of categories in the node But the applied scoring function is not the exactly same, but instead a equivalent more computational efficient version. $Gini(N)$ and |N| are constant for all compared splits and thus omitted. Also lets inspect the part if the sum of squared prevalence in a node(1) is computed as $\frac{\lvert N_{2} \rvert }{\lvert N \rvert }Gini(N_{2}) \propto |N_2| Gini(N_{2}) = |N_2| (1-\sum_{k=1}^{K}p_{k}^2 ) = |N_2| \sum \frac{nclass_{2,k}^2}{|N_2|^2}$ where $nclass_{1,k}$ is the class count of target-class k in daughter node 1. Notice $|N_2|$ is placed both in nominator and denominator. removing the trivial constant $1-$ from equation such that best split decision is to maximize nodes size weighted sum of squared class prevalence... score= $|N_1| \sum_{k=1}^{K}p_{1,k}^2 + |N_2| \sum_{k=1}^{K}p_{2,k}^2 = |N_1|\sum_{k=1}^{K}\frac{nclass_{1,k}^2}{|N_1|^2} + |N_2|\sum_{k=1}^{K}\frac{nclass_{2,k}^2}{|N_2|^2}$ $ = \sum_{k=1}^{K}\frac{nclass_{2,k}^2}{1} |N_1|^{-1} + \sum_{k=1}^{K}\frac{nclass_{2,k}^2}{1} |N_1|^{-2} $ $= nominator_1/denominator_1 + nominator_2/denominator_2$ The implementation also allows for classwise up/down weighting of samples. Also very important when the implementation update this modified gini-gain, moving a single sample from one node to the other is very efficient. The sample can be substracted from nominators/denominators of one node and added to the others. I wrote a prototype-RF some months ago, ignorantly recomputing from scratch gini-gain for every break-point and that was slower :) If several splits scores are best, a random winner is picked. This answer was based on inspecting source file "randomForest.x.x.tar.gz/src/classTree.c" line 209-250
Does Breiman's random forest use information gain or Gini index? The randomForest package in R by A. Liaw is a port of the original code being a mix of c-code(translated) some remaining fortran code and R wrapper code. To decide the overall best split across break
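To make the simplified scoring rule above concrete, here is a small R function (my own sketch, not the package's C code) that evaluates the score $\sum_{k} nclass_{1,k}^2/|N_1| + \sum_{k} nclass_{2,k}^2/|N_2|$ for a candidate split; the break point with the largest score would be chosen.

# Score of splitting the class labels `cls` by the logical vector `left`
split_score <- function(cls, left) {
  n1 <- table(cls[left])                 # class counts in daughter node 1
  n2 <- table(cls[!left])                # class counts in daughter node 2
  sum(n1^2) / sum(n1) + sum(n2^2) / sum(n2)
}

# Example: scan the break points of one feature and pick the best one
cls <- iris$Species
x   <- iris$Petal.Length
breaks <- sort(unique(x))[-1]            # candidate break points
scores <- sapply(breaks, function(b) split_score(cls, x < b))
breaks[which.max(scores)]                # best split on this feature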
18,769
When is MCMC useful?
Monte Carlo integration is one form of numerical integration which can be much more efficient than, e.g., numerical integration by approximating the integrand with polynomials. This is especially true in high dimensions, where simple numerical integration techniques require large numbers of function evaluations. To compute the normalization constant $p(D)$, we could use importance sampling, $$p(D) = \int \frac{q(\theta)}{q(\theta)} p(\theta)p(D \mid \theta) \, d\theta \approx \frac{1}{N} \sum_n w_n p(\theta_n)p(D \mid \theta_n),$$ where $w_n = 1/q(\theta_n)$ and the $\theta_n$ are sampled from $q$. Note that we only need to evaluate the joint distribution at the sampled points. For the right $q$, this estimator can be very efficient in the sense of requiring very few samples. In practice, choosing an appropriate $q$ can be difficult, but this is where MCMC can help! Annealed importance sampling (Neal, 1998) combines MCMC with importance sampling. Another reason why MCMC is useful is this: We usually aren't even that interested in the posterior density of $\theta$, but rather in summary statistics and expectations, e.g., $$\int p(\theta \mid D) f(\theta) \, d\theta.$$ Knowing $p(D)$ does not generally mean we can solve this integral, but samples are a very convenient way to estimate it. Finally, being able to evaluate $p(D \mid \theta)p(\theta)$ is a requirement for some MCMC methods, but not all of them (e.g., Murray et al., 2006).
When is MCMC useful?
Monte Carlo integration is one form of numerical integration which can be much more efficient than, e.g., numerical integration by approximating the integrand with polynomials. This is especially true
When is MCMC useful? Monte Carlo integration is one form of numerical integration which can be much more efficient than, e.g., numerical integration by approximating the integrand with polynomials. This is especially true in high dimensions, where simple numerical integration techniques require large numbers of function evaluations. To compute the normalization constant $p(D)$, we could use importance sampling, $$p(D) = \int \frac{q(\theta)}{q(\theta)} p(\theta)p(D \mid \theta) \, d\theta \approx \frac{1}{N} \sum_n w_n p(\theta_n)p(D \mid \theta_n),$$ where $w_n = 1/q(\theta_n)$ and the $\theta_n$ are sampled from $q$. Note that we only need to evaluate the joint distribution at the sampled points. For the right $q$, this estimator can be very efficient in the sense of requiring very few samples. In practice, choosing an appropriate $q$ can be difficult, but this is where MCMC can help! Annealed importance sampling (Neal, 1998) combines MCMC with importance sampling. Another reason why MCMC is useful is this: We usually aren't even that interested in the posterior density of $\theta$, but rather in summary statistics and expectations, e.g., $$\int p(\theta \mid D) f(\theta) \, d\theta.$$ Knowing $p(D)$ does not generally mean we can solve this integral, but samples are a very convenient way to estimate it. Finally, being able to evaluate $p(D \mid \theta)p(\theta)$ is a requirement for some MCMC methods, but not all of them (e.g., Murray et al., 2006).
When is MCMC useful? Monte Carlo integration is one form of numerical integration which can be much more efficient than, e.g., numerical integration by approximating the integrand with polynomials. This is especially true
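A minimal R sketch of the importance-sampling estimator above, assuming an illustrative toy model (not from the original answer) with prior $\theta \sim \mathcal{N}(0,1)$, one observation $y \mid \theta \sim \mathcal{N}(\theta, 1)$, and proposal $q = \mathcal{N}(1, 2^2)$:
set.seed(1)
y <- 1.5
theta <- rnorm(1e5, mean = 1, sd = 2)            # draws from the proposal q
w <- 1 / dnorm(theta, mean = 1, sd = 2)          # w_n = 1/q(theta_n)
mean(w * dnorm(theta) * dnorm(y, mean = theta))  # estimate of p(D)
dnorm(y, mean = 0, sd = sqrt(2))                 # exact p(D) for this toy model, for comparison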
18,770
When is MCMC useful?
When you are given a prior $p(\theta)$ and a likelihood $f(x|\theta)$ that are either not computable in closed form or such that the posterior distribution $$p(\theta|x)\propto p(\theta)f(x|\theta)$$ is not of a standard type, simulating directly from this target towards a Monte Carlo approximation of the posterior distribution is not feasible. A typical example is made of hierarchical models with non-conjugate priors, such as those found in the BUGS book. Indirect simulation methods such as accept-reject, ratio-of-uniforms, or importance-sampling techniques customarily run into numerical and precision difficulties when the dimension of the parameter $\theta$ increases beyond a few units. By contrast, Markov chain Monte Carlo methods are more amenable to large dimensions in that they can explore the posterior distribution on a local basis, i.e. in a neighbourhood of the current value, and on a smaller number of components, i.e., on subspaces. For instance, the Gibbs sampler validates the notion that simulating from a one-dimensional target at a time, namely the full conditional distributions associated with $p(\theta|x)$, is sufficient to achieve simulation from the true posterior in the long run. Markov chain Monte Carlo methods also enjoy some degree of universality, in that an algorithm like the Metropolis-Hastings algorithm is formally available for any posterior distribution $p(\theta|x)$ that can be computed up to a constant. In cases when $p(\theta)f(x|\theta)$ cannot be easily computed, alternatives exist, either by completing this distribution into a manageable distribution over a larger space, as in $$p(\theta)f(x|\theta)\propto \int g(z|\theta,x)\, p(\theta)f(x|\theta)\,\text{d}z$$ or through non-Markovian methods like ABC. MCMC methods have given Bayesian methods a much broader reach, as illustrated by the upsurge that followed the popularisation of the method by Alan Gelfand and Adrian Smith in 1990.
When is MCMC useful?
When you are given a prior $p(\theta)$ and a likelihood $f(x|\theta)$ that are either not computable in closed form or such that the posterior distribution $$p(\theta|x)\propto p(\theta)f(x|\theta)$$i
When is MCMC useful? When you are given a prior $p(\theta)$ and a likelihood $f(x|\theta)$ that are either not computable in closed form or such that the posterior distribution $$p(\theta|x)\propto p(\theta)f(x|\theta)$$ is not of a standard type, simulating directly from this target towards a Monte Carlo approximation of the posterior distribution is not feasible. A typical example is made of hierarchical models with non-conjugate priors, such as those found in the BUGS book. Indirect simulation methods such as accept-reject, ratio-of-uniforms, or importance-sampling techniques customarily run into numerical and precision difficulties when the dimension of the parameter $\theta$ increases beyond a few units. By contrast, Markov chain Monte Carlo methods are more amenable to large dimensions in that they can explore the posterior distribution on a local basis, i.e. in a neighbourhood of the current value, and on a smaller number of components, i.e., on subspaces. For instance, the Gibbs sampler validates the notion that simulating from a one-dimensional target at a time, namely the full conditional distributions associated with $p(\theta|x)$, is sufficient to achieve simulation from the true posterior in the long run. Markov chain Monte Carlo methods also enjoy some degree of universality, in that an algorithm like the Metropolis-Hastings algorithm is formally available for any posterior distribution $p(\theta|x)$ that can be computed up to a constant. In cases when $p(\theta)f(x|\theta)$ cannot be easily computed, alternatives exist, either by completing this distribution into a manageable distribution over a larger space, as in $$p(\theta)f(x|\theta)\propto \int g(z|\theta,x)\, p(\theta)f(x|\theta)\,\text{d}z$$ or through non-Markovian methods like ABC. MCMC methods have given Bayesian methods a much broader reach, as illustrated by the upsurge that followed the popularisation of the method by Alan Gelfand and Adrian Smith in 1990.
When is MCMC useful? When you are given a prior $p(\theta)$ and a likelihood $f(x|\theta)$ that are either not computable in closed form or such that the posterior distribution $$p(\theta|x)\propto p(\theta)f(x|\theta)$$i
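To make the universality point concrete, a minimal random-walk Metropolis-Hastings sketch in R, assuming log_post is a hypothetical function returning $\log\{p(\theta)f(x|\theta)\}$ up to an additive constant (names and tuning values are illustrative):
mh <- function(log_post, theta0, n_iter = 1e4, sd_prop = 1) {
  theta <- numeric(n_iter)
  theta[1] <- theta0
  for (t in 2:n_iter) {
    prop <- theta[t - 1] + rnorm(1, 0, sd_prop)            # local proposal around current value
    if (log(runif(1)) < log_post(prop) - log_post(theta[t - 1]))
      theta[t] <- prop                                     # accept
    else
      theta[t] <- theta[t - 1]                             # reject, keep current value
  }
  theta
}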
18,771
What's the difference between regression coefficients and partial regression coefficients?
"Partial regression coefficients" are the slope coefficients ($\beta_j$s) in a multiple regression model. By "regression coefficients" (i.e., without the "partial") the author means the slope coefficient in a simple (only one variable) regression model. If you have multiple predictor / explanatory variables, and you run both a set of simple regressions, and a multiple regression with all of them, you will find that the coefficient for a particular variable, $X_j$, will always differ between its simple regression model and the multiple regression model, unless $X_j$ is pairwise orthogonal with all other variables in the set. In that case, $\hat\beta_{j\ {\rm simple}} = \hat\beta_{j\ {\rm multiple}}$. For a fuller understanding of this topic, it may help you to read my answer here: Is there a difference between 'controlling for' and 'ignoring' other variables in multiple regression?
What's the difference between regression coefficients and partial regression coefficients?
"Partial regression coefficients" are the slope coefficients ($\beta_j$s) in a multiple regression model. By "regression coefficients" (i.e., without the "partial") the author means the slope coeffic
What's the difference between regression coefficients and partial regression coefficients? "Partial regression coefficients" are the slope coefficients ($\beta_j$s) in a multiple regression model. By "regression coefficients" (i.e., without the "partial") the author means the slope coefficient in a simple (only one variable) regression model. If you have multiple predictor / explanatory variables, and you run both a set of simple regressions, and a multiple regression with all of them, you will find that the coefficient for a particular variable, $X_j$, will always differ between its simple regression model and the multiple regression model, unless $X_j$ is pairwise orthogonal with all other variables in the set. In that case, $\hat\beta_{j\ {\rm simple}} = \hat\beta_{j\ {\rm multiple}}$. For a fuller understanding of this topic, it may help you to read my answer here: Is there a difference between 'controlling for' and 'ignoring' other variables in multiple regression?
What's the difference between regression coefficients and partial regression coefficients? "Partial regression coefficients" are the slope coefficients ($\beta_j$s) in a multiple regression model. By "regression coefficients" (i.e., without the "partial") the author means the slope coeffic
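A small, purely illustrative R simulation showing that the simple and partial coefficients for a variable differ when predictors are correlated:
set.seed(1)
n  <- 100
x1 <- rnorm(n)
x2 <- 0.5 * x1 + rnorm(n)          # x1 and x2 are correlated, not orthogonal
y  <- 1 + 2 * x1 + 3 * x2 + rnorm(n)
coef(lm(y ~ x1))["x1"]             # simple regression coefficient for x1
coef(lm(y ~ x1 + x2))["x1"]        # partial regression coefficient for x1, noticeably different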
18,772
Mixed model idea and Bayesian method
This is a good question. Strictly speaking, using a mixed model does not make you Bayesian. Imagine estimating each random effect separately (treating it as a fixed effect) and then looking at the resulting distribution. This is "dirty," but conceptually you have a probability distribution over the random effects based on a relative frequency concept. But if, as a frequentist, you fit your model using full maximum likelihood and then wish to "estimate" the random effects, you've got a little complication. These quantities aren't fixed like your typical regression parameters, so a better word than "estimation" would probably be "prediction." If you want to predict a random effect for a given subject, you're going to want to use that subject's data. You'll need to resort to Bayes' rule, or at least the notion that $$f(\beta_i | \mathbf{y}_i) \propto f(\mathbf{y}_i | \beta_i) g(\beta_i).$$ Here the random effects distribution $g()$ works essentially like a prior. And I think by this point, many people would call this "empirical Bayes." To be a true Bayesian, you would not only need to specify a distribution for your random effects, but also distributions (priors) for each parameter that defines that distribution, as well as distributions for all fixed effects parameters and the model epsilon. It's pretty intense!
Mixed model idea and Bayesian method
This is a good question. Strictly speaking, using a mixed model does not make you Bayesian. Imagine estimating each random effect separately (treating it as a fixed effect) and then looking at the res
Mixed model idea and Bayesian method This is a good question. Strictly speaking, using a mixed model does not make you Bayesian. Imagine estimating each random effect separately (treating it as a fixed effect) and then looking at the resulting distribution. This is "dirty," but conceptually you have a probability distribution over the random effects based on a relative frequency concept. But if, as a frequentist, you fit your model using full maximum likelihood and then wish to "estimate" the random effects, you've got a little complication. These quantities aren't fixed like your typical regression parameters, so a better word than "estimation" would probably be "prediction." If you want to predict a random effect for a given subject, you're going to want to use that subject's data. You'll need to resort to Bayes' rule, or at least the notion that $$f(\beta_i | \mathbf{y}_i) \propto f(\mathbf{y}_i | \beta_i) g(\beta_i).$$ Here the random effects distribution $g()$ works essentially like a prior. And I think by this point, many people would call this "empirical Bayes." To be a true Bayesian, you would not only need to specify a distribution for your random effects, but also distributions (priors) for each parameter that defines that distribution, as well as distributions for all fixed effects parameters and the model epsilon. It's pretty intense!
Mixed model idea and Bayesian method This is a good question. Strictly speaking, using a mixed model does not make you Bayesian. Imagine estimating each random effect separately (treating it as a fixed effect) and then looking at the res
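As a concrete illustration (not part of the original answer), lme4's ranef() returns exactly such predictions, the conditional modes of $f(\beta_i \mid \mathbf{y}_i)$, for a fitted mixed model, here using the package's built-in sleepstudy data:
library(lme4)
fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
ranef(fit)$Subject   # predicted (not estimated) random effects, shrunk toward zero by g()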
18,773
Mixed model idea and Bayesian method
Random effects are a way to specify a distributional assumption by using conditional distributions. For example, the random one-way ANOVA model is: $$(y_{ij} \mid \mu_i) \sim_{\text{iid}} {\cal N}(\mu_i, \sigma^2_w), \quad j=1,\ldots,J, \qquad \mu_i \sim_{\text{iid}} {\cal N}(\mu, \sigma^2_b), \quad i=1,\ldots,I.$$ And this distributional assumption is equivalent to $$\begin{pmatrix} y_{i1} \\ \vdots \\ y_{iJ} \end{pmatrix} \sim_{\text{iid}} {\cal N}\left(\begin{pmatrix} \mu \\ \vdots \\ \mu \end{pmatrix}, \Sigma\right), \quad i=1,\ldots,I$$ where $\Sigma$ has an exchangeable structure (with diagonal entry $\sigma^2_b+\sigma^2_w$ and covariance $\sigma^2_b$). To Bayesianify the model, you need to assign prior distributions on $\mu$ and $\Sigma$.
Mixed model idea and Bayesian method
Random effects are a way to specify a distributional assumption by using conditional distributions. For example, the random one-way ANOVA model is: $$(y_{ij} \mid \mu_i) \sim_{\text{iid}} {\cal N}(\m
Mixed model idea and Bayesian method Random effects are a way to specify a distributional assumption by using conditional distributions. For example, the random one-way ANOVA model is: $$(y_{ij} \mid \mu_i) \sim_{\text{iid}} {\cal N}(\mu_i, \sigma^2_w), \quad j=1,\ldots,J, \qquad \mu_i \sim_{\text{iid}} {\cal N}(\mu, \sigma^2_b), \quad i=1,\ldots,I.$$ And this distributional assumption is equivalent to $$\begin{pmatrix} y_{i1} \\ \vdots \\ y_{iJ} \end{pmatrix} \sim_{\text{iid}} {\cal N}\left(\begin{pmatrix} \mu \\ \vdots \\ \mu \end{pmatrix}, \Sigma\right), \quad i=1,\ldots,I$$ where $\Sigma$ has an exchangeable structure (with diagonal entry $\sigma^2_b+\sigma^2_w$ and covariance $\sigma^2_b$). To Bayesianify the model, you need to assign prior distributions on $\mu$ and $\Sigma$.
Mixed model idea and Bayesian method Random effects are a way to specify a distributional assumption by using conditional distributions. For example, the random one-way ANOVA model is: $$(y_{ij} \mid \mu_i) \sim_{\text{iid}} {\cal N}(\m
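A quick simulation sketch in R (parameter values illustrative) confirming the stated equivalence: the marginal covariance of $(y_{i1},\ldots,y_{iJ})$ is exchangeable, with diagonal $\sigma^2_b+\sigma^2_w$ and off-diagonal $\sigma^2_b$:
set.seed(1)
I <- 5000; J <- 3; mu <- 10; s_b <- 2; s_w <- 1
mu_i <- rnorm(I, mu, s_b)                           # random group means
Y <- matrix(rnorm(I * J, rep(mu_i, each = J), s_w),
            nrow = I, ncol = J, byrow = TRUE)       # J observations per group
round(cov(Y), 2)   # ~5 on the diagonal (s_b^2 + s_w^2), ~4 off-diagonal (s_b^2)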
18,774
Mixed model idea and Bayesian method
If you're talking in terms of reproducing the same answers, then the answer is yes. The INLA (google "inla bayesian") computational method for bayesian GLMMs, combined with a uniform prior for the fixed effects and variance parameters, basically reproduces the EBLUP/EBLUE outputs under the "simple plug-in" gaussian approximation, where the variance parameters are estimated via REML.
Mixed model idea and Bayesian method
If you're talking in terms of reproducing the same answers, then the answer is yes. The INLA (google "inla bayesian") computational method for bayesian GLMMs combined with a uniform prior for the fix
Mixed model idea and Bayesian method If you're talking in terms of reproducing the same answers, then the answer is yes. The INLA (google "inla bayesian") computational method for bayesian GLMMs, combined with a uniform prior for the fixed effects and variance parameters, basically reproduces the EBLUP/EBLUE outputs under the "simple plug-in" gaussian approximation, where the variance parameters are estimated via REML.
Mixed model idea and Bayesian method If you're talking in terms of reproducing the same answers, then the answer is yes. The INLA (google "inla bayesian") computational method for bayesian GLMMs combined with a uniform prior for the fix
18,775
Mixed model idea and Bayesian method
I don't think so; I consider it part of the likelihood function. It's similar to specifying that the error term follows a Normal distribution in a regression model, or that a certain binary process can be modeled using a logistic relationship in a GLM. Since no prior information or prior distributions are used, I do not consider it Bayesian.
Mixed model idea and Bayesian method
I don't think so; I consider it part of the likelihood function. It's similar to specifying that the error term follows a Normal distribution in a regression model, or that a certain binary process can be mod
Mixed model idea and Bayesian method I don't think so; I consider it part of the likelihood function. It's similar to specifying that the error term follows a Normal distribution in a regression model, or that a certain binary process can be modeled using a logistic relationship in a GLM. Since no prior information or prior distributions are used, I do not consider it Bayesian.
Mixed model idea and Bayesian method I don't think so; I consider it part of the likelihood function. It's similar to specifying that the error term follows a Normal distribution in a regression model, or that a certain binary process can be mod
18,776
How to correct outliers once detected for time series data forecasting?
There is now a facility in the forecast package for R for identifying and replacing outliers. (It also handles the missing values.) As you are apparently already using the forecast package, this might be a convenient solution for you. For example: fit <- nnetar(tsclean(x)) The tsclean() function will fit a robust trend using loess (for non-seasonal series), or robust trend and seasonal components using STL (for seasonal series). The residuals are computed and the following bounds are calculated: \begin{align} U &= q_{0.9} + 2(q_{0.9}-q_{0.1}) \\ L &= q_{0.1} - 2(q_{0.9}-q_{0.1}) \end{align} where $q_{0.1}$ and $q_{0.9}$ are the 10th and 90th percentiles of the residuals respectively. Outliers are identified as points with residuals larger than $U$ or smaller than $L$. For non-seasonal time series, outliers are replaced by linear interpolation. For seasonal time series, the seasonal component from the STL fit is removed and the seasonally adjusted series is linearly interpolated to replace the outliers, before re-seasonalizing the result.
How to correct outliers once detected for time series data forecasting?
There is now a facility in the forecast package for R for identifying and replacing outliers. (It also handles the missing values.) As you are apparently already using the forecast package, this migh
How to correct outliers once detected for time series data forecasting? There is now a facility in the forecast package for R for identifying and replacing outliers. (It also handles the missing values.) As you are apparently already using the forecast package, this might be a convenient solution for you. For example: fit <- nnetar(tsclean(x)) The tsclean() function will fit a robust trend using loess (for non-seasonal series), or robust trend and seasonal components using STL (for seasonal series). The residuals are computed and the following bounds are calculated: \begin{align} U &= q_{0.9} + 2(q_{0.9}-q_{0.1}) \\ L &= q_{0.1} - 2(q_{0.9}-q_{0.1}) \end{align} where $q_{0.1}$ and $q_{0.9}$ are the 10th and 90th percentiles of the residuals respectively. Outliers are identified as points with residuals larger than $U$ or smaller than $L$. For non-seasonal time series, outliers are replaced by linear interpolation. For seasonal time series, the seasonal component from the STL fit is removed and the seasonally adjusted series is linearly interpolated to replace the outliers, before re-seasonalizing the result.
How to correct outliers once detected for time series data forecasting? There is now a facility in the forecast package for R for identifying and replacing outliers. (It also handles the missing values.) As you are apparently already using the forecast package, this migh
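The bounds above are straightforward to compute by hand; a minimal R sketch, assuming r is a hypothetical vector holding the residuals from the robust fit:
q <- quantile(r, c(0.1, 0.9))
U <- q[2] + 2 * (q[2] - q[1])
L <- q[1] - 2 * (q[2] - q[1])
outliers <- which(r > U | r < L)   # points flagged for interpolation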
18,777
How to correct outliers once detected for time series data forecasting?
When you identify an ARIMA model you should simultaneously be identifying Pulses/Level Shifts/Seasonal Pulses and/or Local Time Trends. You can get some reading material on Intervention Detection procedures. I recommend "Time Series Analysis: Univariate and Multivariate Methods" by David P. Reilly and William W. S. Wei. You may have to pursue commercial software like SAS/SPSS/AUTOBOX to get any useful results, as the free software I have seen is wanting. In passing, I have contributed major technical improvements in this area to AUTOBOX. EDIT: An even better approach is to identify the outliers using the rigorous ARIMA method plus Intervention Detection procedures, leading to robust ARIMA parameters and a good forecast. Now consider developing simulated forecasts incorporating re-sampled residuals free of pulse effects. In this way, you get the best of both worlds, viz. a good model and more realistic uncertainty statements for the forecasts which don't assume that the estimated model parameters are the population values.
How to correct outliers once detected for time series data forecasting?
When you identify an ARIMA model you should be simultaneously identifying Pulses/Level Shifts/Seasonal Pulses and/or Local Time Trends. You can get some reading material on Intervention Detection proc
How to correct outliers once detected for time series data forecasting? When you identify an ARIMA model you should simultaneously be identifying Pulses/Level Shifts/Seasonal Pulses and/or Local Time Trends. You can get some reading material on Intervention Detection procedures. I recommend "Time Series Analysis: Univariate and Multivariate Methods" by David P. Reilly and William W. S. Wei. You may have to pursue commercial software like SAS/SPSS/AUTOBOX to get any useful results, as the free software I have seen is wanting. In passing, I have contributed major technical improvements in this area to AUTOBOX. EDIT: An even better approach is to identify the outliers using the rigorous ARIMA method plus Intervention Detection procedures, leading to robust ARIMA parameters and a good forecast. Now consider developing simulated forecasts incorporating re-sampled residuals free of pulse effects. In this way, you get the best of both worlds, viz. a good model and more realistic uncertainty statements for the forecasts which don't assume that the estimated model parameters are the population values.
How to correct outliers once detected for time series data forecasting? When you identify an ARIMA model you should be simultaneously identifying Pulses/Level Shifts/Seasonal Pulses and/or Local Time Trends. You can get some reading material on Intervention Detection proc
18,778
How to correct outliers once detected for time series data forecasting?
I agree with @Aksakal. Instead of removing the outliers, a better approach would be to use some kind of statistical procedure to deal with the outliers. I suggest you winsorise your data. If implemented properly, winsorisation can be relatively robust to outliers. On this page: http://www.r-bloggers.com/winsorization/, you will find R-codes to implement winsorisation. If you consider winsorising your data, you will need to think carefully about the tails of the distribution. Are the outliers expected to be extremely low, or are they expected to be extremely high, or maybe both. This will affect whether you winsorise at e.g. the 5% or 10% and/or the 95% or 99% level.
How to correct outliers once detected for time series data forecasting?
I agree with @Aksakal. Instead of removing the outliers, a better approach would be to use some kind of statistical procedure to deal with the outliers. I suggest you winsorise your data. If implement
How to correct outliers once detected for time series data forecasting? I agree with @Aksakal. Instead of removing the outliers, a better approach would be to use some kind of statistical procedure to deal with the outliers. I suggest you winsorise your data. If implemented properly, winsorisation can be relatively robust to outliers. On this page: http://www.r-bloggers.com/winsorization/, you will find R-codes to implement winsorisation. If you consider winsorising your data, you will need to think carefully about the tails of the distribution. Are the outliers expected to be extremely low, or are they expected to be extremely high, or maybe both. This will affect whether you winsorise at e.g. the 5% or 10% and/or the 95% or 99% level.
How to correct outliers once detected for time series data forecasting? I agree with @Aksakal. Instead of removing the outliers, a better approach would be to use some kind of statistical procedure to deal with the outliers. I suggest you winsorise your data. If implement
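Beyond the linked page, a minimal winsorisation sketch in R (the function name and the 5%/95% defaults are illustrative choices, to be adapted to your tails):
winsorise <- function(x, lower = 0.05, upper = 0.95) {
  lim <- quantile(x, c(lower, upper), na.rm = TRUE)
  pmin(pmax(x, lim[1]), lim[2])    # clamp values outside the chosen quantiles
}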
18,779
How to correct outliers once detected for time series data forecasting?
In the forecasting context, removing outliers is very dangerous. For instance, you're forecasting sales of a grocery shop. Let's say there was a gas explosion in the neighboring building, which caused you to close the shop for a few days. This was the only time the shop was closed in 10 years. So, you get the time series, detect the outlier, remove it, and forecast. You silently assumed that nothing like this will happen in the future. In a practical sense, you compressed your observed variance, and the coefficient variances shrank. So, if you show the confidence bands for your forecast, they'll be narrower than they would have been if you had not removed the outlier. Of course, you could keep the outlier and proceed as usual, but this is not a good approach either. The reason is that this outlier will skew the coefficients. I think a better approach, in this case, is to allow for an error distribution with fat tails, maybe a stable distribution. In this case, your outlier will not skew the coefficients too much. They'll be close to the coefficients with the outlier removed. However, the outlier will show up in the error distribution, the error variance. Essentially, you'll end up with wider forecast confidence bands. The confidence bands convey a very important piece of information. If you are forecasting that the sales will be $1,000,000 this month, but there's a 5% chance that they'll be $10,000, this impacts your decisions on spending, cash management, etc.
How to correct outliers once detected for time series data forecasting?
In the forecasting context, removing outliers is very dangerous. For instance, you're forecasting sales of a grocery shop. Let's say there was a gas explosion in the neighboring building, which caused
How to correct outliers once detected for time series data forecasting? In the forecasting context, removing outliers is very dangerous. For instance, you're forecasting sales of a grocery shop. Let's say there was a gas explosion in the neighboring building, which caused you to close the shop for a few days. This was the only time the shop was closed in 10 years. So, you get the time series, detect the outlier, remove it, and forecast. You silently assumed that nothing like this will happen in the future. In a practical sense, you compressed your observed variance, and the coefficient variances shrank. So, if you show the confidence bands for your forecast, they'll be narrower than they would have been if you had not removed the outlier. Of course, you could keep the outlier and proceed as usual, but this is not a good approach either. The reason is that this outlier will skew the coefficients. I think a better approach, in this case, is to allow for an error distribution with fat tails, maybe a stable distribution. In this case, your outlier will not skew the coefficients too much. They'll be close to the coefficients with the outlier removed. However, the outlier will show up in the error distribution, the error variance. Essentially, you'll end up with wider forecast confidence bands. The confidence bands convey a very important piece of information. If you are forecasting that the sales will be $1,000,000 this month, but there's a 5% chance that they'll be $10,000, this impacts your decisions on spending, cash management, etc.
How to correct outliers once detected for time series data forecasting? In the forecasting context, removing outliers is very dangerous. For instance, you're forecasting sales of a grocery shop. Let's say there was a gas explosion in the neighboring building, which caused
18,780
How to correct outliers once detected for time series data forecasting?
Whether to forecast using a model with outliers removed depends on the probability of outliers occurring in the future and the expected distribution of their effect if they do occur. Is the training data sufficient to illuminate this? A Bayesian approach should help...
How to correct outliers once detected for time series data forecasting?
Whether to forecast using a model with outliers removed depends on the probability of outliers occurring in the future and the expected distribution of their effect if they do occur. Is the traini
How to correct outliers once detected for time series data forecasting? Whether to forecast using a model with outliers removed depends on the probability of outliers occurring in the future and the expected distribution of their effect if they do occur. Is the training data sufficient to illuminate this? A Bayesian approach should help...
How to correct outliers once detected for time series data forecasting? Whether to forecast using a model with outliers removed depends on the probability of outliers occurring in the future and the expected distribution of their effect if they do occur. Is the traini
18,781
Interactions terms and higher order polynomials
Yes, you should always include all of the terms, from the highest order all the way down to the linear term, in the interaction. There are a couple of really great threads on CV that discuss related issues that you might find helpful in thinking about this: Does it make sense to add a quadratic term, but not the linear term to a model? Including the interaction, but not the main effects in a model Do all interactions need their individual terms in a model? The short answer is that by not including certain terms in the model, you force parts of it to be exactly zero. This imposes an inflexibility to your model that necessarily causes bias, unless those parameters are exactly zero in reality; the situation is analogous to suppressing the intercept (which you can see discussed here). You should also be aware that any automatic model selection routine is dangerous. (For the basic story, it may be helpful to read my answer here.) In addition to that, however, these algorithms don't 'think' in terms of the relationships between variables, so they don't necessarily keep lower level terms in the model when power or interaction terms are included.
Interactions terms and higher order polynomials
Yes, you should always include all of the terms, from the highest order all the way down to the linear term, in the interaction. There are a couple of really great threads on CV that discuss related
Interactions terms and higher order polynomials Yes, you should always include all of the terms, from the highest order all the way down to the linear term, in the interaction. There are a couple of really great threads on CV that discuss related issues that you might find helpful in thinking about this: Does it make sense to add a quadratic term, but not the linear term to a model? Including the interaction, but not the main effects in a model Do all interactions need their individual terms in a model? The short answer is that by not including certain terms in the model, you force parts of it to be exactly zero. This imposes an inflexibility to your model that necessarily causes bias, unless those parameters are exactly zero in reality; the situation is analogous to suppressing the intercept (which you can see discussed here). You should also be aware that any automatic model selection routine is dangerous. (For the basic story, it may be helpful to read my answer here.) In addition to that, however, these algorithms don't 'think' in terms of the relationships between variables, so they don't necessarily keep lower level terms in the model when power or interaction terms are included.
Interactions terms and higher order polynomials Yes, you should always include all of the terms, from the highest order all the way down to the linear term, in the interaction. There are a couple of really great threads on CV that discuss related
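In R model formulas this hierarchy is easy to respect; a brief sketch, assuming a hypothetical data frame dat with response y and predictors x1, x2:
full    <- lm(y ~ x1 * x2 + I(x1^2), data = dat)  # x1*x2 expands to x1 + x2 + x1:x2, keeping lower-order terms
reduced <- lm(y ~ x1:x2, data = dat)              # forces both main effects to exactly zero, risking bias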
18,782
How far will self study get me?
It's all about being able to show a potential employer that you have the skills they are looking for. A degree from a college is one piece of information that an employer can use for that, but not the only thing (nor does it necessarily translate into real-world skills). For me as a hiring manager, even more important than that are experience and hands-on examples. If you want to work in data analysis or machine learning, my advice to you would be to do as much data analysis and machine learning work as you can. Start a blog, open a Github account, compete in competitions like on Kaggle. Depending on where you live, find a meetup, hackathon, etc. Not only will you learn a lot from those experiences, you'll also meet a lot of people in the field and generate some examples of work that you can show an employer.
How far will self study get me?
It's all about being able to show a potential employer that you have the skills they are looking for. A degree from a college is one piece of information that an employer can use for that, but not the
How far will self study get me? It's all about being able to show a potential employer that you have the skills they are looking for. A degree from a college is one piece of information that an employer can use for that, but not the only thing (nor does it necessarily translate into real-world skills). For me as a hiring manager, even more important than that are experience and hands-on examples. If you want to work in data analysis or machine learning, my advice to you would be to do as much data analysis and machine learning work as you can. Start a blog, open a Github account, compete in competitions like on Kaggle. Depending on where you live, find a meetup, hackathon, etc. Not only will you learn a lot from those experiences, you'll also meet a lot of people in the field and generate some examples of work that you can show an employer.
How far will self study get me? It's all about being able to show a potential employer that you have the skills they are looking for. A degree from a college is one piece of information that an employer can use for that, but not the
18,783
How far will self study get me?
The answer may depend very much on your local culture. I see that you are based in Vienna, Austria. Now, I'm Austrian myself (though I never worked in Austria), and the Austrian (along with the German and other European) job market always strikes me as much more credential-oriented than, e.g., the market in the US. Thus, getting a foot in the door without formal credentials may be a lot easier in the US than in Europe. It might be helpful if other responders could indicate which culture their experience comes from. In addition, if you are 31 now and do another 10 years of self-study, you will be 41, and for all practical purposes, you will compete against recent graduates who are 15 years younger and do have credentials. It seems reasonable that 10 years of hands-on experience should beat 3 years of college courses, but HR people may see that differently. Bottom line: I think this may be doable, but it will not be easy. Good luck!
How far will self study get me?
The answer may depend very much on your local culture. I see that you are based in Vienna, Austria. Now, I'm Austrian myself (though I never worked in Austria), and the Austrian (along with the German
How far will self study get me? The answer may depend very much on your local culture. I see that you are based in Vienna, Austria. Now, I'm Austrian myself (though I never worked in Austria), and the Austrian (along with the German and other European) job market always strikes me as much more credential-oriented than, e.g., the market in the US. Thus, getting a foot in the door without formal credentials may be a lot easier in the US than in Europe. It might be helpful if other responders could indicate which culture their experience comes from. In addition, if you are 31 now and do another 10 years of self-study, you will be 41, and for all practical purposes, you will compete against recent graduates who are 15 years younger and do have credentials. It seems reasonable that 10 years of hands-on experience should beat 3 years of college courses, but HR people may see that differently. Bottom line: I think this may be doable, but it will not be easy. Good luck!
How far will self study get me? The answer may depend very much on your local culture. I see that you are based in Vienna, Austria. Now, I'm Austrian myself (though I never worked in Austria), and the Austrian (along with the German
18,784
How do you visualize binary outcomes versus a continuous predictor?
What I have done in the past is basically what you've done with the addition of a loess. Depending on the density of points, I would use translucent points (alpha), as shown below, and/or pipe symbols ("|") to minimize overlap. library(ggplot2) # plotting package for R N=100 data=data.frame(Q=seq(N), Freq=runif(N,0,1), Success=sample(seq(0,1), size=N, replace=TRUE)) ggplot(data, aes(x=Freq, y=Success))+geom_point(size=2, alpha=0.4)+ stat_smooth(method="loess", colour="blue", size=1.5)+ xlab("Frequency")+ ylab("Probability of Detection")+ theme_bw() (I don't think the error bars should widen on the edges here, but there isn't an easy way I know of to do that with ggplot's internal stat_smooth function. If you used this method for reals in R, we could do it by estimating the loess and its error bar before plotting.) (Edit: And plus-ones for comments from Andy W. about trying vertical jitter if the density of the data makes it useful and from Mimshot about proper confidence intervals.)
How do you visualize binary outcomes versus a continuous predictor?
What I have done in the past is basically what you've done with the addition of a loess. Depending on the density of points, I would use translucent points (alpha), as shown below, and/or pipe symbol
How do you visualize binary outcomes versus a continuous predictor? What I have done in the past is basically what you've done with the addition of a loess. Depending on the density of points, I would use translucent points (alpha), as shown below, and/or pipe symbols ("|") to minimize overlap. library(ggplot2) # plotting package for R N=100 data=data.frame(Q=seq(N), Freq=runif(N,0,1), Success=sample(seq(0,1), size=N, replace=TRUE)) ggplot(data, aes(x=Freq, y=Success))+geom_point(size=2, alpha=0.4)+ stat_smooth(method="loess", colour="blue", size=1.5)+ xlab("Frequency")+ ylab("Probability of Detection")+ theme_bw() (I don't think the error bars should widen on the edges here, but there isn't an easy way I know of to do that with ggplot's internal stat_smooth function. If you used this method for reals in R, we could do it by estimating the loess and its error bar before plotting.) (Edit: And plus-ones for comments from Andy W. about trying vertical jitter if the density of the data makes it useful and from Mimshot about proper confidence intervals.)
How do you visualize binary outcomes versus a continuous predictor? What I have done in the past is basically what you've done with the addition of a loess. Depending on the density of points, I would use translucent points (alpha), as shown below, and/or pipe symbol
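As a sketch of the manual approach mentioned in the parenthetical above (estimating the loess and its error band before plotting), assuming the same data frame as in the example:
lo   <- loess(Success ~ Freq, data = data, span = 0.75)
grid <- data.frame(Freq = seq(0, 1, by = 0.01))
pred <- predict(lo, newdata = grid, se = TRUE)
grid$fit   <- pred$fit
grid$upper <- pred$fit + 1.96 * pred$se.fit
grid$lower <- pred$fit - 1.96 * pred$se.fit
# then draw with geom_line() and geom_ribbon() instead of stat_smooth()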
18,785
How do you visualize binary outcomes versus a continuous predictor?
Also consider which scales are most appropriate for your use case. Say you're doing visual inspection for the purposes of modeling in logistic regression and want to visualize a continuous predictor to determine if you need to add a spline or polynomial term to your model. In this case, you may want a scale in log-odds rather than probability/proportion. The function at the gist below uses some limited heuristics to split the continuous predictor into bins, calculate the mean proportion, convert to log-odds, then plot geom_smooth over these aggregate points. Example of what this chart looks like if a covariate has a quadratic relationship (+ noise) with the log-odds of a binary target: devtools::source_gist("https://gist.github.com/brshallo/3ccb8e12a3519b05ec41ca93500aa4b3") # simulated dataset with quadratic relationship between x and y set.seed(12) samp_size <- 1000 simulated_df <- tibble(x = rlogis(samp_size), y_odds = 0.2*x^2, y_probs = exp(y_odds)/(1 + exp(y_odds))) %>% mutate(y = rbinom(samp_size, 1, prob = y_probs)) # looking at on balanced dataset simulated_df_balanced <- simulated_df %>% group_by(y) %>% sample_n(table(simulated_df$y) %>% min()) ggplot_continuous_binary(df = simulated_df, covariate = x, response = y, snip_scales = TRUE) #> [1] "bin size: 18" #> `geom_smooth()` using method = 'loess' and formula 'y ~ x' Created on 2019-02-06 by the reprex package (v0.2.1) For comparison, here is what that quadratic relationship would look like if you just plotted the 1's/0's and added a geom_smooth: simulated_df %>% ggplot(aes(x, y))+ geom_smooth()+ geom_jitter(height = 0.01, width = 0)+ coord_cartesian(ylim = c(0, 1), xlim = c(-3.76, 3.59)) # set xlim to be generally consistent with prior chart #> `geom_smooth()` using method = 'gam' and formula 'y ~ s(x, bs = "cs")' Created on 2019-02-25 by the reprex package (v0.2.1) Relationship to logit is less clear and using geom_smooth has some problems.
How do you visualize binary outcomes versus a continuous predictor?
Also consider which scales are most appropriate for your use case. Say you're doing visual inspection for the purposes of modeling in logistic regression and want to visualize a continuous predictor t
How do you visualize binary outcomes versus a continuous predictor? Also consider which scales are most appropriate for your use case. Say you're doing visual inspection for the purposes of modeling in logistic regression and want to visualize a continuous predictor to determine if you need to add a spline or polynomial term to your model. In this case, you may want a scale in log-odds rather than probability/proportion. The function at the gist below uses some limited heuristics to split the continuous predictor into bins, calculate the mean proportion, convert to log-odds, then plot geom_smooth over these aggregate points. Example of what this chart looks like if a covariate has a quadratic relationship (+ noise) with the log-odds of a binary target: devtools::source_gist("https://gist.github.com/brshallo/3ccb8e12a3519b05ec41ca93500aa4b3") # simulated dataset with quadratic relationship between x and y set.seed(12) samp_size <- 1000 simulated_df <- tibble(x = rlogis(samp_size), y_odds = 0.2*x^2, y_probs = exp(y_odds)/(1 + exp(y_odds))) %>% mutate(y = rbinom(samp_size, 1, prob = y_probs)) # looking at on balanced dataset simulated_df_balanced <- simulated_df %>% group_by(y) %>% sample_n(table(simulated_df$y) %>% min()) ggplot_continuous_binary(df = simulated_df, covariate = x, response = y, snip_scales = TRUE) #> [1] "bin size: 18" #> `geom_smooth()` using method = 'loess' and formula 'y ~ x' Created on 2019-02-06 by the reprex package (v0.2.1) For comparison, here is what that quadratic relationship would look like if you just plotted the 1's/0's and added a geom_smooth: simulated_df %>% ggplot(aes(x, y))+ geom_smooth()+ geom_jitter(height = 0.01, width = 0)+ coord_cartesian(ylim = c(0, 1), xlim = c(-3.76, 3.59)) # set xlim to be generally consistent with prior chart #> `geom_smooth()` using method = 'gam' and formula 'y ~ s(x, bs = "cs")' Created on 2019-02-25 by the reprex package (v0.2.1) Relationship to logit is less clear and using geom_smooth has some problems.
How do you visualize binary outcomes versus a continuous predictor? Also consider which scales are most appropriate for your use case. Say you're doing visual inspection for the purposes of modeling in logistic regression and want to visualize a continuous predictor t
18,786
How do you visualize binary outcomes versus a continuous predictor?
If you have so many points that they overlap and jittering is insufficient, you can add histograms for both levels of the binary variables (so one will be upside down). Here's an example combined with a logistic regression. The idea and R code (popbio::logi.hist.plot) come from De la Cruz Rot, M. (2005). Improving the presentation of results of logistic regression with R. Bulletin of the Ecological Society of America, 86(1), 41-48. https://esajournals.onlinelibrary.wiley.com/doi/10.1890/0012-9623%282005%2986%5B41%3AITPORO%5D2.0.CO%3B2
How do you visualize binary outcomes versus a continuous predictor?
If you have so many points that they overlap and jittering is insufficient, you can add histograms for both levels of the binary variables (so one will be upside down). Here's an example combined with
How do you visualize binary outcomes versus a continuous predictor? If you have so many points that they overlap and jittering is insufficient, you can add histograms for both levels of the binary variables (so one will be upside down). Here's an example combined with a logistic regression. The idea and R code (popbio::logi.hist.plot) come from De la Cruz Rot, M. (2005). Improving the presentation of results of logistic regression with R. Bulletin of the Ecological Society of America, 86(1), 41-48. https://esajournals.onlinelibrary.wiley.com/doi/10.1890/0012-9623%282005%2986%5B41%3AITPORO%5D2.0.CO%3B2
How do you visualize binary outcomes versus a continuous predictor? If you have so many points that they overlap and jittering is insufficient, you can add histograms for both levels of the binary variables (so one will be upside down). Here's an example combined with
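A typical call, following the package documentation as I remember it (argument values are illustrative), with x the continuous predictor and y the 0/1 outcome:
library(popbio)
logi.hist.plot(x, y, boxp = FALSE, type = "hist", col = "gray")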
18,787
How do you visualize binary outcomes versus a continuous predictor?
The ggridges package offers more creative ways to avoid overplotting those ones and zeros. Modifying @MattBagg's example. Not optimal for this dataset, but you'll get the point. library(ggplot2) library(ggridges) N=100 data=data.frame(Q=seq(N), Freq=runif(N,0,1), Success=sample(seq(0,1), size=N, replace=TRUE)) ggplot() + ggridges::geom_density_ridges(data = data, aes(x = Freq, y = Success, group = Success), scale = 0.2) + stat_smooth(data = data, aes(x = Freq, y = Success), size=1.5) + coord_cartesian(ylim = c(0, 1.25), xlim = c(0, 1), expand = FALSE) + scale_y_continuous(breaks = c(0, 0.5, 1)) + labs(x = "Frequency", y = "Probability of Detection") + theme_bw() #> Picking joint bandwidth of 0.123 #> `geom_smooth()` using method = 'loess' and formula 'y ~ x' Created on 2021-06-30 by the reprex package (v2.0.0)
How do you visualize binary outcomes versus a continuous predictor?
The ggridges package offers more creative ways to avoid overplotting those ones and zeros. Modifying @MattBagg's example. Not optimal for this dataset, but you'll get the point. library(ggplot2) libr
How do you visualize binary outcomes versus a continuous predictor? The ggridges package offers more creative ways to avoid overplotting those ones and zeros. Modifying @MattBagg's example. Not optimal for this dataset, but you'll get the point. library(ggplot2) library(ggridges) N=100 data=data.frame(Q=seq(N), Freq=runif(N,0,1), Success=sample(seq(0,1), size=N, replace=TRUE)) ggplot() + ggridges::geom_density_ridges(data = data, aes(x = Freq, y = Success, group = Success), scale = 0.2) + stat_smooth(data = data, aes(x = Freq, y = Success), size=1.5) + coord_cartesian(ylim = c(0, 1.25), xlim = c(0, 1), expand = FALSE) + scale_y_continuous(breaks = c(0, 0.5, 1)) + labs(x = "Frequency", y = "Probability of Detection") + theme_bw() #> Picking joint bandwidth of 0.123 #> `geom_smooth()` using method = 'loess' and formula 'y ~ x' Created on 2021-06-30 by the reprex package (v2.0.0)
How do you visualize binary outcomes versus a continuous predictor? The ggridges package offers more creative ways to avoid overplotting those ones and zeros. Modifying @MattBagg's example. Not optimal for this dataset, but you'll get the point. library(ggplot2) libr
18,788
How do you visualize binary outcomes versus a continuous predictor?
I agree that posting just a few lines of sample data would go a long way. If I understand the question, I think it would be simplest to plot the frequency by the proportion found. First I will generate some sample data in R; please correct me if I haven't understood you correctly. # Create some sample data data=data.frame(Q=1:20,F=seq(5,100,by=5)) set.seed(1) data$found<-round(sapply(data$F,function(x) runif(1,1,x))) data$prop<-data$found/data$F # Looks like: Q F found prop 1 1 5 2 0.4000000 2 2 10 4 0.4000000 3 3 15 9 0.6000000 4 4 20 18 0.9000000 5 5 25 6 0.2400000 6 6 30 27 0.9000000 7 7 35 33 0.9428571 8 8 40 27 0.6750000 9 9 45 29 0.6444444 10 10 50 4 0.0800000 11 11 55 12 0.2181818 12 12 60 11 0.1833333 13 13 65 45 0.6923077 14 14 70 28 0.4000000 15 15 75 58 0.7733333 16 16 80 40 0.5000000 17 17 85 61 0.7176471 18 18 90 89 0.9888889 19 19 95 37 0.3894737 20 20 100 78 0.7800000 And now simply plot frequency (F) by proportion: # Plot frequency by proportion found. plot(data$F,data$prop,xlab='Frequency',ylab='Proportion Found',type='l',col='red',lwd=2)
How do you visualize binary outcomes versus a continuous predictor?
I agree that posting just a few lines of sample data would go a long way. If I understand the question, I think it would be simplest to plot the frequency by the proportion found. First I will genera
How do you visualize binary outcomes versus a continuous predictor? I agree that posting just a few lines of sample data would go a long way. If I understand the question, I think it would be simplest to plot the frequency by the proportion found. First I will generate some sample data in R; please correct me if I haven't understood you correctly. # Create some sample data data=data.frame(Q=1:20,F=seq(5,100,by=5)) set.seed(1) data$found<-round(sapply(data$F,function(x) runif(1,1,x))) data$prop<-data$found/data$F # Looks like: Q F found prop 1 1 5 2 0.4000000 2 2 10 4 0.4000000 3 3 15 9 0.6000000 4 4 20 18 0.9000000 5 5 25 6 0.2400000 6 6 30 27 0.9000000 7 7 35 33 0.9428571 8 8 40 27 0.6750000 9 9 45 29 0.6444444 10 10 50 4 0.0800000 11 11 55 12 0.2181818 12 12 60 11 0.1833333 13 13 65 45 0.6923077 14 14 70 28 0.4000000 15 15 75 58 0.7733333 16 16 80 40 0.5000000 17 17 85 61 0.7176471 18 18 90 89 0.9888889 19 19 95 37 0.3894737 20 20 100 78 0.7800000 And now simply plot frequency (F) by proportion: # Plot frequency by proportion found. plot(data$F,data$prop,xlab='Frequency',ylab='Proportion Found',type='l',col='red',lwd=2)
How do you visualize binary outcomes versus a continuous predictor? I agree that posting just a few lines of sample data would go a long way. If I understand the question, I think it would be simplest to plot the frequency by the proportion found. First I will genera
18,789
Dispersion parameter in GLM output
One way to explore this is to try fitting the same model using different tools, here is one example: > fit1 <- lm( Sepal.Length ~ ., data=iris ) > fit2 <- glm( Sepal.Length ~ ., data=iris ) > summary(fit1) Call: lm(formula = Sepal.Length ~ ., data = iris) Residuals: Min 1Q Median 3Q Max -0.79424 -0.21874 0.00899 0.20255 0.73103 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 2.17127 0.27979 7.760 1.43e-12 *** Sepal.Width 0.49589 0.08607 5.761 4.87e-08 *** Petal.Length 0.82924 0.06853 12.101 < 2e-16 *** Petal.Width -0.31516 0.15120 -2.084 0.03889 * Speciesversicolor -0.72356 0.24017 -3.013 0.00306 ** Speciesvirginica -1.02350 0.33373 -3.067 0.00258 ** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 Residual standard error: 0.3068 on 144 degrees of freedom Multiple R-squared: 0.8673, Adjusted R-squared: 0.8627 F-statistic: 188.3 on 5 and 144 DF, p-value: < 2.2e-16 > summary(fit2) Call: glm(formula = Sepal.Length ~ ., data = iris) Deviance Residuals: Min 1Q Median 3Q Max -0.79424 -0.21874 0.00899 0.20255 0.73103 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 2.17127 0.27979 7.760 1.43e-12 *** Sepal.Width 0.49589 0.08607 5.761 4.87e-08 *** Petal.Length 0.82924 0.06853 12.101 < 2e-16 *** Petal.Width -0.31516 0.15120 -2.084 0.03889 * Speciesversicolor -0.72356 0.24017 -3.013 0.00306 ** Speciesvirginica -1.02350 0.33373 -3.067 0.00258 ** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 (Dispersion parameter for gaussian family taken to be 0.09414226) Null deviance: 102.168 on 149 degrees of freedom Residual deviance: 13.556 on 144 degrees of freedom AIC: 79.116 Number of Fisher Scoring iterations: 2 > sqrt( 0.09414226 ) [1] 0.3068261 So you can see that the residual standard error of the linear model is just the square root of the dispersion from the glm, in other words the dispersion (for Gaussian models) is the same as the mean square error.
Dispersion parameter in GLM output
One way to explore this is to try fitting the same model using different tools, here is one example: > fit1 <- lm( Sepal.Length ~ ., data=iris ) > fit2 <- glm( Sepal.Length ~ ., data=iris ) > summary(
Dispersion parameter in GLM output One way to explore this is to try fitting the same model using different tools, here is one example: > fit1 <- lm( Sepal.Length ~ ., data=iris ) > fit2 <- glm( Sepal.Length ~ ., data=iris ) > summary(fit1) Call: lm(formula = Sepal.Length ~ ., data = iris) Residuals: Min 1Q Median 3Q Max -0.79424 -0.21874 0.00899 0.20255 0.73103 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 2.17127 0.27979 7.760 1.43e-12 *** Sepal.Width 0.49589 0.08607 5.761 4.87e-08 *** Petal.Length 0.82924 0.06853 12.101 < 2e-16 *** Petal.Width -0.31516 0.15120 -2.084 0.03889 * Speciesversicolor -0.72356 0.24017 -3.013 0.00306 ** Speciesvirginica -1.02350 0.33373 -3.067 0.00258 ** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 Residual standard error: 0.3068 on 144 degrees of freedom Multiple R-squared: 0.8673, Adjusted R-squared: 0.8627 F-statistic: 188.3 on 5 and 144 DF, p-value: < 2.2e-16 > summary(fit2) Call: glm(formula = Sepal.Length ~ ., data = iris) Deviance Residuals: Min 1Q Median 3Q Max -0.79424 -0.21874 0.00899 0.20255 0.73103 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 2.17127 0.27979 7.760 1.43e-12 *** Sepal.Width 0.49589 0.08607 5.761 4.87e-08 *** Petal.Length 0.82924 0.06853 12.101 < 2e-16 *** Petal.Width -0.31516 0.15120 -2.084 0.03889 * Speciesversicolor -0.72356 0.24017 -3.013 0.00306 ** Speciesvirginica -1.02350 0.33373 -3.067 0.00258 ** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 (Dispersion parameter for gaussian family taken to be 0.09414226) Null deviance: 102.168 on 149 degrees of freedom Residual deviance: 13.556 on 144 degrees of freedom AIC: 79.116 Number of Fisher Scoring iterations: 2 > sqrt( 0.09414226 ) [1] 0.3068261 So you can see that the residual standard error of the linear model is just the square root of the dispersion from the glm, in other words the dispersion (for Gaussian models) is the same as the mean square error.
Dispersion parameter in GLM output One way to explore this is to try fitting the same model using different tools, here is one example: > fit1 <- lm( Sepal.Length ~ ., data=iris ) > fit2 <- glm( Sepal.Length ~ ., data=iris ) > summary(
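You can verify the equivalence directly from the fitted glm object; a one-line check using the fit2 object from above:
sum(residuals(fit2)^2) / df.residual(fit2)   # 0.09414226, the reported dispersion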
18,790
Dispersion parameter in GLM output
Let us consider the simple situation where there is no covariate information in your data. Say you just have observations $Y_1, Y_2, \ldots, Y_n \in \mathbb{R}$. If you are using a normal distribution to model your data, you would probably write that $Y_i \sim \mathcal{N}(\mu, \sigma^2)$, and then try to estimate $\mu$ and $\sigma$, maybe via maximum likelihood estimation. But let's say your data is count data and thus not normally distributed. It is not even continuous in this case, so you may use a Poisson distribution instead: $Y_i \sim Poisson(\lambda)$. However, you have only one parameter here! The single parameter $\lambda$ determines both mean and variance by $\mathbb{E}[Y_i] = \lambda$ and $Var[Y_i] = \lambda$. This also happens when you use a Bernoulli or binomial distribution. But you may have larger or smaller variance in your data, possibly because observations are not truly iid or the distribution you chose was not realistic enough. So people add a dispersion parameter to get an additional degree of freedom for modeling mean and variance simultaneously. I guess any textbook on GLMs will give you a more detailed and mathematical explanation of what it is, but the motivation, I believe, is pretty simple like this.
Dispersion parameter in GLM output
Let us consider the simple situation where there is no covariate information in your data. Say you just have observations $Y_1, Y_2, \ldots, Y_n \in \mathbb{R}$. If you are using a normal distribution
Dispersion parameter in GLM output Let us consider the simple situation where there is no covariate information in your data. Say you just have observations $Y_1, Y_2, \ldots, Y_n \in \mathbb{R}$. If you are using a normal distribution to model your data, you would probably write that $Y_i \sim \mathcal{N}(\mu, \sigma^2)$, and then try to estimate $\mu$ and $\sigma$, maybe via maximum likelihood estimation. But let's say your data is count data and thus not normally distributed. It is not even continuous in this case, so you may use a Poisson distribution instead: $Y_i \sim Poisson(\lambda)$. However, you have only one parameter here! The single parameter $\lambda$ determines both mean and variance by $\mathbb{E}[Y_i] = \lambda$ and $Var[Y_i] = \lambda$. This also happens when you use a Bernoulli or binomial distribution. But you may have larger or smaller variance in your data, possibly because observations are not truly iid or the distribution you chose was not realistic enough. So people add a dispersion parameter to get an additional degree of freedom for modeling mean and variance simultaneously. I guess any textbook on GLMs will give you a more detailed and mathematical explanation of what it is, but the motivation, I believe, is pretty simple like this.
Dispersion parameter in GLM output Let us speculate the simple situation where there is no covariate information in your data. Say, you just have observations $Y_1, Y_2, \ldots, Y_n \in \mathbb{R}$. If you are using normal distribution
18,791
Classification with tall fat data
I think you should look at online learning methods. The perceptron and the kernel perceptron are extremely easy to code and work extremely well in practice, and there are a whole host of other online methods. Note that any online learning method can be converted into a batch learning algorithm, in which case they closely resemble stochastic gradient descent methods.

If you're using Matlab, there's a really nice toolbox called DOGMA by Francesco Orabona, which contains a range of online learning algorithms, and you can evaluate a few different methods using that. I've used this in some of my research and found it to be very useful (note that, as far as I remember, it expects the data as [features x examples], so you might have to transpose it).

As others have mentioned, you might want to try dimensionality reduction. PCA might not be such a good option here, as you have to compute the covariance matrix, which will be very costly. You could try looking at Random Projections. The theory is tough, but the principle is very simple. It's based on the Johnson-Lindenstrauss Lemma if you're interested, but the basic idea is that if you randomly project to a lower dimensional space, then $\ell_2$ distances between points are preserved to within a factor of $(1 \pm \epsilon)$. If you're using an RBF kernel, then $\ell_2$ distances are all you are interested in!
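A minimal sketch of the random-projection idea in R, on made-up data (the dimensions and the Gaussian projection matrix are illustrative choices, not from the answer):

# Project from a high dimension to a much lower one with a random matrix
# and check that pairwise l2 distances are roughly preserved.
set.seed(1)
n <- 100; d <- 10000; k <- 500               # original and projected dimensions
X <- matrix(rnorm(n * d), nrow = n)           # n examples in R^d

R <- matrix(rnorm(d * k, sd = 1/sqrt(k)), nrow = d)  # random projection matrix
Z <- X %*% R                                  # projected data, n x k

ratio <- as.vector(dist(Z)) / as.vector(dist(X))
summary(ratio)                                # ratios should be concentrated around 1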
18,792
Classification with tall fat data
First, I would like to ask how you know that a linear classifier is the best choice. Intuitively, for such a large space ($\mathbb{R}^{10000}$) it is possible that some other, non-linear classifier is a better choice. I suggest you try several different classifiers and compare the prediction errors (I would try several regularized classification models; see the sketch below). If you run out of memory, reduce the dimension using PCA.
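As one hedged illustration of the "regularized classification models" suggestion, here is a sketch using the glmnet package in R on simulated data (the package choice, dimensions, and data are my own assumptions, not the poster's setup):

library(glmnet)

# Simulated tall-and-fat-ish data: many features, few of them informative
set.seed(1)
n <- 500; d <- 2000
X <- matrix(rnorm(n * d), n, d)
y <- rbinom(n, 1, plogis(X[, 1] - X[, 2]))    # only two features actually matter

# Ridge (alpha = 0) and lasso (alpha = 1) logistic regression, tuned by cross-validation
fit_ridge <- cv.glmnet(X, y, family = "binomial", alpha = 0)
fit_lasso <- cv.glmnet(X, y, family = "binomial", alpha = 1)
min(fit_ridge$cvm); min(fit_lasso$cvm)        # compare cross-validated deviance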
18,793
Classification with tall fat data
You can also use PCA to reduce dimensions without computing the covariance matrix --- by using a neural network equivalent of PCA. Here is a paper that describes it (but I recommend doing your own search): http://users.ics.tkk.fi/oja/Oja1982.pdf, and here is a link to something that may be a working Matlab implementation: http://www.cs.purdue.edu/homes/dgleich/projects/pca_neural_nets_website/index.html.
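The linked Oja (1982) paper describes what is usually called Oja's rule. Here is a rough sketch in R, on simulated data (not taken from either link; the step size and iteration count are arbitrary), of how the first principal direction can be learned one example at a time, without ever forming the covariance matrix:

# Oja's rule: online estimate of the first principal component.
set.seed(1)
X <- scale(matrix(rnorm(2000), ncol = 2) %*% matrix(c(3, 1, 1, 1), 2), scale = FALSE)

w <- rnorm(ncol(X)); w <- w / sqrt(sum(w^2))  # random unit-length start
eta <- 0.01                                   # learning rate
for (i in sample(nrow(X), 5000, replace = TRUE)) {
  x <- X[i, ]
  y <- sum(w * x)                             # projection onto the current direction
  w <- w + eta * y * (x - y * w)              # Oja's update keeps w near unit norm
}

w / sqrt(sum(w^2))                            # learned direction
prcomp(X)$rotation[, 1]                       # exact first PC for comparison (up to sign)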
18,794
Classification with tall fat data
As jb suggested, I think it is better to use a "dimension reduction" method. Principal Component Analysis (PCA) is a popular choice. You can also try unsupervised feature learning techniques. More information about unsupervised feature learning can be found at http://ufldl.stanford.edu/wiki/index.php/UFLDL_Tutorial
18,795
Estimator for a binomial distribution
I guess what you are looking for is the probability generating function. A derivation of the probability generating function of the binomial distribution can be found at http://economictheoryblog.com/2012/10/21/binomial-distribution/
However, having a look at Wikipedia is nowadays always a good idea, although I have to say that the specification of the binomial could be improved: https://en.wikipedia.org/wiki/Binomial_distribution#Specification
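For reference, the standard result (this is textbook material, not taken from either link) is that for $X \sim \text{Binomial}(n, p)$ the probability generating function is
$$G(s) = \mathbb{E}[s^X] = \sum_{k=0}^{n} \binom{n}{k} (ps)^k (1-p)^{n-k} = (1 - p + ps)^n,$$
which follows directly from the binomial theorem.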
18,796
Estimator for a binomial distribution
Say you have data $k_1, \dots, k_m \sim \text{iid binomial}(n, p)$. You could easily derive method-of-moments estimators by setting $\bar{k} = \hat{n}\hat{p}$ and $s_k^2 = \hat{n}\hat{p}(1-\hat{p})$ and solving for $\hat{n}$ and $\hat{p}$. Or you could calculate MLEs (perhaps just numerically), e.g. using optim in R.
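A quick sketch of both approaches in R, on simulated data (the true values n = 20 and p = 0.3 below are invented just for illustration):

# Simulated data: m = 50 observations from Binomial(n = 20, p = 0.3)
set.seed(1)
k <- rbinom(50, size = 20, prob = 0.3)

# Method of moments: solve kbar = n*p and s^2 = n*p*(1-p)
kbar <- mean(k); s2 <- var(k)
p_mom <- (kbar - s2) / kbar
n_mom <- kbar / p_mom
c(n_mom, p_mom)

# Numerical MLE: maximize the log-likelihood over (n, p), rounding n to an integer
negloglik <- function(par) {
  n <- round(par[1]); p <- par[2]
  if (n < max(k) || p <= 0 || p >= 1) return(Inf)   # keep the search in a valid region
  -sum(dbinom(k, size = n, prob = p, log = TRUE))
}
fit <- optim(c(max(k) + 5, 0.4), negloglik)
fit$par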
18,797
Estimator for a binomial distribution
Every distribution has some unknown parameter(s). For example, the Bernoulli distribution has one unknown parameter, the probability of success (p). Likewise, the binomial distribution has two unknown parameters, n and p. Which parameter you want to estimate depends on your objective; you can fix one parameter and estimate the other one. For more information see this
18,798
Estimator for a binomial distribution
I think we could use method-of-moments estimation to estimate the parameters of the binomial distribution from the sample mean and variance. The method-of-moments estimators of the parameters $p$ and $m$ are
$$\hat{p}_n=\frac{\overline{X}-S^2}{\overline{X}}, \qquad \hat{m}_n=\frac{\overline{X}^2}{\overline{X}-S^2}.$$
Proof. The estimators of $m$ and $p$ by the method of moments are the solutions of the system of equations
$$mp =\bar{X},\quad mp(1-p) = S^2.$$
Simple arithmetic shows
$$S^2 = mp\left(1 - p\right) = \bar{X}\left(1 - p\right) = \bar{X}-\bar{X} p,$$
so
$$\bar{X}p=\bar{X}-S^2, \quad\text{and therefore}\quad \hat{p}=\frac{\bar{X}-S^2}{\bar{X}}.$$
Then, substituting $\hat{p}$ back into $\bar{X} = mp$,
$$\bar{X}=\hat{m}\left(\frac{\bar{X}-S^2}{\bar{X}}\right), \quad\text{so}\quad \hat{m}=\frac{\bar{X}^2}{\bar{X}-S^2}.$$
18,799
How to plot a stair steps function with ggplot?
As noted by @chl, the answer is simply to use geom_step() instead of geom_path() in the example above. The resulting plot (drawn from different data than the original example) shows the stair-step shape.
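For completeness, a minimal self-contained sketch in R (with made-up data, since the original example code is not reproduced here):

library(ggplot2)

# Made-up empirical-CDF-style data to show the stair-step geometry
df <- data.frame(x = sort(rnorm(20)))
df$y <- seq_along(df$x) / nrow(df)

ggplot(df, aes(x, y)) +
  geom_step() +                      # stair steps instead of a straight connecting line
  labs(x = "x", y = "Empirical CDF")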
18,800
How to prove that Elo rating or Page ranking have a meaning for my set?
You need a probability model.

The idea behind a ranking system is that a single number adequately characterizes a player's ability. We might call this number their "strength" (because "rank" already means something specific in statistics). We would predict that player A will beat player B when strength(A) exceeds strength(B). But this statement is too weak because (a) it is not quantitative and (b) it does not account for the possibility of a weaker player occasionally beating a stronger player.

We can overcome both problems by supposing the probability that A beats B depends only on the difference in their strengths. If this is so, then we can re-express all the strengths if necessary so that the difference in strengths equals the log odds of a win. Specifically, this model is
$$\mathrm{logit}(\Pr(A \text{ beats } B)) = \lambda_A - \lambda_B$$
where, by definition, $\mathrm{logit}(p) = \log(p) - \log(1-p)$ is the log odds and I have written $\lambda_A$ for player A's strength, etc.

This model has as many parameters as players (but there is one less degree of freedom, because it can only identify relative strengths, so we would fix one of the parameters at an arbitrary value). It is a kind of generalized linear model (in the Binomial family, with logit link).

The parameters can be estimated by Maximum Likelihood. The same theory provides a means to erect confidence intervals around the parameter estimates and to test hypotheses (such as whether the strongest player, according to the estimates, is significantly stronger than the estimated weakest player).

Specifically, the likelihood of a set of games is the product
$$\prod_{\text{all games}}{\frac{\exp(\lambda_{\text{winner}} - \lambda_{\text{loser}})}{1 + \exp(\lambda_{\text{winner}} - \lambda_{\text{loser}})}}.$$
After fixing the value of one of the $\lambda$, the estimates of the others are the values that maximize this likelihood. Thus, varying any of the estimates reduces the likelihood from its maximum. If it is reduced too much, it is not consistent with the data. In this fashion we can find confidence intervals for all the parameters: they are the limits in which varying the estimates does not overly decrease the log likelihood. General hypotheses can similarly be tested: a hypothesis constrains the strengths (such as by supposing they are all equal), this constraint limits how large the likelihood can get, and if this restricted maximum falls too far short of the actual maximum, the hypothesis is rejected.

In this particular problem there are 18 games and 7 free parameters. In general that is too many parameters: there is so much flexibility that the parameters can be quite freely varied without changing the maximum likelihood much. Thus, applying the ML machinery is likely to prove the obvious, which is that there likely are not enough data to have confidence in the strength estimates.
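To make this concrete, here is a rough sketch in R of fitting this kind of model with glm. The game results below are invented for illustration (they are not the asker's data), and the encoding -- +1 in the winner's column, -1 in the loser's column, with one player dropped so that its strength is fixed at zero -- is one common way to set it up.

# Invented game results (winner listed first); four players A, B, C, D
games <- data.frame(winner = c("A","B","C","A","D","C","B"),
                    loser  = c("B","C","A","D","B","D","A"))

players <- sort(unique(c(games$winner, games$loser)))
X <- matrix(0, nrow(games), length(players), dimnames = list(NULL, players))
X[cbind(seq_len(nrow(games)), match(games$winner, players))] <-  1
X[cbind(seq_len(nrow(games)), match(games$loser,  players))] <- -1
X <- X[, -1]                                  # fix lambda_A = 0 as the reference strength

y <- rep(1, nrow(games))                      # each row records a win for the "winner" player
fit <- glm(y ~ X - 1, family = binomial)      # logit link, no intercept
coef(fit)                                     # estimated strengths relative to player A

The coefficients are the relative strengths, and the usual GLM machinery (standard errors, profile or Wald intervals, likelihood-ratio tests) then supplies the confidence intervals and hypothesis tests described above.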