30,601
How to verify extremely low error rates
Generally speaking, you can't. I would be very wary of techniques that claim to be able to prove a $1/10^6$ error rate given only $4000$ tests. Often those kinds of techniques involve making an assumption of independence somewhere, which there is no way to validate reliably: it's just a leap of faith. This kind of flawed reasoning has led to serious failures in the world of safety-critical systems. There may be some special cases where you can demonstrate the desired level of reliability using such a limited number of tests, e.g., by taking into account something about the physics of the situation. But they are rare, and that kind of reasoning is fragile.
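A back-of-the-envelope sketch in Python makes the point concrete (the numbers here are illustrative, not from the question): a system 100 times worse than the $1/10^6$ target still passes 4000 failure-free tests most of the time, and the classical "rule of three" bound on the error rate after 4000 clean tests is nowhere near $10^{-6}$.

```python
def prob_all_pass(error_rate, n_tests):
    """Probability that n independent tests all pass at a given per-test error rate."""
    return (1.0 - error_rate) ** n_tests

def rule_of_three_upper_bound(n_tests):
    """Approximate 95% upper confidence bound on the error rate
    after n failure-free tests (the 'rule of three')."""
    return 3.0 / n_tests

# A system meeting the 1/10^6 target passes all 4000 tests with
# probability ~0.996; a system 100x worse still passes with ~0.67.
# The test campaign simply cannot distinguish the two.
p_target = prob_all_pass(1e-6, 4000)
p_worse = prob_all_pass(1e-4, 4000)

# Best defensible claim after 4000 clean tests: error rate below ~7.5e-4.
bound = rule_of_three_upper_bound(4000)
```

This is exactly why the answer says "you can't": any stronger claim has to come from modelling assumptions (such as independence), not from the tests themselves.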
30,602
How to report effect size measures r and r-squared and what is a non-technical explanation of them?
General points on the term 'effect size' The term 'effect size' can carry meanings ranging from narrow to broad. Narrowest meaning: some authors use the term 'effect size' almost exclusively in the context of standardised group mean differences (i.e., $d$). Narrow meaning: any of a set of standardised statistics that quantify relationships. Broad meaning: any value that quantifies the degree of effect, including unstandardised measures of relationship. Just to be clear, $r^2$ is a measure of effect size, just as $r$ is; $r$ is simply the effect size measure more commonly used in meta-analyses and the like to summarise the strength of a bivariate relationship. When to report $r$ versus $r^2$ A convention in psychology, and probably other fields, is that correlations (i.e., $r$) are typically reported when summarising one or, often, a matrix of bivariate relationships, and that $r^2$ is reported in the context of models predicting a variable (e.g., multiple regression). This makes sense for several reasons. First, a correlation communicates the direction of the relationship whereas $r^2$ does not; in predictive models, directional information is instead conveyed by the model coefficients. Second, where correlations typically range between .1 and .3, $r$ is more finely graded than $r^2$, so fewer decimal places need to be displayed. Explaining $r$ and $r^2$ in plain English $r$ is a standardised measure of the strength and direction of the linear relationship between two variables, ranging from -1 for a perfect negative relationship to 1 for a perfect positive relationship. You may want to give your non-statistical audience a sense of the rules of thumb set out by Cohen and others (something like r = .1 = small; r = .3 = medium; r = .5 = large), while at the same time telling them not to take such prescriptions too literally.
You might also present some scatterplots of various correlations and some examples of typical correlation sizes in their field of interest. One somewhat intuitive interpretation of $r$ is that it is equivalent to a standardised regression coefficient. The interpretation of $r^2$ as the percentage of variance explained by the linear relationship between two variables is relatively intuitive.
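The relationship between $r$ and $r^2$ described above can be sketched in a few lines of Python (the toy data here is invented purely for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy data: five paired observations.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

r = pearson_r(x, y)   # direction and strength, always in [-1, 1]
r2 = r ** 2           # share of variance in y explained by the linear fit
```

For this toy data $r \approx .77$ and $r^2 = .60$: the sign and magnitude of $r$ carry the direction and strength, while $r^2$ reads directly as "60% of the variance explained".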
30,603
How to report effect size measures r and r-squared and what is a non-technical explanation of them?
If you refer to the term "effect size", there are some standards on how to report them (Cohen, 1992). The most common is Cohen's $d$, which can be directly transformed into a correlation-based measure of effect size, $r_{ES}$: $r_{ES} = \frac{d}{\sqrt{d^2 + 4}}$ For ANOVAs, you usually report $\eta^2$, which directly refers to "variance explained". If the original statistic was a correlation, just report the correlation. It already is a measure of effect size. To explain them in plain English, I would refer to Cohen's table of effect size magnitudes. For correlations, it says: <.10: trivial .10 - .30: small to medium .30 - .50: medium to large >.50: large to very large Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155-159. doi:10.1037/0033-2909.112.1.155
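The $d$-to-$r$ conversion and Cohen's magnitude table above are easy to sketch in Python (this is the equal-group-sizes form of the conversion; the labels follow the table verbatim):

```python
import math

def d_to_r(d):
    """Convert Cohen's d to a correlation-type effect size,
    r_ES = d / sqrt(d^2 + 4), assuming equal group sizes."""
    return d / math.sqrt(d ** 2 + 4)

def label_correlation(r):
    """Cohen's rough magnitude labels for a correlation."""
    r = abs(r)
    if r < 0.10:
        return "trivial"
    if r < 0.30:
        return "small to medium"
    if r < 0.50:
        return "medium to large"
    return "large to very large"

# A "large" d of 0.8 maps to r_ES of about .37 -- "medium to large"
# on the correlation scale, a reminder the two scales are not aligned.
r_es = d_to_r(0.8)
```

Note that a canonically "large" $d$ lands only in the "medium to large" band for correlations, which is worth flagging when reporting converted effect sizes.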
30,604
How to calculate linear regression with Libre Office? [closed]
LibreOffice's help explains how trend lines are computed. You will see that the work is done through two functions: INTERCEPT and SLOPE. If you are interested in linear regression, you should also have a look at the LINEST function.
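For reference, the spreadsheet SLOPE and INTERCEPT functions mentioned above compute the ordinary least-squares line; a minimal Python equivalent (toy data invented for illustration) looks like this:

```python
def slope_intercept(xs, ys):
    """Least-squares slope and intercept for y = m*x + b,
    matching what spreadsheet SLOPE() and INTERCEPT() return."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - m * mx
    return m, b

# Toy data lying exactly on y = 2x + 1.
xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]
m, b = slope_intercept(xs, ys)
```

LINEST generalises this to multiple predictors and also returns fit statistics, which is why the answer points to it for full linear regression.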
30,605
How to calculate linear regression with Libre Office? [closed]
LibreOffice did add this feature in version 5.1 early last year. To generate this, select your data, click the "Data" menu, and go to the bottom for "Statistics" > "Regression". This will give the following dialog: Enter the top-left corner of where you want your results to be generated, hit OK, and voila:
30,606
How to calculate linear regression with Libre Office? [closed]
There is a somewhat similar question on Quantitative Finance here where one answerer warns against using LibreOffice. Perhaps another answerer's response to use R might be appropriate for you too?
30,607
Interpreting coefficient in a linear regression model with categorical variables
No, you shouldn't add all of the coefficients together. You essentially have the model $$ {\rm lifespan} = \beta_{0} + \beta_{1} \cdot {\rm fox} + \beta_{2} \cdot {\rm pig} + \beta_{3} \cdot {\rm wolf} + \beta_{4} \cdot {\rm weight} + \varepsilon $$ where, for example, ${\rm pig} = 1$ if the animal was a pig and 0 otherwise. So, to calculate $\beta_{0} + \beta_{1} + \beta_{2} + \beta_{3} + \beta_{4}$ as you've suggested for getting the overall average when ${\rm weight}=1$ is like saying "if you were a pig, a wolf, and a fox, and your weight was 1, what is your expected lifespan?". Clearly since each animal is only one of those things, that doesn't make much sense. You will have to do this separately for each animal. For example, $\beta_{0} + \beta_{2} + \beta_{4}$ is the expected lifespan for a pig when its weight is 1.
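The dummy-coding logic above can be sketched directly in Python. The coefficient values here are made up purely for illustration; the point is that exactly one animal dummy is 1 for any prediction:

```python
# Hypothetical coefficients, in the order
# (intercept, fox, pig, wolf, weight) -- invented for illustration only.
b0, b1, b2, b3, b4 = 5.0, 1.5, -0.5, 2.0, 0.8

def expected_lifespan(animal, weight):
    """Expected lifespan from the dummy-coded model.
    Exactly one of the animal dummies is 1; the rest are 0."""
    fox = 1 if animal == "fox" else 0
    pig = 1 if animal == "pig" else 0
    wolf = 1 if animal == "wolf" else 0
    return b0 + b1 * fox + b2 * pig + b3 * wolf + b4 * weight

# For a pig at weight 1 this is b0 + b2 + b4 -- never b0+b1+b2+b3+b4.
pig_at_1 = expected_lifespan("pig", 1)
```

Summing *all* the coefficients would correspond to an animal that is simultaneously a fox, a pig, and a wolf, which is exactly the nonsense the answer warns against.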
30,608
Interpreting coefficient in a linear regression model with categorical variables
The simplest thing to do is to use the predict function on the lm object; it takes care of many of the details, like converting a factor to the right values to add together. If you are trying to understand the pieces that go into the prediction, then set type='terms' and it will show the individual pieces that add together to make your prediction. Note also that how a factor is converted to variables depends on some options; the default chooses a baseline group to compare the other groups to, but you can also set it to an average and differences from that average (or other comparisons of interest).
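Conceptually, what R's predict(..., type='terms') reports is each term's contribution to one prediction. A language-neutral sketch in Python (coefficient values and term names are hypothetical):

```python
# Hypothetical fitted coefficients keyed by design-matrix column name,
# mimicking R's dummy coding with a baseline (reference) animal.
coefs = {"(Intercept)": 5.0, "animalpig": -0.5, "weight": 0.8}

# One row of the design matrix: a pig weighing 2 units.
row = {"(Intercept)": 1, "animalpig": 1, "weight": 2.0}

# Per-term contributions -- this is the decomposition type='terms' shows.
contributions = {name: coefs[name] * row[name] for name in coefs}

# The prediction is just the sum of the contributions.
prediction = sum(contributions.values())
```

Seeing the contributions individually makes it obvious which dummy was active and how much the continuous predictor moved the prediction.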
30,609
Interpreting coefficient in a linear regression model with categorical variables
If you want the average lifespan when weight is 1 then you can just take out "animal" in this call: lm(formula = lifespan ~ 1 + animal + weight, data = animal.life)
30,610
Given a control chart that shows the mean and upper/lower control limits, how do I tell if the cause of out of control points is assignable or not?
Yes, you should find an assignable cause for every point that's outside the limits. But things are a little more complicated. First you have to determine whether the process is in control, since a control chart is meaningless when the process is out of control. Nearly 1/4 of your observations falling outside the limits is a strong sign that the process may be out of control. Looking at the chart would help determine whether the process is under control or not. Besides falling outside the control limits, there are other potential reasons for needing to look for assignable causes for certain observations. For example, if you have several observations in a row falling on the same side of the mean -- especially if they're near a control limit -- they may need to be assigned a special cause. I might be able to be more specific if you'd post the chart itself. If you want to learn more about control charts, SPC Press has a number of useful free resources. You might also want to look at this book: it's short, concise and very informative. (Edit:) I assumed we were talking about real-world data, not an exam question. In that case, the correct answer really is the first one: the points outside the control limits are (probably) due to assignable causes. The exam is a little sloppy in its terminology, though: you can't actually tell with 100% certainty that the points outside the control limits are not caused by chance. You can only say that there is a 99.7% probability that a particular point outside the limits is not caused by chance.
30,611
Given a control chart that shows the mean and upper/lower control limits, how do I tell if the cause of out of control points is assignable or not?
My understanding of control charts is a little bit different... After the first signal at observation 2, wouldn't the process be stopped and checked for problems, and then restarted? In any case, you could use a p-value argument. The probability of observing 4 or more observations (out of 15) beyond their control limits is VERY tiny if the process is actually in control. Say the probability of an observation going outside the control limits while the process is actually in control is about 0.01 (the exact probability depends on the in-control distribution of the data); then, if the process is in control, we expect a false alarm (i.e., an out-of-control signal caused by random chance) every 100 observations or so. The probability of observing 4 or more out-of-control signals (out of 15) while the process is in control is about 0.000012, so it's very unlikely that the signals are due to random chance. While an actual diagnosis would require you to look at the chart and possibly investigate the physical process, because the out-of-control points fall both below and above the control limits, I'm betting there was a scale shift (i.e., an increase in variance).
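The binomial tail probability quoted above is straightforward to reproduce in Python, under the same assumption of a 0.01 per-point false-alarm rate:

```python
from math import comb

def prob_at_least_k(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or more
    out-of-control signals in n points if each signals with probability p."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))

# 4 or more signals out of 15 points, with a ~1% false-alarm rate
# per point, has probability on the order of 1e-5 under an
# in-control process -- far too small to blame on chance.
p_val = prob_at_least_k(15, 4, 0.01)
```

The exact value depends on the assumed per-point rate, but any plausible rate leaves this event vanishingly unlikely, which is the whole argument.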
30,612
Given a control chart that shows the mean and upper/lower control limits, how do I tell if the cause of out of control points is assignable or not?
(Sorry for posting a new answer, I can't reply to comments directly yet) I don't really agree with the statement: "Apparently, if you cross either the UCL or LCL, there has to be an assignable cause" To keep things simple, if your in-control distribution is N(0,1), then you will still obtain false alarms once every 370 observations, on average, using a UCL of 3 and LCL of -3. When the chart signals, the process needs to be investigated. Only then can a reason for the signal be assigned (i.e., a process change or random error). Setting the UCL and LCL requires the user to balance the desired false alarm/missed detection rate (analogous to the Type I/Type II error trade-off in hypothesis testing). You can also wait until a few signals to actually stop and investigate the process, but in that case, you may detect the shift too late if it really occurred at the first signal. Again, you can't have something for nothing, and the user must use their judgment to decide how to set up the control chart and monitor the process.
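The "false alarm every 370 observations" figure follows directly from the normal tail probability at 3 sigma; a quick check in Python:

```python
import math

def false_alarm_prob(k):
    """P(|Z| > k) for a standard normal Z, i.e. the chance an
    in-control point falls outside +/- k sigma limits."""
    return math.erfc(k / math.sqrt(2))

def average_run_length(k):
    """Expected number of in-control points between false alarms
    (the in-control ARL) with limits at +/- k sigma."""
    return 1.0 / false_alarm_prob(k)

# At 3-sigma limits the per-point false-alarm probability is ~0.0027,
# so a false alarm occurs roughly every 370 points even when nothing
# has changed in the process.
arl = average_run_length(3)
```

Widening the limits lengthens the run between false alarms but delays detection of real shifts, which is exactly the trade-off the answer describes.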
30,613
Given a control chart that shows the mean and upper/lower control limits, how do I tell if the cause of out of control points is assignable or not?
I found something interesting tucked away in a study document from the IEEE geared toward this exam: Data points falling within the UCL and LCL range are considered to be in control and caused by chance causes. Outliers falling above the UCL or below the LCL are considered to be out of control and caused by assignable causes. If a number of points fall systematically above or below the mean (but are within the UCL and LCL) this may indicate a nonrandom out-of-control state. The goal of a control chart is to detect out-of-control states quickly. The chart, alone, will not indicate the root causes of the event, but it will provide investigative leads. Apparently, if you cross either the UCL or LCL, there has to be an assignable cause. This makes sense, given the Wikipedia definition of characteristics of assignable (special) cause: New, unanticipated, emergent or previously neglected phenomena within the system; Variation inherently unpredictable, even probabilistically; Variation outside the historical experience base; and Evidence of some inherent change in the system or our knowledge of it.
30,614
Good line color for "threshold" line in a time-series graph?
If it does not break your styleguide I would rather color the background of the plots red/(yellow/)green than just plotting a line. In my imagination this should make it pretty clear to a user that values are fine on green and to be checked on red. Just my 5¢.
30,615
Good line color for "threshold" line in a time-series graph?
To me, whether or not the line represents actual data seems irrelevant. What's the point of the plot? If it's so that somebody will do something when utilization crosses a threshold, the line marking the threshold had better be very visible. If the point of the plot is to give an overview of utilization over time, then why include the line at all? Just put the major gridlines of your plot at intervals that will coincide with your threshold (25% in your example), and let the reader figure it out. ... y'all been reading too much Tufte.
30,616
Good line color for "threshold" line in a time-series graph?
If this is about your "Qnotifier" I think that you should plot the threshold line in some darker gray so it is distinguishable but not disturbing. Then I would color the part of the plot that reaches over the threshold in some alarming hue, like red.
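That suggestion can be sketched with matplotlib. The series and the 25% threshold are made up for illustration:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np

threshold = 25.0  # hypothetical utilization threshold (%)
t = np.arange(48)
rng = np.random.default_rng(0)
util = 20 + 10 * np.sin(t / 5) + rng.normal(0, 2, size=t.size)

fig, ax = plt.subplots()
ax.plot(t, util, color="black", label="utilization")
# dark-gray dashed threshold line: visible but not disturbing
ax.axhline(threshold, color="0.4", linestyle="--", label="threshold")
# color only the part of the series that crosses the threshold
ax.fill_between(t, threshold, util, where=util > threshold,
                color="red", alpha=0.3, interpolate=True)
ax.set_xlabel("hour")
ax.set_ylabel("utilization (%)")
ax.legend()
fig.savefig("utilization.png")
```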
30,617
Good line color for "threshold" line in a time-series graph?
I would strongly enjoin you to avoid red as an indicator: there are many sorts of colour-deficiency that make this choice problematic (see eg http://en.wikipedia.org/wiki/Color_blindness#Design_implications_of_color_blindness ). The high-contrast option is I believe the best choice.
30,618
Is it possible to calculate median value and interquartile range for a set of numbers containing a range or inequalities?
In this case you can make a non-parametric estimate, although without important things like confidence intervals. You have a combination of left-censored (<1), right-censored (>90) and interval-censored (10-20) values. Although you might not think of what you have as a survival problem, a survival function $S(t)$ is just 1 minus the corresponding (cumulative) distribution function $F(t)$ (i.e., $S(t)=1-F(t)$), so the median of $F(t)$ is the value corresponding to a "survival" fraction of 0.5, the first quartile is the value for a survival fraction of 0.75, and so on. So you can use a survival modeling method designed to handle arbitrarily censored data to get estimates of quantiles. The R icenReg package can calculate the Turnbull nonparametric maximum-likelihood estimate of a survival curve based on such data (a generalization of the Kaplan-Meier method for interval-censored data). That should be more generally useful than a method that requires you to pre-rank the exact and interval values. To get a single non-parametric survival curve this way, provide a 2-column matrix with the lower and upper limits for each data point. For a known data point, those two values are identical. With your example percentage data (lower limit, 0; upper limit, 100): library(icenReg) datMat <- matrix(c(0,1,5,5,10,10,10,20,25,25,90,100),ncol=2,byrow=TRUE) datMat ## [,1] [,2] ## [1,] 0 1 ## [2,] 5 5 ## [3,] 10 10 ## [4,] 10 20 ## [5,] 25 25 ## [6,] 90 100 icTest <- ic_np(datMat) plot(icTest,bty="n") I didn't change the default axis labels, so your values correspond to "time" here. $S(t)$ is the survival function for your data, although the boxes might look strange. The package vignette explains: Looking at the plots, we can see a unique feature about the NPMLE for interval censored data. That is, there are two lines used to represent the survival curve. This is because with interval censored data, the NPMLE is not always unique; any curve that lies between the two lines has the same likelihood.
Based on the plot, you would accept the range of 10-20, corresponding to $S(t) = 0.5$, as including the median. The IQR would be 5 - 25 (corresponding to $S(t) = 0.75, S(t) = 0.25$). If you have a reasonable parametric form for your data you can do much more with this type of modeling, as Frank Harrell suggests in his answer.
30,619
Is it possible to calculate median value and interquartile range for a set of numbers containing a range or inequalities?
If you can order the levels of the observations, you can determine the median, and the 1st quartile and 3rd quartile. In your sample data, the observations could be ordered, with the exception that you have to decide if "10 - 20" is greater than "10", or if they would have the same rank when ranked. The IQR itself, as the difference between the 1st and 3rd quartiles, wouldn't necessarily make sense. There are different methods to determine the value of quantiles. For example, R has 9 options ( www.rdocumentation.org/packages/stats/versions/3.6.2/topics/quantile ). Some are more appropriate for non-continuous data. With discontinuous data, you may get answers that are e.g. "Between 'good' and 'very good'". In your sample data, the median would be between "10" and "10 - 20", assuming those two are distinct when ranked. The following can be run in R (or at e.g. rdrr.io/snippets/ ). This assumes that "10 - 20" is greater than "10", and uses R quantile type 1, which will not return answers that straddle two levels. Observed = c("5", "10", "< 1", "10 - 20", "25", "> 90") Obs.factor = factor(Observed, ordered = TRUE, levels = c("< 1", "5", "10", "10 - 20", "25", "> 90") ) quantile(Obs.factor, type=1, probs=0.50) quantile(Obs.factor, type=1, probs=0.25) quantile(Obs.factor, type=1, probs=0.75)
30,620
Is it possible to calculate median value and interquartile range for a set of numbers containing a range or inequalities?
If you only had "< 1" occurring, you could compute the median. In general you can't estimate what you want unless you assume a smooth parametric distribution and explicitly handle left, right, and interval censoring in computing the likelihood function, so that you can get maximum likelihood estimates of the parameters of that distribution. Then you compute the mean and quantiles, which are functions of those underlying parameters. It's pretty involved.
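As a sketch of what that involves, here is a maximum-likelihood fit to the example values in which each kind of censoring enters the likelihood differently. The log-normal family and the starting values are my own assumptions, not part of the answer:

```python
import numpy as np
from scipy import stats, optimize

# Assume a log-normal distribution (i.e., a normal model on the log scale).
exact = np.log([5.0, 10.0, 25.0])   # fully observed values
left = np.log([1.0])                # "< 1"  -> left-censored at 1
right = np.log([90.0])              # "> 90" -> right-censored at 90
ivl = np.log([[10.0, 20.0]])        # "10 - 20" -> interval-censored

def nll(params):
    """Negative log-likelihood with left, right, and interval censoring."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)  # keep sigma positive
    ll = stats.norm.logpdf(exact, mu, sigma).sum()   # density for exact points
    ll += stats.norm.logcdf(left, mu, sigma).sum()   # P(X < 1)
    ll += stats.norm.logsf(right, mu, sigma).sum()   # P(X > 90)
    ll += np.log(stats.norm.cdf(ivl[:, 1], mu, sigma)
                 - stats.norm.cdf(ivl[:, 0], mu, sigma)).sum()  # P(10 < X < 20)
    return -ll

res = optimize.minimize(nll, x0=[np.log(10.0), 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])

# Quantiles are then functions of the fitted parameters:
median = np.exp(mu_hat)
q1 = np.exp(stats.norm.ppf(0.25, mu_hat, sigma_hat))
q3 = np.exp(stats.norm.ppf(0.75, mu_hat, sigma_hat))
```

With only six observations the estimates are of course very noisy; the point is just how the likelihood is assembled.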
30,621
Consistency between two outputs of a neural network
Another method would be to build two neural networks. The first NN is trained to predict the destination. For the second NN, include the destination predicted by the first NN as an input feature and train the network to predict the class. The second network should then learn to only predict classes that are options for the predicted destination. Edited in response to @Jivan's comment. There are more complex methods of multi-label classification, but I'd keep it simple if possible, and try either @Dikran's or my approach first. They are both standard ways of implementing multi-label classification (see this Medium post). Dikran's method is a Label Powerset and mine is a Classifier Chain. As you've pointed out, there are pros and cons to both these methods. If neither of these produces a good enough result, you could try a variation of the classifier chain, where you build one network to predict one label from the union of destinations and classes. Then train two further networks, one that predicts the destination given a predicted class and the other that predicts the class given a predicted destination. At inference time, you would use the first network to predict either a class or destination, then use the appropriate second network to predict the other label.
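A toy version of that chain, using synthetic data and scikit-learn decision trees in place of the neural networks (the data-generating rule is my own illustration):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 3))
dest = (X[:, 0] > 0).astype(int)                         # 0 = "London", 1 = "Paris"
cls = np.where(dest == 0, 0, (X[:, 1] > 0).astype(int))  # destination 0 only offers class 0

# First model: predict the destination
dest_model = DecisionTreeClassifier(random_state=0).fit(X, dest)
d_pred = dest_model.predict(X)

# Second model: the predicted destination becomes an extra input feature
X_aug = np.column_stack([X, d_pred])
cls_model = DecisionTreeClassifier(random_state=0).fit(X_aug, cls)
c_pred = cls_model.predict(X_aug)

# On the training data the chain should never pair destination 0 with class 1
violations = int(((d_pred == 0) & (c_pred == 1)).sum())
```

The second model can only learn the constraint because it sees the (predicted) destination; a true neural-network version would work the same way, just with a network in place of each tree.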
30,622
Consistency between two outputs of a neural network
If consistency is a problem I would make it a single classification task where "London first class", "London second class", ..., "Rome first class" and "Rome second class" were distinct classes, rather than make it two distinct classification tasks. Your current network architecture gives the a priori hint that they are completely distinct classification tasks, but if e.g. some destinations don't have both classes, then there is a dependence between the two sub-classes. Combining the two classification tasks into one would be the easiest way of putting the dependence back into the model. At the moment, I think your model is predicting that the customer would opt for a first class ticket if it were available, which is not an unreasonable answer - it is just generalising the idea that people in relatively well paid occupations (e.g. accountant) tend to travel first class. You could always just ignore the class output where it is not an option. Does the network really need so many layers? It could be that a single hidden layer may be sufficient for this problem and the layer above that is not actually doing much useful processing, in which case the division of the network may not be that meaningful.
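A minimal sketch of that joint-label ("label powerset") idea, with synthetic data and a scikit-learn tree standing in for the network; all names and the data-generating rule are illustrative:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
dest = np.where(X[:, 0] > 0, "Paris", "London")
# Pretend London has no first class, so that pair never occurs in the data
cls = np.where((dest == "Paris") & (X[:, 1] > 0), "1st", "2nd")

# One classification task over the joint labels
joint = np.char.add(np.char.add(dest, "|"), cls)  # e.g. "Paris|1st"
model = DecisionTreeClassifier(random_state=0).fit(X, joint)
pred = model.predict(X)
# A classifier can only predict labels it has seen,
# so the infeasible "London|1st" can never come out.
```

Splitting a prediction back into its two parts is then just `pred[0].split("|")`.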
30,623
Consistency between two outputs of a neural network
Cost function In what way would your neural network be able to know that the 1st class with destination London is not feasible? How do you teach that to the network? In what way did you 'punish' the network during training for wrong predictions? It is important that the training phase allows the network to train the desired features. In your question, you did not tell which cost function you used to train the model. It is also not clear what type of output is created by your model and what you would desire from it. Do I guess correctly that the output is just a single class prediction? In that case, what class prediction would you favor in the example from the question? Is 'London 2nd class' a better prediction than 'London 1st class'? When this cost function only cares about a single error then it is going to care less about combined errors. That might lead to your problem (I am assuming that this is how your cost function is created, but it is not clear). Predicting London + 1st class will be wrong in the 89302 cases when the true value is London + 2nd class. But the choice to predict the 1st class instead of 2nd class might be rewarded in the 48516 + 41411 + 38186 + 35247 + 28512 cases when the true value is Paris/Rome/Berlin/Madrid/Rotterdam + 1st class (I am not sure, but I guess that your cost function is doing this). You can punish the system for making predictions about 1st class when it is in London, but at the same time you reward 1st class predictions when they occur in other cities. So you are getting London 1st class as a result. Type of output I mentioned earlier that I am guessing that your model is just giving a single class prediction. I am guessing this based on your situation as well as on the phrase For example, for a given entry, the network would predict "London" and "1st Class" If that is the case then you might consider using a different type of output. Instead of predicting a single class you could have as output a vector of probabilities for all desired combinations of destinations and classes (as well as other aspects that you might have in your model). Then you could value the predictions and perform the training based on a likelihood function of a categorical distribution. When you apply this model (some online shopping tool or some help for an airline company?) then it will not give a single class as output, but instead it could give a ranking of the top destinations. Network structure What kind of dense neural network do you have and how did you train it? It might be imaginable that there should be a node in some of those layers that gets trained to deal with the London + 2nd class case specifically. But, how many layers do you have, how many nodes per layer do you have, how did you do cross-validation? It is imaginable that this error/false-prediction might occur. But it is difficult to say why and how exactly it occurs without details.
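The "vector of probabilities over feasible combinations" described above could look like the following; the destination names, the infeasible pair, and the logit values are made up for illustration:

```python
import numpy as np

destinations = ["London", "Paris", "Rome"]
classes = ["1st", "2nd"]
# Joint combinations, dropping the infeasible "London 1st" pair up front
combos = [(d, c) for d in destinations for c in classes
          if not (d == "London" and c == "1st")]

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# One logit per feasible combination, e.g. from the network's final layer
logits = np.array([2.0, 0.5, 1.0, 0.2, -1.0])
probs = softmax(logits)

# Instead of a single class, the model can return a ranking
ranking = sorted(zip(combos, probs), key=lambda t: -t[1])
```

Training this with a categorical (cross-entropy) likelihood over the feasible combinations makes the infeasible pair impossible by construction.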
30,624
Confidence interval / p-value duality: don't they use different distributions?
Basically the duality holds, see also this question about the duality: Can we reject a null hypothesis with confidence intervals produced via sampling rather than the null hypothesis? I can think of two reasons to say that it doesn't hold (see below), but it is not because the duality is wrong; instead it is more about details and semantics (every kid has a parent, but that doesn't mean that each kid and each parent are pairs). No, not correct 1 There is no single the p-value and single the confidence interval. Instead, there are multiple ways to define p-values and multiple ways to define confidence intervals. So a particular confidence interval and particular construction of a p-value do not need to correspond with each other. Yes, correct 1 But, there is a correspondence such that every confidence interval can be used as a hypothesis test, and confidence distributions could be used to compute p-values for particular parameters/hypotheses. The reason is that a $(1-\alpha)$ confidence interval contains the parameter a fraction $(1-\alpha)$ of the time no matter what the true parameter is. So given that a hypothesis is true, the probability that it is outside a $(1-\alpha)$ confidence interval is $\alpha$. The false rejection probability, if you use confidence intervals, is $\alpha$. The only cases where this does not work are when the confidence intervals are not exact. E.g. sometimes confidence intervals are approximations or estimates. But then, you should allow the same freedom for p-values, which can also be approximations or estimates. No, not correct 2 The other way around is not necessarily true. With every p-value (or more generally the construction method for a p-value) you cannot always construct a confidence interval. Instead, sometimes you end up with a confidence region (a set of disjoint intervals).
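The interval-to-test direction can be checked numerically: for the usual t-based interval, the two-sided p-value equals exactly 0.05 at the 95% interval endpoints. The data here are simulated for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(5.0, 2.0, size=30)
n, m, s = len(x), x.mean(), x.std(ddof=1)

# Ordinary 95% t-interval for the mean
tcrit = stats.t.ppf(0.975, n - 1)
lo, hi = m - tcrit * s / np.sqrt(n), m + tcrit * s / np.sqrt(n)

def pval(mu0):
    """Two-sided one-sample t-test p-value for H0: mean = mu0."""
    t = (m - mu0) / (s / np.sqrt(n))
    return 2 * stats.t.sf(abs(t), n - 1)

# mu0 lies inside (lo, hi) exactly when pval(mu0) > 0.05,
# and the p-value hits 0.05 precisely at the endpoints.
```

Here the duality is exact because the interval and the test are built from the same pivot; for approximate intervals (as in the binomial example in another answer) the match is only approximate.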
Confidence interval / p-value duality: don't they use different distributions?
Basically the duality holds, see also this question about the duality: Can we reject a null hypothesis with confidence intervals produced via sampling rather than the null hypothesis? I can think of t
Confidence interval / p-value duality: don't they use different distributions? Basically the duality holds, see also this question about the duality: Can we reject a null hypothesis with confidence intervals produced via sampling rather than the null hypothesis? I can think of two reasons to say that it doesn't hold (see below), but it is not because the duality is wrong and instead it is more about details and semantical (every kid has a parent, but that doesn't mean that each kid and each parent are pairs). No, not correct 1 There is no single the p-value and single the confidence interval. Instead, there are multiple ways to define p-values and multiple ways to define confidence intervals. So a particular confidence interval and particular construction of a p-value do not need to correspond with each other. Yes, correct 1 But, there is a correspondence such that every confidence interval can be used as a hypothesis test, and confidence distributions could be used to compute p-values for particular parameters/hypotheses. The reason is that a confidence interval contains the parameter p% of the time no matter what the true parameter is. So given that a hypothesis is true, the probability that it is outside a p% confidence interval is p%. The false rejection probability, if you use confidence intervals, is p%. The only cases where this does not work are when the confidence intervals are not exact. E.g. sometimes confidence intervals are approximations or estimates. But then, you should allow the same freedom for p-values which can also be approximations or estimates. No, not correct 2 The other way around is not necessarily true. With every p-value (or more generally the construction method for a p-value) you can not always construct a confidence interval. Instead, sometimes you end up with a confidence region (a set of disjoint intervals).
Confidence interval / p-value duality: don't they use different distributions?
This is not correct in general. I'll provide a simple example.

A popular test for a binomial proportion is derived from the central limit theorem. If $\pi$ is the true risk for the outcome, then asymptotically, $$ p \stackrel{d}{\approx} \mathcal{N}[\pi, \pi(1-\pi) / n] $$ where $p$ is our estimated risk and $n$ is our sample size. The test is then found by standardizing $p$ from this distribution, using either the estimated risk in the variance (Wald test) or the risk under the null (score test). The test statistic is $$ Z=\frac{p-\pi_{0}}{\sqrt{\frac{p\left(1-p\right)}{n}}} $$ and the associated confidence interval is $$ \left(\widehat{\pi}_{L}, \widehat{\pi}_{U}\right)=p \pm Z_{1-\alpha / 2} \sqrt{p(1-p) / n}. $$

Your points in the bullets are true for this test and the associated confidence interval because the latter is derived from the former. However, they fail in general, as many different confidence intervals exist for the binomial$^{1}$, all with close to nominal coverage and slightly different widths. It could be the case that the test of proportions shown above yields a p-value small enough to reject the null, but a confidence interval other than the one I've posted covers the null value.

We can demonstrate this with some R code. I'll calculate the confidence intervals for a range of outcomes using a Wilson score interval and the asymptotic interval. You will see they do not line up exactly, meaning some intervals cover some values while others don't, even though the same data are used to create both. Hence, using some intervals we would reject the null, so to speak, while using others would lead to a failure to reject the null.

library(binom)

n = 20
x = seq(0, n, 2)
a = binom.wilson(x, n)
b = binom.asymp(x, n)

plot(a$upper, b$upper, xlab = "Wilson Upper Limit",
     ylab = 'Asymptotic Upper Limit', type = 'l', col = 'red')
abline(0, 1)

References

1. Brown, Lawrence D., T. Tony Cai, and Anirban DasGupta. "Confidence intervals for a binomial proportion and asymptotic expansions." The Annals of Statistics 30.1 (2002): 160-201.
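To complement the R plot, here is a rough Python sketch (my own, not part of the original answer; it hard-codes the Wald and Wilson formulas rather than calling the binom package) showing one concrete disagreement: with 1 success in 20 trials, the 95% Wald interval excludes $\pi_0 = 0.16$ while the 95% Wilson interval covers it.

```python
from math import sqrt

def wald_ci(x, n, z=1.96):
    """Asymptotic (Wald) interval: p +/- z * sqrt(p(1-p)/n)."""
    p = x / n
    half = z * sqrt(p * (1 - p) / n)
    return p - half, p + half

def wilson_ci(x, n, z=1.96):
    """Wilson score interval, obtained by inverting the score test."""
    p = x / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# x = 1 success out of n = 20: the two 95% intervals disagree about pi0 = 0.16
wald = wald_ci(1, 20)      # upper limit below 0.16 -> "reject"
wilson = wilson_ci(1, 20)  # upper limit above 0.16 -> "fail to reject"
```

They also disagree at $\pi_0 = 0$: the Wald interval dips below zero while the Wilson interval stays strictly positive, which is one reason the Wald interval is criticized in the reference above.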
Confidence interval / p-value duality: don't they use different distributions?
Yes, both formulations are identical, because this is just the definition of a confidence interval. Formally, if you have measured the parameter estimate $\theta_0$ for the parameter $\theta$, a $1-\alpha$ confidence interval $[\theta_1,\theta_2]$ is given by \begin{eqnarray*}& & P(\hat{\theta} \geq \theta_0|\theta=\theta_1)=\alpha/2 \\ \mbox{and}\quad & & P(\hat{\theta}\leq \theta_0|\theta=\theta_2)=\alpha/2 \end{eqnarray*} This definition distributes the error probability equally on both sides, so it is only equivalent to a two-sided test.

Compared to the hypothesis-testing scenario, the condition in the probability is the null hypothesis, which means that $\theta_1$ and $\theta_2$ are just the borders of the rejection region.

Remark: Interestingly, most textbooks on statistics give a different definition of a confidence interval: $P(\theta\in[\theta_1,\theta_2])=1-\alpha$. This definition depends on the unknown parameter value $\theta$ and only allows solving for $\theta_{1,2}$ in very special cases: the asymptotic normal approximation ("Wald interval") for the binomial proportion (which is problematic, as pointed out in other answers) and the confidence interval for a statistical mean. I found the general definition in DiCiccio, Efron: "Bootstrap confidence intervals." Statistical Science, pp. 189-228, 1996.
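As a sketch of how the two defining equations pin down $\theta_1$ and $\theta_2$ in one of those "very special cases" (a normal mean with known $\sigma$), here is some illustrative Python (my own construction, not from the answer): solving $P(\hat{\theta} \geq \theta_0 \mid \theta=\theta_1)=\alpha/2$ gives $\theta_1 = \theta_0 - z_{1-\alpha/2}\,\sigma/\sqrt{n}$, and symmetrically for $\theta_2$.

```python
from statistics import NormalDist

def ci_from_definition(theta0, sigma, n, alpha=0.05):
    """Solve the two defining tail-probability equations for a normal mean
    with known sigma: theta1 = theta0 - z * sigma/sqrt(n), theta2 symmetric."""
    se = sigma / n ** 0.5
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return theta0 - z * se, theta0 + z * se

theta1, theta2 = ci_from_definition(theta0=1.0, sigma=2.0, n=100)

# Verify the defining equations: each tail probability should equal alpha/2.
se = 2.0 / 100 ** 0.5
p_upper = 1 - NormalDist(mu=theta1, sigma=se).cdf(1.0)  # P(theta_hat >= theta0 | theta1)
p_lower = NormalDist(mu=theta2, sigma=se).cdf(1.0)      # P(theta_hat <= theta0 | theta2)
```

Both tail probabilities come out to $\alpha/2 = 0.025$, confirming that the interval endpoints satisfy the definition exactly in this case.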
How to determine the likelihood a random number generator is using a uniform distribution?
This is the birthday problem in another form. With $d$ equally likely days and $n$ independent draws, the expected number of distinct days drawn is $d(1-(1-\frac1d)^n)$ which in your case of $d=100$ and $n=40$ is about $33.1$, rather more than $15$. The probability of $x$ distinct days drawn is $\dfrac{d! \, S_2(n,x)}{(d-x)!\, d^n}$ where $S_2(n,x)$ is a Stirling number of the second kind. In your case of $d=100$ and $n=40$ and $x=15$ this probability is about $9.47\times 10^{-17}$ and for $x \le 15$ is about $9.61\times 10^{-17}$, both of which are extremely small. By contrast, the probability for $29 \le x \le 38$ is about $0.9765$. You can use this as a possible test that the draws are uniform and independent.
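Both formulas above are easy to check numerically. Below is a Python sketch (my own; the original answer contains no code) computing the expected number of distinct days and the exact probability of $x$ distinct days via the Stirling-number formula:

```python
from math import perm  # perm(d, x) = d! / (d - x)!

def stirling2(n, k):
    """Stirling numbers of the second kind via the standard recurrence
    S(n, k) = k * S(n-1, k) + S(n-1, k-1)."""
    S = [[0] * (k + 1) for _ in range(n + 1)]
    S[0][0] = 1
    for i in range(1, n + 1):
        for j in range(1, min(i, k) + 1):
            S[i][j] = j * S[i - 1][j] + S[i - 1][j - 1]
    return S[n][k]

def p_distinct(d, n, x):
    """P(exactly x distinct days among n independent uniform draws from d days)."""
    return perm(d, x) * stirling2(n, x) / d ** n

d, n = 100, 40
expected = d * (1 - (1 - 1 / d) ** n)  # about 33.1, as stated above
p15 = p_distinct(d, n, 15)             # extremely small, on the order of 1e-16
```

Summing `p_distinct(100, 40, x)` over all possible `x` returns 1, which is a useful self-check of the formula.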
How to determine the likelihood a random number generator is using a uniform distribution?
With $N = 365$ and $x = 23$ randomly generated numbers, your vetting procedure is similar to the famous birthday problem, in which one would expect matching numbers among the $x$ a little more than half the time. However, the likelihood of matching birthdays among $23$ is reasonably robust to real-life situations in which some months are more likely to generate actual human birthdays than others. Thus a failure to get one or more matches about half the time would cast doubt on the randomness of the 'generator', but getting matches nearly half the time would not be strong evidence that the numbers are generated truly at random.

Classic birthday problem with $365$ equally likely birthdays. By simulation in R, $P(Y = 0) = 0.494 \pm 0.003$ [the exact probability of $0$ matches is $0.4927$ to four places] and $E(Y) = 0.678\pm 0.005.$

set.seed(1234)
x = 23; N = 365
y = replicate(10^5, x-length(unique(sample(1:N,x,rep=T))))
mean(y==0); mean(y)
[1] 0.49395  # aprx P(No Match)
[1] 0.67842  # aprx E(Nr Matches)
2*sd(y==0)/sqrt(10^5)
[1] 0.003162062
2*sd(y)/sqrt(10^5)
[1] 0.005012195

With days not equally likely (roughly 95% and 105% as likely in the two halves of the year): $P(Y=0) = 0.491\pm 0.002,\ E(Y)=0.683\pm 0.003.$ Within the margin of simulation error, results are not significantly different from those for equally likely days.

set.seed(1235)
x = 23; N = 365; pr = c(rep(95, 180), rep(105, 185))
y = replicate(10^5, x-length(unique(sample(1:N,x,rep=T,p=pr))))
mean(y==0); mean(y)
[1] 0.49102
[1] 0.68265
sd(y==0)/sqrt(10^5)
[1] 0.001580892
sd(y)/sqrt(10^5)
[1] 0.002509512

The birthday problem has been shown with more extensive simulations not to be especially finicky in case birthdays are not exactly equally likely. There are lists of problems that are notoriously sensitive to imperfections in random number generators. You can google the 'Die Hard Battery' of especially finicky simulation problems that have been used to vet pseudorandom number generators.
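The exact probability of no match quoted above ($0.4927$) is the product $\prod_{i=0}^{22}(365-i)/365$. A short Python check (mine, not part of the original R answer):

```python
def p_no_match(x, days=365):
    """Exact probability that x independent uniform birthdays are all distinct."""
    p = 1.0
    for i in range(x):
        p *= (days - i) / days
    return p

p0 = p_no_match(23)  # about 0.4927, matching the simulation estimate above
```

This agrees with the simulated value $0.494 \pm 0.003$ to within its margin of error.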
Does PCA provide advantage if all PC's are used?
The PCs are just linear combinations of the original features. For example, if there are two features, $x$ and $y$, the features mapped onto the PCs will be something like $f_1=\alpha_1 x+\beta_1 y$ and $f_2=\alpha_2x+\beta_2y$. So, it's just a change of axes.

In ordinary linear regression, the target variable (call it $t$ here, to avoid a clash with the feature named $y$) is expressed as a linear combination of the features, i.e. $t=ax+by+k$. Using the new features, which are linear combinations of the old ones, generates an equivalent equation. For two features, this looks like the following: $$\begin{align}t&=cf_1+df_2+k=c(\alpha_1x+\beta_1y)+d(\alpha_2x+\beta_2y)+k\\&=\underbrace{(c\alpha_1+d\alpha_2)}_ax + \underbrace{(c\beta_1+d\beta_2)}_by+k\end{align}$$

This is the case for OLS, but in general, does using all PCs have an advantage? Maybe. Having orthogonal axes may be paramount for the downstream analyses you'll perform, depending on what you're after, so generalizing this to all of ML is not possible.
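The equivalence for OLS can be verified numerically. A rough Python/NumPy sketch (my own; the data and names are made up for illustration) fits OLS on the original features and on the scores for all the PCs, then compares fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))                      # two original features
t = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=50)

# Full PCA via SVD of the centered design: scores on ALL the PCs
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
F = Xc @ Vt.T                                     # rotated features f1, f2

def ols_fit_predict(Z, t):
    """OLS with intercept; returns fitted values."""
    Z1 = np.column_stack([np.ones(len(t)), Z])
    beta, *_ = np.linalg.lstsq(Z1, t, rcond=None)
    return Z1 @ beta

pred_orig = ols_fit_predict(X, t)
pred_pcs = ols_fit_predict(F, t)  # identical fitted values: just a rotation
```

Because the PC scores span the same column space as the original (centered) features, the two regressions produce identical fitted values, as the algebra above predicts.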
Does PCA provide advantage if all PC's are used?
@gunes is correct (+1) in terms of unpenalized ordinary least squares models. There is one situation in which PCA might be considered to "provide an advantage when all PCs are used," however, even in linear regression modeling. Principal components regression (PCR) selects only a subset of PCs, an all-or-none 1/0 weighting of the components, as James et al explain in Section 6.3.1 of ISLR. Chapter 6 also covers ridge regression as a "shrinkage" or penalization method. James et al then compare these approaches (page 236): One can even think of ridge regression as a continuous version of PCR That is, ridge regression uses all the PCs but gives them different non-zero weights rather than the all-or-none PC selection used by PCR. Page 79 of ESL has more details. In that sense ridge regression does use all the PCs, just not equally. But that's not PCR in the sense of your question.
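The "continuous PCR" view can be made concrete through the SVD: in the ESL formulation, ridge shrinks the component of $y$ along the $j$-th principal direction by the factor $d_j^2/(d_j^2+\lambda)$, where $d_j$ are the singular values. A rough NumPy sketch (my own, for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 4))
X = X - X.mean(axis=0)                      # centered design
y = X @ np.array([1.0, 0.5, 0.0, -0.5]) + rng.normal(size=60)

U, d, Vt = np.linalg.svd(X, full_matrices=False)
lam = 10.0

# Ridge fitted values via the SVD: each principal direction is shrunk
# by d_j^2 / (d_j^2 + lam); PCR would instead use weights of exactly 0 or 1.
shrink = d ** 2 / (d ** 2 + lam)
yhat_svd = U @ (shrink * (U.T @ y))

# The same fitted values from the usual ridge normal equations
beta = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ y)
yhat_ridge = X @ beta
```

The shrinkage weights all lie strictly between 0 and 1 and decrease with the singular value, which is exactly the sense in which ridge "uses all the PCs, just not equally."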
Is there a name for the midpoint of a sum (not a mean or median)?
This looks like the weighted median:

> x <- c(1, 1, 1, 2, 7, 8, 10, 10, 20, 20)
> median(rep(x, times = x))
[1] 15

If you do not use R: rep(x, times = x) generates a new vector with each element repeated its own number of times. Note that this gives a value half-way between 10 and 20 for your example, as that is how it defines the median, rather than the 20 which you quote.

In a comment on the original question, Federico Poloni poses the data set {1, 3, 4, 4}. If we take the definition of the median as that value such that no more than half the observations lie above it and no more than half lie below it, then the analogous procedure here would be to take the value such that the sum of the values above it is no more than half the total sum, and vice versa. So the answer would be 3, since any other value has more than half the total sum above or below it.

In another answer, Carlo makes the valid point that this is an unusual choice for a measure of central tendency.
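For readers outside R, here is a Python sketch of the same computation (my own function, written to mirror median(rep(x, times = w)) for integer weights without actually expanding the vector):

```python
def weighted_median(values, weights):
    """Median of `values` where each value counts `weight` times;
    mirrors R's median(rep(x, times = w)) for integer weights."""
    pairs = sorted(zip(values, weights))
    total = sum(weights)
    # 1-based positions of the middle element(s) of the expanded vector
    if total % 2 == 0:
        targets = [total // 2, total // 2 + 1]
    else:
        targets = [(total + 1) // 2]
    picked = []
    for target in targets:
        cum = 0
        for v, w in pairs:
            cum += w
            if cum >= target:
                picked.append(v)
                break
    return sum(picked) / len(picked)

x = [1, 1, 1, 2, 7, 8, 10, 10, 20, 20]
wm = weighted_median(x, x)  # 15.0, matching the R call above
```

Walking the cumulative weights instead of expanding the vector keeps this efficient even when the weights are large.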
Is there a name for the midpoint of a sum (not a mean or median)?
I have never heard of such a statistic, but it occurs to me that it has terrible properties as an estimator of central tendency: supposing that your distribution is always positive (negative values can actually make this worse), it is also probably quite skewed, meaning that your statistic will result in a very high value, next to the highest observed values. Suppose, for instance, that every value in your sample is two times the preceding one: your statistic will pick the last value of them all. This is also quite problematic from a probabilistic perspective, while the mean and median have simple probabilistic meanings.
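The doubling example is easy to demonstrate. Below is an illustrative Python sketch (the function name sum_midpoint is my own label for the statistic described in the question: the sorted value at which the cumulative sum first reaches half of the total):

```python
def sum_midpoint(values):
    """Hypothetical 'midpoint of the sum': the sorted value at which the
    cumulative sum first reaches half of the total sum."""
    vs = sorted(values)
    half = sum(vs) / 2
    cum = 0
    for v in vs:
        cum += v
        if cum >= half:
            return v

doubling = [2 ** k for k in range(10)]  # 1, 2, 4, ..., 512
m = sum_midpoint(doubling)              # picks the largest value, 512
```

For the doubling sequence the sum of all earlier values ($2^{n-1}-1$) never reaches half the total ($2^n-1$), so the statistic always lands on the maximum, illustrating the failure mode described above.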
Is it possible to perform a regression where you have an unknown / unknowable feature variable?
The complete formula for a linear model is (in quasi-matrix form) $$Y=\beta X+\epsilon$$ So we have multiple coefficients for the variables that we are controlling for, and then we have $\epsilon$, which is everything else that we did not explain with our included variables. Into this error term go all the variables which we did not consider, either because we do not have information for them or because we simply do not know of them (random deviation). So there is just no way for you to know what in this term belongs to which unknown variable.
Is it possible to perform a regression where you have an unknown / unknowable feature variable?
How about if I have some knowledge of the statistics of how x3 is distributed?

If you do the regression of $y$ on $x_1$ and $x_2$, and you're willing to make educated guesses about how $x_3$ correlates with each of these, you can calculate what those guesses would entail for how your estimated coefficients would change if you could observe $x_3$ and run the full regression. Suppose, for instance, that $x_3$ isn't correlated with $x_1$. Then $$\alpha_{2, \text{your regression}} =\alpha_{2, \text{full regression}} + \alpha_3 \cdot \frac{\operatorname{cov}(x_3, x_2)}{\operatorname{var}(x_2)}$$ So if $x_3$ is likely to be only weakly correlated with $y$ or with $x_1$ and $x_2$, not much would change. And if it is strongly correlated, you can use these omitted-variable-bias formulas to predict how things would change.
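The omitted-variable-bias formula can be checked by simulation. A rough Python/NumPy sketch (my own; the coefficients and correlation structure are made up for illustration) generates data where $x_3$ is correlated with $x_2$ but not $x_1$, omits $x_3$, and compares the short-regression coefficient with the formula's prediction:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = 0.7 * x2 + rng.normal(size=n)  # correlated with x2, not with x1
y = 1.0 + 2.0 * x1 + 3.0 * x2 + 1.5 * x3 + rng.normal(size=n)

def ols(cols, y):
    """OLS with intercept; returns [intercept, coefficients...]."""
    X1 = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X1, y, rcond=None)[0]

short = ols([x1, x2], y)  # x3 omitted; short[2] is the coefficient on x2

# OVB formula: alpha2_short ~ alpha2_full + alpha3 * cov(x3, x2) / var(x2)
predicted_a2 = 3.0 + 1.5 * (np.cov(x3, x2)[0, 1] / np.var(x2))
```

With the large simulated sample, the coefficient on $x_2$ in the short regression is close to the formula's prediction of roughly $3 + 1.5 \times 0.7 = 4.05$, while the coefficient on $x_1$ stays near its true value of 2, since $x_3$ is uncorrelated with $x_1$.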
Is it possible to perform a regression where you have an unknown / unknowable feature variable?
It is always possible... but your estimates will be biased in many cases. The most favorable case occurs:

(a) When $x_{3n}$ is not correlated with the other regressors. In this case, regress $y_n$ on $(\iota,x_{1},x_{2})$ and you have unbiased estimates of $a_0,a_1,a_2$ (Frisch-Waugh-Lovell theorem).

(b) If in addition to (a) you know $\sigma$ and $x_3 \sim \mathcal{N}(0, \sigma^2)$, then you can even identify $a_3$: draw $N$ iid values $x_{3n} \sim \mathcal{N}(0, \sigma^2)$ and regress $y_n$ on $(\iota,x_{1},x_{2},x_{3})$.
Is there formal test of non-linearity in linear regression?
Box-Tidwell was developed for ordinary least squares regression models. So if you were inclined to use Box-Tidwell for this, that's actually what it's designed for. It's not the only possible approach, but it sounds like an approach you're already familiar with. However, I'm not convinced that (most times it's used) a formal test is appropriate - I believe it usually answers the wrong question, while the diagnostic plots you've been looking at come closer to answering a useful question. [I have a similar opinion of many other tests of regression assumptions]
Is there a formal test of non-linearity in linear regression?
The best formal tests come from relaxing the linearity assumption, then seeing if removing the nonlinearities damages the explained variation in Y. For example you can expand X using a regression spline and test the nonlinear components. My RMS Course Notes go into the details. But once you've allowed for the possibility of nonlinearity, you distort statistical inference by removing the nonlinear terms. The real numerator degrees of freedom for the regression are the number of chances you give the model, which must take into account the nonlinear terms. So the best advice overall is to allow effects not known to be linear to be nonlinear and be done with it. This will preserve confidence interval coverage, etc.
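The spline expansion needs a basis library; as a simplified stand-in for the same idea, here is a plain-Python sketch that expands x with a quadratic term and F-tests the added nonlinear component against the purely linear fit (simulated data and coefficients are made up):

```python
import random

random.seed(1)

# y depends mildly nonlinearly on x (illustrative coefficients).
n = 500
x = [random.uniform(-2, 2) for _ in range(n)]
y = [1.0 + 1.0*xi + 0.3*xi**2 + random.gauss(0, 1) for xi in x]

def ols_rss(cols, y):
    """Residual sum of squares from OLS of y on the given columns."""
    p = len(cols)
    # Normal equations (X'X) b = X'y, solved by Gaussian elimination.
    A = [[sum(ci*cj for ci, cj in zip(cols[i], cols[j])) for j in range(p)]
         for i in range(p)]
    b = [sum(ci*yi for ci, yi in zip(cols[i], y)) for i in range(p)]
    for i in range(p):                       # forward elimination
        for j in range(i + 1, p):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f*ai for aj, ai in zip(A[j], A[i])]
            b[j] -= f * b[i]
    beta = [0.0] * p
    for i in reversed(range(p)):             # back substitution
        beta[i] = (b[i] - sum(A[i][j]*beta[j] for j in range(i+1, p))) / A[i][i]
    resid = [yi - sum(beta[k]*cols[k][t] for k in range(p))
             for t, yi in enumerate(y)]
    return sum(r*r for r in resid)

ones = [1.0] * n
x2 = [xi*xi for xi in x]
rss_lin  = ols_rss([ones, x], y)             # linear fit (2 params)
rss_quad = ols_rss([ones, x, x2], y)         # with nonlinear term (3 params)
F = (rss_lin - rss_quad) / (rss_quad / (n - 3))
print(F)  # large F => the nonlinear component matters
```

With a real spline basis the test would have as many numerator degrees of freedom as nonlinear basis terms, but the mechanics are the same.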
30,638
Is there a formal test of non-linearity in linear regression?
Fit a non-linear regression (e.g. a spline model such as a GAM) and then compare it to the linear model using AIC or a likelihood ratio test. This is a simple and intuitive way of testing for non-linearity. If the test rejects, or if AIC prefers the GAM, then conclude there are non-linearities.
30,639
Relation between independence and correlation of uniform random variables
Independent implies uncorrelated but the implication doesn't go the other way. Uncorrelated implies independence only under certain conditions, e.g. if you have a bivariate normal, it is the case that uncorrelated implies independent (as you said). It is easy to construct bivariate distributions with uniform margins where the variables are uncorrelated but are not independent. Here are a few examples:

1. Consider an additional random variable $B$ which takes the values $\pm 1$ each with probability $\frac12$, independent of $X$. Then let $Y=BX$.

2. Take the bivariate distribution of two independent uniforms and slice it in 4 equal-size sections on each margin (yielding $4\times 4=16$ pieces, each of size $\frac12\times\frac12$). Now take all the probability from the 4 corner pieces and the 4 center pieces and put it evenly into the other 8 pieces.

3. Let $Y = 2|X|-1$.

In each case, the variables are uncorrelated but not independent (e.g. if $X=1$, what is $P(-0.1<Y<0.1)\,$?). If you specify some particular family of bivariate distributions with uniform margins it might be possible that under that formulation the only uncorrelated one is independent. Then being uncorrelated would imply independence. For example, if you restrict your attention to say the Gaussian copula, then I think the only uncorrelated one has independent margins; you can readily rescale that so that each margin is on $(-1,1)$.
Some R code for sampling from and plotting these bivariates (not necessarily efficiently):

n <- 100000
x <- runif(n, -1, 1)
b <- rbinom(n, 1, .5)*2 - 1
y1 <- b*x
y2 <- ifelse(0.5 < abs(x) & abs(x) < 1, runif(n, -.5, .5), runif(n, 0.5, 1)*b)
y3 <- 2*abs(x) - 1
par(mfrow=c(1, 3))
plot(x, y1, pch=16, cex=.3, col=rgb(.5, .5, .5, .5))
plot(x, y2, pch=16, cex=.5, col=rgb(.5, .5, .5, .5))
abline(h=c(-1, -.5, 0, .5, 1), col=4, lty=3)
abline(v=c(-1, -.5, 0, .5, 1), col=4, lty=3)
plot(x, y3, pch=16, cex=.3, col=rgb(.5, .5, .5, .5))

(In this formulation, $(Y_2, Y_3)$ gives a fourth example.) [Incidentally, by transforming all of these to normality (i.e. transforming $X$ to $\Phi^{-1}(\frac12(X+1))$ and so forth), you get examples of uncorrelated normal random variables that are not independent. Naturally they aren't jointly normal.]
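A quick numerical check of the $Y = 2|X|-1$ example in plain Python (the sample correlation is Monte Carlo, so only approximately zero):

```python
import random

random.seed(3)

# X ~ U(-1, 1), Y = 2|X| - 1: uncorrelated but clearly dependent.
n = 100_000
x = [random.uniform(-1, 1) for _ in range(n)]
y = [2 * abs(u) - 1 for u in x]

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va * vb) ** 0.5

print(corr(x, y))                    # essentially zero: uncorrelated
print(corr([abs(u) for u in x], y))  # 1 up to rounding: Y is a function of X
```

The second correlation exposes the dependence: $Y$ is exactly linear in $|X|$, so the pair cannot be independent even though the plain correlation vanishes.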
30,640
Quantifying dependence of Cauchy random variables
Just because they don't have a covariance doesn't mean that the basic $x^t\Sigma^{-1} x$ structure usually associated with covariances can't be used. In fact, the multivariate ($k$-dimensional) Cauchy can be written as: $$f({\mathbf x}; {\mathbf\mu},{\mathbf\Sigma}, k)= \frac{\Gamma\left(\frac{1+k}{2}\right)}{\Gamma(\frac{1}{2})\pi^{\frac{k}{2}}\left|{\mathbf\Sigma}\right|^{\frac{1}{2}}\left[1+({\mathbf x}-{\mathbf\mu})^T{\mathbf\Sigma}^{-1}({\mathbf x}-{\mathbf\mu})\right]^{\frac{1+k}{2}}} $$ which I have lifted from the Wikipedia page. This is just a multivariate Student-$t$ distribution with one degree of freedom. For the purposes of developing intuition, I would just use the normalized off-diagonal elements of $\Sigma$ as if they were correlations, even though they are not. They reflect the strength of the linear relationship between the variables in a way very similar to that of a correlation; $\Sigma$ has to be positive definite symmetric; if $\Sigma$ is diagonal, the variates are independent, etc. Maximum likelihood estimation of the parameters can be done using the E-M algorithm, which in this case is easily implemented. The log of the likelihood function is: $$\mathcal{L}(\mu, \Sigma) = -{n\over 2}\log|\Sigma| - {k+1 \over 2}\sum_{i=1}^n\log(1+s_i)$$ where $s_i = (x_i-\mu)^T\Sigma^{-1}(x_i-\mu)$. Differentiating leads to the following simple expressions: $$\mu = \sum w_ix_i/\sum w_i$$ $$\Sigma = {1 \over n}\sum w_i(x_i-\mu)(x_i-\mu)^T$$ $$w_i = (1+k)/(1+s_i)$$ The E-M algorithm just iterates over these three expressions, substituting the most recent estimates of all the parameters at each step. For more on this, see Estimation Methods for the Multivariate t Distribution, Nadarajah and Kotz, 2008.
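A plain-Python sketch of that E-M iteration for $k=2$. The simulation settings (location, scatter, sample size, fixed number of sweeps) are invented; a real implementation would monitor convergence instead of running a fixed 100 iterations:

```python
import random

random.seed(4)

# Simulate bivariate Cauchy data (multivariate t with 1 df, k = 2):
# x = z / sqrt(u), z ~ N(0, Sigma), u ~ chi-square(1).
n, k = 4000, 2
rho = 0.6
data = []
for _ in range(n):
    g1, g2 = random.gauss(0, 1), random.gauss(0, 1)
    z1, z2 = g1, rho * g1 + (1 - rho * rho) ** 0.5 * g2
    u = random.gauss(0, 1) ** 2
    data.append((z1 / u ** 0.5, z2 / u ** 0.5))

mu = [0.0, 0.0]
S = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(100):
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    Si = [[S[1][1] / det, -S[0][1] / det],
          [-S[1][0] / det, S[0][0] / det]]
    # E step: w_i = (1 + k) / (1 + s_i), s_i = (x - mu)' S^{-1} (x - mu)
    w = []
    for x1, x2 in data:
        d1, d2 = x1 - mu[0], x2 - mu[1]
        s = d1*d1*Si[0][0] + 2*d1*d2*Si[0][1] + d2*d2*Si[1][1]
        w.append((1 + k) / (1 + s))
    # M step: weighted mean, then weighted scatter (divide by n, not sum w)
    sw = sum(w)
    mu = [sum(wi * x1 for wi, (x1, _) in zip(w, data)) / sw,
          sum(wi * x2 for wi, (_, x2) in zip(w, data)) / sw]
    S = [[0.0, 0.0], [0.0, 0.0]]
    for wi, (x1, x2) in zip(w, data):
        d1, d2 = x1 - mu[0], x2 - mu[1]
        S[0][0] += wi * d1 * d1 / n
        S[0][1] += wi * d1 * d2 / n
        S[1][1] += wi * d2 * d2 / n
    S[1][0] = S[0][1]

r = S[0][1] / (S[0][0] * S[1][1]) ** 0.5  # normalized off-diagonal
print(mu, r)  # mu near (0, 0); r near the rho used in the simulation
```

The normalized off-diagonal element recovers the dependence strength even though the ordinary sample correlation of Cauchy data is meaningless.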
30,641
Quantifying dependence of Cauchy random variables
While $\text{cov}(X,Y)$ does not exist for a pair of variates with Cauchy marginals, $\text{cov}(\Phi(X),\Phi(Y))$ does exist for, e.g., bounded functions $\Phi(\cdot)$. Actually, the notion of covariance matrix is not well-suited to describe joint distributions in every setting, as it is not invariant under transformations. Borrowing from the concept of copulas (which may also help in defining a joint distribution¹ for $(X,Y)$), one can turn $X$ and $Y$ into Uniform $(0,1)$ variates, by using their marginal cdfs, $\Phi_X(X)\sim\mathcal{U}(0,1)$ and $\Phi_Y(Y)\sim\mathcal{U}(0,1)$, and look at the covariance or correlation of the resulting variates. ¹For instance, when $X$ and $Y$ are both standard Cauchys,$$Z_X=\Phi^{-1}(\arctan(X)/\pi+1/2)$$is distributed as a standard Normal, and the joint distribution of $(Z_X,Z_Y)$ can be chosen to be a joint Normal $$(Z_X,Z_Y) \sim \mathcal{N}_2(0_2,\Sigma)$$This is a Gaussian copula.
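A small Python sketch of this idea: couple two standard Cauchys through a Gaussian copula (the $\rho=0.8$ is arbitrary), then measure the correlation of the cdf-transformed uniforms, which exists even though $\text{cov}(X,Y)$ does not:

```python
import math
import random

random.seed(5)

def phi(z):             # standard normal cdf
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def cauchy_cdf(x):      # standard Cauchy cdf: arctan(x)/pi + 1/2
    return 0.5 + math.atan(x) / math.pi

rho, n = 0.8, 50_000
u_pairs = []
for _ in range(n):
    g1, g2 = random.gauss(0, 1), random.gauss(0, 1)
    z1, z2 = g1, rho * g1 + math.sqrt(1 - rho**2) * g2
    # Map through the copula to Cauchy margins, then back to uniforms.
    x1 = math.tan(math.pi * (phi(z1) - 0.5))
    x2 = math.tan(math.pi * (phi(z2) - 0.5))
    u_pairs.append((cauchy_cdf(x1), cauchy_cdf(x2)))

def corr(pairs):
    a = [p[0] for p in pairs]; b = [p[1] for p in pairs]
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in pairs)
    return cov / (sum((x - ma)**2 for x in a)
                  * sum((y - mb)**2 for y in b)) ** 0.5

c = corr(u_pairs)
print(c)  # well-defined, and it reflects the dependence strength
```

This is in effect a (population) Spearman-type correlation of $X$ and $Y$: it depends only on the copula, not on the heavy Cauchy tails.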
30,642
Why does unbiasedness not imply consistency
In that paragraph the authors are giving an extreme example to show how being unbiased doesn't mean that a random variable is converging on anything. The authors are taking a random sample $X_1,\dots, X_n \sim \mathcal N(\mu,\sigma^2)$ and want to estimate $\mu$. Noting that $E(X_1) = \mu$, we could produce an unbiased estimator of $\mu$ by just ignoring all of our data except the first point $X_1$. But that's clearly a terrible idea, so unbiasedness alone is not a good criterion for evaluating an estimator. Somehow, as we get more data, we want our estimator to vary less and less from $\mu$, and that's exactly what consistency says: for any distance $\varepsilon$, the probability that $\hat \theta_n$ is more than $\varepsilon$ away from $\theta$ heads to $0$ as $n \to \infty$. And this can happen even if, for any finite $n$, $\hat \theta$ is biased. An example of this is the variance estimator $\hat \sigma^2_n = \frac 1n \sum_{i=1}^n(y_i - \bar y_n)^2$ in a normal sample. This is biased but consistent. Intuitively, a statistic is unbiased if it exactly equals the target quantity when averaged over all possible samples. But we know that the average of a bunch of things doesn't have to be anywhere near the things being averaged; this is just a fancier version of how the average of $0$ and $1$ is $1/2$, although neither $0$ nor $1$ are particularly close to $1/2$ (depending on how you measure "close"). Here's another example (although this is almost just the same example in disguise). Let $X_1 \sim \text{Bern}(\theta)$ and let $X_2 = X_3 = \dots = X_1$. Our estimator of $\theta$ will be $\hat \theta(X) = \bar X_n$. Note that $E \bar X_n = \theta$ so we do indeed have an unbiased estimator. But $\bar X_n = X_1 \in \{0,1\}$ so this estimator definitely isn't converging on anything close to $\theta \in (0,1)$, and for every $n$ we actually still have $\bar X_n \sim \text{Bern}(\theta)$.
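The biased-but-consistent variance estimator is easy to see numerically. A Python sketch (the $\sigma^2 = 4$, sample sizes, and replication count are arbitrary):

```python
import random

random.seed(6)

# The 1/n variance estimator: biased for finite n (bias = -sigma^2/n),
# but consistent, since the bias and the variance both vanish as n grows.
def var_hat(sample):
    n = len(sample)
    m = sum(sample) / n
    return sum((x - m) ** 2 for x in sample) / n   # divides by n, not n-1

# Bias at small n, estimated over many replications (true sigma^2 = 4).
reps, small_n = 20_000, 5
avg = sum(var_hat([random.gauss(0, 2) for _ in range(small_n)])
          for _ in range(reps)) / reps
print(avg)            # near (n-1)/n * sigma^2 = 3.2, not 4.0: biased

# Consistency: one large sample lands close to sigma^2 anyway.
big = [random.gauss(0, 2) for _ in range(200_000)]
print(var_hat(big))   # close to 4.0
```

The contrast with the $\hat\theta = X_1$ example is the point: there the bias is zero for every $n$, but the variance never shrinks, so the estimator never converges.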
30,643
Why does unbiasedness not imply consistency
"As far as I understand, consistency implies both unbiasedness and low variance and therefore, unbiasedness alone is not sufficient to imply consistency." Right. Or, using the slightly more lay terms of "accuracy" for low bias, and "precision" for low variance, consistency requires that we be both accurate and precise. Just being accurate doesn't mean we're hitting the target. It's like the old joke about two statisticians who go hunting. One misses a deer ten feet to the left. The other one misses ten feet to the right. They then congratulate each other on the basis that, on average, they hit the deer. Even though their bias is zero, to actually hit the deer, they need low variance as well.
30,644
Does a Bayes estimator require that the true parameter is a possible variate of the prior?
Very nice question! It would indeed make sense that a "good" prior distribution gives positive probability or positive density value to the "true" parameter $\theta_0$, but from a purely decisional perspective this does not have to be the case. A simple counter-example to this "intuition" that$$\pi(\theta_0)>0$$should be necessary, when $\pi(\cdot)$ is the prior density and $\theta_0$ is the "true" value of the parameter, is the brilliant minimaxity result of Casella and Strawderman (1981): when estimating a Normal mean $\mu$ based on a single observation $x\sim{\cal N}(\mu,1)$ with the additional constraint that $|\mu|<\rho$, if $\rho$ is small enough, $\rho\le 1.0567$ specifically, the minimax estimator corresponds to a (least favourable) uniform prior on $\{-\rho,\rho\}$, meaning that $\pi$ gives equal weight to $-\rho$ and $\rho$ (and none to any other value of the mean $\mu$) $$\pi(\theta)=\frac{1}{2}\delta_{-\rho}(\theta)+ \frac{1}{2}\delta_{\rho}(\theta)$$ When $\rho$ increases the least favourable prior sees its support growing, but it remains a finite set of possible values. However the posterior expectation, $\mathbb{E}[\mu|x]$, can take any value on $(-\rho,\rho)$. The core of the discussion (see comments) may be that, were the Bayes estimator to be constrained to be a point in the support of $\pi(\cdot)$, its properties would be quite different. Similarly, when considering admissible estimators, Bayes estimators associated with a proper prior on a compact set are usually admissible, although they have a restricted support. In both cases, the frequentist notion (minimaxity or admissibility) is defined over the possible range of parameters rather than at the "true" value of the parameter (which brings an answer to Question 4.)
For instance, looking at the posterior risk $$\int_\Theta L(\theta,\delta) \pi(\theta|x)\text{d}\theta$$ or at the Bayes risk $$\int_{\cal X}\int_\Theta L(\theta,\delta) \pi(\theta)f(x|\theta)\text{d}\theta\text{d}x$$ does not involve the true value $\theta_0$. Furthermore, as pointed out in the above example, when the Bayes estimator is defined by a formal expression such as the posterior mean $$\hat{\theta}^\pi(x)=\int_\Theta \theta\pi(\theta|x)\text{d}\theta$$ for the quadratic (or $L_2$) loss, this estimator may take values outside the support of $\pi$ when this support is not convex. As an aside, when reading for the true θ to have generated the data (i.e. "exist"), θ must be a possible variate under π, e.g. have non-zero probability, non-zero density I consider it a misrepresentation of the meaning of a prior. The prior distribution is not supposed to stand for an actual physical (or real) mechanism that saw a parameter value $\theta_0$ generated from $\pi$ followed by an observation $x$ generated from $f(x|\theta_0)$. The prior is a reference measure on the parameter space that incorporates prior information and subjective beliefs about the parameter and that is by no means unique. A Bayesian analysis is always relative to the prior chosen to conduct this Bayesian analysis. Hence, there is not an absolute necessity for the true parameter to belong to the support of $\pi$. Obviously, when this support is a compact connected set, ${\mathscr A}$, any value of the parameter outside the set ${\mathscr A}$ cannot be consistently estimated by the posterior mean $\hat{\theta}^\pi$ but this does not even prevent the estimator from being admissible.
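To see the two-point prior example concretely: applying Bayes' rule to the two normal likelihood values gives $P(\mu=\rho\mid x)=e^{2\rho x}/(e^{2\rho x}+1)$, hence $\mathbb{E}[\mu\mid x]=\rho\tanh(\rho x)$. A small Python sketch:

```python
import math

# Posterior mean of mu when x ~ N(mu, 1) and the prior puts mass 1/2
# on each of -rho and rho: E[mu | x] = rho * tanh(rho * x).
def posterior_mean(x, rho):
    return rho * math.tanh(rho * x)

rho = 1.0
for x in (-3.0, -0.5, 0.0, 0.5, 3.0):
    print(x, posterior_mean(x, rho))
# The estimate always lies strictly inside (-rho, rho), i.e. outside
# the support {-rho, rho} of the prior, approaching +-rho only as
# |x| grows without bound.
```

So for every finite $x$ the Bayes estimate is a value the prior assigns probability zero, which is exactly the point of the counter-example.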
30,645
Does a Bayes estimator require that the true parameter is a possible variate of the prior?
Yes, it is generally assumed that the true $\theta$ is in the domain of the prior. It is the responsibility of the statistician to see that this is the case. Usually, yes. For example, when estimating a mean or location parameter, any prior on $(-\infty, \infty)$ will have the true value in its domain. (If the parameter is known to be greater than zero, e.g., "mean number of traffic accidents on the Bay Bridge per day", the prior doesn't need to include negative values, obviously.) If we are estimating a probability, any prior on $[0,1]$ will have the true value in its domain. If we are constructing a prior on a variance term, any prior on $(0, \infty)$ will have the true value in its domain... and so on. If your posterior is "stacked up" at one edge of the domain of the prior, and your prior imposes an unnecessary restriction on the domain at that same edge, this is an ad-hoc indicator that the unnecessary restriction may be causing you problems. But this should only occur if a) you have constructed a prior whose form is driven largely by convenience instead of actual prior knowledge, and b) the convenience-induced form of the prior restricts the domain of the parameter to a subset of what its "natural" domain can be considered to be. An example of such is an old, hopefully long-obsolete, practice of bounding the prior on a variance term slightly away from zero in order to avoid potential computational difficulties. If the true value of the variance is between the bound and zero, well... but actually thinking about the potential values of the variance given the data, or (for example) putting the prior on the log of the variance instead, will allow you to avoid this problem, and similar mild cleverness should allow you to avoid domain-limiting priors in general. Answered by #1.
Does a Bayes estimator require that the true parameter is a possible variate of the prior?
Yes, it is generally assumed that the true $\theta$ is in the domain of the prior. It is the responsibility of the statistician to see that this is the case. Usually, yes. For example, when estimati
Does a Bayes estimator require that the true parameter is a possible variate of the prior? Yes, it is generally assumed that the true $\theta$ is in the domain of the prior. It is the responsibility of the statistician to see that this is the case. Usually, yes. For example, when estimating a mean or location parameter, any prior on $(-\infty, \infty)$ will have the true value in its domain. (If the parameter is known to be greater than zero, e.g., "mean number of traffic accidents on the Bay Bridge per day", the prior doesn't need to include negative values, obviously.) If we are estimating a probability, any prior on $[0,1]$ will have the true value in its domain. If we are constructing a prior on a variance term, any prior on $(0, \infty)$ will have the true value in its domain... and so on. If your posterior is "stacked up" at one edge of the domain of the prior, and your prior imposes an unnecessary restriction on the domain at that same edge, this is an ad-hoc indicator that the unnecessary restriction may be causing you problems. But this should only occur if a) you have constructed a prior whose form is driven largely by convenience instead of actual prior knowledge, and b) the convenience-induced form of the prior restricts the domain of the parameter to a subset of what its "natural" domain can be considered to be. An example of such is an old, hopefully long obsoleted, practice of bounding the prior on a variance term slightly away from zero in order to avoid potential computational difficulties. If the true value of the variance is between the bound and zero, well... but actually thinking about the potential values of the variance given the data, or (for example) putting the prior on the log of the variance instead, will allow you to avoid this problem, and similar mild cleverness should allow you to avoid domain-limiting priors in general. Answered by #1.
30,646
Does a Bayes estimator require that the true parameter is a possible variate of the prior?
The simple, intuitive answer is that the prior reflects your prior knowledge about $\theta$, and the minimal knowledge you should have is about its domain. If you use a bounded prior, you assume that values outside the bounds have zero probability, i.e. are impossible, and this is a very strong assumption that should not be made without good rationale. This is why people who don't want to make strong prior assumptions use vague priors on $(-\infty, \infty)$. Outside the bounded case, as your sample grows, or more precisely conveys more information, your posterior should eventually converge to $\theta$ regardless of the prior.
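The convergence claim can be sketched with the conjugate Beta-Bernoulli case (the true value 0.37 and the two priors below are made up for illustration): two very different priors yield posterior means that agree as the sample grows.

```python
import random

random.seed(1)
true_p = 0.37  # hypothetical true success probability

def beta_posterior_mean(a, b, flips):
    # Beta(a, b) prior + Bernoulli data -> Beta(a + k, b + n - k) posterior,
    # whose mean is (a + k) / (a + b + n).
    k, n = sum(flips), len(flips)
    return (a + k) / (a + b + n)

for n in (10, 100, 10000):
    flips = [1 if random.random() < true_p else 0 for _ in range(n)]
    optimistic = beta_posterior_mean(8, 2, flips)   # prior mean 0.8
    pessimistic = beta_posterior_mean(2, 8, flips)  # prior mean 0.2
    print(n, round(optimistic, 3), round(pessimistic, 3))
```

At $n = 10$ the two posterior means disagree noticeably; at $n = 10000$ they are essentially identical and close to the true value, because both priors put positive mass on it.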
30,647
How do Variational Auto Encoders backprop past the sampling step [duplicate]
The reparameterization trick. $$x = \text{sample}(\mathcal{N}(\mu, \sigma^2))$$ is not backpropable wrt $\mu$ or $\sigma$. However, we can rewrite this as: $$x = \mu + \sigma\ \text{sample}( \mathcal{N}(0, 1))$$ which is clearly equivalent and backpropable.
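A minimal numerical sketch of this rewrite in plain Python (no autodiff library; the numbers are illustrative): the randomness is isolated in a parameter-free $\epsilon \sim \mathcal{N}(0,1)$, so $x = \mu + \sigma\epsilon$ is an ordinary differentiable function of $\mu$ and $\sigma$, with $\partial x/\partial\mu = 1$ and $\partial x/\partial\sigma = \epsilon$, while the samples still have the intended mean and variance.

```python
import random

random.seed(42)
mu, sigma = 1.5, 0.5  # illustrative "encoder outputs"

def sample_reparam(mu, sigma):
    # eps is drawn from a fixed N(0, 1), independently of mu and sigma, so
    # x = mu + sigma * eps is deterministic in the parameters:
    # dx/dmu = 1 and dx/dsigma = eps, which backprop can propagate through.
    eps = random.gauss(0.0, 1.0)
    return mu + sigma * eps

samples = [sample_reparam(mu, sigma) for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 3), round(var, 3))  # approximately mu and sigma**2
```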
30,648
Can weight decay be higher than learning rate
Training a neural network means minimizing an error function which generally contains two parts: a data term (which penalizes the network for incorrect predictions) and a regularization term (which ensures the network weights satisfy some other assumptions); in our case the weight decay penalizes weights far from zero. The error function may look like this: $E=\frac{1}{N}||\mathbf{y}-\mathbf{t}||_2 + \lambda ||w||_2$, where $\mathbf{y}$ are the network predictions, $\mathbf{t}$ are the desired outputs (ground truth), $N$ is the size of the training set, and $w$ is the vector of the network weights. The parameter $\lambda$ controls the relative importance of the two parts of the error function. Setting a weight decay corresponds to setting this parameter. If you set it to a high value, the network does not care so much about correct predictions on the training set and rather keeps the weights low, hoping for good generalization performance on unseen data.
How the error function is minimized is an entirely separate matter. You can use a fancy method such as Adam, or simple stochastic gradient descent; both work on the same iterative principle: evaluate the derivatives of the error function with respect to the weights, $\frac{\partial E}{\partial w}$, then update the weights in the negative direction of the derivatives by a small step, $w_{t+1} = w_t - \eta \frac{\partial E}{\partial w}$. The parameter $\eta$ is called the learning rate: it controls the size of the step.
Thus, these two parameters are independent of each other, and in principle it can make sense to set the weight decay larger than the learning rate. In practice it depends entirely on your specific scenario: which network architecture you are using, how many weights there are, what the error function is, whether you use other regularizers, etc. It is your job to find the right hyperparameters.
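To make the independence of the two parameters concrete, here is a one-weight gradient-descent sketch (made-up data; the data term is the squared-error variant $\frac{1}{N}\sum_i (w x_i - t_i)^2$ for simplicity) in which the weight decay $\lambda = 0.5$ is fifty times the learning rate $\eta = 0.01$, and the iteration still converges to the penalized minimizer.

```python
# One-weight example of E = (1/N) * sum((w*x_i - t_i)^2) + lam * w^2,
# minimized by plain gradient descent. All numbers are illustrative.
xs = [1.0, 2.0, 3.0, 4.0]
ts = [2.1, 3.9, 6.2, 7.8]   # roughly t = 2*x
lam, eta = 0.5, 0.01        # weight decay much larger than learning rate
N = len(xs)

w = 0.0
for _ in range(2000):
    # dE/dw: data-term gradient plus the weight-decay gradient 2*lam*w
    grad = (2.0 / N) * sum(x * (w * x - t) for x, t in zip(xs, ts)) + 2.0 * lam * w
    w -= eta * grad

# Closed-form minimizer of this quadratic objective, for comparison:
w_star = (sum(x * t for x, t in zip(xs, ts)) / N) / (sum(x * x for x in xs) / N + lam)
print(w, w_star)
```

The iterates contract toward `w_star` as long as $\eta$ is small relative to the curvature of $E$; whether $\lambda$ is larger or smaller than $\eta$ plays no role in that stability condition.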
30,649
Difference between supervised machine learning and design of experiments?
Your question is difficult to answer because there is no single "supervised ML algorithm". There are a large number of different ML algorithms that can be optimized in a supervised fashion, each with its strengths and weaknesses. On a very abstract level, you can define machine learning (ML) as a search through some space $P$ for a parameterization $\theta$ of a given model $M$ such that $M(x;\theta)$ gives a minimal (though not always globally minimal) value of the cost function $\mathcal{C}$ for input $x$. More formally: $$\arg\min_{\theta\in P} \mathcal{C}(M(x;\theta))$$ For supervised learning, one form the cost function can take is (given $y$ as ground truth): $$\mathcal{C}(M(x; \theta)) = ||M(x;\theta) - y||_2$$ Any search through $P$ that minimizes the cost function fits in this framework, and thus you can claim DOE as an ML algorithm if you want. Specifically, an ML algorithm is defined by the optimization technique employed, the model used, and the cost function. If you fill those in for DOE, you can start to compare it against other ML algorithms.
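A toy instance of this abstract framing (all names and numbers are illustrative): $M(x;\theta) = \theta x$, $P$ a coarse grid of candidate parameters, $\mathcal{C}$ the squared error, and the "optimization technique" an exhaustive search over $P$.

```python
# Sketch of the abstract framing: model M, search space P, cost C.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 2.9, 6.1, 8.9]   # ground truth, roughly y = 3*x

def M(x, theta):
    # the model: a single-parameter linear map
    return theta * x

def C(theta):
    # supervised squared-error cost over the training pairs
    return sum((M(x, theta) - y) ** 2 for x, y in zip(xs, ys))

P = [i / 100 for i in range(-500, 501)]   # the search space of parameterizations
theta_hat = min(P, key=C)                 # arg min over P
print(theta_hat)
```

Swapping in gradient descent for the grid search, or a neural network for $M$, changes the ingredients but not the framework, which is the point made above.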
30,650
Difference between supervised machine learning and design of experiments?
Substitute "supervised ML" with "regression analysis", and you'll see that your question is difficult to answer. Depending on how narrowly you define regression analysis, DOE could be part of the term. Suppose you're planning to use regression analysis to determine the efficiency of a certain herbicide in the field. One of the very first steps would be to design the experiment. In this case I'd argue that DOE is inseparable from your regression analysis; it's part of it. In many applications of ML you're given the data and can't do much about planning the experiments; in fact, there is no experiment. For instance, in Kaggle competitions everything's usually set in advance: the training dataset is pre-defined and given, the test dataset will be given at the evaluation step, and you can't do anything about these things. That is why DOE is not mentioned a lot in the field. However, it doesn't have to be this way. Suppose you're building self-driving cars. Yes, as usual, you can employ all the available datasets to train your ML vision components, all the images with traffic signs and road situations, nicely tagged, etc. No DOE involved here, really. Yet once you're out of the initial phase and get into the field tests, things change. All the typical concerns that DOE addresses show up, e.g. would you need 10 test cars or 1,000 to get reliable results? Would you need 1,000 miles or 1,000,000 miles on empty roads before trying the test car on a real street? ML practitioners may not necessarily call what they do to plan the development DOE, but in essence it is DOE. Therefore, the answer to your question lies in how narrowly or broadly you define the term ML. Is it just fitting a function to the data using a cost (loss)? Or is it a more general notion of building a reliable machine that replaces humans, which would include more than just fitting/optimization, namely at least some aspects of DOE?
30,651
Difference between supervised machine learning and design of experiments?
I just finished a graduate-level course in experimental design and am starting to learn machine learning... I bet there are people on this website who can answer this better than I can, but hopefully this answer will do. At their cores, experimental design (ED) and machine learning (ML) have different goals. The primary goal of ED is to assess the influences of treatments and, if applicable, compare the influences of different treatments. The primary goal of ML is to give accurate predictions. These different cores thus influence how each topic is developed. In ED, emphasis is placed on good design so that the variability (of treatment parameter estimates) is reduced, sometimes with the need to meet budgetary constraints. My ED professor once said something to the tune of "Statisticians are always criticized for demanding a large sample size. If you were a statistician working on missiles, firing a missile is a few million bucks down the drain." Fractional factorial designs, from what I understand, are particularly popular due to budgetary constraints. The primary goal of ED is statistical inference on treatment parameters. In ML, emphasis is placed on using predictive algorithms and the issues behind them (e.g., computational complexity, computer software/hardware issues, etc.). Given my experience in both, I would not try to compare the two subjects. It is like comparing apples to oranges.
30,652
Difference between supervised machine learning and design of experiments?
I think there is a fundamental problem in the way the question is posed. DOE does not per se model a system; the scope of DOE is to efficiently excite the system to gain the maximum amount of information with a limited set of resources (time and experiments). Machine learning (ML), supervised or unsupervised, on the other hand is a modelling technique that aims at finding relations within huge amounts of observational data already collected. Much like simple regression, it is the step after you have collected the data. There might be some similarities between DOE and reinforcement learning methods, where the ML system itself is able to move parameters in the system in order to find an optimal fit or minimize a predefined objective function. In that context the ML model is performing some kind of DOE to gain more information and improve itself.
30,653
Difference between supervised machine learning and design of experiments?
Four years have passed; I hope I can still contribute to your question. I have actual experience applying DOE in the real world of semiconductor manufacturing, with more than 30 DOE RSM studies conducted, and I am also a big-data computational enthusiast. My direct answer: yes, DOE is a form of supervised ML; DOE is part of ML. I can't say ML is part of DOE, because ML is a wide topic and method of application. You probably already knew that DOE RSM is more informative than other DOE methods such as Taguchi, so stick with DOE RSM (response surface methodology): a traditional DOE, but very effective. The method of calculation applied to DOE RSM in fractional and full factorial designs is matrix multiplication (linear algebra). There are other ways to connect and calculate the missing values in order to come out with a model, usually polynomials in 3D. You can forcibly apply other models, such as exponential, Weibull, hyperbolic, and many more. How do you judge whether you selected the correct model? No worries: use the regression analysis results such as $R^2$ and adjusted $R^2$, and also use a chi-squared test to see if your model fits your actual data. If the model is too far from the actuals, use another model; if it is still too far, review again. Your last question: what is the difference between DOE and supervised machine learning in terms of the accuracy or other performance measures of the transfer function? In terms of accuracy, this depends on the application. If your experiments behave according to the models identified by dead mathematicians, then basically there is no difference, as long as the DOE and the machine-learning calculation apply the same model; otherwise, there will be a difference in the accuracy of predicting the behaviour, and either might end up far from the model. I assume you know this next statement, because you know how to use DOE: the more levels in the DOE, the more accurately you predict the physical behaviour of what you do. Hard-core experimenters love using a minimum of 4 levels in DOE, with multiple factors. Interpolation is more accurate than extrapolation.
30,654
Difference between supervised machine learning and design of experiments?
We have generally observed that nonlinear input-output dependencies captured by ML lead to better prediction than regression models developed using CCD DOE. We used identical data pairs for both. The RMS errors were generally lower for ML that carefully avoided overtraining. A good data collection strategy appears to be a grid design.
30,655
Why can't we cancel these two matrices in the OLS estimator?
Why can't we simply cancel out the $X^T$? I remember asking myself almost exactly the same question 300 years ago upon seeing the regression equation $y=X\beta$ for the first time in my life. The difference was that I told myself: why don't we simply solve it as $\beta=X^{-1}y$? It turns out the answer is almost the same for both your question and mine. How does "cancelling out" work? In simple algebra you get the following: $$a\times b=a\times c$$ Then you multiply both sides by the same number, in this case $a^{-1}$: $$a^{-1}a\times b=a^{-1}a\times c$$ $$ b = c$$ The trouble is that neither $(X^T)^{-1}$ nor $X^{-1}$ exists when $n\ne k$. These are rectangular matrices, as you can easily see and actually allude to in your second question, and there is no inverse of a rectangular matrix. I'll qualify the last statement later. When $n=k$, you could do what I suggested in the beginning, i.e. $\beta=X^{-1}y$, because the least squares problem is not necessary anymore: it degenerates into a simple linear algebra equation with a unique solution. As noted in the comments, not even every square matrix has an inverse; for instance, a matrix with two identical rows has no inverse. Back to the inverse of a rectangular matrix: you may have heard about the matrix pseudoinverse operation. You can apply it to "invert" a rectangular matrix, but there's no shortcut here: this will indeed solve the least squares equation, so you'll get back to the starting point :)
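A one-column sketch in plain Python (made-up numbers) of the pseudoinverse route: with $n = 3$ and $k = 1$ there is no $X^{-1}$ to cancel with, but the normal equations $(X^TX)\beta = X^Ty$ reduce to scalar arithmetic and give the least-squares solution.

```python
# Single-regressor illustration: X is a 3x1 "matrix", so neither X^{-1}
# nor (X^T)^{-1} exists. The least-squares solution comes from the
# normal equations (X^T X) beta = X^T y instead. Numbers are invented.
X = [1.0, 2.0, 3.0]   # n = 3 rows, k = 1 column
y = [1.1, 1.9, 3.2]

XtX = sum(x * x for x in X)                 # X^T X, here a 1x1 "matrix"
Xty = sum(x * yi for x, yi in zip(X, y))    # X^T y
beta = Xty / XtX                            # (X^T X)^{-1} X^T y, the pseudoinverse route
print(beta)
```

As a sanity check, the residual $y - X\beta$ is orthogonal to the column of $X$, which is the defining property of the least-squares fit.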
30,656
Interpretation of Breusch-Pagan test bptest() in R
This ought to be a typo on rstatistics.net. You are correct that the null hypothesis of the Breusch-Pagan test is homoscedasticity (= variance does not depend on auxiliary regressors). If the $p$-value becomes "small", the null hypothesis is rejected. I would recommend contacting the authors of rstatistics.net regarding this issue to see if they agree and fix it.
Moreover, note that gvlma() employs a different auxiliary regressor than bptest() by default and switches off studentization. More precisely, you can see the differences if you replicate the results by setting the arguments of bptest() explicitly. The model is given by:

data("cars", package = "datasets")
lmMod <- lm(dist ~ speed, data = cars)

The default employed by bptest() then uses the same auxiliary regressors as the model, i.e., speed in this case. Also, it uses the studentized version with improved finite-sample properties, yielding a non-significant result.

library("lmtest")
bptest(lmMod, ~ speed, data = cars, studentize = TRUE)
## studentized Breusch-Pagan test
##
## data: lmMod
## BP = 3.2149, df = 1, p-value = 0.07297

In contrast, gvlma() switches off studentization and checks for a linear trend in the variances.

cars$trend <- 1:nrow(cars)
bptest(lmMod, ~ trend, data = cars, studentize = FALSE)
## Breusch-Pagan test
##
## data: lmMod
## BP = 5.2834, df = 1, p-value = 0.02153

As you can see, both $p$-values are rather small but on different sides of 5%. The studentized versions are both slightly above 5%.
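For intuition about what the statistic measures, here is a pure-Python sketch of the studentized (Koenker) form of the Breusch-Pagan statistic, $BP = n R^2$ from regressing the squared residuals on the auxiliary regressor. The data are made up, and the real bptest() implementation handles details (degrees of freedom, multiple auxiliary regressors) not shown here.

```python
# Studentized Breusch-Pagan sketch: BP = n * R^2 from the auxiliary
# regression of squared residuals on x. Data invented for illustration.
x = [4.0, 7.0, 8.0, 9.0, 10.0, 12.0, 13.0, 15.0]
y = [2.0, 4.0, 9.0, 10.0, 14.0, 20.0, 26.0, 33.0]
n = len(x)

def ols(u, v):
    """Slope and intercept of the least-squares line v ~ u."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    slope = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / sum((a - mu) ** 2 for a in u)
    return slope, mv - slope * mu

b1, b0 = ols(x, y)
e2 = [(yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y)]  # squared residuals

g1, g0 = ols(x, e2)                       # auxiliary regression: e^2 ~ x
mean_e2 = sum(e2) / n
ss_tot = sum((v - mean_e2) ** 2 for v in e2)
ss_res = sum((v - (g0 + g1 * xi)) ** 2 for xi, v in zip(x, e2))
bp = n * (1.0 - ss_res / ss_tot)          # n * R^2 of the auxiliary regression
print(bp)
```

Under homoscedasticity, BP is approximately chi-squared with degrees of freedom equal to the number of auxiliary regressors, so with one regressor the statistic would be compared against 3.84 for a 5% test.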
Interpretation of Breusch-Pagan test bptest() in R
This ought to be a typo on rstatistics.net. You are correct that the null hypothesis of the Breusch-Pagan test is homoscedasticity (= variance does not depend on auxiliary regressors). If the $p$-valu
Interpretation of Breusch-Pagan test bptest() in R This ought to be a typo on rstatistics.net. You are correct that the null hypothesis of the Breusch-Pagan test is homoscedasticity (= variance does not depend on auxiliary regressors). If the $p$-value becomes "small", the null hypothesis is rejected. I would recommend contacting the authors of rstatistics.net regarding this issue to see if they agree and fix it. Moreover, note that gvlma() employs a different auxiliary regressor than bptest() by default and switches off studentization. More precisely, you can see the differences if you replicate the results by setting the arguments of bptest() explicitly. The model is given by: data("cars", package = "datasets") lmMod <- lm(dist ~ speed, data = cars) The default employed by bptest() then uses the same auxiliary regressors as the model, i.e., speed in this case. It also uses the studentized version with improved finite-sample properties, yielding a non-significant result. library("lmtest") bptest(lmMod, ~ speed, data = cars, studentize = TRUE) ## studentized Breusch-Pagan test ## ## data: lmMod ## BP = 3.2149, df = 1, p-value = 0.07297 In contrast, gvlma() switches off studentization and checks for a linear trend in the variances. cars$trend <- 1:nrow(cars) bptest(lmMod, ~ trend, data = cars, studentize = FALSE) ## Breusch-Pagan test ## ## data: lmMod ## BP = 5.2834, df = 1, p-value = 0.02153 As you can see, both $p$-values are rather small but fall on different sides of 5%. The studentized versions are both slightly above 5%.
Interpretation of Breusch-Pagan test bptest() in R This ought to be a typo on rstatistics.net. You are correct that the null hypothesis of the Breusch-Pagan test is homoscedasticity (= variance does not depend on auxiliary regressors). If the $p$-valu
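For readers without R at hand, here is a rough Python sketch of what the studentized (Koenker-type) Breusch-Pagan statistic $n R^2$ computes: regress the squared OLS residuals on the auxiliary regressors and compare $n R^2$ to a $\chi^2$ distribution. The synthetic heteroscedastic data, seed, and coefficients below are my own illustrative assumptions, not the cars data from the answer.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(1, 10, n)
y = 2 + 3 * x + rng.normal(0.0, x)     # error SD grows with x: heteroscedastic

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Auxiliary regression of squared residuals on the same regressors
u = resid ** 2
gamma, *_ = np.linalg.lstsq(X, u, rcond=None)
r2 = 1 - np.sum((u - X @ gamma) ** 2) / np.sum((u - u.mean()) ** 2)

bp = n * r2                            # Koenker's studentized BP statistic
p_value = stats.chi2.sf(bp, df=1)      # df = one auxiliary regressor besides intercept
```

With variance this strongly tied to x, the test rejects homoscedasticity decisively.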
30,657
Where can I get information about relationships among probability distributions in statistics?
Those books are massive references on all connections between distributions: N.L. Johnson, S. Kotz, & N. Balakrishnan (1994) Continuous Univariate Distributions, Vol. 1. J. Wiley N.L. Johnson, S. Kotz, & N. Balakrishnan (1995) Continuous Univariate Distributions, Vol. 2. J. Wiley N.L. Johnson, S. Kotz, & A.W. Kemp (1993) Univariate Discrete Distributions. J. Wiley that cover all the links found on the Wikipedia graph. If not all possible relationships, of course!
Where can I get information about relationships among probability distributions in statistics?
Those books are massive references on all connections between distributions: N.L. Johnson, S. Kotz, & N. Balakrishnan (1994) Continuous Univariate Distributions, Vol. 1. J. Wiley N.L. Johnson, S. K
Where can I get information about relationships among probability distributions in statistics? Those books are massive references on all connections between distributions: N.L. Johnson, S. Kotz, & N. Balakrishnan (1994) Continuous Univariate Distributions, Vol. 1. J. Wiley N.L. Johnson, S. Kotz, & N. Balakrishnan (1995) Continuous Univariate Distributions, Vol. 2. J. Wiley N.L. Johnson, S. Kotz, & A.W. Kemp (1993) Univariate Discrete Distributions. J. Wiley that cover all the links found on the Wikipedia graph. If not all possible relationships, of course!
Where can I get information about relationships among probability distributions in statistics? Those books are massive references on all connections between distributions: N.L. Johnson, S. Kotz, & N. Balakrishnan (1994) Continuous Univariate Distributions, Vol. 1. J. Wiley N.L. Johnson, S. K
30,658
Where can I get information about relationships among probability distributions in statistics?
Check the following papers: Leemis, L. M. (1986). Relationships among common univariate distributions. The American Statistician, 40(2), 143-146. Leemis, L. M., & McQueston, J. T. (2008). Univariate distribution relationships. The American Statistician, 62(1), 45-53. You can also find such information in Wikipedia articles about probability distributions, since in most cases they have a Related distributions section that describes such relations.
Where can I get information about relationships among probability distributions in statistics?
Check the following papers: Leemis, L. M. (1986). Relationships among common univariate distributions. The American Statistician, 40(2), 143-146. Leemis, L. M., & McQueston, J. T. (2008). Univariat
Where can I get information about relationships among probability distributions in statistics? Check the following papers: Leemis, L. M. (1986). Relationships among common univariate distributions. The American Statistician, 40(2), 143-146. Leemis, L. M., & McQueston, J. T. (2008). Univariate distribution relationships. The American Statistician, 62(1), 45-53. You can also find such information in Wikipedia articles about probability distributions, since in most cases they have a Related distributions section that describes such relations.
Where can I get information about relationships among probability distributions in statistics? Check the following papers: Leemis, L. M. (1986). Relationships among common univariate distributions. The American Statistician, 40(2), 143-146. Leemis, L. M., & McQueston, J. T. (2008). Univariat
30,659
Use of ICC in multilevel modelling
The ICC (intra-class correlation) is interpretable and useful for random intercepts models. It is the correlation between two observations within the same cluster. The higher the correlation within the clusters (i.e. the larger the ICC), the lower the variability is within the clusters and consequently the higher the variability is between the clusters. Alternatively, it is also a measure of how much variation there is at each level, and this is why it is also called the variance partition coefficient (VPC). Therefore, as you rightly point out, in a random intercepts model, when the ICC is large, this is evidence in favour of retaining the random intercepts, while when it is small, this is evidence in favour of discarding random intercepts. However, as is often the case in applied statistics, what determines "small" and "large" is context-specific and discipline-specific. Once we introduce random slopes/coefficients, things get more complicated. The ICC is no longer the same as the VPC, because the ICC will be a function of the variable(s) for which random slopes are specified. Therefore there can be an infinite number of values for the ICC if the variable in question is continuous, and as many as the number of levels if it is categorical or a count. Thus any interpretation of the ICC in a random slopes model becomes more difficult. Stata, for example, will calculate a single value for the ICC but in a random slopes model, this is accompanied by the warning: Note: ICC is conditional on zero values of random-effects covariates. In other words, it has computed the ICC based on a value of zero for the random slope variable(s), so any interpretation of the ICC is also based on a value of zero for the slope variable(s). 
Regarding your question: If the ICC is small, then it means there's little variability between the clusters, which might suggest that their means are similar and thus a random intercepts model may not be needed, but does that automatically mean a random slopes model is also not warranted? No, because it is possible for each cluster to have the same intercept (no random intercept) while the slopes may indeed vary, which we can visualise as lines sharing a common intercept but fanning out with different slopes. If we want to know whether random slopes are supported by the data, one approach is to fit models with and without random slopes and use a likelihood ratio test.
Use of ICC in multilevel modelling
The ICC (intra-class correlation) is interpretable and useful for random intercepts models. It is the correlation between two observations within the same cluster. The higher the correlation within th
Use of ICC in multilevel modelling The ICC (intra-class correlation) is interpretable and useful for random intercepts models. It is the correlation between two observations within the same cluster. The higher the correlation within the clusters (i.e. the larger the ICC), the lower the variability is within the clusters and consequently the higher the variability is between the clusters. Alternatively, it is also a measure of how much variation there is at each level, and this is why it is also called the variance partition coefficient (VPC). Therefore, as you rightly point out, in a random intercepts model, when the ICC is large, this is evidence in favour of retaining the random intercepts, while when it is small, this is evidence in favour of discarding random intercepts. However, as is often the case in applied statistics, what determines "small" and "large" is context-specific and discipline-specific. Once we introduce random slopes/coefficients, things get more complicated. The ICC is no longer the same as the VPC, because the ICC will be a function of the variable(s) for which random slopes are specified. Therefore there can be an infinite number of values for the ICC if the variable in question is continuous, and as many as the number of levels if it is categorical or a count. Thus any interpretation of the ICC in a random slopes model becomes more difficult. Stata, for example, will calculate a single value for the ICC but in a random slopes model, this is accompanied by the warning: Note: ICC is conditional on zero values of random-effects covariates. In other words, it has computed the ICC based on a value of zero for the random slope variable(s), so any interpretation of the ICC is also based on a value of zero for the slope variable(s). 
Regarding your question: If the ICC is small, then it means there's little variability between the clusters, which might suggest that their means are similar and thus a random intercepts model may not be needed, but does that automatically mean a random slopes model is also not warranted? No, because it is possible for each cluster to have the same intercept (no random intercept) while the slopes may indeed vary, which we can visualise as lines sharing a common intercept but fanning out with different slopes. If we want to know whether random slopes are supported by the data, one approach is to fit models with and without random slopes and use a likelihood ratio test.
Use of ICC in multilevel modelling The ICC (intra-class correlation) is interpretable and useful for random intercepts models. It is the correlation between two observations within the same cluster. The higher the correlation within th
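As a rough numerical illustration of the random-intercepts ICC (my own sketch, not code from the answer): simulate clustered data with known between- and within-cluster variances and recover the ICC with the classical one-way ANOVA variance-component estimator. The cluster counts, variances, and seed are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)
k, m = 200, 20                        # clusters, observations per cluster
sigma_b, sigma_w = 2.0, 1.0           # between- and within-cluster SDs
# true ICC = sigma_b^2 / (sigma_b^2 + sigma_w^2) = 4 / 5 = 0.8

cluster_means = rng.normal(0.0, sigma_b, size=k)
data = cluster_means[:, None] + rng.normal(0.0, sigma_w, size=(k, m))

# Classical one-way ANOVA variance-component estimates
grand = data.mean()
row_means = data.mean(axis=1)
msb = m * np.sum((row_means - grand) ** 2) / (k - 1)
msw = np.sum((data - row_means[:, None]) ** 2) / (k * (m - 1))
var_between = (msb - msw) / m
icc = var_between / (var_between + msw)   # estimate lands near the true 0.8
```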
30,660
Role of Dirac function in particle filters
@user20160 has already given you a nice answer to your (1)-(3) questions, but the last one seems to be not yet fully answered. How can a representation of a probability density function arise from a weighted sum of $\delta(\cdot)$s that themselves take only values of either zero or infinity? Let me start with quoting Wikipedia as it provides a pretty clear description in this case (notice the bolds I added): The Dirac delta can be loosely thought of as a function on the real line which is zero everywhere except at the origin, where it is infinite, $$\delta(x) = \begin{cases} +\infty, & x = 0 \\ 0, & x \ne 0 \end{cases}$$ and which is also constrained to satisfy the identity $$\int_{-\infty}^\infty \delta(x) \, dx = 1$$ This is merely a heuristic characterization. The Dirac delta is not a function in the traditional sense as no function defined on the real numbers has these properties. The Dirac delta function can be rigorously defined either as a distribution or as a measure. Further on, Wikipedia provides a more formal definition and lots of worked examples, so I'd recommend you go through the whole article. Let me quote one example from it: In probability theory and statistics, the Dirac delta function is often used to represent a discrete distribution, or a partially discrete, partially continuous distribution, using a probability density function (which is normally used to represent fully continuous distributions). For example, the probability density function $f(x)$ of a discrete distribution consisting of points $x = \{x_1, \dots, x_n\}$, with corresponding probabilities $p_1, \dots, p_n$, can be written as $$ f(x) = \sum_{i=1}^n p_i \delta(x-x_i) $$ What this equation is saying is that we take a sum over $n$ continuous distributions $\delta_{x_i} = \delta(x-x_i)$ that have all their mass around $x_i$'s. 
If you try to imagine the $\delta_{x_i}$ distributions in terms of cumulative distribution functions, each needs to be $$ F_{x_i}(x) = \begin{cases} 0 & \text{if } x < x_i \\ 1 & \text{if } x \ge x_i \end{cases} $$ So we can rewrite the previous density as a cumulative distribution function $$ F(x) = \sum_{i=1}^n p_i F_{x_i}(x) = \sum_{i=1}^n p_i \mathbf{1}_{x \ge x_i} $$ where $\mathbf{1}_{x \ge x_i}$ is an indicator function pointing at $x_i$. Notice that this basically is a categorical distribution in disguise. Moreover, you can define the Dirac delta in terms of an arbitrary function $$ \int_{-\infty}^\infty f(x) \delta(x-x_i) dx = f(x_i) $$ so it "works" as a continuous version of an indicator function. The take-away message is that the Dirac delta is not a standard function. It's also not equal to infinity at zero -- if it were, it would be useless because infinity is not a number, so we couldn't perform any arithmetic operations over it. You can think of the Dirac delta simply as an indicator function pointing at some $x_i$ that is continuous and integrates to unity. No black magic involved, it is just a way to hack the calculus to deal with discrete values.
Role of Dirac function in particle filters
@user20160 has already given you nice answer to your (1)-(3) questions, but the last one seems to be not yet fully answered. How can a representation of a probability density function arise from a w
Role of Dirac function in particle filters @user20160 has already given you a nice answer to your (1)-(3) questions, but the last one seems to be not yet fully answered. How can a representation of a probability density function arise from a weighted sum of $\delta(\cdot)$s that themselves take only values of either zero or infinity? Let me start with quoting Wikipedia as it provides a pretty clear description in this case (notice the bolds I added): The Dirac delta can be loosely thought of as a function on the real line which is zero everywhere except at the origin, where it is infinite, $$\delta(x) = \begin{cases} +\infty, & x = 0 \\ 0, & x \ne 0 \end{cases}$$ and which is also constrained to satisfy the identity $$\int_{-\infty}^\infty \delta(x) \, dx = 1$$ This is merely a heuristic characterization. The Dirac delta is not a function in the traditional sense as no function defined on the real numbers has these properties. The Dirac delta function can be rigorously defined either as a distribution or as a measure. Further on, Wikipedia provides a more formal definition and lots of worked examples, so I'd recommend you go through the whole article. Let me quote one example from it: In probability theory and statistics, the Dirac delta function is often used to represent a discrete distribution, or a partially discrete, partially continuous distribution, using a probability density function (which is normally used to represent fully continuous distributions). For example, the probability density function $f(x)$ of a discrete distribution consisting of points $x = \{x_1, \dots, x_n\}$, with corresponding probabilities $p_1, \dots, p_n$, can be written as $$ f(x) = \sum_{i=1}^n p_i \delta(x-x_i) $$ What this equation is saying is that we take a sum over $n$ continuous distributions $\delta_{x_i} = \delta(x-x_i)$ that have all their mass around $x_i$'s. 
If you try to imagine the $\delta_{x_i}$ distributions in terms of cumulative distribution functions, each needs to be $$ F_{x_i}(x) = \begin{cases} 0 & \text{if } x < x_i \\ 1 & \text{if } x \ge x_i \end{cases} $$ So we can rewrite the previous density as a cumulative distribution function $$ F(x) = \sum_{i=1}^n p_i F_{x_i}(x) = \sum_{i=1}^n p_i \mathbf{1}_{x \ge x_i} $$ where $\mathbf{1}_{x \ge x_i}$ is an indicator function pointing at $x_i$. Notice that this basically is a categorical distribution in disguise. Moreover, you can define the Dirac delta in terms of an arbitrary function $$ \int_{-\infty}^\infty f(x) \delta(x-x_i) dx = f(x_i) $$ so it "works" as a continuous version of an indicator function. The take-away message is that the Dirac delta is not a standard function. It's also not equal to infinity at zero -- if it were, it would be useless because infinity is not a number, so we couldn't perform any arithmetic operations over it. You can think of the Dirac delta simply as an indicator function pointing at some $x_i$ that is continuous and integrates to unity. No black magic involved, it is just a way to hack the calculus to deal with discrete values.
Role of Dirac function in particle filters @user20160 has already given you nice answer to your (1)-(3) questions, but the last one seems to be not yet fully answered. How can a representation of a probability density function arise from a w
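The CDF construction in the answer above is easy to sketch numerically (my own illustration): with support points $x_i$ and weights $p_i$, $F(x) = \sum_i p_i \mathbf{1}_{x \ge x_i}$ is just a step function that jumps by $p_i$ at each $x_i$. The particular points and weights below are arbitrary.

```python
import numpy as np

xs = np.array([0.0, 1.5, 3.0])       # support points x_i
ps = np.array([0.2, 0.5, 0.3])       # probabilities p_i (sum to 1)

def F(x):
    """CDF of the delta mixture: F(x) = sum_i p_i * 1[x >= x_i]."""
    return np.sum(ps * (x >= xs))

# F steps from 0 up to 1, jumping exactly at the support points
vals = [F(x) for x in (-1.0, 0.0, 1.0, 1.5, 2.9, 3.0, 10.0)]
# vals is approximately [0.0, 0.2, 0.2, 0.7, 0.7, 1.0, 1.0]
```

This is the categorical-distribution-in-disguise point made above, written out explicitly.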
30,661
Role of Dirac function in particle filters
What is the relationship between the support of the particle approximation and the Dirac function? The distribution is approximated as a weighted sum of delta functions. So, the support of the approximation is the union of the supports of the delta functions. Each delta function is zero everywhere except for a single point ($x_t^{(i)}$), where its value is infinite. So, the support of each delta function is that single point, and the support of the approximating distribution is the set of points $\left \{ x_t^{(i)} \right \}_{i=1}^N$. Why is a summation sign used when evaluating $\delta$ can only ever yield a value of 0 or infinity? Shouldn't this be an integral instead? The sum is there to express the distribution as a weighted sum of delta functions. This is just saying: "place a delta function at each point $x_t^{(i)}$, and scale its amplitude by $\pi_t^{(i)}$." The distribution is continuous, so its value at each point is the probability density, not the probability. We'd integrate the density over some region to get the associated probability. The integral of each scaled delta function will be $\pi_t^{(i)}$. This means the probability of each point $x_t^{(i)}$ is $\pi_t^{(i)}$, and the probability of any other value is 0. Here's an example of approximating a continuous distribution using delta functions. The distribution $g$ is a Gaussian distribution. $g$ is approximated using distribution $f$, which is a sum of 50 scaled delta functions. The locations of the delta functions are sampled from $g$. By eye, the PDFs don't look very similar because $f$ doesn't have a nice shape that we can see. But, the delta functions are packed closer together in regions where $g$ has higher density. Once we start taking integrals, the similarity becomes more apparent. For example, the CDFs are noticeably similar. The mean, variance, etc. will also be similar. The quality of the approximation will improve as the number of samples/delta functions increases. 
How can the notion of the support of a function be extended to a set of points (e.g., $x^{(i)}_t$), which isn't itself a function? Support is a concept defined for functions, not sets. The support of a function is the set of inputs for which the output is nonzero. As above, if we define a function as a sum of delta functions located at each point in a set $S$, the support of that function is $S$. We can also consider the indicator function of $S$. Say $S$ is a subset of some larger set $L$ (e.g. the real numbers). The indicator function $I_S(x)$ is defined on $L$. It takes a value of $1$ if $x \in S$, otherwise $0$. So, the support of the indicator function is $S$.
Role of Dirac function in particle filters
What is the relationship between the support of the particle approximation and the Dirac function? The distribution is approximated as a weighted sum of delta functions. So, the support of the approx
Role of Dirac function in particle filters What is the relationship between the support of the particle approximation and the Dirac function? The distribution is approximated as a weighted sum of delta functions. So, the support of the approximation is the union of the supports of the delta functions. Each delta function is zero everywhere except for a single point ($x_t^{(i)}$), where its value is infinite. So, the support of each delta function is that single point, and the support of the approximating distribution is the set of points $\left \{ x_t^{(i)} \right \}_{i=1}^N$. Why is a summation sign used when evaluating $\delta$ can only ever yield a value of 0 or infinity? Shouldn't this be an integral instead? The sum is there to express the distribution as a weighted sum of delta functions. This is just saying: "place a delta function at each point $x_t^{(i)}$, and scale its amplitude by $\pi_t^{(i)}$." The distribution is continuous, so its value at each point is the probability density, not the probability. We'd integrate the density over some region to get the associated probability. The integral of each scaled delta function will be $\pi_t^{(i)}$. This means the probability of each point $x_t^{(i)}$ is $\pi_t^{(i)}$, and the probability of any other value is 0. Here's an example of approximating a continuous distribution using delta functions. The distribution $g$ is a Gaussian distribution. $g$ is approximated using distribution $f$, which is a sum of 50 scaled delta functions. The locations of the delta functions are sampled from $g$. By eye, the PDFs don't look very similar because $f$ doesn't have a nice shape that we can see. But, the delta functions are packed closer together in regions where $g$ has higher density. Once we start taking integrals, the similarity becomes more apparent. For example, the CDFs are noticeably similar. The mean, variance, etc. will also be similar. 
The quality of the approximation will improve as the number of samples/delta functions increases. How can the notion of the support of a function be extended to a set of points (e.g., $x^{(i)}_t$), which isn't itself a function? Support is a concept defined for functions, not sets. The support of a function is the set of inputs for which the output is nonzero. As above, if we define a function as a sum of delta functions located at each point in a set $S$, the support of that function is $S$. We can also consider the indicator function of $S$. Say $S$ is a subset of some larger set $L$ (e.g. the real numbers). The indicator function $I_S(x)$ is defined on $L$. It takes a value of $1$ if $x \in S$, otherwise $0$. So, the support of the indicator function is $S$.
Role of Dirac function in particle filters What is the relationship between the support of the particle approximation and the Dirac function? The distribution is approximated as a weighted sum of delta functions. So, the support of the approx
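The Gaussian example in the answer can be reproduced in a few lines (a sketch of the same idea, not the answer's original figure code): draw $N$ samples, give each a weight $1/N$, and check that weighted sums over the particles recover the moments and CDF of the true distribution. The sample size and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50_000
samples = rng.standard_normal(N)     # locations of the delta functions
weights = np.full(N, 1.0 / N)        # equal weights summing to 1

# Moments of the delta-sum approximation are weighted sums over the particles
mean_hat = np.sum(weights * samples)
var_hat = np.sum(weights * (samples - mean_hat) ** 2)

# Empirical CDF at 0 approximates the true standard-normal CDF value 0.5
cdf_at_0 = np.sum(weights * (samples <= 0.0))
```

As the answer notes, the approximation improves as the number of particles grows.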
30,662
Role of Dirac function in particle filters
How can a representation of a probability density function arise from a weighted sum of δ(⋅)s that themselves take only values of either zero or infinity? Think of Dirac's delta function as a bridge between discrete and continuous values. Dirac came up with it to simplify his math by applying continuous math tools to discrete quantities. I think of Dirac's delta in precisely the same situations when it's too cumbersome to deal with discrete values. So, in your example someone wanted to have the probability density function. Great! But the trouble is that your inputs are discrete observations. So, this dude knew about Dirac's function, and plugged it in: $$p(x) \approx \sum_{i=1}^N \omega^i \delta(x-x^i)$$ To understand this expression bear in mind how Dirac's delta is defined: $$\int f(x)\delta(x-x_0)dx=f(x_0)$$ $$\delta(x)\equiv 0, \forall x\ne 0 $$ Notice that it's not defined the way you described it: Dirac function becomes infinitely large at a point $p$, that is $\delta(p)=\infty$, and that it is zero elsewhere. This is not the right way to think of a Dirac function. Always think of it as the integral above, whose purpose is to link a discrete value at $x_0$ to a continuous expression (integral) $\int \dots dx$. Now, apply an integral to your equation: $$\int p(x) dx \approx \int \left(\sum_{i=1}^N \omega^i \delta(x-x^i)\right) dx= \sum_i\omega_i$$ If you didn't have the Dirac delta and applied the integral to a sum you'd get an undefined integral: $$\int \left(\sum_{i=1}^N \omega^i \right) dx=\infty$$ Summarizing, the Dirac delta's purpose is to bring discrete quantities into continuous space, and your definition of $p(x)$ demonstrates just that. It constructs the continuous density function out of $N$ discrete values. Again, it is misleading to think of the Dirac function as "infinity at $x_0$ and zero everywhere". This description does not bring anything useful in terms of intuition. Drop it. 
Here's how Dirac himself defined his function in "The Principles of Quantum Mechanics". In describing the purpose of the function, notice how he keeps repeating the word "integrand" and emphasizes "convenience".
Role of Dirac function in particle filters
How can a representation of a probability density function arise from a weighted sum of δ(⋅)s that themselves take only values of either zero or infinity? Think of Dirac's delta function as a bri
Role of Dirac function in particle filters How can a representation of a probability density function arise from a weighted sum of δ(⋅)s that themselves take only values of either zero or infinity? Think of Dirac's delta function as a bridge between discrete and continuous values. Dirac came up with it to simplify his math by applying continuous math tools to discrete quantities. I think of Dirac's delta in precisely the same situations when it's too cumbersome to deal with discrete values. So, in your example someone wanted to have the probability density function. Great! But the trouble is that your inputs are discrete observations. So, this dude knew about Dirac's function, and plugged it in: $$p(x) \approx \sum_{i=1}^N \omega^i \delta(x-x^i)$$ To understand this expression bear in mind how Dirac's delta is defined: $$\int f(x)\delta(x-x_0)dx=f(x_0)$$ $$\delta(x)\equiv 0, \forall x\ne 0 $$ Notice that it's not defined the way you described it: Dirac function becomes infinitely large at a point $p$, that is $\delta(p)=\infty$, and that it is zero elsewhere. This is not the right way to think of a Dirac function. Always think of it as the integral above, whose purpose is to link a discrete value at $x_0$ to a continuous expression (integral) $\int \dots dx$. Now, apply an integral to your equation: $$\int p(x) dx \approx \int \left(\sum_{i=1}^N \omega^i \delta(x-x^i)\right) dx= \sum_i\omega_i$$ If you didn't have the Dirac delta and applied the integral to a sum you'd get an undefined integral: $$\int \left(\sum_{i=1}^N \omega^i \right) dx=\infty$$ Summarizing, the Dirac delta's purpose is to bring discrete quantities into continuous space, and your definition of $p(x)$ demonstrates just that. It constructs the continuous density function out of $N$ discrete values. Again, it is misleading to think of the Dirac function as "infinity at $x_0$ and zero everywhere". This description does not bring anything useful in terms of intuition. Drop it. 
Here's how Dirac himself defined his function in "The Principles of Quantum Mechanics". In describing the purpose of the function, notice how he keeps repeating the word "integrand" and emphasizes "convenience".
Role of Dirac function in particle filters How can a representation of a probability density function arise from a weighted sum of δ(⋅)s that themselves take only values of either zero or infinity? Think of Dirac's delta function as a bri
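The defining integral $\int f(x)\delta(x-x_0)dx = f(x_0)$ from the answer can be checked numerically by replacing $\delta$ with a narrow Gaussian, which is one standard way to realize the delta in computations. This is my own illustration; the test function, width, and grid are arbitrary choices.

```python
import numpy as np

def narrow_gaussian(x, x0, sigma):
    """A tall, narrow Gaussian bump that mimics delta(x - x0)."""
    return np.exp(-0.5 * ((x - x0) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

f = np.cos                     # any smooth test function
x0 = 0.7
x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]

# integral of f(x) * delta_sigma(x - x0) dx on a fine grid
approx = np.sum(f(x) * narrow_gaussian(x, x0, 0.01)) * dx
# approx comes out very close to f(x0) = cos(0.7)
```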
30,663
Role of Dirac function in particle filters
I think your confusions are all a result of thinking of the Dirac delta as a function. It is not (see the Wikipedia article https://en.wikipedia.org/wiki/Dirac_delta_function). The delta function only makes sense as a mathematical object when it appears inside an integral. From this perspective the Dirac delta can usually be manipulated as though it were a function. As @Tim quoted, the Dirac delta function can be rigorously defined either as a distribution or as a measure. This is merely a heuristic characterization. The Dirac delta is not a function in the traditional sense as no function defined on the real numbers has these properties. The Dirac delta function can be rigorously defined either as a distribution or as a measure. I think it's easier to think of it as a measure (i.e. essentially something you integrate against). So given a function $f$, $\mu(f):= \int_{-\infty}^{\infty} f(x) \ d\mu(x)$. If you have a density $p(x)$ then this induces a measure $P$: $P(f)= \int_{-\infty}^{\infty} f(x)\ p(x)dx$, and the delta function induces a measure $\nu$ such that $\nu(f)=f(0)$. So the function notation just helps with e.g. adding measures together (Q2), i.e. what it's really saying is: $\mu(f):= \sum_{i=1}^{n} \nu_{x_i}(f)$ where $\nu_{x_i}(f)=f(x_i)$. This viewpoint clarifies the support question too: the support is probed using arbitrary functions. Any function $f$ that vanishes in a neighbourhood of zero has $\nu(f) = 0$, so the support of $\nu$ is $\{0\}$ (see Support of a distribution). As mentioned in the Wikipedia article, the delta function can be viewed constructively as a limit of measures induced by Gaussians with mean at zero and vanishing standard deviation $\sigma$ (denoting the Gaussian pdf as $g(x;\mu,\sigma)$): $\nu(f) = \lim_{\sigma\rightarrow 0} \int ^\infty _{-\infty} f(x) g(x;0,\sigma) dx$
Role of Dirac function in particle filters
I think your confusions are all a result of thinking of the Dirac delta as a function. It is not (see wikipedia article https://en.wikipedia.org/wiki/Dirac_delta_function). The delta function only
Role of Dirac function in particle filters I think your confusions are all a result of thinking of the Dirac delta as a function. It is not (see the Wikipedia article https://en.wikipedia.org/wiki/Dirac_delta_function). The delta function only makes sense as a mathematical object when it appears inside an integral. From this perspective the Dirac delta can usually be manipulated as though it were a function. As @Tim quoted, the Dirac delta function can be rigorously defined either as a distribution or as a measure. This is merely a heuristic characterization. The Dirac delta is not a function in the traditional sense as no function defined on the real numbers has these properties. The Dirac delta function can be rigorously defined either as a distribution or as a measure. I think it's easier to think of it as a measure (i.e. essentially something you integrate against). So given a function $f$, $\mu(f):= \int_{-\infty}^{\infty} f(x) \ d\mu(x)$. If you have a density $p(x)$ then this induces a measure $P$: $P(f)= \int_{-\infty}^{\infty} f(x)\ p(x)dx$, and the delta function induces a measure $\nu$ such that $\nu(f)=f(0)$. So the function notation just helps with e.g. adding measures together (Q2), i.e. what it's really saying is: $\mu(f):= \sum_{i=1}^{n} \nu_{x_i}(f)$ where $\nu_{x_i}(f)=f(x_i)$. This viewpoint clarifies the support question too: the support is probed using arbitrary functions. Any function $f$ that vanishes in a neighbourhood of zero has $\nu(f) = 0$, so the support of $\nu$ is $\{0\}$ (see Support of a distribution). As mentioned in the Wikipedia article, the delta function can be viewed constructively as a limit of measures induced by Gaussians with mean at zero and vanishing standard deviation $\sigma$ (denoting the Gaussian pdf as $g(x;\mu,\sigma)$): $\nu(f) = \lim_{\sigma\rightarrow 0} \int ^\infty _{-\infty} f(x) g(x;0,\sigma) dx$
Role of Dirac function in particle filters I think your confusions are all a result of thinking of the Dirac delta as a function. It is not (see wikipedia article https://en.wikipedia.org/wiki/Dirac_delta_function). The delta function only
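The limiting construction in the last formula of the answer can be illustrated numerically (my own sketch, with an arbitrary test function and widths): the integral of $f$ against ever-narrower Gaussians approaches $f(x_0)$, with the error shrinking as $\sigma \to 0$.

```python
import numpy as np

def smoothed_delta(f, x0, sigma, x):
    """Integral of f against a Gaussian of width sigma centred at x0."""
    dx = x[1] - x[0]
    g = np.exp(-0.5 * ((x - x0) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return np.sum(f(x) * g) * dx

x = np.linspace(-5.0, 5.0, 1_000_001)
f, x0 = np.sin, 1.0

# nu(f) = lim_{sigma -> 0} integral f(x) g(x; x0, sigma) dx = f(x0)
errors = [abs(smoothed_delta(f, x0, s, x) - f(x0)) for s in (0.5, 0.1, 0.02)]
# the error shrinks steadily as sigma decreases
```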
30,664
How is prior knowledge possible under a purely Bayesian framework?
Speaking of prior knowledge can be misleading, which is why you often see people speaking instead about prior beliefs. You do not need to have any prior knowledge to set up a prior. If you needed it, how would Longley-Cook have managed with his problem? Here is an example from the 1950s, when Longley-Cook, an actuary at an insurance company, was asked to price the risk of a mid-air collision of two planes, an event which as far as he knew hadn't happened before. The civilian airline industry was still very young but rapidly growing, and all Longley-Cook knew was that there were no collisions in the previous 5 years. Lack of data about mid-air collisions was not a problem for assigning some prior that led to pretty accurate conclusions, as described by Markus Gesmann. This is an extreme example of insufficient data and no prior knowledge, but in most real-life situations you would have some out-of-data beliefs about your problem that can be translated into priors. There is a common misconception about priors that they need to be somehow "correct" or "unique". In fact, you can purposefully use "incorrect" priors to validate different beliefs against your data. Such an approach is described by Spiegelhalter (2004), who shows how a "community" of priors (e.g. "skeptical" or "optimistic") can be used in a decision-making scenario. In this case it is not even prior beliefs that are used to form the priors, but rather prior hypotheses. Since the Bayesian approach includes both the prior and the data in your model, information from both sources is combined. The more informative your prior is compared to the data, the more influence it has; the more informative your data, the less influence your prior has. Eventually, "all models are wrong, but some are useful". Priors describe beliefs that you incorporate in your model; they do not have to be correct. It is enough if they are helpful for your problem, as we are dealing only with approximations of reality that are described by your models. Yes, they are subjective. As you already noticed, if we needed prior knowledge for them, we would end up in a vicious circle. Their beauty is that they can be formed even when confronted with a shortage of data, so as to overcome it. Spiegelhalter, D. J. (2004). Incorporating Bayesian ideas into health-care evaluation. Statistical Science, 156-174.
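As a toy illustration of updating a prior formed with no data at all, here is a hypothetical Beta-Binomial sketch in Python. This is not Longley-Cook's actual calculation; the uniform prior is an assumption made purely for illustration:

```python
# Hypothetical sketch: put a uniform Beta(1, 1) prior on the per-year
# probability of at least one mid-air collision (i.e. no prior knowledge
# at all), then update it on 5 observed years with zero collisions.
a, b = 1.0, 1.0                    # Beta prior parameters
years, collisions = 5, 0
a_post = a + collisions            # conjugate Beta-Binomial update
b_post = b + (years - collisions)
posterior_mean = a_post / (a_post + b_post)
print(posterior_mean)              # 1/7, i.e. Laplace's rule of succession
```

Even with zero observed events, the posterior still yields a usable, non-zero probability estimate, which is exactly the situation the Longley-Cook anecdote describes.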
30,665
How is prior knowledge possible under a purely Bayesian framework?
I think you're making the mistake of applying something like the frequentist concept of probability to the foundations of the subjective definition. All that a prior is in the subjective framework is a quantification of a current belief, before updating it. By definition, you don't need anything concrete to arrive at that belief and it doesn't need to be valid, you just need to have it and to quantify it. A prior can be informative or uninformative and it can be strong or weak. The point of those scales is that you don't have any implicit assumptions about the validity of your prior knowledge, you have explicit ones, and sometimes that can be "I have no information." Or it can be "I am not confident in the information I have." The point is, there is no requirement that prior knowledge is "valid". And that assumption is the only reason your scenario seems paradoxical. By the way, if you like thinking about the philosophy of probability, you should read The Emergence of Probability by Ian Hacking and its sequel, The Taming of Chance. The first book especially was really illuminating in how the concept of probability came to have dual and seemingly incompatible definitions. As a teaser: did you know that until fairly recently, calling something "probable" meant that it was "approvable", i.e. that it was "approved by the authorities" or that it was a generally well respected opinion. It had nothing whatsoever to do with any concept of likelihood.
30,666
comparison of SGD and ALS in collaborative filtering
Both SGD and ALS are very practical for matrix factorization. Yehuda Koren, a winner of the Netflix prize (see here) and a pioneer in matrix factorization techniques for CF, worked at Yahoo labs at the time and was part of the development of a CF model for Yahoo. Reading through Yahoo labs' publications (for example here and here), it is easy to see that they use SGD heavily, and we can only assume that the same holds for production systems. Matrix factorization is often done on a matrix of user_features x movie_features (instead of matrices of users x movies) because of the cold-start issue, making the argument mentioned in the link less relevant. SGD also has the upper hand in dealing with missing data, which is a fairly common scenario. To sum up, SGD is a very common method for CF, and I see no reason why it cannot be applied to large data sets.
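To make the SGD approach concrete, here is a minimal, self-contained sketch of SGD matrix factorization on a toy ratings matrix (Python with numpy; the hyperparameters are arbitrary illustration values, not anything used by Yahoo):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ratings matrix: 0 means "missing"; SGD simply skips those cells,
# which is how it sidesteps the missing-data problem mentioned above.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
observed = [(u, i) for u in range(4) for i in range(4) if R[u, i] > 0]

k, lr, reg = 2, 0.01, 0.02              # latent dim, learning rate, L2 penalty
P = 0.1 * rng.standard_normal((4, k))   # user factors
Q = 0.1 * rng.standard_normal((4, k))   # item factors

for epoch in range(2000):
    for u, i in observed:
        err = R[u, i] - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

rmse = np.sqrt(np.mean([(R[u, i] - P[u] @ Q[i]) ** 2 for u, i in observed]))
print(round(rmse, 3))
```

Each observed cell contributes one cheap gradient step, which is why SGD scales well to large, sparse rating data.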
30,667
comparison of SGD and ALS in collaborative filtering
Check out the comparison here: "Recommender: An Analysis of Collaborative Filtering Techniques" by Aberger. The conclusion seems to be that biased stochastic gradient descent is generally faster and more accurate than ALS, except in situations of sparse data, in which ALS performs better.
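For contrast with SGD, an ALS sweep alternates two closed-form ridge-regression solves: fix the item factors and solve for each user row, then fix the user factors and solve for each item column. A toy Python/numpy sketch (the rank and regularization values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
mask = R > 0                    # observed entries only
k, reg = 2, 0.1                 # latent dim, ridge penalty
P = 0.1 * rng.standard_normal((4, k))   # user factors
Q = 0.1 * rng.standard_normal((4, k))   # item factors

for sweep in range(20):
    # Fix Q, solve a small ridge regression for each user row...
    for u in range(4):
        idx = mask[u]
        A = Q[idx].T @ Q[idx] + reg * np.eye(k)
        P[u] = np.linalg.solve(A, Q[idx].T @ R[u, idx])
    # ...then fix P and do the same for each item column.
    for i in range(4):
        idx = mask[:, i]
        A = P[idx].T @ P[idx] + reg * np.eye(k)
        Q[i] = np.linalg.solve(A, P[idx].T @ R[idx, i])

rmse = np.sqrt(np.mean((R[mask] - (P @ Q.T)[mask]) ** 2))
print(round(rmse, 3))
```

Each sub-problem is an exact least-squares solve, so ALS typically converges in far fewer passes than SGD, at the cost of solving k-by-k systems.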
30,668
What is VectorSource and VCorpus in 'tm' (Text Mining) package in R
"Corpus" is a collection of text documents. VCorpus in tm refers to a "Volatile" corpus, which means that the corpus is stored in memory and is destroyed when the R object containing it is destroyed. Contrast this with PCorpus, or Permanent Corpus, which is stored outside of memory, e.g. in a database. In order to create a VCorpus using tm, we need to pass a "Source" object as a parameter to the VCorpus method. You can find the available sources with getSources():

> getSources()
[1] "DataframeSource" "DirSource"       "URISource"       "VectorSource"
[5] "XMLSource"       "ZipSource"

A Source abstracts input locations, like a directory, a URI, etc. VectorSource is only for character vectors. A simple example: say you have a character vector

input <- c('This is line one.', 'And this is the second one')

Create the source:

vecSource <- VectorSource(input)

Then create the corpus:

VCorpus(vecSource)

Hope this helps. You can read more here: https://cran.r-project.org/web/packages/tm/vignettes/tm.pdf
30,669
What is VectorSource and VCorpus in 'tm' (Text Mining) package in R
In practical terms, there is a big difference between Corpus and VCorpus. Corpus uses SimpleCorpus as a default, which means some features of VCorpus will not be available. One that is immediately evident is that SimpleCorpus will not allow you to keep dashes, underscores or other signs of punctuation; SimpleCorpus or Corpus automatically removes them, VCorpus does not. There are other limitations of Corpus that you will find in the help with ?SimpleCorpus. Here is an example:

# Read a text file from internet
filePath <- "http://www.sthda.com/sthda/RDoc/example-files/martin-luther-king-i-have-a-dream-speech.txt"
text <- readLines(filePath)
# load the data as a corpus
C.mlk <- Corpus(VectorSource(text))
C.mlk
V.mlk <- VCorpus(VectorSource(text))
V.mlk

The output will be:

<<SimpleCorpus>>
Metadata:  corpus specific: 1, document level (indexed): 0
Content:  documents: 46

<<VCorpus>>
Metadata:  corpus specific: 0, document level (indexed): 0
Content:  documents: 46

If you do an inspection of the objects:

# inspect the content of the document
inspect(C.mlk[1:2])
inspect(V.mlk[1:2])

You will notice that Corpus unpacks the text:

<<SimpleCorpus>>
Metadata:  corpus specific: 1, document level (indexed): 0
Content:  documents: 2

[1]
[2] And so even though we face the difficulties of today and tomorrow, I still have a dream. It is a dream deeply rooted in the American dream.

while VCorpus keeps it together within the object:

<<VCorpus>>
Metadata:  corpus specific: 0, document level (indexed): 0
Content:  documents: 2

[[1]]
<<PlainTextDocument>>
Metadata:  7
Content:  chars: 0

[[2]]
<<PlainTextDocument>>
Metadata:  7
Content:  chars: 139

Let's say now you do the matrix conversion for both:

dtm.C.mlk <- DocumentTermMatrix(C.mlk)
length(dtm.C.mlk$dimnames$Terms)  # 168

dtm.V.mlk <- DocumentTermMatrix(V.mlk)
length(dtm.V.mlk$dimnames$Terms)  # 187

Finally, let's see the content. This is from Corpus:

grep("[[:punct:]]", dtm.C.mlk$dimnames$Terms, value = TRUE)
# character(0)

And from VCorpus:

grep("[[:punct:]]", dtm.V.mlk$dimnames$Terms, value = TRUE)
 [1] "alabama,"       "almighty,"      "brotherhood."   "brothers."
 [5] "california."    "catholics,"     "character."     "children,"
 [9] "city,"          "colorado."      "creed:"         "day,"
[13] "day."           "died,"          "dream."         "equal."
[17] "exalted,"       "faith,"         "gentiles,"      "georgia,"
[21] "georgia."       "hamlet,"        "hampshire."     "happens,"
[25] "hope,"          "hope."          "injustice,"     "justice."
[29] "last!"          "liberty,"       "low,"           "meaning:"
[33] "men,"           "mississippi,"   "mississippi."   "mountainside,"
[37] "nation,"        "nullification," "oppression,"    "pennsylvania."
[41] "plain,"         "pride,"         "racists,"       "ring!"
[45] "ring,"          "ring."          "self-evident,"  "sing."
[49] "snow-capped"    "spiritual:"     "straight;"      "tennessee."
[53] "thee,"          "today!"         "together,"      "together."
[57] "tomorrow,"      "true."          "york."

Take a look at the words with punctuation. That is a huge difference, isn't it?
30,670
Is nominal, ordinal, & binary for quantitative data, qualitative data, or both?
All, I couldn't find one picture that put everything together, so I made one based on what I have been studying. Putting the scales of measurement on the same diagram with the data types was confusing me, so I tried to show that there is a distinction there. I appreciate your help and thoughts! Regards, Leaning
30,671
Is nominal, ordinal, & binary for quantitative data, qualitative data, or both?
These typologies can easily confuse as much as they explain. For example, binary data, as introduced in many introductory texts or courses, certainly sound qualitative: yes or no, survived or died, present or absent, male or female, whatever. But score the two possibilities 1 or 0 and everything is then perfectly quantitative. Such scoring is the basis of all sorts of analyses: the proportion female is just the average of several 0s for males and 1s for females. If I encounter 7 females and 3 males, I can just average 1, 1, 1, 1, 1, 1, 1, 0, 0, 0 to get the proportion 0.7. With binary responses, you then have a wide open road to logit and probit regression, and so forth, which focus on variation in the proportion, fraction or probability survived, or something similar, with whatever else controls or influences it. No one need get worried by the coding being arbitrary: the proportion male is just 1 minus the proportion female, and so forth. Almost the same is true when nominal or ordinal data are being considered, as any analyses of such data hinge on first counting how many fall into each category, and then you can be as quantitative as you like. Pie charts and bar charts, as first encountered in early years, show that, so it is puzzling how many accounts miss this in their explanations. Put another way, you can classify raw or original data as first reported and as appearing in, say, the cell of a spreadsheet or database. But their original form is not immutable. Imagine something stark like a death from puzzlement from reading too many superficial textbooks. That can be written on a certificate, but statistical analysis never stops there. There is an aggregation to counts (how many such deaths in an area and a time period), a reduction to rates (how many relative to the population at risk), and so on. So, how the data are first encoded rarely inhibits their use in other ways and their transformation to other forms. The etymology of "data" is revealing here: translating the original Latin literally, they are what is given to you, but there is no rule against converting them to many other forms.
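The 0/1 scoring described above is trivially reproduced in code (a throwaway Python illustration; the labels are hypothetical):

```python
# Coding a binary variable as 0/1 makes its mean equal the proportion of 1s.
sexes = ["F", "F", "F", "F", "F", "F", "F", "M", "M", "M"]
codes = [1 if s == "F" else 0 for s in sexes]
proportion_female = sum(codes) / len(codes)
print(proportion_female)  # 0.7
```

The "qualitative" labels become perfectly quantitative the moment they are counted, which is the whole point of the answer.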
30,672
Is nominal, ordinal, & binary for quantitative data, qualitative data, or both?
It depends what you mean by "quantitative data" and "qualitative data". I think the two sites you cite are using the terms differently. Suppose, for example, you ask people: Did you vote for Obama, Romney, someone else or no one in the presidential election? What sort of data is this? The variable is nominal: It's only names, there is no order to it. But many people would call it quantitative because the key thing is how many choose which candidate. That's as opposed to qualitative data which might be transcriptions of interviews about what they like best about Obama (or Romney or whoever). A better way to look at it is to clearly distinguish quantitative data from quantitative variables.
30,673
Is nominal, ordinal, & binary for quantitative data, qualitative data, or both?
Neither of these charts is correct. They are rather nonsensical, and you are right to be confused (aside from the contradiction). They seem to be conflating the ideas of fundamental variable type and variable selection to model a system (with a pdf). There are 3 fundamental variable types (excluding subtypes): Nominal (categorical/qualitative), Ordinal, and Continuous (numeric, quantitative). Ordinal has both a qualitative and a quantitative nature. Attribute is not really a basic type but is usually discussed in that way when choosing an appropriate control chart, where one is choosing the best pdf with which to model the system. This is sometimes called "attribute data", but its type is nominal (aka categorical, etc.). As Nick mentioned, we count nominals, so this can be confused with a numeric type, but it's not one.
30,674
Is nominal, ordinal, & binary for quantitative data, qualitative data, or both?
I found this question while searching for material on levels of measurement and related concepts. I think the charts in the question lack context. When we categorize, we define the rules for grouping objects according to our purpose. So what is the purpose? And are we talking about variables? We could categorize variables according to the levels of measurement; then we would have 4 scales (groups) with the following rules: nominal: attributes of a variable are differentiated only by name (category), and there is no order (rank, position); ordinal: attributes of a variable are differentiated by order (rank, position), but we do not know the relative degree of difference between them; interval: attributes of a variable are differentiated by the degree of difference between them, but there is no absolute zero, and the ratio between the attributes is unknown; ratio: attributes of a variable are differentiated by the degree of difference between them, there is an absolute zero, and we can find the ratio between the attributes. And this is only one approach, from Stanley Smith Stevens. There are several other typologies. Continuous and discrete variables are mathematical concepts where we have a range of real numbers: a continuous variable can take any value in this range (the number of permitted values is uncountable), while for a discrete variable the number of permitted values in the range is either finite or countably infinite.
30,675
Using Uniform Distribution to Generate Correlated Random Samples in R
Since the question is "how to use the Uniform distribution to generate correlated random numbers from different marginal distributions in $\mathbb{R}$" and not only normal random variates, the above answer does not produce simulations with the intended correlation for an arbitrary pair of marginal distributions in $\mathbb{R}$. The reason is that, for most cdfs $G_X$ and $G_Y$,$$\text{cor}(X,Y)\ne\text{cor}(G_X^{-1}(\Phi(X),G_Y^{-1}(\Phi(Y)),$$when$$(X,Y)\sim\text{N}_2(0,\Sigma),$$where $\Phi$ denotes the standard normal cdf. To wit, here is a counter-example with an Exp(1) and a Gamma(.2,1) as my pair of marginal distributions in $\mathbb{R}$. library(mvtnorm) #correlated normals with correlation 0.7 x=rmvnorm(1e4,mean=c(0,0),sigma=matrix(c(1,.7,.7,1),ncol=2),meth="chol") cor(x[,1],x[,2]) [1] 0.704503 y=pnorm(x) #correlated uniforms cor(y[,1],y[,2]) [1] 0.6860069 #correlated Exp(1) and Ga(.2,1) cor(-log(1-y[,1]),qgamma(y[,2],shape=.2)) [1] 0.5840085 Another obvious counter-example is when $G_X$ is the Cauchy cdf, in which case the correlation is not defined. To give a broader picture, here is an R code where both $G_X$ and $G_Y$ are arbitrary: etacor=function(rho=0,nsim=1e4,fx=qnorm,fy=qnorm){ #generate a bivariate correlated normal sample x1=rnorm(nsim);x2=rnorm(nsim) if (length(rho)==1){ y=pnorm(cbind(x1,rho*x1+sqrt((1-rho^2))*x2)) return(cor(fx(y[,1]),fy(y[,2]))) } coeur=rho rho2=sqrt(1-rho^2) for (t in 1:length(rho)){ y=pnorm(cbind(x1,rho[t]*x1+rho2[t]*x2)) coeur[t]=cor(fx(y[,1]),fy(y[,2]))} return(coeur) } Playing around with different cdfs led me to single out this special case of a $\chi^2_3$ distribution for $G_X$ and a log-Normal distribution for $G_Y$: rhos=seq(-1,1,by=.01) trancor=etacor(rho=rhos,fx=function(x){qchisq(x,df=3)},fy=qlnorm) plot(rhos,trancor,ty="l",ylim=c(-1,1)) abline(a=0,b=1,lty=2) which shows how far from the diagonal the correlation can be. 
A final warning Given two arbitrary distributions $G_X$ and $G_Y$, the range of possible values of $\text{cor}(X,Y)$ is not necessarily $(-1,1)$. The problem may thus have no solution.
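The same kind of counter-example can be reproduced outside R. Here is a hedged Python/NumPy+SciPy sketch of the same pipeline (a translation of the idea above; the seed and exact numbers are illustrative, not the R output):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000
rho = 0.7

# correlated standard normals with correlation 0.7
cov = np.array([[1.0, rho], [rho, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# push through the standard normal cdf to get correlated uniforms
u = stats.norm.cdf(x)

# inverse-cdf transform the uniforms to Exp(1) and Gamma(0.2, 1) marginals
e = stats.expon.ppf(u[:, 0])
g = stats.gamma.ppf(u[:, 1], a=0.2)

print(np.corrcoef(x[:, 0], x[:, 1])[0, 1])  # close to 0.70
print(np.corrcoef(e, g)[0, 1])              # noticeably below 0.70
```

As in the R run, the correlation of the transformed pair falls well short of the normal-scale correlation.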
30,676
Using Uniform Distribution to Generate Correlated Random Samples in R
I wrote the correlate package. People said it was promising (worthy of publication in the Journal of Statistical Software), but I never wrote the paper for it because I chose not to pursue an academic career. I believe the no-longer-maintained correlate package is still on CRAN. When you install it, you can do the following: require('correlate') a <- rnorm(100) b <- runif(100) newdata <- correlate(cbind(a,b),0.5) The result is that newdata will have a correlation of 0.5, without changing the univariate distributions of a and b (the same values are there; they just get moved around until the multivariate 0.5 correlation has been reached). I'll reply to questions here; sorry for the lack of documentation.
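I don't know the package's internals, but the move-values-around-until-the-target-correlation-is-reached idea can be sketched with a simple hill-climbing swap loop. This is a hypothetical illustration, not the correlate package's actual algorithm:

```python
import numpy as np

def correlate_by_swapping(a, b, target, iters=30_000, tol=0.005, seed=1):
    """Reorder the values of b -- preserving its marginal distribution
    exactly -- via random pairwise swaps, keeping a swap only when it
    moves cor(a, b) toward the target. (Illustrative algorithm only.)"""
    rng = np.random.default_rng(seed)
    b = b.copy()
    err = abs(np.corrcoef(a, b)[0, 1] - target)
    for _ in range(iters):
        if err < tol:
            break
        i, j = rng.integers(0, len(b), size=2)
        b[i], b[j] = b[j], b[i]
        new_err = abs(np.corrcoef(a, b)[0, 1] - target)
        if new_err < err:
            err = new_err              # keep the swap
        else:
            b[i], b[j] = b[j], b[i]    # undo it
    return b

rng = np.random.default_rng(0)
a = rng.normal(size=200)
b = rng.uniform(size=200)
b2 = correlate_by_swapping(a, b, 0.5)
print(np.corrcoef(a, b2)[0, 1])              # close to 0.5
print(np.allclose(np.sort(b), np.sort(b2)))  # True: same values, reordered
```

The key property is the second print: the sorted values of b are untouched, so the univariate distribution is preserved exactly while the joint correlation is steered toward the target.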
30,677
Using Uniform Distribution to Generate Correlated Random Samples in R
Generate two samples of correlated data from a standard normal distribution following a predetermined correlation. As an example, let's pick a correlation r = 0.7 and code a correlation matrix such as: (C <- matrix(c(1,0.7,0.7,1), nrow = 2)) [,1] [,2] [1,] 1.0 0.7 [2,] 0.7 1.0 We can use mvtnorm to generate these two samples as a bivariate random vector: set.seed(0) SN <- rmvnorm(mean = c(0,0), sig = C, n = 1e5) resulting in two vector components distributed as ~ $N(0, 1)$ and with cor(SN[,1],SN[,2]) = 0.6996197 ~ 0.7. Both components can be extracted as follows: X1 <- SN[,1]; X2 <- SN[,2] Here's the plot with the overlapping regression line: Use the probability integral transform here to obtain a bivariate random vector with marginal distributions ~ $U(0, 1)$ and essentially the same correlation: U <- pnorm(SN) - so we are feeding the SN vector into pnorm to find $\Phi(SN)$. In the process, we keep cor(U[,1], U[,2]) = 0.6816123 ~ 0.7. Again we can decompose the vector, U1 <- U[,1]; U2 <- U[,2], and produce a scatterplot with marginal distributions at the edges, clearly showing their uniform nature: Apply the inverse transform sampling method here to finally obtain the bivariate vector of equally correlated points belonging to whichever distribution family we set out to reproduce. From here we can just generate two vectors distributed normally and with equal or different variances. For instance: Y1 <- qnorm(U1, mean = 8, sd = 10) and Y2 <- qnorm(U2, mean = -5, sd = 4), which will maintain the desired correlation, cor(Y1,Y2) = 0.6996197 ~ 0.7. Or opt for different distributions. If the distributions chosen are very dissimilar, the correlation may not be as precise. For instance, let's have Z1 follow a $t$ distribution with 3 d.f. and Z2 an exponential with $\lambda$ = 1: Z1 <- qt(U1, df = 3) and Z2 <- qexp(U2, rate = 1). Then cor(Z1,Z2) [1] 0.5941299 < 0.7. 
Here are the respective histograms: Here is an example of code for the entire process and normal marginals: Cor_samples <- function(r, n, mean1, mean2, sd1, sd2){ C <- matrix(c(1,r,r,1), nrow = 2) require(mvtnorm) SN <- rmvnorm(mean = c(0,0), sig = C, n = n) U <- pnorm(SN) U1 <- U[,1] U2 <- U[,2] Y1 <<- qnorm(U1, mean = mean1,sd = sd1) Y2 <<- qnorm(U2, mean = mean2,sd = sd2) sample_measures <<- as.data.frame(c(mean(Y1), mean(Y2), sd(Y1), sd(Y2), cor(Y1,Y2)), names<-c("mean Y1", "mean Y2", "SD Y1", "SD Y2", "Cor(Y1,Y2)")) sample_measures } For comparison, I've put together a function based on the Cholesky decomposition: Cholesky_samples <- function(r, n, mean1, mean2, sd1, sd2){ C <- matrix(c(1,r,r,1), nrow = 2) L <- chol(C) X1 <- rnorm(n) X2 <- rnorm(n) X <- rbind(X1,X2) Y <- t(L)%*%X Y1 <- Y[1,] Y2 <- Y[2,] N_1 <<- Y[1,] * sd1 + mean1 N_2 <<- Y[2,] * sd2 + mean2 sample_measures <<- as.data.frame(c(mean(N_1), mean(N_2), sd(N_1), sd(N_2), cor(N_1, N_2)), names<-c("mean N_1", "mean N_2", "SD N_1", "SD N_2","cor(N_1,N_2)")) sample_measures } Trying both methods to generate correlated (say, $r=0.7$) samples distributed ~ $N(97,23)$ and $N(32,8)$ we get, setting set.seed(99): Using the Uniform: Cor_samples(0.7, 1000, 97, 32, 23, 8) c(mean(Y1), mean(Y2), sd(Y1), sd(Y2), cor(Y1, Y2)) mean Y1 96.5298821 mean Y2 32.1548306 SD Y1 22.8669448 SD Y2 8.1150780 cor(Y1,Y2) 0.7061308 and Using the Cholesky: Cholesky_samples(0.7, 1000, 97, 32, 23, 8) c(mean(N_1), mean(N_2), sd(N_1), sd(N_2), cor(N_1, N_2)) mean N_1 96.4457504 mean N_2 31.9979675 SD N_1 23.5255419 SD N_2 8.1459100 cor(N_1,N_2) 0.7282176
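The same normal → uniform → target-marginal pipeline can be sketched in Python/NumPy+SciPy for readers outside R (a translation of the steps above under the same assumptions, not the original code):

```python
import numpy as np
from scipy import stats

def cor_samples(r, n, mean1, mean2, sd1, sd2, seed=99):
    rng = np.random.default_rng(seed)
    # step 1: correlated standard normals with correlation r
    cov = np.array([[1.0, r], [r, 1.0]])
    sn = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    # step 2: probability integral transform -> correlated uniforms
    u = stats.norm.cdf(sn)
    # step 3: inverse transform to the target normal marginals
    y1 = stats.norm.ppf(u[:, 0], loc=mean1, scale=sd1)
    y2 = stats.norm.ppf(u[:, 1], loc=mean2, scale=sd2)
    return y1, y2

y1, y2 = cor_samples(0.7, 100_000, 97, 32, 23, 8)
print(np.corrcoef(y1, y2)[0, 1])  # close to 0.7
print(y1.mean(), y2.std())        # close to 97 and 8
```

Because steps 2 and 3 are monotone and, for normal targets, jointly affine in the underlying normals, the correlation survives the round trip; with dissimilar target marginals it would not, as in the t/exponential example above.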
30,678
When a CFA model has a "covariance matrix was not positive definite" problem, is it due to the dataset or the model?
The covariance matrix of the data is always non-negative definite; there is no doubt about that. However, the model-implied covariance matrix may not be when some parameters take values outside their natural ranges. In turn, this may happen for a number of reasons. (1) Your 4-factor model may be misspecified, i.e., it does not fit the data well. (2) Your model is OK; it's just that the sample you are dealing with favors high values of the correlation parameter. To distinguish between 1 and 2, you need to find a way to test whether the correlation in question is significantly greater than 1, which is not a trivial endeavor (doi: 10.1177/0049124112442138): few packages computed the standard errors properly at the time that paper was written, and I don't know if the current version of lavaan does. lavaan computes numeric derivatives (as does any other software) by stepping each parameter by $\pm$ a small amount, and while the current value of the parameter may be kosher, the step may throw it over the limit and produce a matrix that is not positive definite. (Analytic derivatives are available for the multivariate normal case, but binary/ordinal variables require numeric integration over the distributions of the latent variables and do not lend themselves to analytic differentiation, so this depends on your model.) I think you can argue that, due to lack of convergence, your 4-factor model does not work well and is not a contender in your model selection.
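The "correlation greater than 1" situation can be made concrete with a small numeric check (a generic illustration, not lavaan's internals): a symmetric matrix with an off-diagonal entry above 1 on a unit diagonal is not positive definite, which is exactly what makes downstream computations fail.

```python
import numpy as np

# a model-implied "correlation" matrix where the estimated
# factor correlation has drifted to 1.1
m = np.array([[1.0, 1.1],
              [1.1, 1.0]])

# eigenvalues of [[1, c], [c, 1]] are 1 - c and 1 + c,
# so here one of them is negative: -0.1 and 2.1
eigvals = np.linalg.eigvalsh(m)
print(eigvals)

# Cholesky factorization requires positive definiteness, so it fails
try:
    np.linalg.cholesky(m)
except np.linalg.LinAlgError as e:
    print("Cholesky failed:", e)
```

This is the numerical fingerprint behind the "covariance matrix was not positive definite" warning: any parameter vector implying such a matrix breaks routines that assume a valid covariance.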
30,679
When a CFA model has a "covariance matrix was not positive definite" problem, is it due to the dataset or the model?
The reasons are summarised well by @StasK, and I will show some ways to fix these errors. The problem may come from your model specification. Using the function summary, you may find that some correlations between the latent variables you defined are out of bounds, i.e., $|\rho| > 1$ or $|\rho|$ near 1. If that is the case, one way to solve it is to collapse the problematic latent variables, together with their manifest variables, into a single latent variable. For example, given latent1 =~ v1 + v2 latent2 =~ v3 + v4 latent3 =~ v5 + v6 where latent1 is good and the other two are not, change it to latent1 =~ v1 + v2 latent2and3 =~ v3 + v4 + v5 + v6 and see whether the result improves. The idea is from Erin Buchanan; you can check her SEM tutorial online. I drop this reference because of the sexual assault allegations related to an online teaching organization. If you are interested in the reference, contact me privately and I will share some technical but not yet organized notes.
30,680
Why including some observations twice changes the coefficients of logistic regression?
You say that you have the same intuition (that the coefficients shouldn't change) for a linear regression, so I'm going to answer your question in that setting, because linear regression is a bit easier to visualize. I think what is causing your mistaken intuition here is that you're imagining that duplicating data points doesn't change the scatterplot. But it does! It might not be easily visible, depending on the kind of scatterplot you use, but that's why we have things like jitter. The phenomenon of different scatterplots looking the same because the data points sit directly on top of each other is a problem (called "overplotting"), because it makes fundamentally different data sets look the same. The regression algorithm doesn't "see" the scatterplot itself; it sees the underlying data points. And when you replicate some of those data points, you make them more strongly represented in the underlying data, so the regression algorithm treats them as more important to "get right." (Basically, you can think of each occurrence of a data point as pulling the regression line towards it with the same force--so if you have two data points at a given spot, they will pull the line towards them twice as hard.) I'll give a visual example, using jitter to show what a regression actually "sees." First, here's a dataset consisting of one ascending and one descending sequence: It's symmetrical, so the line of best fit would just be flat through 5.5. But what happens if I replicate the descending sequence 10 times, and use jitter to make the scatterplot look the way it looks to the underlying algorithm? Now it looks a lot more like the ascending sequence is just outliers, right? And the trend line is clearly decreasing. (It works exactly the same way with logistic regression; it's just harder to make it clear from a plot.)
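The pull-twice-as-hard intuition can be checked numerically. A sketch of the ascending/descending example with an ordinary least-squares fit (the slopes below follow exactly from the construction):

```python
import numpy as np

x_base = np.arange(1, 11)
asc = x_base.astype(float)          # ascending sequence: y = x
desc = (11 - x_base).astype(float)  # descending sequence: y = 11 - x

# symmetric data set: one ascending + one descending copy
x = np.concatenate([x_base, x_base])
y = np.concatenate([asc, desc])
slope_sym = np.polyfit(x, y, 1)[0]
print(slope_sym)   # 0: the two trends cancel exactly

# now replicate the descending sequence 10 times
x10 = np.concatenate([x_base] * 11)
y10 = np.concatenate([asc] + [desc] * 10)
slope_rep = np.polyfit(x10, y10, 1)[0]
print(slope_rep)   # -9/11: the duplicated points pull the line down
```

At each x the mean of y in the replicated data is (x + 10(11 - x))/11 = 10 - (9/11)x, so the fitted slope is exactly -9/11; the eleven copies of a point really do pull the line eleven times as hard as one copy.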
30,681
Interpreting logistic regression coefficients with a regularization term
The coefficients that are returned by default from a logistic regression fit are not odds ratios. They represent the change in the log odds of 'success' associated with a one-unit change in their respective variable, when all else is held equal. If you exponentiate a coefficient, then you can interpret the result as an odds ratio (of course, this is not true of the intercept). More on this can be found in my answer here: Interpretation of simple predictions to odds ratios in logistic regression. Adding a penalty to the model fit will (potentially) change the fitted value of the estimated coefficients, but it does not change the interpretation of the coefficients in the sense discussed in your question / above.* * (I wonder if confusion about this statement is the origin of the recent downvote.) To be clearer: The fitted coefficient on $X_1$, $\hat\beta_1$, represents the change in the log odds of success associated with a 1-unit change in $X_1$ both if there is no penalty term used in fitting the model and if a penalty term is used to fit the model. In neither case is it the odds ratio. However, $\exp(\hat\beta_1)$ is the odds ratio associated with a 1-unit change in $X_1$, again irrespective of whether a penalty term was used to fit the model. A model fitted with a penalty term can be interpreted within a Bayesian framework, but doesn't necessarily have to be. Moreover, even if it is, $\hat\beta_1$ still represents the change in the log odds of success associated with a 1-unit change in $X_1$, not an odds ratio.
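A small self-contained check of both claims (plain gradient descent rather than any particular package, so treat the implementation details as illustrative): the fitted coefficient is a log odds ratio in both the unpenalized and the L2-penalized fit, and the penalty only shrinks its value.

```python
import numpy as np

def fit_logistic(x, y, lam=0.0, lr=0.1, iters=5000):
    """Logistic regression by full-batch gradient descent; lam is an L2
    penalty on the slope (the intercept is left unpenalized)."""
    beta = np.zeros(2)
    X = np.column_stack([np.ones_like(x), x])
    n = len(y)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (p - y) / n
        grad[1] += lam * beta[1] / n   # penalty contribution
        beta -= lr * grad
    return beta

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x)))  # true log odds ratio: 1.0
y = rng.binomial(1, p)

b_plain = fit_logistic(x, y)
b_pen = fit_logistic(x, y, lam=50.0)
print(np.exp(b_plain[1]))               # odds ratio for +1 in x, near e^1
print(abs(b_pen[1]) < abs(b_plain[1]))  # penalty shrinks the coefficient
```

In both fits, $\hat\beta_1$ is read as the change in log odds per unit of x, and $\exp(\hat\beta_1)$ as the odds ratio; the penalty changed the value, not the interpretation.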
30,682
Interpreting logistic regression coefficients with a regularization term
Regularized linear regression and regularized logistic regression can be interpreted nicely from a Bayesian point of view. The regularization parameter corresponds to a choice of prior distribution on the weights: for example, an L2 penalty corresponds to a normal prior centered at zero, with variance inversely proportional to the regularization parameter. Then via your training data, these distributions are updated to finally give you the posterior distributions on the weights. So, for example, a larger regularization parameter means that, as a prior, we think the weights should be closer to zero, hence with this setup it's less likely that the posterior distributions will place much mass far away from zero, which agrees with the intuition of what regularization is "supposed to do". For most implementations of regularized regression, the final output for the weights is the mode of the posterior distribution (the MAP estimate), which for a Gaussian posterior coincides with its mean. By the way, unregularized regression can basically be interpreted in the same way: it's the limit as the regularization parameter goes to zero.
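A minimal numerical sketch of this correspondence for ridge (L2-penalized) linear regression, assuming unit noise variance; the data and the regularization value are made up. The penalized minimizer coincides with the mode (= mean) of the Gaussian posterior obtained from a zero-mean normal prior whose precision equals the regularization parameter:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)   # true slope 2, unit noise variance
lam = 5.0                          # regularization parameter (hypothetical)

sxx, sxy = float(x @ x), float(x @ y)

# penalized (ridge) estimate: argmin_w  sum (y - w*x)^2 + lam * w^2
w_ridge = sxy / (sxx + lam)

# Bayesian view: unit noise variance, prior w ~ N(0, 1/lam).
# Up to constants the negative log posterior is half the penalized loss,
# so its minimizer (posterior mode = mean) matches the ridge estimate.
grid = np.linspace(0.0, 4.0, 400001)
neg_log_post = 0.5 * grid**2 * (sxx + lam) - grid * sxy
w_map = grid[np.argmin(neg_log_post)]

assert abs(w_ridge - w_map) < 1e-4   # agree up to grid resolution
```

Sending `lam` to zero recovers the ordinary least-squares estimate, matching the limiting (flat-prior) remark above.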
30,683
Does using a kernel function make the data linearly separable? If so, why using soft-margin SVM?
Does using a kernel function make the data linearly separable? In some cases, but not others. For example, the linear kernel induces a feature space that's equivalent to the original input space (up to dot product preserving transformations like rotation and reflection). If the data aren't linearly separable in input space, they won't be in feature space either. Polynomial kernels with degree >1 map the data nonlinearly into a higher dimensional feature space. Data that aren't linearly separable in input space may be linearly separable in feature space (depending on the particular data and kernel), but may not be in other cases. RBF kernels map the data nonlinearly into an infinite-dimensional feature space. If the kernel bandwidth is chosen small enough, the data are always linearly separable in feature space. When linear separability is possible, why use a soft-margin SVM? The input features may not contain enough information about the class labels to predict them perfectly. In these cases, perfectly separating the training data would be overfitting, and would hurt generalization performance. Consider the following example, where points from one class are drawn from an isotropic Gaussian distribution, and points from the other are drawn from a surrounding, ring-shaped distribution. The optimal decision boundary is a circle through the low-density region between these distributions. The data aren't truly separable because the distributions overlap, and points from each class end up on the wrong side of the optimal decision boundary. As mentioned above, an RBF kernel with small bandwidth allows linear separability of the training data in feature space. A hard-margin SVM using this kernel achieves perfect accuracy on the training set (background color indicates predicted class, point color indicates actual class): the hard-margin SVM maximizes the margin, subject to the constraint that no training point is misclassified. The RBF kernel ensures that it's possible to meet this constraint. However, the resulting decision boundary is completely overfit, and will not generalize well to future data. Instead, we can use a soft-margin SVM, which allows some margin violations and misclassifications in exchange for a bigger margin (the tradeoff is controlled by a hyperparameter). The hope is that a bigger margin will increase generalization performance. Here's the output for a soft-margin SVM with the same RBF kernel: despite more errors on the training set, the decision boundary is closer to the true boundary, and the soft-margin SVM will generalize better. Further improvements could be made by tweaking the kernel.
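As a rough stand-in for the figures, here is a hedged sketch of the "small bandwidth makes the training data separable in feature space" point: XOR-labeled points are not linearly separable in input space, but a kernel perceptron (a linear separator in the RBF feature space) fits them perfectly with a narrow RBF kernel. The data and the $\gamma$ value are made up:

```python
import numpy as np

# four XOR-labeled points: not linearly separable in input space
X = np.array([[0., 0.], [1., 1.], [0., 1.], [1., 0.]])
y = np.array([1., 1., -1., -1.])

def rbf(A, B, gamma=10.0):
    """RBF (Gaussian) kernel matrix; small bandwidth = large gamma."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# kernel perceptron: learns a hyperplane in the RBF feature space
K = rbf(X, X)
alpha = np.zeros(len(y))
for _ in range(100):                 # epochs (converges quickly here)
    updated = False
    for i in range(len(y)):
        if y[i] * (alpha * y) @ K[:, i] <= 0:
            alpha[i] += 1.0
            updated = True
    if not updated:
        break                        # every training point is separated

pred = np.sign((alpha * y) @ K)
assert np.array_equal(pred, y)       # linearly separable in feature space
```

As in the answer, perfect training accuracy here says nothing about generalization; a soft margin trades some of it away for a wider margin.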
30,684
Does using a kernel function make the data linearly separable? If so, why using soft-margin SVM?
You are conflating two different things. The classification rule used by an SVM is always linear (i.e. a hyperplane) in some feature space induced by a kernel. Hard-margin SVM, which is typically the first example you encounter when learning SVM, requires linearly separable data in feature space, or there is no solution to the training problem. Typically, this first example works in input space, but the same can be done in any feature space of your choosing. But my question is why to use a soft-margin if the data is going to be linearly separable anyway in the high space? Soft-margin SVM does not require the data to be separable, not even in feature space. This is the key difference between hard and soft margin. Soft-margin SVM allows instances to fall within the margin and even on the wrong side of the separating hyperplane, but penalizes these instances using the hinge loss. Or does that mean that even after mapping with the kernel it doesn't necessarily mean that it will become linearly separable? The use of a nonlinear kernel never guarantees that a data set becomes linearly separable in the induced feature space. This is not necessary, either. The reason we use kernels is to map the data from the input space into a higher-dimensional space, in which a (higher-dimensional) hyperplane separates the data better. That is all. If the data is perfectly separable in feature space, your training accuracy is $1$ by definition. This is rare even when using kernels. You can find kernels that make the data linearly separable, but this usually requires very complex kernels, which lead to results that generalize poorly. An example would be an RBF kernel with a very high $\gamma$, which basically yields the identity matrix as the kernel matrix (this is perfectly separable but will generalize badly on unseen data).
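The last remark can be checked numerically: as $\gamma$ grows, the RBF kernel matrix of distinct points approaches the identity matrix, so every training point becomes trivially "separable" from every other. A hedged sketch; the points and $\gamma$ values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))          # ten distinct random points

def rbf_gram(X, gamma):
    """Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# moderate gamma: meaningful off-diagonal similarities
K_mod = rbf_gram(X, gamma=0.1)
# extreme gamma: K -> identity, perfect (but useless) separation
K_big = rbf_gram(X, gamma=1e8)

assert np.allclose(K_big, np.eye(10), atol=1e-8)
assert not np.allclose(K_mod, np.eye(10), atol=1e-8)
```

A near-identity kernel matrix effectively memorizes the training set, which is exactly the overfitting scenario described above.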
30,685
Does using a kernel function make the data linearly separable? If so, why using soft-margin SVM?
Yes, the data become linearly separable in feature space if, for example, you use an RBF kernel (which maps to an infinite-dimensional space) with a small enough bandwidth. When people talk about the soft margin, it is different from what you are thinking. The SVM by design requires that the functional margin [1] for the two classes be at least 1. This requirement, however, need not always be satisfiable, since it is stricter than just being linearly separable. Therefore, you introduce slack variables to accommodate points that don't satisfy the functional margin requirement. [1] https://stackoverflow.com/questions/14658452/how-to-understand-the-functional-margin-in-svm Some more material to read: 1] Given a set of points in two dimensional space, how can one design decision function for SVM? 2] How to understand effect of RBF SVM
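The functional-margin requirement can be made concrete with the hinge loss, which measures exactly the slack a point needs: it is zero only when $y\,f(x) \ge 1$, so even a correctly classified point inside the margin incurs slack. The values below are purely illustrative:

```python
def hinge(y, fx):
    """Slack needed to satisfy the functional-margin constraint y*f(x) >= 1."""
    return max(0.0, 1.0 - y * fx)

assert hinge(+1, 2.0) == 0.0   # margin 2: constraint satisfied, no slack
assert hinge(+1, 0.4) == 0.6   # correct side, but margin < 1: some slack
assert hinge(+1, -0.5) == 1.5  # misclassified: slack greater than 1
```

This is why the margin-1 requirement is stricter than mere separability: a separating hyperplane can still leave points with positive slack.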
30,686
Does using a kernel function make the data linearly separable? If so, why using soft-margin SVM?
Although it is possible to select the Gaussian kernel and achieve separability in the feature space, this might not be the best strategy for minimizing the expected loss (i.e. the true risk, as opposed to the empirical risk). Consider an example of labeled points in $\mathbb{R}^3$ where negative points lie in the ($\ell_2$) unit ball and positive points lie outside the ball of radius 2. However, suppose there are also a few outliers in the training sample: a few positive points lie in the "negative" region inside the unit ball. Now, if we use a polynomial kernel that includes the $\ell_2$ norm of the data points as a new dimension, then we can almost linearly separate the data in the feature space. There are a few outliers of course, so we will still have some training error using this kernel. However, if the outliers correspond to fundamental noise in our problem, then it might be the case that the Bayes-optimal decision rule is in fact the hypothesis that classifies points as positive if their norm is at least two and negative otherwise. Indeed, if the outliers arise because for some points $x$ the label is not deterministic, in the sense that $P(Y = 1 \mid X = x) \in (0, 1)$ rather than being $0$ or $1$, then the noise is fundamental to the problem and we should avoid fitting to it. We could instead go with a Gaussian kernel which makes the data linearly separable, but this would amount to overfitting and would hurt the true risk of our hypothesis. This example shows that there are cases when one does want to use a kernel, the data may still not be linearly separable in the new feature space, and so we still need the soft-margin SVM formulation.
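A quick numeric check of the geometry in this example (ignoring the outliers): once $\lVert x\rVert^2$ is added as a feature, a single threshold separates the two regions linearly. This is only a sketch; sample sizes and radii are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

def on_shell(n, radii):
    """n random directions in R^3, each scaled to the given radius."""
    d = rng.normal(size=(n, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    return d * radii[:, None]

neg = on_shell(200, rng.uniform(0.0, 0.9, size=200))   # inside unit ball
pos = on_shell(200, rng.uniform(2.0, 4.0, size=200))   # outside radius 2

# the extra feature ||x||^2 turns the spheres into a 1-D threshold problem
phi_neg = (neg ** 2).sum(axis=1)
phi_pos = (pos ** 2).sum(axis=1)

# any threshold strictly between 1 and 4 works; 1.5^2 = 2.25 for instance
assert phi_neg.max() < 2.25 < phi_pos.min()
```

With label-noise outliers mixed in, the same threshold would misclassify them, which is exactly when the soft-margin formulation is needed.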
30,687
Maximum number of independent variables in Logistic Regression
For the typical low signal:noise ratio we see in most problems, a common rule of thumb is that you need about 15 times as many events and 15 times as many non-events as there are parameters that you entertain putting into the model. The rationale for that "rule" is that it results in a model performance metric that is likely to be as good or as bad in new data as it appears to be in the training data. But you need 96 observations just to estimate the intercept so that the overall predicted risk is within a $\pm 0.1$ margin of error of the true risk with 0.95 confidence.
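The 96-observation figure is the standard worst-case sample size for estimating a proportion to within $\pm 0.1$ at 0.95 confidence, taking $p = 0.5$ so that $p(1-p)$ is maximized; a quick check:

```python
# worst-case sample size for estimating a proportion within +/- d
z = 1.959964   # 97.5th percentile of the standard normal
d = 0.1        # desired margin of error
p = 0.5        # worst case: p*(1-p) is maximized here

n = z**2 * p * (1 - p) / d**2
assert round(n) == 96
```

The 15-events-per-parameter rule then scales this kind of floor up with model complexity.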
30,688
Maximum number of independent variables in Logistic Regression
Having too many parameters compared to observations may lead to overfitting. Various adjustments or measures can be used to correct for this. AIC, for example, accounts for both the number of variables and the number of observations in your dataset, and is probably most often used. AIC itself doesn't adjust the model, but serves as a tool to select the best model if you construct multiple ones. It's basically a tradeoff between residual error and model complexity. You can furthermore take a look at other "information criteria" or more advanced techniques like cross-validation or penalized logistic regression (the "penalized" package in R).
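A tiny illustration of the AIC tradeoff, using made-up log-likelihood values: with $\mathrm{AIC} = 2k - 2\log L$, an extra parameter must improve the log-likelihood by more than 1 before AIC prefers the bigger model:

```python
def aic(log_lik, k):
    """Akaike Information Criterion: 2*k - 2*logLik (lower is better)."""
    return 2 * k - 2 * log_lik

# hypothetical fits: model B adds one parameter but barely fits better
aic_a = aic(log_lik=-100.0, k=3)   # 206.0
aic_b = aic(log_lik=-99.5, k=4)    # 207.0
assert aic_a < aic_b               # AIC prefers the simpler model

# if the extra parameter improves fit enough, the preference flips
aic_c = aic(log_lik=-97.0, k=4)    # 202.0
assert aic_c < aic_a
```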
30,689
Maximum number of independent variables in Logistic Regression
If the number of independent variables is not very large, you can just do “all subsets” regression, in which all possible models are fit. The model with the highest F statistic or proportion of explained variation (PVE) (note: the concept was established for linear regression but can be applied to logistic regression as well) is selected. But this often means we will choose the full model. So we need to penalize models with many variables that don’t fit much better than models with fewer variables, using the Akaike Information Criterion (AIC). Lower AIC values usually indicate a better model, which we will finally select. If the number of independent variables is large, the strategy is: select the best model with only one variable, then select another variable so that the best model with two variables is obtained, then select the 3rd variable, and so on. The selection stops once AIC no longer decreases. Usually the complexity is around O(n^2), rather than the O(2^n) of all-subsets regression.
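The forward-selection loop described above can be sketched as follows. This is a hedged illustration on synthetic data, using a Gaussian AIC for OLS fits; all variable names and values are made up:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 6
X = rng.normal(size=(n, p))
# only columns 0 and 1 carry signal; the rest are noise
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.5, size=n)

def aic_of(cols):
    """Gaussian AIC of an OLS fit on the given columns (plus intercept)."""
    A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    rss = float(resid @ resid)
    k = len(cols) + 2                    # coefficients + error variance
    return n * np.log(rss / n) + 2 * k

selected, best_aic = [], aic_of([])
while True:
    candidates = [j for j in range(p) if j not in selected]
    if not candidates:
        break
    scores = {j: aic_of(selected + [j]) for j in candidates}
    j_best = min(scores, key=scores.get)
    if scores[j_best] >= best_aic:
        break                            # stop once AIC stops improving
    selected.append(j_best)
    best_aic = scores[j_best]

assert selected[0] == 0 and 1 in selected   # signal variables are found
```

Each round scans the remaining candidates once, giving the roughly O(n^2) model fits mentioned above instead of the O(2^n) of all subsets.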
30,690
How to estimate vector autoregression & impulse response function with panel data
https://www.researchgate.net/publication/312165764_Panel_Vector_Autoregression_in_R_The_panelvar_Package Here you will find the R-package and the link to the paper.
30,691
How to estimate vector autoregression & impulse response function with panel data
Common panel data vector autoregression models include the Arellano-Bond estimator (commonly referred to as "difference" GMM), the Blundell-Bond estimator (commonly referred to as "system" GMM) and the Arellano-Bover estimator. All use GMM, and begin with a model: $$y_{it}=\sum_{l=1}^p\rho_ly_{i,t-l}+x_{i,t}'\beta+\alpha_i+\epsilon_{it} $$ Arellano and Bond take the first difference of $y_{i,t}$ to remove the fixed effect $\alpha_i$, and then use lagged levels as instruments: $$ E[\Delta \epsilon_{it}y_{i,t-2}]=0$$ This is basically the same as the procedure detailed in this Holtz-Eakin Newey Rosen article, which also provides some instructions for implementation. Blundell and Bond use lagged first differences as instruments for levels: $$ E[\epsilon_{it}\Delta y_{i,t-1}]=0$$ The name "system" GMM usually means a mix of these instruments with those from Arellano-Bond. Arellano and Bover use the system GMM and also explore forward demeaning of variables, which to my knowledge is not directly implemented for R, but you can check out their paper for details. In R, both Arellano-Bond and Blundell-Bond are implemented in the plm package, under the command pgmm. The documentation I've linked to provides instructions and examples for exactly how to implement them.
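A minimal simulation of the idea behind these estimators (an Anderson-Hsiao-style version, the simplest instrument set underlying Arellano-Bond; all parameter values are made up): OLS on the first-differenced equation is inconsistent because $\Delta y_{i,t-1}$ correlates with $\Delta\epsilon_{it}$, while instrumenting with the level $y_{i,t-2}$ recovers $\rho$:

```python
import numpy as np

rng = np.random.default_rng(4)
N, T, rho = 20000, 6, 0.5
alpha = rng.normal(size=N)                  # individual fixed effects
y = np.empty((N, T))
y[:, 0] = alpha + rng.normal(size=N)
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + alpha + rng.normal(size=N)

dy   = y[:, 2:] - y[:, 1:-1]    # dy_t      for t = 2..T-1 (alpha cancels)
dy_l = y[:, 1:-1] - y[:, :-2]   # dy_{t-1}
z    = y[:, :-2]                # instrument: the level y_{t-2}

# OLS on differences: dy_{t-1} contains e_{t-1}, which enters de_t
rho_ols = float((dy_l * dy).sum() / (dy_l * dy_l).sum())

# simple IV: instrument dy_{t-1} with y_{t-2}
rho_iv = float((z * dy).sum() / (z * dy_l).sum())

assert abs(rho_iv - rho) < 0.1     # IV close to the true rho
assert abs(rho_ols - rho) > 0.1    # difference-OLS noticeably biased
```

The actual Arellano-Bond estimator stacks many such moment conditions (all available lags) into a GMM criterion; this sketch uses only the single instrument to show why the lagged level is valid.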
30,692
How to estimate vector autoregression & impulse response function with panel data
You can use a system of seemingly unrelated regression equations (using the package systemfit) after you convert the dataset with pdata.frame (plm package). You need to derive the impulse response functions by yourself. If you follow Hamilton's or Greene's textbook, it should not be too complicated.
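For the IRF derivation, the non-orthogonalized impulse responses of a VAR(1) are simply powers of the coefficient matrix, $\Phi_h = A^h$. A sketch with a made-up stable $A$ (an estimated system would supply $A$, and orthogonalized IRFs would additionally use a Cholesky factor of the error covariance, as in Hamilton):

```python
import numpy as np

# hypothetical estimated VAR(1) coefficient matrix (stable: |eig| < 1)
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])
assert np.all(np.abs(np.linalg.eigvals(A)) < 1)

def irf(A, h):
    """Response matrix h periods after a unit shock: column j of A^h
    is the response of the system to a shock in variable j."""
    return np.linalg.matrix_power(A, h)

assert np.allclose(irf(A, 0), np.eye(2))   # impact period: the shock itself
assert np.allclose(irf(A, 1), A)           # one period later
assert np.abs(irf(A, 50)).max() < 1e-10    # stable VAR: responses die out
```

For a VAR(p), the same recursion applies after rewriting the system in companion (VAR(1)) form.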
30,693
How to estimate vector autoregression & impulse response function with panel data
I just found the paper "Panel Vector Autoregression in R: The Panelvar Package" (2017) by Michael Sigmund, Robert Ferstl and Daniel Unterkofler, which is essentially a description of the methods implemented in R. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2896087 Additionally, there's another question here: Panel vector autoregression models in R? The authors are now in the process of publishing the code on CRAN, but already provide binary packages on ResearchGate. https://www.researchgate.net/project/Panel-Vector-Autoregression-Models-with-different-GMM-estimators The binary panelvar package can be downloaded directly; I think the sources should be available on CRAN in the near future. https://www.researchgate.net/publication/322526372_panelvar_044
How to estimate vector autoregression & impulse response function with panel data
I would suggest using the vars package in R. It has functions for estimating a VAR model, computing impulse response functions from the fitted model, and investigating Granger causality, among other things. I suggest you look into the following functions:

> VARselect()
> VAR()
> irf()
> causality()
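Under the hood, VAR() fits one OLS regression per variable on the lags of all variables. As a rough base-R sketch of that idea (simulated bivariate data with assumed coefficients 0.6, 0.1 and 0.3; not the vars package itself):

```r
# Simulate a bivariate VAR(1) and recover its coefficients by
# per-equation OLS, which is what vars::VAR() does internally.
set.seed(42)
e <- matrix(rnorm(400), ncol = 2)
y <- matrix(0, 200, 2)
for (t in 2:200) {
  y[t, ] <- c(0.6 * y[t - 1, 1] + 0.1 * y[t - 1, 2],   # equation for y1
              0.3 * y[t - 1, 2]) + e[t, ]              # equation for y2
}
Z <- embed(y, 2)   # columns: y1_t, y2_t, y1_{t-1}, y2_{t-1}
fit1 <- lm(Z[, 1] ~ Z[, 3] + Z[, 4])   # OLS for the first equation
round(coef(fit1), 2)   # lag coefficients near the true (0.6, 0.1)
```

VARselect() then just compares information criteria across lag lengths of such fits, and irf() propagates shocks through the estimated coefficient matrices.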
How to estimate vector autoregression & impulse response function with panel data
Hi @Roman and everyone else. I am also into panel VAR models, and in my search I came across the Stata-based user-written commands pvar and xtvar. I have used pvar already and it seems quite okay. You can read more about it here, and find a step-by-step application
Books on statistical ecology?
Some good books that I would personally recommend are:

Hilborn & Mangel (1997) The Ecological Detective: confronting models with data. Princeton University Press. This one is more about statistics with ecological examples, but there is nothing wrong about that. This would give a good flavour of how statistics could be used in ecology. Note the date; it won't cover some of the more recent developments or applications.

M. Henry H. Stevens (2009) A Primer of Ecology with R. Springer. Perhaps too basic and not particularly on anything spatial, but it covers the various topics that we'd teach ecologists and illustrates the ecological theory and models with R code.

B. M. Bolker (2008) Ecological Models and Data in R. Princeton University Press. I love this book. It covers topics you will be familiar with given your stats background but applied in an ecological context. Emphasis on fitting models and optimising them from basic principles using R code.

James S. Clark (2007) Models for Ecological Data: an introduction. Princeton University Press. Don't be put off by the "introduction" in the title; this is anything but an introduction. Broad coverage, lots of theory, emphasis on fitting models by hand employing Bayesian approaches (the R lab manual companion discusses writing your own Gibbs samplers for example!)

Not a book, but I'll add this as you specifically mention your interest in Gaussian Processes. Take a look at Integrated Nested Laplace Approximation (INLA), which has a website. It is an R package and has lots of examples to play with. If you look at their FAQ you'll find several papers that describe the approach, particularly:

H. Rue, S. Martino, and N. Chopin. Approximate Bayesian inference for latent Gaussian models using integrated nested Laplace approximations (with discussion). Journal of the Royal Statistical Society, Series B, 71(2):319–392, 2009. (PDF available here.)
Books on statistical ecology?
Jack Weiss (may he rest in peace) was an excellent statistician who also had a really good grasp of ecological/environmental principles. He served as an invaluable statistics consultant to ecological/environmental scientists throughout the US and even globally. Although he doesn't have any books that I'm aware of, his course notes are still available online:

Statistical Methods in Ecology [or a 2012 version]

Course Description: This is a course in statistical modeling for ecologists and their kin. We focus on elementary statistical methods, primarily regression, and describe how they can be extended to make them more appropriate for analyzing ecological data. These extensions include using more realistic probability models (beyond the normal distribution) and accounting for situations in which observations are not statistically independent. For each model we consider we will see how to estimate it using both frequentist (when possible) and Bayesian methods. Our emphasis here is on depth rather than breadth. (The other graduate course that I teach, ECOL 562, is a survey course that covers a wide range of statistical methods useful in environmental science. This course focuses on 40% of the material from that course but covers it in greater depth.) Familiarity with the standard parametric approaches of statistical analysis such as hypothesis testing is assumed. The course is intended to serve as a transition between what is typically taught in an undergraduate statistics course and what is actually needed to successfully analyze data in ecology and environmental sciences. The ideal enrollee is an upper level undergraduate or beginning graduate student who has already taken an introductory statistics course and wishes to see the modern application of statistics to environmental science and ecology.

Topics include:
- Basic concepts in regression: categorical predictors and interactions
- Statistical distributions important in ecological modeling: binomial, Poisson, negative binomial, normal, lognormal, gamma
- Likelihood theory and its applications in regression
- Bayesian approaches to model fitting
- Model selection protocols: information-theoretic alternatives to significance testing
- Generalized linear models: Poisson regression, negative binomial regression, logistic regression, gamma regression
- Mixed effects models for analyzing temporally and spatially correlated data
- Random intercepts and slopes models
- Multilevel models with 2 and 3 levels
- Hierarchical Bayesian modeling
- Nonlinear mixed effects models
- Mixed effects models with nested and crossed random effects
- Hybrid mixed effects models with multivariate responses

Statistics for Environmental Science [or a 2007/2012 version]

Course Description: An introduction to statistical methods for ecology and environmental science. This is a topics course. Our emphasis here is on breadth rather than depth. (The other graduate course I teach takes an in-depth approach to the topics covered in the first third of this course.) Familiarity with the standard parametric approaches of statistical analysis such as hypothesis testing is assumed. The course is intended to serve as a transition between what is typically taught in an undergraduate statistics course and what is actually needed to successfully analyze data in ecology and environmental sciences. The ideal enrollee is an upper level undergraduate or beginning graduate student who has already taken an introductory statistics course and wishes to see the modern application of statistics to environmental science and ecology.

Topics include:
- Overview of regression
- Likelihood theory and its applications in regression
- Generalized linear models
- Analysis of temporally correlated data
- Mixed effects models
- Generalized estimating equations
- Bayesian methods
- Generalized additive models
- Survey sampling methods
- Machine learning methods
- Survival analysis
- Contingency table analysis
- Analysis of extreme values
- Structural equation models

Statistics for Ecology & Evolution

Course Description: This is a course in statistical modeling for ecologists and their kin. We focus on elementary statistical methods, primarily regression, and describe how they can be extended to make them more appropriate for analyzing ecological data. These extensions include using more realistic probability models (beyond the normal distribution) and accounting for situations in which observations are not statistically independent.

Topics include:
- Experiments in ecology
- Statistical distributions important in ecological modeling: binomial, Poisson, negative binomial, normal, lognormal, gamma, and exponential
- Likelihood theory and its applications in regression
- Bayesian approaches to model fitting
- Model selection protocols: information-theoretic alternatives to significance testing
- Generalized linear models: Poisson regression, negative binomial regression, logistic regression, and others
- Regression models for temporally and spatially correlated data: random coefficient models (multilevel models) and hierarchical Bayesian modeling

Ecology 145 - Statistical Analysis

ECOL 145 is intended to be an intense introduction to the analysis of ecological data. Its target audience consists of highly motivated graduate students and upper level undergraduates in biologically-related disciplines who ideally have data of their own to analyze. This is a serious, hands-on course not suitable for dilettantes or those who wish to merely audit and observe. We focus on the use of two modern statistical packages, R and WinBUGS, and use them to tackle real data sets with all their foibles. The closer you are to carrying out your own research and analyzing your own data, the more useful this course should turn out to be. The perspective of the course is that probability models are best thought of as data-generating mechanisms, and in keeping with this viewpoint we use likelihood-based methods to directly model ecological data. Data sets are from the published literature, from my own consulting projects, or are supplied by students who are enrolled in the course. If you have data you need to get analyzed you are welcome to submit it to me for use in class exercises.

Topics include:
- Statistical distributions important in ecological modeling: binomial, Poisson, negative binomial, normal, lognormal, gamma, and exponential
- Likelihood theory and its applications in regression
- Generalized linear models: Poisson regression, negative binomial regression, logistic regression, and others
- The perils of significance testing: multiple comparison adjustments and the false discovery rate
- Model selection protocols: likelihood ratio tests, Wald tests, and information-theoretic alternatives to significance testing
- Goodness of fit for GLMs: deviance statistics, extensions of R2, Pearson chi-square approaches
- Regression models for temporally and spatially correlated data: random coefficient models (multilevel models) and the method of generalized estimating equations
- Bayesian approaches to data analysis
- Hierarchical Bayesian modeling using WinBUGS and R

I'm sure there is a ton of overlap between courses, but his notes (and R code) are available for each of these courses and should prove to be very useful to most people visiting this post.
Books on statistical ecology?
Some good ecology books based in Bayesian statistics are:

Kery, M. 2010. Introduction to WinBUGS for Ecologists: Bayesian approach to regression, ANOVA, mixed models and related analyses. Academic Press.

Kery, M., and M. Schaub. 2011. Bayesian Population Analysis using WinBUGS: A hierarchical perspective. Academic Press.

Royle, J.A. and R.M. Dorazio. 2008. Hierarchical Modeling and Inference in Ecology: The Analysis of Data from Populations, Metapopulations, and Communities. Academic Press.

I also find Zuur et al. (2009) very useful:

Zuur, A., E. N. Ieno, N. Walker, A. A. Saveliev, and G. M. Smith. Mixed Effects Models and Extensions in Ecology with R. Springer.
How to include interaction terms in R/tree model?
You don't add interaction terms in the model formula; the nature of the tree structure itself allows for interactions without specifying a variable that is the interaction. In R an interaction term in a formula is converted to a variable in the model matrix. For example the interaction a:b would become a variable in the model matrix that takes values $ab = a \times b$. R does this for you behind the scenes. In a tree, interactions are formed not by explicit operations on the variables but through the tree structure. Consider this example using the famous Edgar Anderson iris data set:

data(iris)
require(rpart)
mod <- rpart(Species ~ ., data = iris)
plot(mod)
text(mod)

This produces a plot of the fitted tree (figure not shown). In this simple case, the interaction is local; the variable Petal.Width only has an effect in the model for the subset of data for which Petal.Length is greater than or equal to 2.45. In other words, the interaction only affects observations that end up going down the right-hand branch of the tree after the first split. In contrast, interactions of the sort you specified are global; in a:b the interaction has an effect for any value of a or b.
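To see concretely what R does with a:b in a formula (the column it would build for a parametric model, which a tree never needs), a quick base-R check on the model matrix:

```r
# The interaction a:b in a formula becomes a model-matrix column equal to
# the elementwise product a * b: the "global" interaction described above.
df <- data.frame(a = c(1, 2, 3), b = c(4, 5, 6))
mm <- model.matrix(~ a + b + a:b, data = df)
mm[, "a:b"]   # 4 10 18, i.e. a * b
```

A tree like the rpart fit above, by contrast, expresses the interaction through nested splits, so no product column ever exists in its inputs.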
Example of time series prediction using neural networks in R
Rob Hyndman is doing some active research on forecasting with neural nets. He recently added the nnetar() function to the forecast package, which utilizes the nnet package you reference to fit time series data. http://cran.r-project.org/web/packages/forecast/index.html The example from the help docs:

fit <- nnetar(lynx)
fcast <- forecast(fit)
plot(fcast)

Rob gives more context in this specific section of his online text: Forecasting: principles and practice. (And a big thanks to Rob obviously.)
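If the forecast package isn't available, the core idea behind nnetar (an autoregressive neural net) can be sketched with nnet directly: regress the series on its own lags and feed the last observed lags back in for a forecast. The lag count (2) and hidden-layer size (4) below are arbitrary illustrative choices, not what nnetar would select:

```r
# Autoregressive neural net by hand: fit log10(lynx) on its first two lags.
library(nnet)
y <- as.numeric(log10(lynx))
Z <- embed(y, 3)   # columns: y_t, y_{t-1}, y_{t-2}
set.seed(1)
fit <- nnet(Z[, 2:3], Z[, 1], size = 4, linout = TRUE, trace = FALSE)
# One-step-ahead forecast from the two most recent observations
last_lags <- matrix(tail(y, 2)[2:1], nrow = 1)   # (y_T, y_{T-1})
fc <- predict(fit, last_lags)
```

Multi-step forecasts are produced the same way nnetar does it: append the prediction to the series and iterate.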