idx | question | answer
|---|---|---|
13,301 | Changing null hypothesis in linear regression | You can simply not make probability or likelihood statements about the parameter using a confidence interval; that is a Bayesian paradigm.
What John is saying is confusing because there is an equivalence between CIs and p-values, so at the 5% level, saying that your CI includes 1 is equivalent to saying that p > 0.05.
linea...
13,302 | Advantages of the Exponential Family: why should we study it and use it? | ...why should we study it and use it?
I think your list of advantages effectively answers your own question, but let me offer some meta-mathematical commentary that might elucidate this topic. Generally speaking, mathematicians like to generalise concepts and results up to the maximal point that they can, to the limi...
13,303 | Advantages of the Exponential Family: why should we study it and use it? | I would say the most compelling motivation for the exponential families is that they are the minimum-assumptive distributions given measurements. If you have a real-valued sensor whose measurements are summarized by mean and variance, then the minimum assumption you can make about its observations is that they are normally...
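The "minimum assumptive" claim above is the maximum-entropy characterization of exponential families; a sketch (my notation, not from the original answer): maximize entropy subject to matching the measured moments,
$$
\max_p \; -\int p(x)\log p(x)\,dx \quad \text{subject to}\quad \mathbb{E}_p[T_i(x)]=t_i,\quad \int p(x)\,dx=1\,.
$$
Stationarity of the Lagrangian forces
$$
p(x)\propto\exp\Big(\sum_i \eta_i T_i(x)\Big)\,,
$$
an exponential family with natural parameters $\eta_i$; with $T(x)=(x,x^2)$ (mean and variance fixed), this is exactly the normal distribution.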
13,304 | Data Augmentation strategies for Time Series Forecasting | Any other ideas to do data augmentation for time series forecasting?
I'm currently thinking about the same problem. I've found the paper "Data Augmentation for Time Series Classification using Convolutional Neural Networks" by Le Guennec et al., which doesn't cover forecasting, however. Still, the augmentation methods me...
13,305 | Data Augmentation strategies for Time Series Forecasting | I have recently implemented another approach inspired by this paper from Bergmeir, Hyndman and Benitez.
The idea is to take a time series and first apply a transformation such as the Box-Cox transformation or Yeo-Johnson (which solves some problems with the Box-Cox) to stabilise the variance of the series, then apply...
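A minimal numpy sketch of this recipe (my own simplification: it applies a moving block bootstrap directly to the Box-Cox-transformed series, skipping the STL decomposition step used in the paper; `lam` and `block_len` are illustrative choices):

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox transform (x must be strictly positive)."""
    return np.log(x) if lam == 0 else (x ** lam - 1) / lam

def inv_boxcox(y, lam):
    """Inverse of the Box-Cox transform."""
    return np.exp(y) if lam == 0 else (lam * y + 1) ** (1 / lam)

def moving_block_bootstrap(x, block_len, rng):
    """Stitch randomly chosen overlapping blocks back to the original length."""
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    return np.concatenate([x[s:s + block_len] for s in starts])[:n]

def augment(series, n_new=5, lam=0.5, block_len=8, seed=0):
    """Variance-stabilise, bootstrap, invert: n_new plausible new series."""
    rng = np.random.default_rng(seed)
    z = boxcox(series, lam)
    return [inv_boxcox(moving_block_bootstrap(z, block_len, rng), lam)
            for _ in range(n_new)]

series = np.abs(np.random.default_rng(1).normal(10.0, 2.0, 96)) + 1.0
new_series = augment(series)
```

Each bootstrapped series preserves short-range dependence within blocks while shuffling the longer-range structure, which is the point of block (rather than i.i.d.) resampling for time series.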
13,306 | Data Augmentation strategies for Time Series Forecasting | Any other ideas to do data augmentation for time series forecasting?
Another answer with a different approach, based on "Dataset Augmentation in Feature Space" by DeVries and Taylor.
In this work, we demonstrate that extrapolating between samples in feature space can be used to augment datasets and improve the perfor...
13,307 | Overfitting: No silver bullet? | Not a whole answer, but one thing that people overlook in this discussion is what Cross-Validation (for example) means, why you use it, and what it covers.
The problem I see with searching too hard is that the CV that people are doing is often within a single model. Easy to do by setting a folds= argument of...
13,308 | Overfitting: No silver bullet? | In my 4 or so years of experience, I've found that trying out every model available in caret (or scikit-learn) doesn't necessarily lead to overfitting. I've found that if you have a sufficiently large dataset (10,000+ rows) and a more or less even balance of classes (i.e., no class imbalance like in credit risk or mark...
13,309 | Overfitting: No silver bullet? | So much depends on scale. I wish I could count on having more than 2,000-3,000 cases like @RyanZotti typically has; I seldom have 1/10th that many. That's a big difference in perspective between "big data" machine learning folk and those working in fields like biomedicine, which might account for some of the different...
13,310 | Overfitting: No silver bullet? | I agree with @ryan-zotti that searching hard enough does not necessarily lead to overfitting - or at least not to an amount that we would call overfitting. Let me try to state my point of view on this:
Box once said:
Remember that all models are wrong; the practical question is how wrong do they have to be to no...
13,311 | Overfitting: No silver bullet? | I think this is a very good question. I always want to observe the "U" shape curve in cross validation experiments with real data. However, my experience with real world data (~ 5 years in credit card transactions and education data) does not tell me overfitting can easily happen in huge amount (billion rows) real wo...
13,312 | Overfitting: No silver bullet? | overfitting will happen if one searches for a model hard enough, unless one imposes restrictions on model complexity, period
I guess the simple answer is yes, if the search space (complexity of considered model class(es)) is large enough.
If data is the new oil, then note that oil is usually burnt during use.
Conside...
13,313 | Overfitting: No silver bullet? | Already existing answers are mostly fine, but I'll add one small aspect that I haven't seen mentioned.
Let's assume you compare lots of models by cross-validation in a correct manner (avoiding information leakage, if necessary using nested CV, see answer by Wayne), and ultimately you choose the one that gives you the best...
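This selection effect can be demonstrated numerically: even models with no skill at all produce an optimistic best CV score once you pick the winner (a hypothetical toy setup, not from the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_points = 50, 200

# 50 "models" with no skill: every prediction is a coin flip
y = rng.integers(0, 2, n_points)
preds = rng.integers(0, 2, (n_models, n_points))

cv_acc = (preds == y).mean(axis=1)   # apparent (selection-time) accuracy
best = int(cv_acc.argmax())

# the winner's score is the max of 50 noisy estimates of 0.5, hence optimistic
print(cv_acc[best], cv_acc.mean())
```

The average accuracy sits near 0.5, but the selected model's score does not; this is exactly why the winning model's CV estimate should not be reported as its generalization performance.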
13,314 | How to get predictions in terms of survival time from a Cox PH model? | The Cox Proportional Hazards model doesn't model the underlying hazard, which is what you'd need to predict survival time like that - this is both the model's great strength and one of its major drawbacks.
If you are particularly interested in obtaining estimates of the probability of survival at particular time point...
13,315 | How to get predictions in terms of survival time from a Cox PH model? | @statBeginner Yes it will. It requires two steps:
x <- survfit(cox.ph.model, newdata = dataset)    # predicted survival curve for each row of newdata
dataset$Results <- summary(x)$table[, "median"] # extract the median survival time per observation
but I am not sure if median time to survival is accurate enough.
13,316 | How to get predictions in terms of survival time from a Cox PH model? | Although I agree with these points, median survival IS clinically useful.
You might be interested in our work (and others) looking at using the median as a basis for survival intervals - we think these are more useful.
https://academic.oup.com/annonc/article/25/10/2014/2801274
13,317 | Is a p-value of 0.04993 enough to reject null hypothesis? | There are two issues here:
1) If you're doing a formal hypothesis test (and if you're going as far as quoting a p-value in my book you already are), what is the formal rejection rule?
When comparing test statistics to critical values, the critical value is in the rejection region. While this formality doesn't matter mu...
13,318 | Is a p-value of 0.04993 enough to reject null hypothesis? | It lies in the eye of the beholder.
Formally, if there is a strict decision rule for your problem, follow it. This means $\alpha$ is given. However, I am not aware of any problem where this is the case (though setting $\alpha=0.05$ is what many practitioners do after Statistics101).
So it really boils down to what Ale...
13,319 | Is a p-value of 0.04993 enough to reject null hypothesis? | In light of the assumptions of your model, you should reject the null because dichotomizing claims based on hypothesis tests have clear epistemological and pragmatic functions. But never forget that: “No isolated experiment, however significant in itself, can suffice for the experimental demonstration of any natural p...
13,320 | Is a p-value of 0.04993 enough to reject null hypothesis? | The 0.05 threshold is a hurdle that you have set for yourself in order to enforce a degree of self-skepticism about your alternative hypothesis. It somewhat weakens that self-skepticism if you change the definition of the threshold after seeing the result. The real question is why you are performing an NHST, what do y...
13,321 | Is a p-value of 0.04993 enough to reject null hypothesis? | The answer is absolutely not. There is no "in the eye of the beholder", there is no argument, the answer is no, your data is not significant at the $p=0.05$ level. (Ok, there is one way out, but it's a very narrow path.)
The key problem is this phrase: "We came across some data...".
This suggests that you looked at se...
13,322 | How can it be trapped in a saddle point? | Take a look at the image below from Off Convex. In a convex function (leftmost image), there is only one local minimum, which is also the global minimum. But in a non-convex function (rightmost image), there may be multiple local minima, and the point joining two local minima is often a saddle point. If you are approaching from a...
13,323 | How can it be trapped in a saddle point? | It should not.
[1] has shown that gradient descent with random initialization and appropriate constant step size does not converge to a saddle point. It is a long discussion but to give you an idea of why see the following example:
$$f(x,y)=\frac12 x^2+ \frac14y^4 - \frac12y^2$$
The critical points are
$$z_1=\begin{bm...
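The dynamics of this example can be checked numerically; a minimal gradient-descent sketch (step size and iteration count are my own illustrative choices):

```python
import numpy as np

def grad(p):
    """Gradient of f(x, y) = x^2/2 + y^4/4 - y^2/2."""
    x, y = p
    return np.array([x, y ** 3 - y])

def gd(p0, lr=0.1, steps=500):
    """Plain gradient descent from a fixed starting point."""
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        p -= lr * grad(p)
    return p
```

Starting exactly on the y = 0 axis (the measure-zero stable manifold of the saddle), the iterates converge to the saddle point (0, 0); any perturbation in y sends them to one of the minima (0, ±1), which is the content of the cited result.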
13,324 | How can it be trapped in a saddle point? | If you go to the referenced paper (they also empirically show how their saddle-free approach does indeed improve upon mini-batch SGD), they state:
A step of the gradient descent method always points in the right direction close to a saddle point...and so small steps are taken in directions corresponding to eigenvalues ...
13,325 | How can it be trapped in a saddle point? | I think the problem is that while approaching a saddle point you enter a plateau, i.e. an area with low (in absolute value) gradients. Especially when you're approaching from the ridge. So your algorithm decreases the step size. With a decreased step size now all gradients (in all directions) are small in absolute valu...
13,326 | XGBoost can handle missing data in the forecasting phase | xgboost decides at training time whether missing values go into the right or left node. It chooses which to minimise loss. If there are no missing values at training time, it defaults to sending any new missings to the right node.
If there is signal in the distribution of your missings, then this is essentially fit by ...
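The left-vs-right choice can be illustrated with a toy split evaluator (not xgboost's actual code, which uses its own gain formula; squared error keeps the sketch simple):

```python
import numpy as np

def best_missing_direction(x, y, threshold):
    """Pick the default direction for NaNs at a split: send them to whichever
    child yields the lower total squared error of the child means."""
    miss = np.isnan(x)
    left = x < threshold    # NaN compares False, so missings fall in neither
    right = x >= threshold

    def sse(mask):
        """Sum of squared errors around the child mean (0 for an empty child)."""
        return 0.0 if not mask.any() else float(((y[mask] - y[mask].mean()) ** 2).sum())

    loss_left = sse(left | miss) + sse(right)    # missings routed left
    loss_right = sse(left) + sse(right | miss)   # missings routed right
    return "left" if loss_left <= loss_right else "right"
```

With a missing row whose target matches the left child, the function routes missings left; if its target matches the right child, it routes them right. This is the "signal in the distribution of your missings" being fit.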
13,327 | What would be an example of a really simple model with an intractable likelihood? | Two distributions that are used a lot in the literature are:
The g-and-k distribution. This is defined by its quantile function (inverse cdf) but has an intractable density. Rayner and MacGillivray (2002) is a good overview of these, and one of many ABC papers which use it as a toy example is Drovandi and Pettitt (201...
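As a concrete sketch of the first example (parameter values are illustrative; the form below uses the common $(a,b,g,k)$ parametrisation with the conventional $c=0.8$):

```python
import math
import random
import statistics

def gk_quantile(u, a=0.0, b=1.0, g=2.0, k=0.5, c=0.8):
    """g-and-k quantile function: a + b(1 + c tanh(gz/2))(1 + z^2)^k z,
    where z is the standard normal quantile of u."""
    z = statistics.NormalDist().inv_cdf(u)
    return a + b * (1 + c * math.tanh(g * z / 2)) * (1 + z * z) ** k * z

# Sampling is trivial by inversion, even though the density (needed
# for a likelihood) has no closed form -- the typical ABC situation.
random.seed(1)
sample = [gk_quantile(random.random()) for _ in range(10_000)]
print(statistics.median(sample))  # near a = 0, the distribution's median
```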
13,328 | What would be an example of a really simple model with an intractable likelihood? | One example I came across a few weeks ago and quite like for its simplicity is the following one: given an original normal dataset
$$
x_1,\ldots,x_n\stackrel{\text{iid}}{\sim}\text{N}(\theta,\sigma^2)\,,
$$
the reported data is (alas!) made of the two-dimensional summary
$$
S(x_1,\ldots,x_n)=(\text{med}(x_1,\ldots,x_n...
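The summary statistic is truncated above; in a common version of this example the median is paired with the MAD (median absolute deviation). A sketch of why this sits in ABC territory: simulating the summary is trivial even though its joint likelihood has no usable closed form. (Parameter values below are illustrative.)

```python
import random
import statistics

def summarise(xs):
    # Two-dimensional summary: (median, median absolute deviation).
    med = statistics.median(xs)
    mad = statistics.median([abs(x - med) for x in xs])
    return med, mad

random.seed(0)
theta, sigma, n = 3.0, 2.0, 500
data = [random.gauss(theta, sigma) for _ in range(n)]
med, mad = summarise(data)
print(med, mad)  # med near theta; mad near 0.6745 * sigma for normal data
```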
13,329 | Beta distribution fitting in Scipy | Despite an apparent lack of documentation on the output of beta.fit, it does output in the following order:
$\alpha$, $\beta$, loc (lower limit), scale (upper limit - lower limit)
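A quick check of the claimed ordering (this assumes SciPy is available; the data and parameters are illustrative). Fixing `floc`/`fscale` pins the support to $[0,1]$ so only the two shape parameters are estimated:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.beta(2.0, 5.0, size=5000)

# beta.fit returns (alpha, beta, loc, scale), in that order.
alpha, beta, loc, scale = stats.beta.fit(data, floc=0, fscale=1)
print(alpha, beta, loc, scale)  # shape estimates near (2, 5); loc=0, scale=1
```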
13,330 | To what extent is the distinction between correlation and causation relevant to Google? | The simple answer is that Google (or anyone) should care about the distinction to the extent that they intend to intervene. Causal knowledge tells you about the effects of interventions (actions) in a given domain.
If, for example, Google wishes to increase click-through rates on ads, increase the number of users of...
13,331 | To what extent is the distinction between correlation and causation relevant to Google? | First, it is just a quip and is incorrect. Google has a lot of very talented statisticians, information retrieval experts, linguists, economists, some psychologists, and others. These folks spend a lot of time educating a lot of non-statisticians about the difference between correlation and causation. Given that it'...
13,332 | To what extent is the distinction between correlation and causation relevant to Google? | Author of the quip here.
The comment was partially inspired by a talk by David Mease (at Google), where he said, and I paraphrase, car insurance companies don't care if being male causes more accidents, as long as it's correlated, they have to charge more. It is, in fact, impossible to change someone's gender in an exp...
13,333 | To what extent is the distinction between correlation and causation relevant to Google? | I agree with David: The difference matters if you intend to intervene, and Google can test the results of interventions by running controlled experiments. (The optimal schedule of such experiments depends on your set of causal hypotheses, which you learn from previous experiments plus observational data, so correlation...
13,334 | Interpreting 2D correspondence analysis plots | First, there are different ways to construct so-called biplots in the case of correspondence analysis. In all cases, the basic idea is to find a way to show the best 2D approximation of the "distances" between row cells and column cells. In other words, we seek a hierarchy (we also speak of "ordination") of the relatio...
13,335 | Why is the bias term in SVM estimated separately, instead of an extra dimension in the feature vector? | Why bias is important?
The bias term $b$ is, indeed, a special parameter in SVM. Without it, the classifier will always go through the origin. So, SVM does not give you the separating hyperplane with the maximum margin if it does not happen to pass through the origin, unless you have a bias term.
Below is a visualizati...
13,336 | Why is the bias term in SVM estimated separately, instead of an extra dimension in the feature vector? | Sometimes people simply omit the intercept in SVM; I think the reason is that we can penalize the intercept in order to absorb it, i.e.,
we can modify the data $\mathbf{\hat{x}} = (\mathbf{1}, \mathbf{x})$, and $\mathbf{\hat{w}} = (w_{0}, \mathbf{w}^{T})^{T}$ so as to omit the intercept
$$\mathbf{x} ~ \mathbf{w} + b ...
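A numeric check of the folding trick described above (the vectors are illustrative): with $\hat{x}=(1,x)$ and $\hat{w}=(b,w)$, the augmented inner product reproduces $x\cdot w + b$ exactly. Note that any norm penalty on $\hat{w}$ now penalises $b$ as well, which is the trade-off this answer alludes to.

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

w, b = [2.0, -1.0, 0.5], 3.0
x = [1.5, 0.0, -2.0]

# Augment: prepend 1 to the data and b to the weights.
x_hat = [1.0] + x
w_hat = [b] + w

print(dot(x, w) + b, dot(x_hat, w_hat))  # 5.0 5.0
```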
13,337 | Why is the bias term in SVM estimated separately, instead of an extra dimension in the feature vector? | In addition to the reasons mentioned above, the distance of a point $x$ to a hyperplane defined by slope $\theta$ and intercept $b$ is $$\frac{|\theta^T x + b|}{||\theta||}$$
This is how the concept of margin in SVM is motivated. If you change the $\theta$ to include the intercept term $b$, the norm of the $\theta$ ...
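The quoted distance formula, computed directly (the numbers are illustrative). Note it is invariant to rescaling $(\theta, b)$, which is why treating $b$ separately — rather than absorbing it into $\theta$'s norm — keeps the margin well defined:

```python
import math

def distance(theta, b, x):
    # |theta . x + b| / ||theta||
    num = abs(sum(t * xi for t, xi in zip(theta, x)) + b)
    return num / math.sqrt(sum(t * t for t in theta))

x = [1.0, 1.0]
print(distance([3.0, 4.0], -5.0, x))    # |3 + 4 - 5| / 5 = 0.4
print(distance([6.0, 8.0], -10.0, x))   # rescaled hyperplane: still 0.4
```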
13,338 | How much missing data is too much? Multiple Imputation (MICE) & R | In principle, MICE should be able to handle large amounts of missing data. Variables with lots of missing data points would be expected to end up with larger error terms than those with fewer missing data points, so your ability to detect significant relations to those variables would be limited accordingly. That's an ...
13,339 | How much missing data is too much? Multiple Imputation (MICE) & R | Mice can handle a large amount of missing data. Especially if there are a lot of columns with few missing data, one with 80% is no problem.
You can also expect that, most of the time, adding this variable leads to better imputation results than leaving it out.
( because more information / correlations available that...
13,340 | How much missing data is too much? Multiple Imputation (MICE) & R | This is not a coding question but if you want an answer here it is...
Missing data are very complicated. There is not a percentage value at which to accept or discard your variables. The variance of your variable is what is important to watch before imputing data.
If you do not want to take some time to review all the sta...
13,341 | How much missing data is too much? Multiple Imputation (MICE) & R | (Unable to comment yet - sorry! I would have liked to have commented on Joel's response.)
I want to point out that, I believe, the quality of the imputation algorithm has bearing on the amount of data that may be validly imputed.
If the imputation method is poor (i.e., it predicts missing values in a biased manner), th...
13,342 | How to check for normal distribution using Excel for performing a t-test? | You have the right idea. This can be done systematically, comprehensively, and with relatively simple calculations. A graph of the results is called a normal probability plot (or sometimes a P-P plot). From it you can see much more detail than appears in other graphical representations, especially histograms, and wi...
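The spreadsheet construction this answer describes reduces to two columns; a sketch in code (the data are simulated for illustration): sort the values, attach plotting positions $(i-\tfrac12)/n$, convert those to normal quantiles, and check how close the resulting scatter is to a straight line.

```python
import math
import random
import statistics

random.seed(42)
data = sorted(random.gauss(10.0, 2.0) for _ in range(200))
n = len(data)

# Plotting positions and the corresponding normal quantiles.
nd = statistics.NormalDist()
theoretical = [nd.inv_cdf((i - 0.5) / n) for i in range(1, n + 1)]

# Pearson correlation of the plot's two columns: near 1 for normal data.
mt, md = sum(theoretical) / n, sum(data) / n
num = sum((t - mt) * (d - md) for t, d in zip(theoretical, data))
den = math.sqrt(sum((t - mt) ** 2 for t in theoretical)
                * sum((d - md) ** 2 for d in data))
r = num / den
print(r)
```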
13,343 | How to check for normal distribution using Excel for performing a t-test? | You could plot a histogram using the data analysis toolpack in Excel. Graphical approaches are more likely to communicate the degree of non-normality, which is typically more relevant for assumption testing (see this discussion of normality).
The data analysis toolpack in Excel will also give you skewness and kurtosis...
13,344 | How to check for normal distribution using Excel for performing a t-test? | This question borders on statistics theory too - testing for normality with limited data may be questionable (although we all have done this from time to time).
As an alternative, you can look at kurtosis and skewness coefficients. From Hahn and Shapiro: Statistical Models in Engineering some background is provided on...
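The two coefficients can be computed in a few lines (a sketch using the simple moment-based estimators; Hahn and Shapiro tabulate reference values for judging them):

```python
import random

def skew_kurt(xs):
    # Moment-based sample skewness and excess kurtosis.
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0

random.seed(7)
normal = [random.gauss(0, 1) for _ in range(20_000)]
skewed = [random.expovariate(1.0) for _ in range(20_000)]
print(skew_kurt(normal))  # both coefficients near 0
print(skew_kurt(skewed))  # clearly positive: exponential data are skewed
```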
13,345 | "Investigator intention" and thresholds/p-values | Here's some more info: http://doingbayesiandataanalysis.blogspot.com/2012/07/sampling-distributions-of-t-when.html
A more complete discussion is provided here: http://www.indiana.edu/~kruschke/BEST/ That article considers p values for stopping at threshold N, stopping at threshold duration, and stopping at threshold ...
13,346 | "Investigator intention" and thresholds/p-values | I finally tracked down the paper associated with the slides: Kruschke (2010), also available directly from the author (via CiteSeerX) here, since the journal is not widely carried. The explanation is a little bit prosaic, but I'm still not sure I buy it.
In the fixed-N case, the critical $t$-value is computed as follow...
13,347 | How do Bayesian Statistics handle the absence of priors? | Q1: Is the absence of a prior equivalent (in the strict theoretical sense) to having an uninformative prior?
No.
First, there is no mathematical definition for an "uninformative prior". This word is only used informally to describe some priors.
For example, the Jeffreys prior is often called "uninformative". This prior ge...
13,348 | How do Bayesian Statistics handle the absence of priors? | First of all, the Bayesian approach is often used because you want to include prior knowledge in your model to enrich it. If you don't have any prior knowledge, then you stick to so-called "uninformative" or weakly informative priors. Notice that a uniform prior is not "uninformative" by definition, since the assumption about un...
13,349 | How do Bayesian Statistics handle the absence of priors? | question 1
I think the answer is probably no. My reason is we don't really have a definition for "uninformative" except for somehow measuring how far the final answer is from some arbitrarily informative model/likelihood.
Many uninformative priors are validated against "intuitive" examples where we already have "the mo...
13,350 | How do Bayesian Statistics handle the absence of priors? | This is only a short remark as an addition to the other excellent answers. Often, or at least sometimes, it is somewhat arbitrary (or conventional) what part of the information entering a statistical analysis is called data and which part is called prior. Or, more generally, we can say that information in a statistical a...
13,351 | Expectation of a product of $n$ dependent random variables when $n\to\infty$ | The answer is indeed $1/e$, as guessed in the earlier replies based on simulations and finite approximations.
The solution is easily arrived at by introducing a sequence of functions $f_n: [0,1]\to[0,1]$. Although we could proceed to that step immediately, it might appear rather mysterious. The first part of this solu...
13,352 | Expectation of a product of $n$ dependent random variables when $n\to\infty$ | Update
I think it's a safe bet that the answer is $1/e$. I ran the integrals for the expected value from $n=2$ to $n=100$ using Mathematica and with $n=100$ I got
0.367879441171442321595523770161567628159853507344458757185018968311538556667710938369307469618599737077005261635286940285462842065735614
(to 100 decimal p...
13,353 | Expectation of a product of $n$ dependent random variables when $n\to\infty$ | Nice question. Just as a quick comment, I would note that:
$X_n$ will converge to 1 rapidly, so for Monte Carlo checking, setting $n = 1000$ will more than do the trick.
If $Z_n = X_1 X_2 \dots X_n$, then by Monte Carlo simulation, as $n \rightarrow \infty$, $E[Z_n] \approx 0.367$.
The following diagram compares the... | Expectation of a product of $n$ dependent random variables when $n\to\infty$ | Nice question. Just as a quick comment, I would note that:
$X_n$ will converge to 1 rapidly, so for Monte Carlo checking, setting $n = 1000$ will more than do the trick.
If $Z_n = X_1 X_2 \dots X_n$ | Expectation of a product of $n$ dependent random variables when $n\to\infty$
Nice question. Just as a quick comment, I would note that:
$X_n$ will converge to 1 rapidly, so for Monte Carlo checking, setting $n = 1000$ will more than do the trick.
If $Z_n = X_1 X_2 \dots X_n$, then by Monte Carlo simulation, as $n \ri... | Expectation of a product of $n$ dependent random variables when $n\to\infty$
Nice question. Just as a quick comment, I would note that:
$X_n$ will converge to 1 rapidly, so for Monte Carlo checking, setting $n = 1000$ will more than do the trick.
If $Z_n = X_1 X_2 \dots X_n$ |
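A quick Monte Carlo check of this in Python. The recursion assumed here is the one from the question (not restated in this excerpt): $X_1 \sim U(0,1)$ and, given $X_k$, $X_{k+1} \sim U(X_k, 1)$.

```python
import numpy as np

rng = np.random.default_rng(0)

n_terms = 500        # X_n -> 1 quickly, so a few hundred terms suffice
n_trials = 100_000   # independent samples of the product Z_n

x = rng.random(n_trials)     # X_1 for every trial
z = x.copy()                 # running product Z_k
for _ in range(n_terms - 1):
    x = x + (1.0 - x) * rng.random(n_trials)   # X_{k+1} ~ U(X_k, 1)
    z *= x

estimate = z.mean()
```

The sample mean should sit within Monte Carlo error of $1/e \approx 0.3679$, matching the simulations above.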
13,354 | Expectation of a product of $n$ dependent random variables when $n\to\infty$ | Purely intuitively, and based on Rusty's other answer, I think the answer should be something like this:
n = 1:1000
x = (1 + (n^2 - 1)/(n^2)) / 2
prod(x)
Which gives us 0.3583668. For each $X$, you are splitting the $(a,1)$ range in half, where $a$ starts out at $0$. So it's a product of $1/2, (1 + 3/4)/2,... | Expectation of a product of $n$ dependent random variables when $n\to\infty$ | Purely intuitively, and based on Rusty's other answer, I think the answer should be something like this:
n = 1:1000
x = (1 + (n^2 - 1)/(n^2)) / 2
prod(x)
Which gives us 0.3583668. For eac | Expectation of a product of $n$ dependent random variables when $n\to\infty$
Purely intuitively, and based on Rusty's other answer, I think the answer should be something like this:
n = 1:1000
x = (1 + (n^2 - 1)/(n^2)) / 2
prod(x)
Which gives us 0.3583668. For each $X$, you are splitting the $(a,1)$ range ... | Expectation of a product of $n$ dependent random variables when $n\to\infty$
Purely intuitively, and based on Rusty's other answer, I think the answer should be something like this:
n = 1:1000
x = (1 + (n^2 - 1)/(n^2)) / 2
prod(x)
Which gives us 0.3583668. For eac |
13,355 | How to interpret notched box plots | In my case (second plot), the notches don't meaningfully overlap. But
why does the bottom of the box on the right hand side take that
strange form? How do I explain that?
It indicates that the 25th percentile is about 21, 75th percentile about 30.5. And the lower and upper limits of the notch are about 18 and 27. ... | How to interpret notched box plots | In my case (second plot), the notches don't meaningfully overlap. But
why does the bottom of the box on the right hand side take that
strange form? How do I explain that?
It indicates that the 25 | How to interpret notched box plots
In my case (second plot), the notches don't meaningfully overlap. But
why does the bottom of the box on the right hand side take that
strange form? How do I explain that?
It indicates that the 25th percentile is about 21, 75th percentile about 30.5. And the lower and upper limits... | How to interpret notched box plots
In my case (second plot), the notches don't meaningfully overlap. But
why does the bottom of the box on the right hand side take that
strange form? How do I explain that?
It indicates that the 25 |
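For the arithmetic behind the notch itself: R's `?boxplot.stats` documents the notch half-width as roughly $1.58 \times \mathrm{IQR}/\sqrt{n}$, an approximate 95% interval for the median. A small Python sketch with invented data (not the numbers from the plots in the question):

```python
import numpy as np

# Toy sample, just to show the notch computation.
x = np.array([18, 20, 21, 22, 24, 25, 27, 29, 30, 31, 33, 36], dtype=float)
q1, med, q3 = np.percentile(x, [25, 50, 75])
iqr = q3 - q1
half_width = 1.58 * iqr / np.sqrt(x.size)     # boxplot.stats convention
notch_lo, notch_hi = med - half_width, med + half_width
```

When the IQR is wide and $n$ is small, the notch can extend past the quartile, which is exactly what produces the folded-back "strange form" described above.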
13,356 | How to recode categorical variable into numerical variable when using SVM or Neural Network | In NLP, where words are typically encoded as 1-of-k, the use of word embeddings has emerged recently. The wikipedia page with its references is a good start.
The general idea is to learn a vectorial representation $x_i \in \mathbb{R}^n$ for each word $i$ where semantically similar words are close in that space. Consequ... | How to recode categorical variable into numerical variable when using SVM or Neural Network | In NLP, where words are typically encoded as 1-of-k, the use of word embeddings has emerged recently. The wikipedia page with its references is a good start.
The general idea is to learn a vectorial r | How to recode categorical variable into numerical variable when using SVM or Neural Network
In NLP, where words are typically encoded as 1-of-k, the use of word embeddings has emerged recently. The wikipedia page with its references is a good start.
The general idea is to learn a vectorial representation $x_i \in \math... | How to recode categorical variable into numerical variable when using SVM or Neural Network
In NLP, where words are typically encoded as 1-of-k, the use of word embeddings has emerged recently. The wikipedia page with its references is a good start.
The general idea is to learn a vectorial r |
13,357 | How to recode categorical variable into numerical variable when using SVM or Neural Network | The 'standard' methods are: one-hot encoding (which you mentioned in the question).
If there are too many possible categories, but you need 0-1 encoding, you can use the hashing trick.
The other frequently used method is averaging the answer over the category: see the picture from a comment on Kaggle. | How to recode categorical variable into numerical variable when using SVM or Neural Network | The 'standard' methods are: one-hot encoding (which you mentioned in the question).
If there are too many possible categories, but you need 0-1 encoding, you can use the hashing trick.
The other frequentl | How to recode categorical variable into numerical variable when using SVM or Neural Network
The 'standard' methods are: one-hot encoding (which you mentioned in the question).
If there are too many possible categories, but you need 0-1 encoding, you can use the hashing trick.
The other frequently used method is averaging a... | How to recode categorical variable into numerical variable when using SVM or Neural Network
The 'standard' methods are: one-hot encoding (which you mentioned in the question).
If there are too many possible categories, but you need 0-1 encoding, you can use the hashing trick.
The other frequentl
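As a rough illustration of the encodings mentioned in the answers above, here is a hedged Python sketch. The category names, bucket count, and target values are invented for the example; real implementations (e.g. in scikit-learn) handle unseen levels, sparsity, and leakage for target encoding.

```python
import zlib
import numpy as np

colors = ["red", "green", "blue", "green", "red"]   # made-up example data

# One-hot: one indicator column per distinct level.
levels = sorted(set(colors))                        # ['blue', 'green', 'red']
index = {lvl: i for i, lvl in enumerate(levels)}
one_hot = np.zeros((len(colors), len(levels)), dtype=int)
for row, c in enumerate(colors):
    one_hot[row, index[c]] = 1

# Hashing trick: hash each level into a fixed number of buckets, so the
# encoding width no longer grows with the number of levels (collisions
# are accepted). A stable hash (crc32 here) keeps it reproducible.
n_buckets = 4
hashed = np.zeros((len(colors), n_buckets), dtype=int)
for row, c in enumerate(colors):
    hashed[row, zlib.crc32(c.encode()) % n_buckets] = 1

# Mean (target) encoding: replace each level by the mean response within
# that level (toy y values).
y = np.array([1.0, 0.0, 1.0, 1.0, 0.0])
means = {lvl: y[[c == lvl for c in colors]].mean() for lvl in levels}
mean_encoded = np.array([means[c] for c in colors])
```

Note that naive target encoding as above leaks the label into the feature; in practice it is computed on out-of-fold data or with smoothing.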
13,358 | How to recode categorical variable into numerical variable when using SVM or Neural Network | You can use dummyVars in R, from the caret package. It will automatically create different columns based on number of levels. Afterwards, you can use cbind and attach it to your original data. Other options include model.matrix and sparse.model.matrix. | How to recode categorical variable into numerical variable when using SVM or Neural Network | You can use dummyVars in R, from the caret package. It will automatically create different columns based on number of levels. Afterwards, you can use cbind and attach it to your original data. Other op | How to recode categorical variable into numerical variable when using SVM or Neural Network
You can use dummyVars in R, from the caret package. It will automatically create different columns based on number of levels. Afterwards, you can use cbind and attach it to your original data. Other options include model.matrix a... | How to recode categorical variable into numerical variable when using SVM or Neural Network
You can use dummyVars in R, from the caret package. It will automatically create different columns based on number of levels. Afterwards, you can use cbind and attach it to your original data. Other op
13,359 | How to recode categorical variable into numerical variable when using SVM or Neural Network | You can try binary encoding which is more compact and sometimes outperforms one-hot. You can implement categorical embedding in Keras, for example. | How to recode categorical variable into numerical variable when using SVM or Neural Network | You can try binary encoding which is more compact and sometimes outperforms one-hot. You can implement categorical embedding in Keras, for example. | How to recode categorical variable into numerical variable when using SVM or Neural Network
You can try binary encoding which is more compact and sometimes outperforms one-hot. You can implement categorical embedding in Keras, for example. | How to recode categorical variable into numerical variable when using SVM or Neural Network
You can try binary encoding which is more compact and sometimes outperforms one-hot. You can implement categorical embedding in Keras, for example. |
13,360 | How to recode categorical variable into numerical variable when using SVM or Neural Network | You can use entity encoding, which is a more sophisticated network structure. It adds between 1 and $k-1$ hidden, linear neurons between the categorical input and the first fully-connected layer. This has some nice empirical results behind it.
"Entity Embeddings of Categorical Variables" by Cheng Guo, Felix Berkhahn
W... | How to recode categorical variable into numerical variable when using SVM or Neural Network | You can use entity encoding, which is a more sophisticated network structure. It adds between 1 and $k-1$ hidden, linear neurons between the categorical input and the first fully-connected layer. This | How to recode categorical variable into numerical variable when using SVM or Neural Network
You can use entity encoding, which is a more sophisticated network structure. It adds between 1 and $k-1$ hidden, linear neurons between the categorical input and the first fully-connected layer. This has some nice empirical res... | How to recode categorical variable into numerical variable when using SVM or Neural Network
You can use entity encoding, which is a more sophisticated network structure. It adds between 1 and $k-1$ hidden, linear neurons between the categorical input and the first fully-connected layer. This |
13,361 | Training a Hidden Markov Model, multiple training instances | Neither concatenating nor running each iteration of training with a different sequence is the right thing to do. The correct approach requires some explanation:
One usually trains an HMM using an E-M algorithm. This consists of several iterations. Each iteration has one "estimate" and one "maximize" step. In the "maximize"... | Training a Hidden Markov Model, multiple training instances | Neither concatenating nor running each iteration of training with a different sequence is the right thing to do. The correct approach requires some explanation:
One usually trains an HMM using an E-M algo | Training a Hidden Markov Model, multiple training instances
Neither concatenating nor running each iteration of training with a different sequence is the right thing to do. The correct approach requires some explanation:
One usually trains an HMM using an E-M algorithm. This consists of several iterations. Each iteration h... | Training a Hidden Markov Model, multiple training instances
Neither concatenating nor running each iteration of training with a different sequence is the right thing to do. The correct approach requires some explanation:
One usually trains an HMM using an E-M algo
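The pooled-count idea described above — run the E-step on every sequence separately, sum the expected counts, then do a single M-step — can be sketched for a tiny discrete-emission HMM. This is an unscaled toy (real code needs log-space or scaling for long sequences), and all model sizes and sequences below are invented:

```python
import numpy as np

# One E-M (Baum-Welch) iteration over SEVERAL observation sequences:
# E-step per sequence, pool the expected counts, then one M-step.
rng = np.random.default_rng(1)
S, V = 2, 3                                   # states, symbols
A = np.full((S, S), 1.0 / S)                  # transition matrix
B = rng.dirichlet(np.ones(V), size=S)         # emission matrix (rows sum to 1)
pi = np.full(S, 1.0 / S)                      # initial state distribution
seqs = [[0, 1, 2, 1], [2, 2, 0], [1, 0, 1, 2, 2]]

A_num = np.zeros((S, S)); B_num = np.zeros((S, V)); pi_num = np.zeros(S)
for obs in seqs:
    T = len(obs)
    alpha = np.zeros((T, S)); beta = np.zeros((T, S))
    alpha[0] = pi * B[:, obs[0]]              # forward pass
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[T - 1] = 1.0                         # backward pass
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    ll = alpha[T - 1].sum()                   # likelihood of this sequence
    gamma = alpha * beta / ll                 # P(state at t | sequence)
    pi_num += gamma[0]
    for t in range(T - 1):                    # expected transition counts
        A_num += alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :] / ll
    for t in range(T):                        # expected emission counts
        B_num[:, obs[t]] += gamma[t]

# Single M-step from the pooled counts (the key point: do NOT
# re-normalize after every individual sequence).
A_new = A_num / A_num.sum(axis=1, keepdims=True)
B_new = B_num / B_num.sum(axis=1, keepdims=True)
pi_new = pi_num / len(seqs)
```

Iterating this block to convergence maximizes the joint likelihood of all sequences, which is what the Rabiner tutorial's multi-sequence re-estimation formulas express.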
13,362 | Training a Hidden Markov Model, multiple training instances | Lawrence Rabiner describes a mathematically well-founded approach in this tutorial from IEEE 77. The tutorial is also the 6th chapter of the book Fundamentals of Speech Recognition by Rabiner and Juang.
R.I.A. Davis et al. provide some additional suggestions in this paper.
I have not gone thoroughly through the math, ... | Training a Hidden Markov Model, multiple training instances | Lawrence Rabiner describes a mathematically well-founded approach in this tutorial from IEEE 77. The tutorial is also the 6th chapter of the book Fundamentals of Speech Recognition by Rabiner and Juan | Training a Hidden Markov Model, multiple training instances
Lawrence Rabiner describes a mathematically well-founded approach in this tutorial from IEEE 77. The tutorial is also the 6th chapter of the book Fundamentals of Speech Recognition by Rabiner and Juang.
R.I.A. Davis et al. provide some additional suggestions ... | Training a Hidden Markov Model, multiple training instances
Lawrence Rabiner describes a mathematically well-founded approach in this tutorial from IEEE 77. The tutorial is also the 6th chapter of the book Fundamentals of Speech Recognition by Rabiner and Juan |
13,363 | Training a Hidden Markov Model, multiple training instances | If you follow the math, adding extra training examples means recalculating the way you compute the likelihood. Instead of summing over dimensions, you also sum over training examples.
If you train one model after the other, there is no guarantee that the EM is going to converge for every training example, and you ar... | Training a Hidden Markov Model, multiple training instances | If you follow the math, adding extra training examples means recalculating the way you compute the likelihood. Instead of summing over dimensions, you also sum over training examples.
If you train | Training a Hidden Markov Model, multiple training instances
If you follow the math, adding extra training examples means recalculating the way you compute the likelihood. Instead of summing over dimensions, you also sum over training examples.
If you train one model after the other, there is no guarantee that the EM... | Training a Hidden Markov Model, multiple training instances
If you follow the math, adding extra training examples means recalculating the way you compute the likelihood. Instead of summing over dimensions, you also sum over training examples.
If you train
13,364 | Training a Hidden Markov Model, multiple training instances | This is more of a comment on the paper by RIA Davis referenced by Bittenus (above). I will have to agree with Bittenus, there is not much of a mathematical backing behind the techniques proposed in the paper - it is more of an empirical comparison.
The paper only considers the case wherein the HMM is of a restricted t... | Training a Hidden Markov Model, multiple training instances | This is more of a comment on the paper by RIA Davis referenced by Bittenus (above). I will have to agree with Bittenus, there is not much of a mathematical backing behind the techniques proposed in th | Training a Hidden Markov Model, multiple training instances
This is more of a comment on the paper by RIA Davis referenced by Bittenus (above). I will have to agree with Bittenus, there is not much of a mathematical backing behind the techniques proposed in the paper - it is more of an empirical comparison.
The paper ... | Training a Hidden Markov Model, multiple training instances
This is more of a comment on the paper by RIA Davis referenced by Bittenus (above). I will have to agree with Bittenus, there is not much of a mathematical backing behind the techniques proposed in th |
13,365 | Cholesky versus eigendecomposition for drawing samples from a multivariate normal distribution | The problem was studied by Straka et al. for the Unscented Kalman Filter, which draws (deterministic) samples from a multivariate Normal distribution as part of the algorithm. With some luck, the results might be applicable to the Monte Carlo problem.
The Cholesky Decomposition (CD) and the Eigen Decomposition (ED) -... | Cholesky versus eigendecomposition for drawing samples from a multivariate normal distribution | The problem was studied by Straka et al. for the Unscented Kalman Filter, which draws (deterministic) samples from a multivariate Normal distribution as part of the algorithm. With some luck, the re
The problem was studied by Straka et al. for the Unscented Kalman Filter, which draws (deterministic) samples from a multivariate Normal distribution as part of the algorithm. With some luck, the results might be applicable... | Cholesky versus eigendecomposition for drawing samples from a multivariate normal distribution
The problem was studied by Straka et al. for the Unscented Kalman Filter, which draws (deterministic) samples from a multivariate Normal distribution as part of the algorithm. With some luck, the re
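To make the comparison concrete, here is a hedged Python sketch (not from the cited paper). Both factorizations give Sigma = L L^T, so mean + L z with z ~ N(0, I) has the target distribution either way; the two L's differ only by a rotation. The covariance matrix below reuses the correlations from the R demonstration further down:

```python
import numpy as np

rng = np.random.default_rng(0)
mean = np.array([1.0, -2.0, 0.5])            # arbitrary example mean
Sigma = np.array([[1.0, 0.8, 0.2],
                  [0.8, 1.0, 0.7],
                  [0.2, 0.7, 1.0]])          # positive definite

z = rng.standard_normal((100_000, 3))        # i.i.d. standard normals

L_chol = np.linalg.cholesky(Sigma)           # lower-triangular factor
x_chol = mean + z @ L_chol.T

w, V = np.linalg.eigh(Sigma)                 # Sigma = V diag(w) V^T
L_eig = V * np.sqrt(w)                       # V @ diag(sqrt(w))
x_eig = mean + z @ L_eig.T

cov_chol = np.cov(x_chol.T)                  # both should approximate Sigma
cov_eig = np.cov(x_eig.T)
```

The Cholesky route is typically cheaper; the eigendecomposition is more robust when Sigma is only positive semi-definite.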
13,366 | Cholesky versus eigendecomposition for drawing samples from a multivariate normal distribution | Here is a simple illustration using R to compare the computation time of the two methods.
library(mvtnorm)
library(clusterGeneration)
set.seed(1234)
mean <- rnorm(1000, 0, 1)
sigma <- genPositiveDefMat(1000)
sigma <- sigma$Sigma
eigen.time <- system.time(
rmvnorm(n=1000, mean=mean, sigma = sigma, method = "eigen")
... | Cholesky versus eigendecomposition for drawing samples from a multivariate normal distribution | Here is a simple illustration using R to compare the computation time of the two methods.
library(mvtnorm)
library(clusterGeneration)
set.seed(1234)
mean <- rnorm(1000, 0, 1)
sigma <- genPositiveDefMat | Cholesky versus eigendecomposition for drawing samples from a multivariate normal distribution
Here is a simple illustration using R to compare the computation time of the two methods.
library(mvtnorm)
library(clusterGeneration)
set.seed(1234)
mean <- rnorm(1000, 0, 1)
sigma <- genPositiveDefMat(1000)
sigma <- sigma$Sig... | Cholesky versus eigendecomposition for drawing samples from a multivariate normal distribution
Here is a simple illustration using R to compare the computation time of the two methods.
library(mvtnorm)
library(clusterGeneration)
set.seed(1234)
mean <- rnorm(1000, 0, 1)
sigma <- genPositiveDefMat |
13,367 | Cholesky versus eigendecomposition for drawing samples from a multivariate normal distribution | Here's the manual, or poor-man's, prove-it-to-myself demonstration:
> set.seed(0)
> # The correlation matrix
> corr_matrix = matrix(cbind(1, .80, .2, .80, 1, .7, .2, .7, 1), nrow=3)
> nvar = 3 # Three columns of correlated data points
> nobs = 1e6 # One million observations for each column
> std_norm = matrix(rnorm(nva... | Cholesky versus eigendecomposition for drawing samples from a multivariate normal distribution | Here's the manual, or poor-man's, prove-it-to-myself demonstration:
> set.seed(0)
> # The correlation matrix
> corr_matrix = matrix(cbind(1, .80, .2, .80, 1, .7, .2, .7, 1), nrow=3)
> nvar = 3 # Three | Cholesky versus eigendecomposition for drawing samples from a multivariate normal distribution
Here's the manual, or poor-man's, prove-it-to-myself demonstration:
> set.seed(0)
> # The correlation matrix
> corr_matrix = matrix(cbind(1, .80, .2, .80, 1, .7, .2, .7, 1), nrow=3)
> nvar = 3 # Three columns of correlated da... | Cholesky versus eigendecomposition for drawing samples from a multivariate normal distribution
Here's the manual, or poor-man's, prove-it-to-myself demonstration:
> set.seed(0)
> # The correlation matrix
> corr_matrix = matrix(cbind(1, .80, .2, .80, 1, .7, .2, .7, 1), nrow=3)
> nvar = 3 # Three |
13,368 | Cox baseline hazard | Apparently, basehaz() actually computes a cumulative hazard rate, rather than the hazard rate itself. The formula is as follows:
$$
\hat{H}_0(t) = \sum_{y_{(l)} \leq t} \hat{h}_0(y_{(l)}),
$$
with
$$
\hat{h}_0(y_{(l)}) = \frac{d_{(l)}}{\sum_{j \in R(y_{(l)})} \exp(\mathbf{x}^{\prime}_j \mathbf{\beta})}
$$
where $y_{(1... | Cox baseline hazard | Apparently, basehaz() actually computes a cumulative hazard rate, rather than the hazard rate itself. The formula is as follows:
$$
\hat{H}_0(t) = \sum_{y_{(l)} \leq t} \hat{h}_0(y_{(l)}),
$$
with
$$ | Cox baseline hazard
Apparently, basehaz() actually computes a cumulative hazard rate, rather than the hazard rate itself. The formula is as follows:
$$
\hat{H}_0(t) = \sum_{y_{(l)} \leq t} \hat{h}_0(y_{(l)}),
$$
with
$$
\hat{h}_0(y_{(l)}) = \frac{d_{(l)}}{\sum_{j \in R(y_{(l)})} \exp(\mathbf{x}^{\prime}_j \mathbf{\bet... | Cox baseline hazard
Apparently, basehaz() actually computes a cumulative hazard rate, rather than the hazard rate itself. The formula is as follows:
$$
\hat{H}_0(t) = \sum_{y_{(l)} \leq t} \hat{h}_0(y_{(l)}),
$$
with
$$ |
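The estimator above can be sketched directly in Python: the hazard increment at each distinct event time is the number of deaths there divided by the sum of $\exp(\mathbf{x}'_j \beta)$ over subjects still at risk. All data below are made-up toy values, and the linear predictor is assumed already fitted:

```python
import numpy as np

time = np.array([2.0, 3.0, 3.0, 5.0, 8.0])    # toy follow-up times
event = np.array([1, 1, 1, 0, 1])             # 1 = event, 0 = censored
risk_score = np.exp(np.array([0.5, -0.2, 0.1, 0.0, 0.3]))  # exp(x'beta)

event_times = np.unique(time[event == 1])
H0 = []                                        # (t, cumulative hazard) pairs
cum = 0.0
for t in event_times:
    d = ((time == t) & (event == 1)).sum()     # deaths d_(l) at time t
    at_risk = risk_score[time >= t].sum()      # sum over the risk set R(t)
    cum += d / at_risk
    H0.append((t, cum))
```

The resulting step function is what `basehaz()` returns (by default at the mean covariate values, so centering conventions matter when comparing).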
13,369 | Confidence interval around the ratio of two proportions | The standard way to do this in epidemiology (where a ratio of proportions is usually referred to as a risk ratio) is to first log-transform the ratio, calculate a confidence interval on the log scale using the delta method and assuming a normal distribution, then transform back. This works better in moderate sample siz... | Confidence interval around the ratio of two proportions | The standard way to do this in epidemiology (where a ratio of proportions is usually referred to as a risk ratio) is to first log-transform the ratio, calculate a confidence interval on the log scale | Confidence interval around the ratio of two proportions
The standard way to do this in epidemiology (where a ratio of proportions is usually referred to as a risk ratio) is to first log-transform the ratio, calculate a confidence interval on the log scale using the delta method and assuming a normal distribution, then ... | Confidence interval around the ratio of two proportions
The standard way to do this in epidemiology (where a ratio of proportions is usually referred to as a risk ratio) is to first log-transform the ratio, calculate a confidence interval on the log scale |
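The log-transform/delta-method interval described above is short enough to sketch directly; `z = 1.96` gives the usual 95% level, and the counts below are toy values:

```python
import math

def risk_ratio_ci(x1, n1, x2, n2, z=1.96):
    """Wald CI for p1/p2 on the log scale; x = events, n = trials."""
    rr = (x1 / n1) / (x2 / n2)
    # Delta-method standard error of log(RR)
    se_log = math.sqrt(1 / x1 - 1 / n1 + 1 / x2 - 1 / n2)
    lo = rr * math.exp(-z * se_log)
    hi = rr * math.exp(z * se_log)
    return rr, lo, hi

rr, lo, hi = risk_ratio_ci(15, 100, 10, 100)   # toy counts
```

Note the formula breaks down with zero event counts, where exact or score-based intervals are preferable.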
13,370 | Why does ridge regression classifier work quite well for text classification? | Text classification problems tend to be quite high dimensional (many features), and high dimensional problems are likely to be linearly separable (as you can separate any d+1 points in a d-dimensional space with a linear classifier, regardless of how the points are labelled). So linear classifiers, whether ridge regre... | Why does ridge regression classifier work quite well for text classification? | Text classification problems tend to be quite high dimensional (many features), and high dimensional problems are likely to be linearly separable (as you can separate any d+1 points in a d-dimensional | Why does ridge regression classifier work quite well for text classification?
Text classification problems tend to be quite high dimensional (many features), and high dimensional problems are likely to be linearly separable (as you can separate any d+1 points in a d-dimensional space with a linear classifier, regardles... | Why does ridge regression classifier work quite well for text classification?
Text classification problems tend to be quite high dimensional (many features), and high dimensional problems are likely to be linearly separable (as you can separate any d+1 points in a d-dimensional |
13,371 | Why does ridge regression classifier work quite well for text classification? | Ridge regression, as the name suggests, is a method for regression rather than classification. Presumably you are using a threshold to turn it into a classifier. In any case, you are simply learning a linear classifier that is defined by a hyperplane. The reason it is working is because the task at hand is essentially ... | Why does ridge regression classifier work quite well for text classification? | Ridge regression, as the name suggests, is a method for regression rather than classification. Presumably you are using a threshold to turn it into a classifier. In any case, you are simply learning a | Why does ridge regression classifier work quite well for text classification?
Ridge regression, as the name suggests, is a method for regression rather than classification. Presumably you are using a threshold to turn it into a classifier. In any case, you are simply learning a linear classifier that is defined by a hy... | Why does ridge regression classifier work quite well for text classification?
Ridge regression, as the name suggests, is a method for regression rather than classification. Presumably you are using a threshold to turn it into a classifier. In any case, you are simply learning a |
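A minimal sketch of such a "ridge regression classifier": regress $\pm 1$ labels on the features with an L2 penalty, then threshold the fitted values at 0. The tiny dense feature matrix is invented for illustration; real text data would be a very wide sparse matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((20, 5))
true_w = np.array([2.0, -1.5, 0.0, 1.0, -0.5])
y = np.where(X @ true_w - 0.5 > 0, 1.0, -1.0)   # linearly separable labels

X1 = np.column_stack([np.ones(len(X)), X])      # add an intercept column
lam = 0.1                                       # ridge penalty
w = np.linalg.solve(X1.T @ X1 + lam * np.eye(X1.shape[1]), X1.T @ y)
pred = np.where(X1 @ w > 0, 1.0, -1.0)          # threshold at 0
train_acc = (pred == y).mean()
```

As the answer notes, the regularization is what keeps the hyperplane sensible when the data are separable in many directions.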
13,372 | How to add periodic component to linear regression model? | You could try the wonderful stl() method -- it decomposes (using iterated loess() fitting) into trend and seasonal and remainder. This may just pick up your oscillations here. | How to add periodic component to linear regression model? | You could try the wonderful stl() method -- it decomposes (using iterated loess() fitting) into trend and seasonal and remainder. This may just pick up your oscillations here. | How to add periodic component to linear regression model?
You could try the wonderful stl() method -- it decomposes (using iterated loess() fitting) into trend and seasonal and remainder. This may just pick up your oscillations here. | How to add periodic component to linear regression model?
You could try the wonderful stl() method -- it decomposes (using iterated loess() fitting) into trend and seasonal and remainder. This may just pick up your oscillations here. |
13,373 | How to add periodic component to linear regression model? | If you know the frequency of the oscillation, you can include two additional predictors, sin(2π w t) and cos(2π w t) -- set w to get the desired wavelength -- and this will model the oscillation. You need both terms to fit the amplitude and the phase angle. If there is more than one frequency, you will need a sine an... | How to add periodic component to linear regression model? | If you know the frequency of the oscillation, you can include two additional predictors, sin(2π w t) and cos(2π w t) -- set w to get the desired wavelength -- and this will model the oscillation. You | How to add periodic component to linear regression model?
If you know the frequency of the oscillation, you can include two additional predictors, sin(2π w t) and cos(2π w t) -- set w to get the desired wavelength -- and this will model the oscillation. You need both terms to fit the amplitude and the phase angle. If... | How to add periodic component to linear regression model?
If you know the frequency of the oscillation, you can include two additional predictors, sin(2π w t) and cos(2π w t) -- set w to get the desired wavelength -- and this will model the oscillation. You |
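A hedged Python sketch of this two-column trick, on synthetic data with a known oscillation (the frequency, amplitude, and phase below are made up so the recovery can be checked):

```python
import numpy as np

rng = np.random.default_rng(0)
w = 1.0                                    # one cycle per unit of t
t = np.linspace(0.0, 14.0, 500)
y = (2.0 + 0.5 * t
     + 1.3 * np.sin(2 * np.pi * w * t + 0.7)
     + 0.05 * rng.standard_normal(t.size))

# Design matrix: intercept, trend, and the sin/cos pair at frequency w.
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * w * t),
                     np.cos(2 * np.pi * w * t)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

amplitude = np.hypot(coef[2], coef[3])     # sqrt(a^2 + b^2)
phase = np.arctan2(coef[3], coef[2])
```

The identity $a\sin\theta + b\cos\theta = A\sin(\theta + \varphi)$ with $A=\sqrt{a^2+b^2}$, $\varphi=\operatorname{atan2}(b,a)$ is why the two columns jointly capture amplitude and phase.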
13,374 | How to add periodic component to linear regression model? | Let's begin by observing that ordinary least squares fitting for these data is likely inappropriate. If the individual data being accumulated are assumed, as usual, to have random error components, then the error in the cumulative data (not the cumulative frequencies--that's something different than what you have) is ... | How to add periodic component to linear regression model? | Let's begin by observing that ordinary least squares fitting for these data is likely inappropriate. If the individual data being accumulated are assumed, as usual, to have random error components, t | How to add periodic component to linear regression model?
Let's begin by observing that ordinary least squares fitting for these data is likely inappropriate. If the individual data being accumulated are assumed, as usual, to have random error components, then the error in the cumulative data (not the cumulative frequ... | How to add periodic component to linear regression model?
Let's begin by observing that ordinary least squares fitting for these data is likely inappropriate. If the individual data being accumulated are assumed, as usual, to have random error components, t |
13,375 | How to add periodic component to linear regression model? | Clearly the dominant oscillation has period one day. Looks like there are also lower-frequency components relating to the day of the week, so add a component with frequency one week (i.e. one-seventh of a day) and its first few harmonics. That gives a model of the form:
$$\mbox{E}(y) = c + a_0 \cos(2\pi t) + b_0 \sin(2... | How to add periodic component to linear regression model? | Clearly the dominant oscillation has period one day. Looks like there are also lower-frequency components relating to the day of the week, so add a component with frequency one week (i.e. one-seventh | How to add periodic component to linear regression model?
Clearly the dominant oscillation has period one day. Looks like there are also lower-frequency components relating to the day of the week, so add a component with frequency one week (i.e. one-seventh of a day) and its first few harmonics. That gives a model of t... | How to add periodic component to linear regression model?
Clearly the dominant oscillation has period one day. Looks like there are also lower-frequency components relating to the day of the week, so add a component with frequency one week (i.e. one-seventh |
13,376 | Moments of a distribution - any use for partial or higher moments? | Aside from special properties of a few numbers (e.g., 2), the only real reason to single out integer moments as opposed to fractional moments is convenience.
Higher moments can be used to understand tail behavior. For example, a centered random variable $X$ with variance 1 has subgaussian tails (i.e. $\mathbb{P}(|X| >... | Moments of a distribution - any use for partial or higher moments? | Aside from special properties of a few numbers (e.g., 2), the only real reason to single out integer moments as opposed to fractional moments is convenience.
Higher moments can be used to understand t | Moments of a distribution - any use for partial or higher moments?
Aside from special properties of a few numbers (e.g., 2), the only real reason to single out integer moments as opposed to fractional moments is convenience.
Higher moments can be used to understand tail behavior. For example, a centered random variabl... | Moments of a distribution - any use for partial or higher moments?
Aside from special properties of a few numbers (e.g., 2), the only real reason to single out integer moments as opposed to fractional moments is convenience.
Higher moments can be used to understand t |
13,377 | Moments of a distribution - any use for partial or higher moments? | I get suspicious when I hear people ask about third and fourth moments. There are two common errors people often have in mind when they bring up the topic. I'm not saying that you are necessarily making these mistakes, but they do come up often.
First, it sounds like they implicitly believe that distributions can be b... | Moments of a distribution - any use for partial or higher moments? | I get suspicious when I hear people ask about third and fourth moments. There are two common errors people often have in mind when they bring up the topic. I'm not saying that you are necessarily mak | Moments of a distribution - any use for partial or higher moments?
I get suspicious when I hear people ask about third and fourth moments. There are two common errors people often have in mind when they bring up the topic. I'm not saying that you are necessarily making these mistakes, but they do come up often.
First,... | Moments of a distribution - any use for partial or higher moments?
I get suspicious when I hear people ask about third and fourth moments. There are two common errors people often have in mind when they bring up the topic. I'm not saying that you are necessarily mak |
13,378 | Moments of a distribution - any use for partial or higher moments? | One example of use (interpretation is a better qualifier) of a higher moment: the fifth moment of a univariate distribution measures the asymmetry of its tails. | Moments of a distribution - any use for partial or higher moments? | One example of use (interpretation is a better qualifier) of a higher moment: the fifth moment of a univariate distribution measures the asymmetry of its tails. | Moments of a distribution - any use for partial or higher moments?
One example of use (interpretation is a better qualifier) of a higher moment: the fifth moment of a univariate distribution measures the asymmetry of its tails. | Moments of a distribution - any use for partial or higher moments?
One example of use (interpretation is a better qualifier) of a higher moment: the fifth moment of a univariate distribution measures the asymmetry of its tails. |
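The fifth standardized moment mentioned above can be estimated directly; a minimal numpy sketch (the function name and sample sizes are made up for illustration):

```python
import numpy as np

def standardized_moment(x, k):
    """Estimate the k-th standardized central moment E[((X - mu)/sigma)^k]."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return np.mean(z ** k)

rng = np.random.default_rng(0)
symmetric = rng.normal(size=100_000)
skewed = rng.exponential(size=100_000)

# k = 3 is the usual skewness; k = 5 also vanishes for symmetric
# distributions but weights the tails much more heavily.
print(standardized_moment(symmetric, 5))
print(standardized_moment(skewed, 5))
```

For a symmetric distribution the odd standardized moments are near zero, while the heavy right tail of the exponential pushes its fifth moment far above its skewness.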
13,379 | How to simulate data to be statistically significant? | General Comments
"I am in 10th grade and I am looking to simulate data for a machine learning science fair project." Awesome. I did not care at all about math in 10th grade; I think I took something like Algebra 2 that year...? I can't wait until you put me out of a job in a few years! I give some advice below, but: W... | How to simulate data to be statistically significant? | General Comments
"I am in 10th grade and I am looking to simulate data for a machine learning science fair project." Awesome. I did not care at all about math in 10th grade; I think I took something | How to simulate data to be statistically significant?
General Comments
"I am in 10th grade and I am looking to simulate data for a machine learning science fair project." Awesome. I did not care at all about math in 10th grade; I think I took something like Algebra 2 that year...? I can't wait until you put me out of ... | How to simulate data to be statistically significant?
General Comments
"I am in 10th grade and I am looking to simulate data for a machine learning science fair project." Awesome. I did not care at all about math in 10th grade; I think I took something |
13,380 | How to simulate data to be statistically significant? | If you already know some Python, then you will definitely be able to achieve what you need using base Python along with numpy and/or pandas. As Mark White suggests though, a lot of simulation and stats-related stuff is baked into R, so definitely worth a look.
Below is a basic framework for how you might approach this ... | How to simulate data to be statistically significant? | If you already know some Python, then you will definitely be able to achieve what you need using base Python along with numpy and/or pandas. As Mark White suggests though, a lot of simulation and stat | How to simulate data to be statistically significant?
If you already know some Python, then you will definitely be able to achieve what you need using base Python along with numpy and/or pandas. As Mark White suggests though, a lot of simulation and stats-related stuff is baked into R, so definitely worth a look.
Below... | How to simulate data to be statistically significant?
If you already know some Python, then you will definitely be able to achieve what you need using base Python along with numpy and/or pandas. As Mark White suggests though, a lot of simulation and stat |
13,381 | How to simulate data to be statistically significant? | This is a great project. There is a challenge for projects like this, and your method of using simulated data is a great way of assessing it.
Do you have an a priori hypothesis, e.g. "people are more forgetful in the evening"? In that case, a statistical test that compares the frequency of forgetting in the evening co... | How to simulate data to be statistically significant? | This is a great project. There is a challenge for projects like this, and your method of using simulated data is a great way of assessing it.
Do you have an a priori hypothesis, e.g. "people are more | How to simulate data to be statistically significant?
This is a great project. There is a challenge for projects like this, and your method of using simulated data is a great way of assessing it.
Do you have an a priori hypothesis, e.g. "people are more forgetful in the evening"? In that case, a statistical test that ... | How to simulate data to be statistically significant?
This is a great project. There is a challenge for projects like this, and your method of using simulated data is a great way of assessing it.
Do you have an a priori hypothesis, e.g. "people are more |
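As a sketch of the kind of simulation the answers above describe, with entirely made-up forgetting probabilities (the effect size here is an assumption for the demo, not data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical forgetting probabilities by time of day -- these numbers
# are assumptions for the demonstration, not estimates from real data.
p_morning, p_evening = 0.20, 0.35
n = 500  # simulated observations per condition

morning = rng.binomial(1, p_morning, size=n)  # 1 = forgot the task
evening = rng.binomial(1, p_evening, size=n)

diff = evening.mean() - morning.mean()
print(f"morning rate {morning.mean():.3f}, "
      f"evening rate {evening.mean():.3f}, difference {diff:.3f}")
```

A test comparing the two observed frequencies (e.g. a two-proportion test) could then be applied to the simulated samples.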
13,382 | An example where the likelihood principle *really* matters? | Think about a hypothetical situation when a point null hypothesis is true but one keeps sampling until $p<0.05$ (this will always happen sooner or later, i.e. it will happen with probability 1) and then decides to stop the trial and reject the null. This is an admittedly extreme stopping rule but consider it for the sa... | An example where the likelihood principle *really* matters? | Think about a hypothetical situation when a point null hypothesis is true but one keeps sampling until $p<0.05$ (this will always happen sooner or later, i.e. it will happen with probability 1) and th | An example where the likelihood principle *really* matters?
Think about a hypothetical situation when a point null hypothesis is true but one keeps sampling until $p<0.05$ (this will always happen sooner or later, i.e. it will happen with probability 1) and then decides to stop the trial and reject the null. This is an... | An example where the likelihood principle *really* matters?
Think about a hypothetical situation when a point null hypothesis is true but one keeps sampling until $p<0.05$ (this will always happen sooner or later, i.e. it will happen with probability 1) and th |
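The optional-stopping effect described above is easy to demonstrate by simulation; a sketch assuming i.i.d. N(0,1) data with known variance (the cap on the sample size just keeps the demo finite):

```python
import numpy as np

rng = np.random.default_rng(0)

def stops_early(max_n=5000, batch=10):
    """Add batches of N(0,1) draws and z-test H0: mu = 0 after each batch
    (sigma = 1 treated as known); stop as soon as the two-sided p-value
    drops below 0.05. The cap max_n keeps the demonstration finite."""
    total, n = 0.0, 0
    while n < max_n:
        total += rng.normal(size=batch).sum()
        n += batch
        if abs(total / n) * np.sqrt(n) > 1.96:  # |z| > 1.96 <=> p < 0.05
            return True
    return False

runs = 200
rate = sum(stops_early() for _ in range(runs)) / runs
print(rate)  # far above the nominal 0.05 even though H0 is true
```

Even with this finite cap, the realized rejection rate greatly exceeds the nominal level, which is the inflation the stopping rule produces.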
13,383 | An example where the likelihood principle *really* matters? | Disclaimer: I believe this answer is at the core of the entire argument, so it is worth discussion, but I haven't fully explored the issue. As such, I welcome corrections, refinements and comments.
The most important aspect is in regards to sequentially collected data. For example, suppose you observed binary outcomes, an... | An example where the likelihood principle *really* matters? | Disclaimer: I believe this answer is at the core of the entire argument, so it is worth discussion, but I haven't fully explored the issue. As such, I welcome corrections, refinements and comments.
The m | An example where the likelihood principle *really* matters?
Disclaimer: I believe this answer is at the core of the entire argument, so it is worth discussion, but I haven't fully explored the issue. As such, I welcome corrections, refinements and comments.
The most important aspect is in regards to sequentially collected... | An example where the likelihood principle *really* matters?
Disclaimer: I believe this answer is at the core of the entire argument, so it is worth discussion, but I haven't fully explored the issue. As such, I welcome corrections, refinements and comments.
The m |
13,384 | An example where the likelihood principle *really* matters? | Outline of LR tests for exponential data.
Let $X_1, X_2, \dots, X_n$ be a random sample from
$\mathsf{Exp}(\text{rate} =\lambda),$ so that $E(X_i) = \mu = 1/\lambda.$
For $x > 0,$ the density function is $f(x) = \lambda e^{-\lambda x}$ and
the CDF is $F(x) = 1 - e^{-\lambda x}.$
1. Test statistic is sample minimum.
L... | An example where the likelihood principle *really* matters? | Outline of LR tests for exponential data.
Let $X_1, X_2, \dots, X_n$ be a random sample from
$\mathsf{Exp}(\text{rate} =\lambda),$ so that $E(X_i) = \mu = 1/\lambda.$
For $x > 0,$ the density functio | An example where the likelihood principle *really* matters?
Outline of LR tests for exponential data.
Let $X_1, X_2, \dots, X_n$ be a random sample from
$\mathsf{Exp}(\text{rate} =\lambda),$ so that $E(X_i) = \mu = 1/\lambda.$
For $x > 0,$ the density function is $f(x) = \lambda e^{-\lambda x}$ and
the CDF is $F(x) = ... | An example where the likelihood principle *really* matters?
Outline of LR tests for exponential data.
Let $X_1, X_2, \dots, X_n$ be a random sample from
$\mathsf{Exp}(\text{rate} =\lambda),$ so that $E(X_i) = \mu = 1/\lambda.$
For $x > 0,$ the density functio |
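The sample-minimum statistic in the outline above has a convenient closed form: the minimum of $n$ i.i.d. Exp(rate $=\lambda$) variables is itself Exp(rate $=n\lambda$), which a quick simulation can confirm (parameter values here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

lam, n, reps = 2.0, 10, 50_000  # arbitrary rate, sample size, replications

# The minimum of n iid Exp(rate = lam) draws is Exp(rate = n * lam),
# so its mean should be close to 1 / (n * lam).
samples = rng.exponential(scale=1 / lam, size=(reps, n))
mins = samples.min(axis=1)

print(mins.mean(), 1 / (n * lam))
```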
13,385 | An example where the likelihood principle *really* matters? | Violation by different pdf functions $f(x,\theta)$ and $g(x,\theta)$
This case will be an example of 'violation' because the probability distribution functions $f(x,\theta)$ and $g(x,\theta)$ are intrinsically different. Even when $f$ and $g$ differ, they may relate to the likelihood principle because at fixed measurement... | An example where the likelihood principle *really* matters? | Violation by different pdf functions $f(x,\theta)$ and $g(x,\theta)$
This case will be an example of 'violation' because the probability distribution functions $f(x,\theta)$ and $g(x,\theta)$ are intrinsi | An example where the likelihood principle *really* matters?
Violation by different pdf functions $f(x,\theta)$ and $g(x,\theta)$
This case will be an example of 'violation' because the probability distribution functions $f(x,\theta)$ and $g(x,\theta)$ are intrinsically different. Even when $f$ and $g$ differ, they may rel... | An example where the likelihood principle *really* matters?
Violation by different pdf functions $f(x,\theta)$ and $g(x,\theta)$
This case will be an example of 'violation' because the probability distribution functions $f(x,\theta)$ and $g(x,\theta)$ are intrinsi
13,386 | An example where the likelihood principle *really* matters? | Here is an example adapted from Statistical decision theory and Bayesian analysis by James O. Berger (Second edition page 29).
Say that two species of wasps can be distinguished by the number of notches on the wings (call this $x$) and by the number of black rings around the abdomen (call this $y$). The distribution of... | An example where the likelihood principle *really* matters? | Here is an example adapted from Statistical decision theory and Bayesian analysis by James O. Berger (Second edition page 29).
Say that two species of wasps can be distinguished by the number of notch | An example where the likelihood principle *really* matters?
Here is an example adapted from Statistical decision theory and Bayesian analysis by James O. Berger (Second edition page 29).
Say that two species of wasps can be distinguished by the number of notches on the wings (call this $x$) and by the number of black r... | An example where the likelihood principle *really* matters?
Here is an example adapted from Statistical decision theory and Bayesian analysis by James O. Berger (Second edition page 29).
Say that two species of wasps can be distinguished by the number of notch |
13,387 | How to calculate out of sample R squared? | First of all, it should be said that for prediction evaluation, i.e. out of sample, the usual $R^2$ is not adequate. This is because the usual $R^2$ is computed on residuals, which are in-sample quantities.
We can define: $R^2 = 1 - RSS/TSS$
RSS = residual sum of squares
TSS = total sum of squares
The main problem here is t... | How to calculate out of sample R squared? | First of all is need to say that for prediction evaluation, then out of sample, the usual $R^2$ is not adequate. It is so because the usual $R^2$ is computed on residuals, that are in sample quantiti | How to calculate out of sample R squared?
First of all, it should be said that for prediction evaluation, i.e. out of sample, the usual $R^2$ is not adequate. This is because the usual $R^2$ is computed on residuals, which are in-sample quantities.
We can define: $R^2 = 1 - RSS/TSS$
RSS = residual sum of squares
TSS = tota... | How to calculate out of sample R squared?
First of all, it should be said that for prediction evaluation, i.e. out of sample, the usual $R^2$ is not adequate. This is because the usual $R^2$ is computed on residuals, which are in-sample quantiti
13,388 | How to calculate out of sample R squared? | You are correct.
The OSR$^2$ residuals are based on testing data, but the baseline should still be training data. With that said, your SST is $SST=Σ(y−\bar y_{train})^2$; notice that this is the same as for $R^2$ | How to calculate out of sample R squared? | You are correct.
The OSR$^2$ residuals are based on testing data, but the baseline should still be training data. With that said, your SST is $SST=Σ(y−\bar y_{train})^2$; notice that this is the same | How to calculate out of sample R squared?
You are correct.
The OSR$^2$ residuals are based on testing data, but the baseline should still be training data. With that said, your SST is $SST=Σ(y−\bar y_{train})^2$; notice that this is the same as for $R^2$ | How to calculate out of sample R squared?
You are correct.
The OSR$^2$ residuals are based on testing data, but the baseline should still be training data. With that said, your SST is $SST=Σ(y−\bar y_{train})^2$; notice that this is the same
13,389 | How to calculate out of sample R squared? | We have just published an article on this subject in The American Statistician here
Similar to @markowitz, we define out-of-sample $R^2$ as a comparison of two out-of-sample models: the null model using only the mean outcome of the training data $\bar{y}_{train}$, and the more elaborate model using covariate informatio... | How to calculate out of sample R squared? | We have just published an article on this subject in The American Statistician here
Similar to @markowitz, we define out-of-sample $R^2$ as a comparison of two out-of-sample models: the null model usi | How to calculate out of sample R squared?
We have just published an article on this subject in The American Statistician here
Similar to @markowitz, we define out-of-sample $R^2$ as a comparison of two out-of-sample models: the null model using only the mean outcome of the training data $\bar{y}_{train}$, and the more ... | How to calculate out of sample R squared?
We have just published an article on this subject in The American Statistician here
Similar to @markowitz, we define out-of-sample $R^2$ as a comparison of two out-of-sample models: the null model usi |
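A sketch of the $\bar{y}_{train}$-baseline definition used in the answers above (the function name and the toy numbers are illustrative):

```python
import numpy as np

def oos_r_squared(y_test, y_pred, y_train_mean):
    """Out-of-sample R^2 with the training mean as baseline:
    1 - SSE / sum((y_test - y_train_mean)^2)."""
    y_test = np.asarray(y_test, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    sse = np.sum((y_test - y_pred) ** 2)
    sst = np.sum((y_test - y_train_mean) ** 2)
    return 1 - sse / sst

# Toy numbers, purely illustrative
y_train = [1.0, 2.0, 3.0, 4.0]
y_test = [2.0, 3.0, 5.0]
y_pred = [2.1, 2.8, 4.9]
print(oos_r_squared(y_test, y_pred, np.mean(y_train)))
```

The key point from the answers is that only the residuals use test data; the baseline mean comes from the training set.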
13,390 | Why do we say "Residual standard error"? | As mentioned in a comment by NRH to one of the other answers, the documentation for stats::sigma says:
The misnomer “Residual standard error” has been part of too many R (and S) outputs to be easily changed there.
This tells me that the developers know this terminology to be bogus. However, since it has crept into... | Why do we say "Residual standard error"? | As mentioned in a comment by NRH to one of the other answers, the documentation for stats::sigma says:
The misnomer “Residual standard error” has been part of too many R (and S) outputs to be easi | Why do we say "Residual standard error"?
As mentioned in a comment by NRH to one of the other answers, the documentation for stats::sigma says:
The misnomer “Residual standard error” has been part of too many R (and S) outputs to be easily changed there.
This tells me that the developers know this terminology to b... | Why do we say "Residual standard error"?
As mentioned in a comment by NRH to one of the other answers, the documentation for stats::sigma says:
The misnomer “Residual standard error” has been part of too many R (and S) outputs to be easi |
13,391 | Why do we say "Residual standard error"? | I think that phrasing is specific to R's summary.lm() output. Notice that the underlying value is actually called "sigma" (summary.lm()$sigma). I don't think other software necessarily uses that name for the standard deviation of the residuals. In addition, the phrasing 'residual standard deviation' is common in tex... | Why do we say "Residual standard error"? | I think that phrasing is specific to R's summary.lm() output. Notice that the underlying value is actually called "sigma" (summary.lm()$sigma). I don't think other software necessarily uses that nam | Why do we say "Residual standard error"?
I think that phrasing is specific to R's summary.lm() output. Notice that the underlying value is actually called "sigma" (summary.lm()$sigma). I don't think other software necessarily uses that name for the standard deviation of the residuals. In addition, the phrasing 'resi... | Why do we say "Residual standard error"?
I think that phrasing is specific to R's summary.lm() output. Notice that the underlying value is actually called "sigma" (summary.lm()$sigma). I don't think other software necessarily uses that nam |
13,392 | Why do we say "Residual standard error"? | From my econometrics training, it is called "residual standard error" because it is an estimate of the actual "residual standard deviation". See this related question that corroborates this terminology.
A Google search for the term residual standard error also shows up a lot of hits, so it is by no means an R oddity. I... | Why do we say "Residual standard error"? | From my econometrics training, it is called "residual standard error" because it is an estimate of the actual "residual standard deviation". See this related question that corroborates this terminolog | Why do we say "Residual standard error"?
From my econometrics training, it is called "residual standard error" because it is an estimate of the actual "residual standard deviation". See this related question that corroborates this terminology.
A Google search for the term residual standard error also shows up a lot of ... | Why do we say "Residual standard error"?
From my econometrics training, it is called "residual standard error" because it is an estimate of the actual "residual standard deviation". See this related question that corroborates this terminolog |
13,393 | Why do we say "Residual standard error"? | This is really, really confusing use of the term "standard error". I teach Introductory Statistics at a college, and this is one of the most confusing details in R for students (along with R using standard deviation and not variance in its various pnorm, qnorm, etc. commands).
A standard error, from a statistical sens... | Why do we say "Residual standard error"? | This is really, really confusing use of the term "standard error". I teach Introductory Statistics at a college, and this is one of the most confusing details in R for students (along with R using st | Why do we say "Residual standard error"?
This is really, really confusing use of the term "standard error". I teach Introductory Statistics at a college, and this is one of the most confusing details in R for students (along with R using standard deviation and not variance in its various pnorm, qnorm, etc. commands).
... | Why do we say "Residual standard error"?
This is really, really confusing use of the term "standard error". I teach Introductory Statistics at a college, and this is one of the most confusing details in R for students (along with R using st |
13,394 | Why do we say "Residual standard error"? | Put simply, the standard error of the sample is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean.
Standard error - Wikipedia, the free encyclopedia | Why do we say "Residual standard error"? | Put simply, the standard error of the sample is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which ind | Why do we say "Residual standard error"?
Put simply, the standard error of the sample is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean.
Standard error - Wik... | Why do we say "Residual standard error"?
Put simply, the standard error of the sample is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which ind |
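The distinction above can be shown in a couple of lines of numpy (the sample size is chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=10, scale=2, size=400)  # arbitrary sample

sd = x.std(ddof=1)          # spread of individual values around the mean
se = sd / np.sqrt(len(x))   # estimated spread of the sample mean itself

print(sd, se)  # se = sd / sqrt(400), i.e. 20 times smaller
```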
13,395 | Why do we say "Residual standard error"? | A fitted regression model uses the parameters to generate point estimate predictions which are the means of observed responses if you were to replicate the study with the same $X$ values an infinite number of times (when the linear model is true).
The difference between these predicted values and the ones used to fit th... | Why do we say "Residual standard error"? | A fitted regression model uses the parameters to generate point estimate predictions which are the means of observed responses if you were to replicate the study with the same XX values an infinite nu | Why do we say "Residual standard error"?
A fitted regression model uses the parameters to generate point estimate predictions which are the means of observed responses if you were to replicate the study with the same $X$ values an infinite number of times (when the linear model is true).
The difference between these pre... | Why do we say "Residual standard error"?
A fitted regression model uses the parameters to generate point estimate predictions which are the means of observed responses if you were to replicate the study with the same XX values an infinite nu |
13,396 | Why do we say "Residual standard error"? | For the nls (nonlinear least squares fit) R function, the "Residual standard error" seems to be:
$$\sqrt{\frac{\mathrm{RSS}}{n-p}}$$
where RSS is the "residual sum-of-squares", n is the number of observations and p is the number of estimated parameters. There's absolutely no description in the documentation, this assum... | Why do we say "Residual standard error"? | For the nls (nonlinear least squares fit) R function, the "Residual standard error" seems to be:
$$\sqrt{\frac{\mathrm{RSS}}{n-p}}$$
where RSS is the "residual sum-of-squares", n is the number of obse | Why do we say "Residual standard error"?
For the nls (nonlinear least squares fit) R function, the "Residual standard error" seems to be:
$$\sqrt{\frac{\mathrm{RSS}}{n-p}}$$
where RSS is the "residual sum-of-squares", n is the number of observations and p is the number of estimated parameters. There's absolutely no des... | Why do we say "Residual standard error"?
For the nls (nonlinear least squares fit) R function, the "Residual standard error" seems to be:
$$\sqrt{\frac{\mathrm{RSS}}{n-p}}$$
where RSS is the "residual sum-of-squares", n is the number of obse |
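Assuming that formula, the quantity can be reproduced for an ordinary least-squares fit on toy data (the numbers are made up; this is a sketch, not R's implementation):

```python
import numpy as np

# Toy straight-line fit by least squares (p = 2 parameters), then
# sqrt(RSS / (n - p)) -- the quantity R labels "Residual standard error".
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.1, 1.9, 4.2, 5.8, 8.1, 9.9])  # made-up data

X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

rss = np.sum((y - X @ beta) ** 2)
n, p = len(y), X.shape[1]
residual_std_error = np.sqrt(rss / (n - p))
print(residual_std_error)
```

Dividing by the residual degrees of freedom $n - p$ rather than $n$ is what makes the squared quantity an unbiased estimator of the error variance.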
13,397 | Generalized additive models -- who does research on them besides Simon Wood? | There are many researchers on GAMs: it's just that basically the same model (GLM with linear predictor given by sum of smooth functions) is given lots of different names. You'll find models that you could refer to as GAMs called: semiparametric regression models, smoothing spline ANOVA models, structured additive regre... | Generalized additive models -- who does research on them besides Simon Wood? | There are many researchers on GAMs: it's just that basically the same model (GLM with linear predictor given by sum of smooth functions) is given lots of different names. You'll find models that you c | Generalized additive models -- who does research on them besides Simon Wood?
There are many researchers on GAMs: it's just that basically the same model (GLM with linear predictor given by sum of smooth functions) is given lots of different names. You'll find models that you could refer to as GAMs called: semiparametri... | Generalized additive models -- who does research on them besides Simon Wood?
There are many researchers on GAMs: it's just that basically the same model (GLM with linear predictor given by sum of smooth functions) is given lots of different names. You'll find models that you c |
13,398 | Generalized additive models -- who does research on them besides Simon Wood? | google scholar gives a lot of hits, in addition to the references above, and in comments, some which looks interesting is:
http://www.sciencedirect.com/science/article/pii/S0304380002002041 GAM's in studies of species distributions, published in "Ecological Modelling"
http://aje.oxfordjournals.org/content/156/3/193... | Generalized additive models -- who does research on them besides Simon Wood? | google scholar gives a lot of hits, in addition to the references above, and in comments, some which looks interesting is:
http://www.sciencedirect.com/science/article/pii/S0304380002002041 GAM's | Generalized additive models -- who does research on them besides Simon Wood?
google scholar gives a lot of hits, in addition to the references above, and in comments, some which looks interesting is:
http://www.sciencedirect.com/science/article/pii/S0304380002002041 GAM's in studies of species distributions, publis... | Generalized additive models -- who does research on them besides Simon Wood?
google scholar gives a lot of hits, in addition to the references above, and in comments, some which looks interesting is:
http://www.sciencedirect.com/science/article/pii/S0304380002002041 GAM's |
13,399 | When can we speak of collinearity | There is no 'bright line' between not too much collinearity and too much collinearity (except in the trivial sense that $r = 1.0$ is definitely too much). Analysts would not typically think of $r = .50$ as too much collinearity between two variables. A rule of thumb regarding multicollinearity is that you have too mu... | When can we speak of collinearity | There is no 'bright line' between not too much collinearity and too much collinearity (except in the trivial sense that $r = 1.0$ is definitely too much). Analysts would not typically think of $r = . | When can we speak of collinearity
There is no 'bright line' between not too much collinearity and too much collinearity (except in the trivial sense that $r = 1.0$ is definitely too much). Analysts would not typically think of $r = .50$ as too much collinearity between two variables. A rule of thumb regarding multico... | When can we speak of collinearity
There is no 'bright line' between not too much collinearity and too much collinearity (except in the trivial sense that $r = 1.0$ is definitely too much). Analysts would not typically think of $r = . |
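One widely used multicollinearity diagnostic alongside pairwise correlations is the variance inflation factor; a self-contained numpy sketch (the data and the near-collinearity are contrived for illustration):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (no intercept column):
    VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing column j
    on the remaining columns (with an intercept)."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1 / (1 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.1 * rng.normal(size=200)  # nearly collinear with x1
x3 = rng.normal(size=200)             # independent predictor
print(vif(np.column_stack([x1, x2, x3])))
```

Under the common rule of thumb, VIF values near 1 are unproblematic while very large values (often 10 is cited) flag predictors that are nearly linear combinations of the others.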
13,400 | When can we speak of collinearity | My take on the three questions is
Question 1 What classifies as too much correlation? For example: a pearson correlation of 0.5 is that too much?
Many authors argue that (multi-)collinearity is not a problem. Take a look here and here for a rather acid opinion on the subject. The bottom line is that multicollinearit... | When can we speak of collinearity | My take on the three questions is
Question 1 What classifies as too much correlation? For example: a pearson correlation of 0.5 is that too much?
Many authors argue that (multi-)collinearity is not | When can we speak of collinearity
My take on the three questions is
Question 1 What classifies as too much correlation? For example: a pearson correlation of 0.5 is that too much?
Many authors argue that (multi-)collinearity is not a problem. Take a look here and here for a rather acid opinion on the subject. The bo... | When can we speak of collinearity
My take on the three questions is
Question 1 What classifies as too much correlation? For example: a pearson correlation of 0.5 is that too much?
Many authors argue that (multi-)collinearity is not |