Dataset columns (name: type, length/value range):
idx: int64, 1 to 56k
question: string, lengths 15 to 155
answer: string, lengths 2 to 29.2k
question_cut: string, lengths 15 to 100
answer_cut: string, lengths 2 to 200
conversation: string, lengths 47 to 29.3k
conversation_cut: string, lengths 47 to 301
12,901
Is median fairer than mean?
The great thing about using the median for star ratings is that smart users (aware of the use of the median) won't "game" the system: If a rational user thinks the proper rating should be 4 stars, but it currently has 4.5 stars, then the best way to get to four stars (assuming there have been more than six votes) is to...
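The gaming argument above can be sketched numerically. The ratings below are a hypothetical toy example (not taken from the answer): a voter who believes the true rating is 4 faces an item currently at 4.5.

```python
from statistics import mean, median

ratings = [5, 5, 4, 4]        # current ratings: mean = median = 4.5
honest = ratings + [4]        # the voter's true opinion
strategic = ratings + [1]     # an exaggerated low vote

# Under the median, the honest vote already lands at 4;
# the exaggerated vote buys nothing extra.
print(median(honest), median(strategic))   # 4 4
# Under the mean, exaggeration pays: 4.4 vs 3.8
print(mean(honest), mean(strategic))       # 4.4 3.8
```

So with a median-based score, voting one's honest rating is already the best strategy, which is exactly the incentive argument the answer makes.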
12,902
Is median fairer than mean?
Several good answers still leave room for more comments. First, no one has objected to the idea that the median is intended to eliminate outliers, but I will qualify it. The intended meaning is evident, but it is easy for real data to be more complicated. At most, the median is intended to discount or ignore outliers,...
12,903
What is the point of reporting descriptive statistics?
In my field, the descriptive part of the report is extremely important because it sets the context for the generalisability of the results. For example, a researcher wishes to identify the predictors of traumatic brain injury following motorcycle accidents in a sample from a hospital. Her dependent variable is binary a...
12,904
What is the point of reporting descriptive statistics?
The point of providing descriptive statistics is to characterise your sample so that people in other centres or countries can assess whether your results generalise to their situation. So in your case tabulating the sex, grades and so on would be a beneficial addition to the logistic regression. It is not to enable peo...
12,905
What is the point of reporting descriptive statistics?
Another thing is to show how well behaved your variables are. If, for example, one of your variables is the salary, and you have interviewed exactly one billionaire, then when you input his salary into the logistic regression it is going to dominate over everything else, so you will likely learn to ignore the salary, regardles...
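The billionaire example above is easy to make concrete with hypothetical salary figures (these numbers are illustrative, not from the answer):

```python
from statistics import mean, median

salaries = [40_000, 52_000, 61_000, 48_000, 55_000]
with_billionaire = salaries + [1_000_000_000]

# Without the billionaire, mean and median agree on the order of magnitude.
print(mean(salaries), median(salaries))                  # 51200 52000
# One extreme observation drags the mean three orders of magnitude away,
# while the median barely moves; descriptives would reveal this instantly.
print(mean(with_billionaire), median(with_billionaire))  # ~1.67e8 53500
```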
12,906
What is the point of reporting descriptive statistics?
A descriptive part helps the reader understand your dataset. In applied econ it is usually highly recommended, as it may reveal the first potential flaws in your analysis. You may use data from different sources to enrich your descriptives. One table should be enough. The one you attached is not very intuitive.
12,907
Why AUC =1 even classifier has misclassified half of the samples?
The AUC is a measure of the ability to rank examples according to the probability of class membership. Thus if all of the probabilities are above 0.5 you can still have an AUC of one if all of the positive patterns have higher probabilities than all of the negative patterns. In this case there will be a decision thre...
12,908
Why AUC =1 even classifier has misclassified half of the samples?
The other answers explain what is happening but I thought a picture might be nice. You can see that the classes are perfectly separated, so the AUC is 1, but thresholding at 1/2 will produce a misclassification rate of 50%.
12,909
Why AUC =1 even classifier has misclassified half of the samples?
The samples weren't "misclassified" at all. The 0 examples are ranked strictly lower than the 1 examples. AUROC is doing exactly what it's defined to do, which is measure the probability that a randomly-selected 1 is ranked higher than a randomly-selected 0. In this sample, this is always true, so it's a probability 1 ...
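The scenario described in these answers can be reproduced with a few lines of Python, computing AUROC directly from its rank definition (the scores below are hypothetical, chosen so that every score exceeds 0.5 but positives still outrank negatives):

```python
# Hypothetical labels and predicted probabilities: all scores > 0.5,
# yet every positive is ranked strictly above every negative.
y = [0, 0, 0, 1, 1, 1]
scores = [0.55, 0.60, 0.65, 0.80, 0.85, 0.90]

# AUROC = P(score of a random positive > score of a random negative)
pos = [s for s, t in zip(scores, y) if t == 1]
neg = [s for s, t in zip(scores, y) if t == 0]
auc = sum(p > n for p in pos for n in neg) / (len(pos) * len(neg))

# Misclassification rate at the default 0.5 threshold:
# everything is predicted positive, so all negatives are wrong.
pred = [int(s > 0.5) for s in scores]
error = sum(p != t for p, t in zip(pred, y)) / len(y)

print(auc, error)   # 1.0 0.5
```

Perfect ranking (AUC = 1) and a 50% error rate at threshold 0.5 coexist, exactly as the answers explain; shifting the threshold to, say, 0.7 would separate the classes perfectly.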
12,910
Why is dimensionality reduction used if it almost always reduces the explained variation?
Your question is implicitly assuming that reducing explained variation is necessarily a bad thing. Recall that $R^2$ is defined as: $$ R^2 = 1 - \frac{SS_{res}}{SS_{tot}} $$ where $SS_{res} = \sum_{i}{(y_i - \hat{y})^2}$ is a residual sum of squares and $SS_{tot} = \sum_{i}{(y_i - \bar{y})^2}$ is a total sum of squares...
12,911
Why is dimensionality reduction used if it almost always reduces the explained variation?
In your question there is an implicit assumption that the regressor is linear. If it is linear, your assertion is correct. But for a nonlinear regressor you may think of the dimensionality reduction step as feature extraction. In that case it has a very important role in order to get good results. ...
12,912
Why is dimensionality reduction used if it almost always reduces the explained variation?
If the principal components explains, say 80% of the variation (as opposed to 95%), then I have incurred some loss in the accuracy of my model. Performing PCA does not reduce the accuracy of the model. The principal components, when you use all of them, should also explain the 95%. It is the reduction of the dimension...
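The point that PCA itself loses nothing, and that only dropping components loses variance, can be verified with a small NumPy sketch (synthetic data; PCA done by hand via the covariance eigendecomposition rather than any particular library API):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))   # correlated features
Xc = X - X.mean(axis=0)

# Principal components via the covariance eigendecomposition;
# eigvalsh returns ascending eigenvalues, so reverse for descending order.
eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]
ratio = eigvals / eigvals.sum()

print(ratio.sum())      # 1.0: keeping every component explains everything
print(ratio[:2].sum())  # < 1.0: variance is lost only when components are dropped
```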
12,913
Why is dimensionality reduction used if it almost always reduces the explained variation?
Data reduction (unsupervised learning) is not always used because of any hope of wonderful performance, but rather out of necessity. When one has the "too many variables too few observations" problem, the primary alternatives are penalized maximum likelihood estimation (ridge regression, lasso, elastic net, etc.) or...
12,914
Why is dimensionality reduction used if it almost always reduces the explained variation?
Take a simple example of computing a seasonal adjustment factor for months across a set of years for a company's sales. Assume there is no linear trend except if years are associated with an inflationary period. Note: In reality, one would work with a log transform of the data, which assumes a constant percent change relations...
12,915
Why is dimensionality reduction used if it almost always reduces the explained variation?
Let me give a quick chime in. I'm a data analyst for DNA methylation data. I have an awesome dataset that is about 1,000 people x methylation measure in over 3 million locations each x 3 points in time. That's nearly 10 billion data points. If I want to analyze this data set... well, let's say some processes can take s...
12,916
A meeting has 12 employees. Given that 8 of the employees are female, what is the probability that all employees are female? [closed]
The confusion comes from the fact that there are multiple ways to interpret "Given that 8 employees are female": If it's 8 specific employees - say, the employees in positions 1 thru 8 - then the remaining four have $2^4$ possible gender configurations, only $1$ of which is all-female, giving $\frac{1}{2^4}$ If it's ...
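The two readings can be made concrete by brute-force enumeration, assuming each of the $2^{12}$ gender configurations is equally likely. The answer's second case is truncated above, so taking it to mean "at least eight of the twelve are female" is an assumption here:

```python
from itertools import product

configs = list(product([0, 1], repeat=12))   # 1 = female; uniform, independent

# Reading 1: eight *specific* employees (positions 1 thru 8) are female.
given_specific = [c for c in configs if all(c[:8])]
p1 = sum(all(c) for c in given_specific) / len(given_specific)

# Reading 2 (assumed): *at least* eight of the twelve are female.
given_count = [c for c in configs if sum(c) >= 8]
p2 = sum(all(c) for c in given_count) / len(given_count)

print(p1, p2)   # 0.0625 (= 1/16) vs 1/794
```

The two readings give very different answers (1/16 versus 1/794), which is the whole source of the confusion the answer describes.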
12,917
A meeting has 12 employees. Given that 8 of the employees are female, what is the probability that all employees are female? [closed]
Perhaps it would be helpful to give this some clearer structure, via explicit assumptions. Suppose we are willing to assume a priori that each person is equally likely to be male or female, and we assume that the sexes are mutually independent. Then the "female-indicator" variables for the people in the group are: $$...
12,918
A meeting has 12 employees. Given that 8 of the employees are female, what is the probability that all employees are female? [closed]
You need to be very, very, very precise with the statements you make, otherwise any results will be utter nonsense - because they might be the correct answer to a totally different question. My reading of your question as asked leads to the answer "the probability is zero". Eight out of twelve employees are female, so...
12,919
A meeting has 12 employees. Given that 8 of the employees are female, what is the probability that all employees are female? [closed]
What you're noticing here is that the entire field of statistics is plagued by serious interpretational issues. Chief among these (and at fault here) is the reference class problem. In a frequentist framework, this corresponds to assigning your statement of "probability" to a well-populated space of outcomes (recall t...
12,920
A meeting has 12 employees. Given that 8 of the employees are female, what is the probability that all employees are female? [closed]
Isn't there also a potential skewing of the probabilities due to "cultural" (for want of a better word) factors? If 8 of the employees are female, perhaps this is a women's gym that does not hire men, or perhaps it is a small company run by an entrepreneur who (probably unlawfully, but that is another subject) only or...
12,921
Let X,Y be 2 r.v. with infinite expectations, are there possibilities where min(X,Y) have finite expectation?
Take two positive iid Cauchy variates $Y_1,Y_2$ with common density $$f(x)=\frac{2}{\pi}\frac{\mathbb I_{x>0}}{1+x^2}$$ and infinite expectation. The minimum variate $\min(Y_1,Y_2)$ then has density $$g(x)=\frac{8}{\pi^2}\frac{\pi/2-\arctan(x)}{1+x^2}\mathbb I_{x>0}$$ Since (by L'Hospital's rule) $$\frac{\pi/2-\arctan(...
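A quick numerical check of this claim: integrating $x\,g(x)$ with the density above via a simple Riemann sum (the truncation point and step size here are arbitrary choices). Under these assumptions the integral can also be evaluated in closed form as $4\ln 2/\pi \approx 0.8825$; that value is used below purely as a cross-check.

```python
from math import atan, pi, log

# Density of min(Y1, Y2) from the answer: g(x) = (8/pi^2)(pi/2 - arctan x)/(1+x^2)
def g(x):
    return (8 / pi**2) * (pi / 2 - atan(x)) / (1 + x**2)

# E[min] = integral of x*g(x) on (0, inf); truncate at 10_000
# (the neglected tail is about 8/(pi^2 * 1e4), i.e. ~1e-4).
h, N = 0.01, 1_000_000
E = h * sum((i * h) * g(i * h) for i in range(1, N))

print(E, 4 * log(2) / pi)   # both approximately 0.8825: the mean is finite
```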
12,922
Let X,Y be 2 r.v. with infinite expectations, are there possibilities where min(X,Y) have finite expectation?
Let's find a general solution for independent variables $X$ and $Y$ having CDFs $F_X$ and $F_Y,$ respectively. This will give us useful clues into what's going on, without the distraction of computing specific integrals. Let $Z=\min(X,Y).$ Then, from basic axioms and definitions, we can work out that for any number ...
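The derivation above is truncated; presumably it proceeds via survival functions, which can be sketched as follows (independence assumed, as stated in the answer). For $Z = \min(X,Y)$, $$\Pr(Z > t) = \Pr(X > t)\Pr(Y > t) = \big(1 - F_X(t)\big)\big(1 - F_Y(t)\big),$$ and for a nonnegative variable, $$\mathbb{E}[Z] = \int_0^\infty \Pr(Z > t)\,dt.$$ So $\mathbb{E}[\min(X,Y)]$ is finite exactly when the product of the two survival functions is integrable, which can happen even when neither survival function is integrable on its own.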
12,923
Let X,Y be 2 r.v. with infinite expectations, are there possibilities where min(X,Y) have finite expectation?
Well, if you don't impose independence, yes. Consider $Z \sim \text{Cauchy}$ and $B \sim \text{Bernoulli}(\frac{1}{2})$. Define $X$ and $Y$ by: $$X = \begin{cases} 0 & \text{if } B = 0 \\ |Z| & \text{if } B = 1 \end{cases}$$ $$Y = \begin{cases} |Z| & \text{if } B = 0 \\ 0 & \text{if } B = 1 \end{cases}$$...
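A small simulation of this dependent construction (sampling the Cauchy by inverse CDF) confirms that the minimum is identically zero, hence trivially has finite (zero) expectation even though each of $X$ and $Y$ alone is half-Cauchy with infinite mean:

```python
import math
import random

# X = 0 on {B=0} and |Z| on {B=1}; Y is the opposite. One of the pair
# is always exactly 0, so min(X, Y) = 0 with probability 1.
random.seed(0)
for _ in range(10_000):
    z = abs(math.tan(math.pi * (random.random() - 0.5)))  # |Cauchy(0,1)| via inverse CDF
    b = random.randrange(2)                               # Bernoulli(1/2)
    x, y = (0.0, z) if b == 0 else (z, 0.0)
    assert min(x, y) == 0.0   # holds on every draw
```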
12,924
Let X,Y be 2 r.v. with infinite expectations, are there possibilities where min(X,Y) have finite expectation?
This answer is not as general as Whuber's answer, and relates to identically distributed X and Y, but I believe that it is a good addition because it gives some different intuition. The advantage of this approach is that it easily generalizes to different order statistics and to different moments or other functions $T(X)...
12,925
Let X,Y be 2 r.v. with infinite expectations, are there possibilities where min(X,Y) have finite expectation?
It's the case with almost any distribution because the expectation on a subset usually grows much more slowly than the subset. Let's look at the expectation on a subset for a variable $z$ with PDF $f(z)$: $$E_x[z]=\int_{-\infty}^xzf(z)dz$$ Let's look at the rate of growth of this expectation: $$\frac d {dx}E_x[z]=xf(x)$$ So...
12,926
Simple linear model with autocorrelated errors in R [closed]
Have a look at gls (generalized least squares) from the package nlme. You can set a correlation profile for the errors in the regression, e.g. ARMA, etc.: gls(Y ~ X, correlation=corARMA(p=1,q=1)) for ARMA(1,1) errors.
12,927
Simple linear model with autocorrelated errors in R [closed]
In addition to the gls() function from nlme, you can also use the arima() function in the stats package using MLE. Here is an example with both functions.

x <- 1:100
e <- 25*arima.sim(model=list(ar=0.3), n=100)
y <- 1 + 2*x + e

### Fit the model using gls()
require(nlme)
(fit1 <- gls(y ~ x, corr=corAR1(0.5, form=~1)))
Gene...
12,928
Simple linear model with autocorrelated errors in R [closed]
Use function gls from package nlme. Here is the example.

## Generate data frame with regressor and AR(1) error. The error term is
## \eps_t = 0.3*\eps_{t-1} + v_t
df <- data.frame(x1=rnorm(100),
                 err=filter(rnorm(100)/5, filter=0.3, method="recursive"))
## Create the response (the regressor column is x1)
df$y <- 1 + 2*df$x1 + df$err
### Fit the model
gls...
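For readers outside R, the same setup can be sketched in plain Python (this is my own illustration, not part of the nlme workflow): simulate y = 1 + 2x + e with AR(1) errors, fit ordinary least squares, and estimate the error autocorrelation from the residuals, which is in the spirit of a Cochrane-Orcutt first step rather than the full ML fit that gls performs.

```python
import random

random.seed(42)
n, rho = 2000, 0.3

# Simulate AR(1) errors: e_t = rho * e_{t-1} + v_t, starting from e_0 = 0
e = [0.0] * n
for t in range(1, n):
    e[t] = rho * e[t - 1] + random.gauss(0, 1)

x = list(range(n))
y = [1 + 2 * xi + ei for xi, ei in zip(x, e)]

# Ordinary least squares for y = a + b*x (closed form for simple regression)
mx = sum(x) / n
my = sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx

# Lag-1 autocorrelation of the OLS residuals estimates rho
r = [yi - (a + b * xi) for xi, yi in zip(x, y)]
rho_hat = sum(r[t] * r[t - 1] for t in range(1, n)) / sum(ri ** 2 for ri in r)
print(round(b, 3), round(rho_hat, 3))  # slope near 2, autocorrelation near 0.3
```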
12,929
Simple linear model with autocorrelated errors in R [closed]
You can use predict on gls output. See ?predict.gls. Also, you can specify the ordering of the observations via the "form" term in the correlation structure. For example: corr=corAR1(form=~1) indicates that the order of the data is the order in which they appear in the table. corr=corAR1(form=~Year) indicates that the order is the one of fac...
12,930
Flaws in Frequentist Inference
I am a Bayesian, but I find these kinds of criticisms against "frequentists" to be overstated and unfair. Both Bayesians and classical statisticians accept all the same mathematical results to be true, so there is really no dispute here about the properties of the various estimators. Even if you are a Bayesian, it is...
12,931
Flaws in Frequentist Inference
It's worth noting that there is nothing that prevents Frequentist analysis from saying "Conditional on none of your data being censored, $\hat \mu$ is equal to $\bar x$ and will be unbiased. Conditional on some of your data being censored, the MLE estimator $\hat \mu$ is no longer equal to $\bar x$ and has some bias"....
12,932
Flaws in Frequentist Inference
I think this is exaggerated language. Both frequentist and Bayesian have their merits, and statisticians routinely rely on both types in their work. To answer your questions: We can still consider $X \sim N(\mu, 1)$. However, we are not observing $X$, but $X' = \min(100, X)$, which is another random variable. 2,3. ...
12,933
Flaws in Frequentist Inference
It is a bit sad to see such carelessly written prose in print. Consider the phrase "For any prior density $g(\mu)$, the posterior density $g(\mu\mid x)= g(\mu)f_{\mu}(x)/f(x)$ ....depends only on the data actually observed..." - while the mathematical formula in that same sentence shows that the posterior densi...
12,934
Analysis of Kullback-Leibler divergence
The Kullback-Leibler divergence is not a proper metric, since it is not symmetric and also does not satisfy the triangle inequality. So the "roles" played by the two distributions are different, and it is important to distribute these roles according to the real-world phenomenon under study. When we write (the OP...
12,935
Analysis of Kullback-Leibler divergence
Consider an information source with distribution $P$ that is encoded using the ideal code for an information source with distribution $Q$. The extra encoding cost above the minimum encoding cost that would have been attained by using the ideal code for $P$ is the KL divergence.
12,936
Analysis of Kullback-Leibler divergence
KL Divergence measures the information loss required to represent a symbol from P using symbols from Q. If you got a value of 0.49 that means that on average you can encode two symbols from P with the two corresponding symbols from Q plus one bit of extra information.
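Under this coding interpretation, $D(P\|Q)$ is the expected number of extra bits per symbol when a source with distribution $P$ is coded with the ideal code for $Q$. A minimal Python sketch (my own toy distributions, not from the answers above) showing the value and its asymmetry:

```python
import math

def kl(p, q):
    """D(P || Q) in bits for discrete distributions given as probability lists."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

P = [0.5, 0.5]
Q = [0.8, 0.2]

print(kl(P, Q), kl(Q, P))  # asymmetric: the two numbers differ
print(kl(P, P))            # 0.0: no extra cost coding P with its own ideal code
```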
12,937
What is the difference between data mining and statistical analysis?
Jerome Friedman wrote a paper a while back: Data Mining and Statistics: What's the Connection?, which I think you'll find interesting. Data mining was a largely commercial concern and driven by business needs (coupled with the "need" for vendors to sell software and hardware systems to businesses). One thing Friedman ...
12,938
What is the difference between data mining and statistical analysis?
The difference between statistics and data mining is largely a historical one, since they came from different traditions: statistics and computer science. Data mining grew in parallel out of work in the area of artificial intelligence and statistics. Section 1.4 from Witten & Frank summarizes my viewpoint so I'm going...
12,939
What is the difference between data mining and statistical analysis?
Data mining is categorized as either descriptive or predictive. Descriptive data mining searches massive data sets to discover unexpected structures or relationships, patterns, trends, clusters, and outliers in the data. Predictive data mining, on the other hand, builds models and procedures for regressio...
12,940
What is the difference between data mining and statistical analysis?
Data mining is statistics, with some minor differences. You can think of it as re-branding statistics, because statisticians are kinda weird. It is often associated with computational statistics, i.e. only stuff you can do with a computer. Data miners stole a significant proportion of multivariate statistics and calle...
12,941
What is the difference between data mining and statistical analysis?
I previously wrote a post where I made a few observations comparing data mining to psychology. I think these observations may capture some of the differences you are identifying: "Data mining seems more concerned with prediction using observed variables than with understanding the causal system of latent variables; ps...
12,942
What is the difference between data mining and statistical analysis?
I don't think the distinction you make is really related to the difference between data mining and statistical analysis. You are talking about the difference between exploratory analysis and the modelling-prediction approach. I think the tradition of statistics is built with all steps: exploratory analysis, then modeling...
12,943
Assumptions of multiple regression: how is normality assumption different from constant variance assumption?
1. Normal distribution of residuals: The normality condition comes into play when you're trying to get confidence intervals and/or p-values. $\varepsilon\vert X\sim N (0,\sigma^2 I_n)$ is not a Gauss Markov condition. This plot tries to illustrate the distribution of points in the population in blue (with the popul...
12,944
Assumptions of multiple regression: how is normality assumption different from constant variance assumption?
It is not the OP's fault, but I am starting to get tired reading misinformation like this. I read that these are the conditions for using the multiple regression model: the residuals of the model are nearly normal, the variability of the residuals is nearly constant the residuals are independent, and each variable i...
12,945
Assumptions of multiple regression: how is normality assumption different from constant variance assumption?
Antoni Parellada gave a perfect answer with a nice graphical illustration. I just want to add one comment to summarize the difference between the two statements: the residuals of the model are nearly normal; the variability of the residuals is nearly constant. Statement 1 says the "shape" of the residuals is a "bell shaped curv...
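To see that the two statements really are separate conditions, here is a small Python sketch (my own illustration, not from the quoted answers): one set of residuals is normal in shape but with non-constant variance, the other has constant variance but a non-normal (uniform) shape.

```python
import random
import statistics

random.seed(1)
n = 4000

# (a) Normal but heteroskedastic residuals: the shape is fine,
#     but the spread jumps from 1 to 3 halfway through
het = [random.gauss(0, 1 if i < n // 2 else 3) for i in range(n)]

# (b) Uniform but homoskedastic residuals: constant spread, wrong shape
hom = [random.uniform(-1, 1) for _ in range(n)]

sd_first = statistics.pstdev(het[: n // 2])    # about 1
sd_second = statistics.pstdev(het[n // 2:])    # about 3: variance not constant
sd_hom_first = statistics.pstdev(hom[: n // 2])
sd_hom_second = statistics.pstdev(hom[n // 2:])  # roughly equal: variance constant
print(sd_first, sd_second, sd_hom_first, sd_hom_second)
```

A formal analysis would pair a normality check (e.g. a Q-Q plot) with a residuals-vs-fitted plot; the point here is only that each condition can fail while the other holds.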
12,946
Assumptions of multiple regression: how is normality assumption different from constant variance assumption?
There is not a single unique set of regression assumptions, but there are several variations out there. Some of these sets of assumptions are stricter, i.e. narrower, than others. Also, in most cases you don't need and, in many cases, cannot really assume that the distribution is normal. The assumptions that you quoted...
12,947
Assumptions of multiple regression: how is normality assumption different from constant variance assumption?
I tried to add a new dimension to the discussion and make it more general. Please excuse me if it was too rudimentary. A regression model is a formal means of expressing the two essential ingredients of a statistical relation: A tendency of the response variable $Y$ to vary with the predictor variable $X$ in a systemati...
12,948
Pairwise Mahalanobis distances
Starting from ahfoss's "succinct" solution, I have used the Cholesky decomposition in place of the SVD.

cholMaha <- function(X) {
  dec <- chol( cov(X) )
  tmp <- forwardsolve(t(dec), t(X) )
  dist(t(tmp))
}

It should be faster, because forward-solving a triangular system is faster than dense matrix multiplication with t...
12,949
Pairwise Mahalanobis distances
The standard formula for squared Mahalanobis distance between two data points is $$ D_{12} = (x_1-x_2)^T \Sigma^{-1} (x_1-x_2) $$ where $x_i$ is a $p \times 1$ vector corresponding to observation $i$. Typically, the covariance matrix is estimated from the observed data. Not counting matrix inversion, this operation re...
12,950
Pairwise Mahalanobis distances
Let's try the obvious. From $$D_{ij} = (x_i-x_j)^\prime \Sigma^{-1} (x_i-x_j)=x_i^\prime \Sigma^{-1}x_i + x_j^\prime \Sigma^{-1}x_j -2 x_i^\prime \Sigma^{-1}x_j $$ it follows that we can compute the vector $$u_i = x_i^\prime \Sigma^{-1}x_i$$ in $O(p^2)$ time and the matrix $$V = X \Sigma^{-1} X^\prime$$ in $O(p n^2 + p^2 n...
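A quick numerical check of this decomposition, written in Python with a hypothetical 2x2 covariance matrix (pure stdlib, so the matrix algebra is spelled out by hand rather than calling a linear-algebra library):

```python
# Verify D_ij = u_i + u_j - 2*V_ij for a fixed inverse covariance.
# S_inv is the inverse of Sigma = [[2, 1], [1, 2]] (det = 3).
S_inv = [[2 / 3, -1 / 3], [-1 / 3, 2 / 3]]
X = [[1.0, 2.0], [3.0, 0.5], [-1.0, 4.0]]  # three hypothetical 2-d observations

def quad(a, b):
    """Quadratic form a' S_inv b for 2-vectors."""
    return sum(a[i] * S_inv[i][j] * b[j] for i in range(2) for j in range(2))

def direct(i, j):
    """Squared Mahalanobis distance computed straight from the definition."""
    d = [X[i][k] - X[j][k] for k in range(2)]
    return quad(d, d)

u = [quad(xi, xi) for xi in X]                  # the n quadratic forms u_i
V = [[quad(xi, xj) for xj in X] for xi in X]    # the matrix X S^{-1} X'

for i in range(3):
    for j in range(3):
        assert abs(direct(i, j) - (u[i] + u[j] - 2 * V[i][j])) < 1e-9
print("decomposition matches the direct formula")
```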
12,951
Pairwise Mahalanobis distances
If you wish to compute the sample Mahalanobis distance, then there are some algebraic tricks that you can exploit. They all lead to computing pairwise Euclidean distances, so let's assume we can use dist() for that. Let $X$ denote the $n\times p$ data matrix, which we assume to be centered so that its columns have mea...
12,952
Pairwise Mahalanobis distances
This is a much more succinct solution. It is still based on the derivation involving the inverse square root covariance matrix (see my other answer to this question), but only uses base R and the stats package. It seems to be slightly faster (about 10% faster in some benchmarks I have run). Note that it returns Mahalan...
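The R code itself is truncated above; as a hedged sketch of the inverse-square-root idea, here is a NumPy version that builds the symmetric $S^{-1/2}$ from an eigendecomposition (data and names are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(7, 3))
S = np.cov(X, rowvar=False)

# symmetric inverse square root: S^{-1/2} = Q diag(1/sqrt(w)) Q'
w, Q = np.linalg.eigh(S)
S_inv_sqrt = Q @ np.diag(1.0 / np.sqrt(w)) @ Q.T

Z = X @ S_inv_sqrt                   # transformed data
diff = Z[:, None, :] - Z[None, :, :]
D2 = (diff ** 2).sum(axis=-1)        # squared Mahalanobis distances

d = X[1] - X[4]
assert np.isclose(D2[1, 4], d @ np.linalg.inv(S) @ d)
```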
12,953
Pairwise Mahalanobis distances
I solved a similar problem by writing a Fortran95 subroutine. As you do, I didn't want to calculate the duplicates among the $n^2$ distances. Compiled Fortran95 is nearly as convenient with basic matrix calculations as R or Matlab, but much faster with loops. The routines for Cholesky decompositions and triangle su...
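The Fortran95 source is not reproduced here, but the idea — factor once with Cholesky, then do one triangular solve per pair, filling only one triangle — can be sketched in Python. (Note `np.linalg.solve` is a general solver; a dedicated triangular solve, as in the Fortran routine, would exploit the structure and be faster.)

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(5, 3))
S = np.cov(X, rowvar=False)
L = np.linalg.cholesky(S)            # S = L L'

n = X.shape[0]
D = np.zeros((n, n))
for i in range(n):                   # loop over the lower triangle only
    for j in range(i):
        z = np.linalg.solve(L, X[i] - X[j])   # L z = d (forward substitution)
        D[i, j] = D[j, i] = np.sqrt(z @ z)    # sqrt(d' S^{-1} d)

d = X[3] - X[1]
assert np.isclose(D[3, 1] ** 2, d @ np.linalg.inv(S) @ d)
```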
12,954
Pairwise Mahalanobis distances
The formula you have posted is not computing what you think you are computing (a U-statistic). In the code I posted, I use cov(x1) as the scaling matrix (this is the variance of the pairwise differences of the data). You are using cov(x0) (this is the covariance matrix of your original data). I think this is a mistake in...
12,955
Pairwise Mahalanobis distances
There is a very easy way to do it using the R package "biotools". In this case you will get a squared Mahalanobis distance matrix.

#Manly (2004, p.65-66)
x1 <- c(131.37, 132.37, 134.47, 135.50, 136.17)
x2 <- c(133.60, 132.70, 133.80, 132.30, 130.33)
x3 <- c(99.17, 99.07, 96.03, 94.53, 93.50)
x4 <- c(50.53, 50.23, 50.57, 51.9...
12,956
Pairwise Mahalanobis distances
This is my old answer from another thread, moved here and expanded with code. For a long time I've been computing a square symmetric matrix of pairwise Mahalanobis distances in SPSS via a hat-matrix approach, solving a system of linear equations (which is faster than inverting the covariance matrix). ...
12,957
What is the relationship between sample size and the influence of prior on posterior?
Yes. The posterior distribution for a parameter $\theta$, given a data set ${\bf X}$ can be written as $$ p(\theta | {\bf X}) \propto \underbrace{p({\bf X} | \theta)}_{{\rm likelihood}} \cdot \underbrace{p(\theta)}_{{\rm prior}} $$ or, as is more commonly displayed on the log scale, $$ \log( p(\theta | {\bf X}) ) = ...
12,958
What is the relationship between sample size and the influence of prior on posterior?
Here is an attempt to illustrate the last paragraph in Macro's excellent (+1) answer. It shows two priors for the parameter $p$ in the ${\rm Binomial}(n,p)$ distribution. For a few different $n$, the posterior distributions are shown when $x=n/2$ has been observed. As $n$ grows, both posteriors become more and more con...
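A numeric version of that illustration (a hedged sketch using conjugate Beta priors, which keep the posterior in closed form): with $x = n/2$ observed, the posterior means under two quite different priors converge as $n$ grows.

```python
# Two different Beta priors on p; observe x = n/2 successes out of n.
priors = [(1.0, 1.0), (10.0, 2.0)]    # a flat prior vs. a skewed prior

gaps = []
for n in (10, 100, 10000):
    x = n // 2
    # conjugacy: posterior is Beta(a + x, b + n - x), mean (a + x)/(a + b + n)
    post_means = [(a + x) / (a + b + n) for a, b in priors]
    gaps.append(abs(post_means[0] - post_means[1]))

# the disagreement between the two posteriors shrinks as n grows
print(gaps)
```

For $n = 10$ the two posterior means still disagree noticeably; by $n = 10000$ the gap is far below a thousandth — the data have swamped the prior.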
12,959
Simple linear regression output interpretation
The estimated value of the slope does not, by itself, tell you the strength of the relationship. The strength of the relationship depends on the size of the error variance, and the range of the predictor. Also, a significant $p$-value doesn't tell you necessarily that there is a strong relationship; the $p$-value is si...
12,960
Simple linear regression output interpretation
The $R^{2}$ tells you how much variation of the dependent variable is explained by a model. However, one can also interpret the $R^{2}$ in terms of the correlation between the original values of the dependent variable and the fitted values. The exact interpretation and derivation of the coefficient of determination $R^{2}$ ...
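That second interpretation is easy to verify numerically: in a least-squares regression with an intercept, $R^2$ equals the squared correlation between the observed and fitted values. A hedged Python sketch with simulated data:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=50)
y = 2.0 + 0.5 * x + rng.normal(size=50)

# simple least-squares fit
b1 = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
b0 = y.mean() - b1 * x.mean()
fitted = b0 + b1 * x

ss_res = ((y - fitted) ** 2).sum()
ss_tot = ((y - y.mean()) ** 2).sum()
r_squared = 1.0 - ss_res / ss_tot

# R^2 equals the squared correlation between y and the fitted values
corr = np.corrcoef(y, fitted)[0, 1]
assert np.isclose(r_squared, corr ** 2)
```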
12,961
Simple linear regression output interpretation
The $R^2$ value tells you how much variation in the data is explained by the fitted model. The low $R^2$ value in your study suggests that your data is probably spread widely around the regression line, meaning that the regression model can explain only a small fraction (8.9%) of the variation in the data. Have you checked ...
12,962
Simple linear regression output interpretation
For a linear regression, the fitted slope is going to be the correlation (which, when squared, gives the coefficient of determination, the $R^2$) times the empirical standard deviation of the regressand (the $y$) divided by the empirical standard deviation of the regressor (the $x$). Depending on the scaling of the $x...
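That identity — slope $=$ correlation $\times$ sd$(y)/$sd$(x)$ — checks out numerically. A hedged Python sketch with simulated data (the claim itself is language-agnostic):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=100)
y = 1.0 + 3.0 * x + rng.normal(size=100)

slope = np.cov(x, y)[0, 1] / np.var(x, ddof=1)    # least-squares slope
r = np.corrcoef(x, y)[0, 1]

# fitted slope = correlation * sd(y) / sd(x)
assert np.isclose(slope, r * np.std(y, ddof=1) / np.std(x, ddof=1))
```

Rescaling $x$ rescales sd$(x)$ and hence the slope, without touching the correlation — which is exactly the point the answer goes on to make.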
12,963
Simple linear regression output interpretation
I like the answers already given, but let me complement them with a different (and more tongue-in-cheek) approach. Suppose we collect a bunch of observations from 1000 random people trying to find out if punches in the face are associated with headaches: $$Headaches = \beta_0 + \beta_1 Punch\_in\_the\_face + \varepsilon...
12,964
Simple linear regression output interpretation
@Macro had a great answer. The estimated value of the slope does not, by itself, tell you the strength of the relationship. The strength of the relationship depends on the size of the error variance, and the range of the predictor. Also, a significant $p$-value doesn't tell you necessarily that there is a strong relat...
12,965
If I want an interpretable model, are there methods other than Linear Regression?
It is hard for me to believe that you heard people saying this, because it would be a dumb thing to say. It's like saying that you use only a hammer (including for drilling holes and changing lightbulbs), because it's straightforward to use and gives predictable results. Second, linear regression is not always "i...
12,966
If I want an interpretable model, are there methods other than Linear Regression?
A decision tree would be another choice, or lasso regression to create a sparse system. Check this figure from the book An Introduction to Statistical Learning. http://www.sr-sv.com/wp-content/uploads/2015/09/STAT01.png
12,967
If I want an interpretable model, are there methods other than Linear Regression?
I would agree with Tim's and mkt's answers - ML models are not necessarily uninterpretable. I would direct you to the Descriptive mAchine Learning EXplanations, DALEX R package, which is devoted to making ML models interpretable.
12,968
If I want an interpretable model, are there methods other than Linear Regression?
No, that is needlessly restrictive. There is a large range of interpretable models including not just (as Frans Rodenburg says) linear models, generalized linear models and generalized additive models, but also machine learning methods used for regression. I include random forests, gradient boosted machines, neural ne...
12,969
What is the distribution of the rounded down average of Poisson random variables?
A generalization of the question asks for the distribution of $Y = \lfloor X/m \rfloor$ when the distribution of $X$ is known and supported on the natural numbers. (In the question, $X$ has a Poisson distribution of parameter $\lambda = \lambda_1 + \lambda_2 + \cdots + \lambda_n$ and $m=n$.) The distribution of $Y$ is...
12,970
What is the distribution of the rounded down average of Poisson random variables?
As Michael Chernick says, if the individual random variables are independent then the sum is Poisson with parameter (mean and variance) $\sum_{i=1}^{n} \lambda_i$, which you might call $\lambda$. Dividing by $n$ reduces the mean to $\lambda / n$ and the variance to $\lambda / n^2$, so the variance will be less than the equi...
12,971
What is the distribution of the rounded down average of Poisson random variables?
The probability mass function of the average of $n$ independent Poisson random variables can be written down explicitly, though the answer might not help you very much. As Michael Chernick noted in comments on his own answer, the sum $\sum_i X_i$ of independent Poisson random variables $X_i$ with respective paramete...
12,972
What is the distribution of the rounded down average of Poisson random variables?
Y will not be Poisson. Note that Poisson random variables take on non-negative integer values. Once you divide by a constant, you create a random variable that can have non-integer values. It will still have the shape of the Poisson. It is just that the discrete probabilities may occur at non-integer points.
12,973
Why is my R-squared so low when my t-statistics are so large?
The $t$-values and $R^2$ are used to judge very different things. The $t$-values are used to judge the accuracy of your estimate of the $\beta_i$'s, but $R^2$ measures the amount of variation in your response variable explained by your covariates. Suppose you are estimating a regression model with $n$ observations, $$ ...
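This decoupling is easy to see in simulation: with a very large sample, even a tiny true slope produces a huge $t$-statistic while $R^2$ stays near zero. A hedged Python sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
x = rng.normal(size=n)
y = 0.05 * x + rng.normal(size=n)     # tiny effect, lots of noise

b1 = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)

s2 = (resid ** 2).sum() / (n - 2)     # residual variance estimate
se_b1 = np.sqrt(s2 / ((x - x.mean()) ** 2).sum())
t = b1 / se_b1                        # very large despite the tiny slope

r_squared = 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(t, r_squared)                   # large t, tiny R^2
```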
12,974
Why is my R-squared so low when my t-statistics are so large?
To say the same thing as caburke but more simply, you are very confident that the average response caused by your variables is not zero. But there are lots of other things that you don't have in the regression that cause the response to jump around.
12,975
Why is my R-squared so low when my t-statistics are so large?
Could it be that your predictors are trending linearly with your response variable (the slope is significantly different from zero), which makes the t values significant, but the R squared is low because the errors are large? That would mean that the variability in your data is large and thus your regression mo...
12,976
Why is my R-squared so low when my t-statistics are so large?
Several answers given are close but still wrong. "The t-values are used to judge the accuracy of your estimate of the βi's" is the one that concerns me the most. The T-value is merely an indication of the likelihood of random occurrence. Large means unlikely. Small means very likely. Positive and negative don't matte...
12,977
Why is my R-squared so low when my t-statistics are so large?
To deal with a small R squared, check the following:

1. Is your sample size large enough? If yes, go to step 2; if not, increase your sample size.
2. How many covariates did you use for your model estimation? If more than 1, as in your case, deal with the problem of multicollinearity of the covariates or simply, ...
12,978
Why is the Cauchy Distribution so useful?
In addition to its usefulness in physics, the Cauchy distribution is commonly used in models in finance to represent deviations in returns from the predictive model. The reason for this is that practitioners in finance are wary of using models that have light-tailed distributions (e.g., the normal distribution) on the...
12,979
Why is the Cauchy Distribution so useful?
The standard Cauchy distribution is derived from the ratio of two independent normally distributed random variables. If $X \sim N(0,1)$ and $Y \sim N(0,1)$, then $\tfrac{X}{Y} \sim \operatorname{Cauchy}(0,1)$. The Cauchy distribution is important in physics (where it’s known as the Lorentz distribution) because it’s the s...
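The ratio characterization is easy to check numerically: the standard Cauchy has median 0 and quartiles at exactly ±1 (its quantile function is tan(π(p − 1/2))). A quick NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200_000)
y = rng.normal(size=200_000)
z = x / y                      # ratio of two independent standard normals

# standard Cauchy quartiles are exactly -1 and +1; the sample ones should be close
q1, med, q3 = np.quantile(z, [0.25, 0.5, 0.75])
```

Note that the sample mean of `z` is useless as a check: the Cauchy has no mean, which is why quantiles are the right summary here.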
12,980
Choosing between uninformative beta priors
First of all, there is no such thing as an uninformative prior. Below you can see posterior distributions resulting from five different "uninformative" priors (described below the plot) given different data. As you can clearly see, the choice of "uninformative" priors affected the posterior distribution, especially in c...
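The conjugacy behind those plots makes the point concrete: under a Beta(a, b) prior, observing k successes in n trials gives a Beta(a + k, b + n − k) posterior, so with little data the choice of "uninformative" (a, b) visibly shifts the posterior mean. A small sketch (the priors and data below are illustrative):

```python
# posterior mean under a Beta(a, b) prior after k successes in n trials
def posterior_mean(a, b, k, n):
    return (a + k) / (a + b + n)

k, n = 1, 3   # tiny sample: 1 success in 3 trials
priors = {
    "Bayes-Laplace Beta(1, 1)": (1.0, 1.0),
    "Jeffreys Beta(1/2, 1/2)": (0.5, 0.5),
    "near-Haldane Beta(0.001, 0.001)": (0.001, 0.001),
}
means = {name: posterior_mean(a, b, k, n) for name, (a, b) in priors.items()}
# the three posterior means differ noticeably: 0.4, 0.375, and about 1/3
```

With large n the three means converge, which is the usual justification for not agonizing over the choice when data are plentiful.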
12,981
Why does gap statistic for k-means suggest one cluster, even though there are obviously two of them?
Clustering depends on scale, among other things. For discussions of this issue see (inter alia) When should you center and standardize data? and PCA on covariance or correlation?. Here are your data drawn with a 1:1 aspect ratio, revealing how much the scales of the two variables differ: To its right, the plot of the...
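The scale effect is easy to quantify: when one axis has a far larger variance, it dominates Euclidean distances and hence k-means, while standardizing restores the informative axis. A synthetic sketch (data shaped like the situation described, not the original data):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
labels = np.repeat([0, 1], n // 2)
x = labels + rng.normal(scale=0.1, size=n)   # the cluster structure lives here
y = rng.normal(scale=100.0, size=n)          # pure noise, but huge scale
X = np.column_stack([x, y])

# fraction of total variance (and hence of squared distances) carried by x
raw_share = X.var(axis=0)[0] / X.var(axis=0).sum()
Z = (X - X.mean(axis=0)) / X.std(axis=0)
std_share = Z.var(axis=0)[0] / Z.var(axis=0).sum()
# raw_share is essentially 0; std_share is exactly 0.5
```

On the raw scale the clustering axis contributes almost nothing to distances, so any distance-based method, gap statistic included, effectively sees one noisy blob.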
12,982
Why does gap statistic for k-means suggest one cluster, even though there are obviously two of them?
I do not think you are doing anything wrong in your use of the GAP statistic. I believe, though, that you are partially misled by the scale of the data in the visualization. You see two clusters, but actually the x direction is rather small compared to the y direction. Based on that you would expect two elongated clusters. N...
12,983
Why does gap statistic for k-means suggest one cluster, even though there are obviously two of them?
I had the same problem as the original poster. The R documentation currently says that the original and default setting of d.power = 1 was incorrect and should be replaced by d.power = 2: "The default, d.power = 1, corresponds to the “historical” R implementation, whereas d.power = 2 corresponds to what Tibshirani et al had propos...
12,984
Completing a 3x3 correlation matrix: two coefficients of the three given
We already know $\gamma$ is bounded between $[-1,1]$. The correlation matrix should be positive semidefinite, and hence its principal minors should be nonnegative. Thus, \begin{align*} 1(1-\gamma^2)-0.6(0.6-0.8\gamma)+0.8(0.6\gamma-0.8) &\geq 0\\ -\gamma^2+0.96\gamma &\geq 0\\ \implies \gamma(\gamma-0.96) \leq 0 \text{ an...
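The resulting interval can be verified numerically by checking the smallest eigenvalue of the matrix across values of $\gamma$ (a NumPy sketch; the 0.6 and 0.8 entries are the given correlations from the question):

```python
import numpy as np

def min_eig(g):
    m = np.array([[1.0, 0.6, 0.8], [0.6, 1.0, g], [0.8, g, 1.0]])
    return np.linalg.eigvalsh(m)[0]   # eigvalsh returns ascending eigenvalues

# PSD exactly on [0, 0.96]: non-negative min eigenvalue inside, negative outside
inside = [min_eig(g) for g in (0.0, 0.5, 0.96)]
outside = [min_eig(g) for g in (-0.01, 0.97)]
```

At the endpoints 0 and 0.96 the matrix is singular (min eigenvalue 0), matching the roots of $\gamma(\gamma-0.96)$.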
12,985
Completing a 3x3 correlation matrix: two coefficients of the three given
Here's a simpler (and perhaps more intuitive) solution: Think of the covariance as an inner product over an abstract vector space. Then, the entries in the correlation matrix are $\cos\langle\mathbf{v}_i,\mathbf{v}_j\rangle$ for the vectors $\mathbf{v}_1$, $\mathbf{v}_2$, $\mathbf{v}_3$, where the angle bracket $\langl...
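In this picture the feasible interval is exactly $[\cos(\theta_1+\theta_2), \cos(\theta_1-\theta_2)]$ with $\cos\theta_1 = 0.6$ and $\cos\theta_2 = 0.8$, and a two-line check reproduces the $[0, 0.96]$ bound from the determinant argument:

```python
import math

t1, t2 = math.acos(0.6), math.acos(0.8)        # angles to the third vector
lo, hi = math.cos(t1 + t2), math.cos(t1 - t2)  # extreme feasible correlations
# lo == 0 and hi == 0.96 (up to floating point), matching the algebraic bound
```

The endpoints are exact here because $\sin(\arccos 0.6) = 0.8$ and $\sin(\arccos 0.8) = 0.6$, so the angle-addition formulas collapse to round numbers.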
12,986
Completing a 3x3 correlation matrix: two coefficients of the three given
Let us consider the following convex set $$\Bigg\{ (x,y,z) \in \mathbb R^3 : \begin{bmatrix} 1 & x & y\\ x & 1 & z\\ y & z & 1\end{bmatrix} \succeq \mathrm O_3 \Bigg\}$$ which is a spectrahedron named $3$-dimensional elliptope. Here's a depiction of this elliptope Intersecting this elliptope with the planes defined by...
12,987
Completing a 3x3 correlation matrix: two coefficients of the three given
Playing around with principal minors may be fine on 3 by 3 or maybe 4 by 4 problems, but runs out of gas and numerical stability in higher dimensions. For a single "free" parameter problem such as this, it's easy to see that the set of all values making the matrix psd will be a single interval. Therefore, it is suffici...
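The root-finding idea sketched here, treating the minimum eigenvalue as a function of the free parameter and locating its sign changes, looks like this on the 3×3 example from the question (hand-rolled bisection; any numerical root-finder would do):

```python
import numpy as np

def min_eig(g):
    m = np.array([[1.0, 0.6, 0.8], [0.6, 1.0, g], [0.8, g, 1.0]])
    return np.linalg.eigvalsh(m)[0]

def bisect(f, a, b, tol=1e-10):
    """Root of f in [a, b]; assumes f changes sign between a and b."""
    fa = f(a)
    while b - a > tol:
        mid = 0.5 * (a + b)
        if (f(mid) < 0) == (fa < 0):
            a, fa = mid, f(mid)
        else:
            b = mid
    return 0.5 * (a + b)

# min_eig is positive strictly inside the feasible interval, negative outside,
# so each endpoint is a sign change; bracket from a known-feasible point (0.5)
lower = bisect(min_eig, -0.5, 0.5)   # about 0.0
upper = bisect(min_eig, 0.5, 1.0)    # about 0.96
```

This scales to larger matrices where writing out principal minors is hopeless: each eigenvalue evaluation is one `eigvalsh` call.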
12,988
Completing a 3x3 correlation matrix: two coefficients of the three given
Here is what I meant in my initial comment to the answer and what I perceive @yangle may be speaking about (although I didn't follow/check their computation). "Matrix should be positive semidefinite" implies that the variable vectors form a bundle in Euclidean space. The case of a correlation matrix is easier than that of a covariance ma...
12,989
Completing a 3x3 correlation matrix: two coefficients of the three given
Every positive semi-definite matrix is a correlation/covariance matrix (and vice versa). To see this, start with a positive semi-definite matrix $A$ and take its eigen-decomposition (which exists by the spectral theorem, since $A$ is symmetric) $A=UDU^T$ where $U$ is a matrix of orthonormal eigenvectors and $D$ is a dia...
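That construction is directly checkable: take the eigendecomposition, form $B = U\sqrt{D}$, and $BB^T$ recovers $A$, exhibiting $A$ as the covariance matrix of $Bx$ with $x \sim N(0, I)$. A NumPy sketch (the example matrix is the question's, with $\gamma = 0.5$, a feasible value):

```python
import numpy as np

A = np.array([[1.0, 0.6, 0.8], [0.6, 1.0, 0.5], [0.8, 0.5, 1.0]])  # PSD
w, U = np.linalg.eigh(A)            # A = U diag(w) U^T, with w >= 0
B = U @ np.diag(np.sqrt(w))         # the "square root" factor
# B @ B.T reproduces A, so A is the covariance of z = B x with x ~ N(0, I)
recon = B @ B.T
```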
12,990
Minimum number of layers in a deep neural network
"Deep" is a marketing term: you can therefore use it whenever you need to market your multi-layered neural network.
12,991
Minimum number of layers in a deep neural network
"Deep" One of the earliest deep neural networks has three densely connected hidden layers (Hinton et al. (2006)). "Very Deep" In 2014 the "very deep" VGG networks Simonyan et al. (2014) consist of 16+ hidden layers. "Extremely Deep" In 2016 the "extremely deep" residual networks He et al. (2016) consist of 50 up to 1...
12,992
Minimum number of layers in a deep neural network
As per the literature (Schmidhuber, J. (2015). "Deep Learning in Neural Networks: An Overview". Neural Networks. 61: 85–117. arXiv:1404.7828. doi:10.1016/j.neunet.2014.09.003. https://en.wikipedia.org/wiki/Deep_learning), it is said that: There is no universally agreed upon threshold of depth dividing shal...
12,993
Explain model adjustment, in plain English
Easiest to explain by way of an example: Imagine a study finds that people who watched the World Cup final were more likely to suffer a heart attack during the match or in the subsequent 24 hours than those who didn't watch it. Should the government ban football from TV? But men are more likely to watch football than wom...
12,994
Explain model adjustment, in plain English
Onestop explained it pretty well, I'll just give a simple R example with made up data. Say x is weight and y is height, and we want to find out if there's a difference between males and females: set.seed(69) x <- rep(1:10,2) y <- c(jitter(1:10, factor=4), (jitter(1:10, factor=4)+2)) sex <- rep(c("f", "m"), each=10) df1...
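A hedged Python analogue of the same idea (synthetic data, not a translation of the R snippet): fit the outcome on both the covariate and the group indicator, and read the group coefficient as the adjusted difference between the groups.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
sex = np.repeat([0, 1], n // 2)                    # 0 = female, 1 = male
x = rng.uniform(1, 10, size=n)                     # shared covariate (say, weight)
y = x + 2 * sex + rng.normal(scale=0.5, size=n)    # true adjusted sex gap: +2

# y ~ intercept + x + sex; the sex coefficient is the difference *adjusted for x*
X = np.column_stack([np.ones(n), x, sex])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[2] recovers roughly 2, the vertical shift between the two parallel lines
```

Comparing raw group means of `y` instead would mix the sex effect with any imbalance in `x` between the groups, which is exactly what adjustment removes.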
12,995
Find Probability of one event out of three when all of them can't happen together
This Venn diagram displays a situation where the chance of mutual intersection is zero: From $\Pr(E\cap F) = 1/3$ we deduce all this probability lies in the overlap of the $E$ and $F$ disks, but not in the mutual overlap of all three disks. That permits us to update the diagram: Applying the same reasoning to $\Pr(...
12,996
Find Probability of one event out of three when all of them can't happen together
If you try to fill in the Venn diagram, you can't put non-zero entries inside regions other than represented by pairwise intersections. They'll form up the sample space by themselves, which means $$\mathbb P(E)=\mathbb P(E\cap F)+\mathbb P(E\cap G)=2/3$$
12,997
Find Probability of one event out of three when all of them can't happen together
The answer to the question "Can you determine $P(E)$?" is Yes. Given events $E, F, G$ defined on a sample space $\Omega$, we know that \begin{align} &E\cap F\cap G\\ &E\cap F\cap G^c\\ &E\cap F^c\cap G\\ &E\cap F^c\cap G^c\\ &E^c\cap F\cap G\\ &E^c\cap F\cap G^c\\ &E^c\cap F^c\cap G\\ &E^c\cap F^c\cap G^c\\ \end{align...
12,998
Find Probability of one event out of three when all of them can't happen together
Can we think of it this way? P(E ∩ F) = P(F ∩ G) = P(E ∩ G) = 1/3 and P(E ∩ F) + P(F ∩ G) + P(E ∩ G) = 1. Meaning that the probability of event E happening by itself is zero, which means it can only happen with either F or G and it can't happen with both. P(E) = P(E ∩ F) + P(E ∩ G) = 1/3 + 1/3 = 2/3
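The arithmetic can be made exact with fractions over the three atoms that carry all the probability:

```python
from fractions import Fraction

third = Fraction(1, 3)
# only the three pairwise overlaps can have positive probability, and they
# partition the sample space since they are disjoint and their masses sum to 1
p_EF = p_FG = p_EG = third
p_E = p_EF + p_EG          # E happens exactly on the (E,F) and (E,G) atoms
p_F = p_EF + p_FG
p_G = p_EG + p_FG
# each of P(E), P(F), P(G) comes out to 2/3
```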
12,999
Find Probability of one event out of three when all of them can't happen together
Since the events $(E,F)$, $(E,G)$, $(F,G)$ are mutually exclusive and sum to one we can use the law of total probability: $$ P(E) = P(E, F) + P(E, G) = \tfrac{2}{3} $$ Since $P(E \mid E,F)P(E, F) = P(E, F)$, ditto for $E,G$ and $P(E \mid F, G) = 0$.
13,000
How to smooth data and force monotonicity
You can do this using penalised splines with monotonicity constraints via the mono.con() and pcls() functions in the mgcv package. There's a little fiddling about to do because these functions aren't as user friendly as gam(), but the steps are shown below, based mostly on the example from ?pcls, modified to suit the s...
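For readers outside R: a much simpler (if cruder) route to a monotone fit is isotonic regression via the pool-adjacent-violators algorithm, which yields a monotone step function rather than mgcv's smooth penalised spline, but is monotone by construction. A hand-rolled NumPy sketch, not a substitute for pcls():

```python
import numpy as np

def pava_increasing(y):
    """Least-squares non-decreasing fit via pool-adjacent-violators."""
    y = np.asarray(y, dtype=float)
    vals, sizes = [], []          # block means and block sizes
    for v in y:
        vals.append(v)
        sizes.append(1)
        # merge blocks while they violate the non-decreasing constraint
        while len(vals) > 1 and vals[-2] > vals[-1]:
            total = vals[-1] * sizes[-1] + vals[-2] * sizes[-2]
            size = sizes[-1] + sizes[-2]
            vals[-2:], sizes[-2:] = [total / size], [size]
    return np.repeat(vals, sizes)

def pava_decreasing(y):
    """Non-increasing fit, obtained by negating a non-decreasing one."""
    return -pava_increasing(-np.asarray(y, dtype=float))

fit = pava_increasing([1.0, 3.0, 2.0])   # -> [1.0, 2.5, 2.5]
```

If a smooth monotone curve is needed rather than a step function, the mgcv route in the answer above (or smoothing the PAVA output) is the better tool.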