idx int64 1 56k | question stringlengths 15 155 | answer stringlengths 2 29.2k ⌀ |
|---|---|---|
13,601 | Roll a die until it lands on any number other than 4. What is the probability the result is > 4? | Note: This is an answer to the initial question, rather than the recurrence.
If she rolls a 4, then it essentially doesn't count, because the next roll is independent. In other words, after rolling a 4 the situation is the same as when she started. So you can ignore the 4. Then the outcomes that could matter are 1-3 an...
13,602 | Roll a die until it lands on any number other than 4. What is the probability the result is > 4? | The answers by dsaxton (https://stats.stackexchange.com/a/232107/90759) and GeoMatt22 (https://stats.stackexchange.com/a/232107/90759) give the best approaches to the problem. Another is to realize that your expression
$$P(W) = \frac13+\frac16\left(\frac13+\frac16(\cdots)\right)$$
is really a geometric progression:
$$\...
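The nested expression can be checked numerically; a minimal sketch (assuming a fair die, so unrolling the recursion gives a geometric series with first term 1/3 and ratio 1/6):

```python
# Unrolling P(W) = 1/3 + (1/6)*(1/3 + (1/6)*(...)) gives
# the geometric series sum_k (1/6)^k * (1/3).
partial = sum((1 / 6) ** k * (1 / 3) for k in range(60))

# Closed form a / (1 - r) with a = 1/3, r = 1/6
closed = (1 / 3) / (1 - 1 / 6)

print(partial, closed)  # both approximately 0.4 = 2/5
```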
13,603 | Roll a die until it lands on any number other than 4. What is the probability the result is > 4? | All of the above answers are correct, but they don't explain why they are correct, and why you can ignore so many details and avoid having to solve a complicated recurrence relation.
The reason why the other answers are correct is the Strong Markov property, which for a discrete Markov Chain is equivalent to the regula...
13,604 | Roll a die until it lands on any number other than 4. What is the probability the result is > 4? | Another way to look at the problem.
Let's call a 'real result' a 1, 2, 3, 5 or 6.
What is the probability of winning on the first roll, if you got a 'real result'? 2/5
What is the probability of winning on the second roll, if the second roll is the first time you got a 'real result'? 2/5
Same for the third, fourth.
So, you can...
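The conditioning argument above can be confirmed by direct simulation; a minimal sketch, assuming a fair six-sided die:

```python
import random

random.seed(0)
trials = 100_000
wins = 0
for _ in range(trials):
    roll = 4
    # Re-roll until the die shows a 'real result' (anything but 4)
    while roll == 4:
        roll = random.randint(1, 6)
    if roll > 4:  # a 5 or a 6: two of the five real results
        wins += 1

print(wins / trials)  # close to 2/5 = 0.4
```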
13,605 | Why isn't variance defined as the difference between every value following each other? | The most obvious reason is that there is often no time sequence in the values. So if you jumble the data, it makes no difference in the information conveyed by the data. If we follow your method, then every time you jumble the data you get a different sample variance.
The more theoretical answer is that sample variance...
13,606 | Why isn't variance defined as the difference between every value following each other? | It is defined that way!
Here's the algebra. Let the values be $\mathbf{x}=(x_1, x_2, \ldots, x_n)$. Denote by $F$ the empirical distribution function of these values (which means each $x_i$ contributes a probability mass of $1/n$ at the value $x_i$) and let $X$ and $Y$ be independent random variables with distributio...
13,607 | Why isn't variance defined as the difference between every value following each other? | Just as a complement to the other answers, variance can be computed as the squared difference between terms:
$$\begin{align}
&\text{Var}(X) = \\
&\frac{1}{2\cdot n^2}\sum_i^n\sum_j^n \left(x_i-x_j\right)^2 = \\
&\frac{1}{2\cdot n^2}\sum_i^n\sum_j^n \left(x_i - \overline x -x_j + \overline x\right)^2 = \\
&\frac{1}{2\cdot ...
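The pairwise identity can be verified numerically; a small sketch comparing the usual (population) variance with the pairwise form:

```python
xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(xs)

# Usual definition: mean squared deviation from the mean (population form)
mean = sum(xs) / n
var_usual = sum((x - mean) ** 2 for x in xs) / n

# Pairwise form: (1 / (2 n^2)) * sum over all ordered pairs of (x_i - x_j)^2
var_pairwise = sum((xi - xj) ** 2 for xi in xs for xj in xs) / (2 * n * n)

print(var_usual, var_pairwise)  # 4.0 4.0
```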
13,608 | Why isn't variance defined as the difference between every value following each other? | Others have answered about the usefulness of variance defined as usual. Anyway, we just have two legitimate definitions of different things: the usual definition of variance, and your definition.
Then, the main question is why the first one is called variance and not yours. That is just a matter of convention. Until 19...
13,609 | Why isn't variance defined as the difference between every value following each other? | @GreenParker's answer is more complete, but an intuitive example might be useful to illustrate the drawback to your approach.
In your question, you seem to assume that the order in which realisations of a random variable appear matters.
However, it is easy to think of examples in which it doesn't.
Consider the example ...
13,610 | Why isn't variance defined as the difference between every value following each other? | Although there are many good answers to this question, I believe some important points were left behind, and since this question came up with a really interesting point I would like to provide yet another point of view.
Why isn't variance defined as the difference between every value following
each other instead of ...
13,611 | Why isn't variance defined as the difference between every value following each other? | The time-stepped difference is indeed used in one form, the Allan Variance.
http://www.allanstime.com/AllanVariance/
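In its simplest form, the Allan variance at the basic sampling interval is half the mean squared successive difference, which is exactly the time-stepped quantity the question asks about. A minimal sketch (the function name is ours, and this is only the basic non-overlapping estimator):

```python
def allan_variance(y):
    """Basic Allan variance at the sampling interval:
    half the mean of squared successive differences."""
    diffs = [b - a for a, b in zip(y, y[1:])]
    return sum(d * d for d in diffs) / (2 * len(diffs))

# A constant-but-offset series has zero Allan variance (a drift-free clock),
# while the usual variance of [0, 1, 0, 1] around its mean would be 0.25.
print(allan_variance([5.0, 5.0, 5.0, 5.0]))  # 0.0
print(allan_variance([0.0, 1.0, 0.0, 1.0]))  # 0.5
```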
13,612 | Why isn't variance defined as the difference between every value following each other? | Lots of good answers here, but I'll add a few.
The way it is defined now has proven useful. For example, normal distributions appear all the time in data and a normal distribution is defined by its mean and variance. Edit: as @whuber pointed out in a comment, there are various other ways to specify a normal distribution....
13,613 | Three questions about the article "Ditch p-values. Use Bootstrap confidence intervals instead" | 1 They don’t mean what people think they mean
Am I right that this is not a p-value (which is the probability to see this or a more extreme value of a test statistic)? Is it a correct procedure for statistical testing? I have a gut feeling that it is a wrong situation to apply hypothesis testing, but I can not formall...
13,614 | Three questions about the article "Ditch p-values. Use Bootstrap confidence intervals instead" | "Am I right that this is not a p-value (which is the probability to see this or more extreme value of a test statistic)?" Good question! Yes, you're right, it's not a p-value. What's more, the example is not a hypothesis test and it's not a significance test. Anyone who uses it as an argument to discard p-values or hypo...
13,615 | Three questions about the article "Ditch p-values. Use Bootstrap confidence intervals instead" | The author of the article suffers from not understanding that hypothesis tests and confidence intervals serve different inferential purposes:
The confidence interval (bootstrap or otherwise) serves to provide a plausible range of estimates for a target parameter.
The hypothesis test serves to make a decision as to wh...
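For concreteness, the kind of bootstrap confidence interval the article advocates can be sketched as follows (a minimal percentile-bootstrap illustration with simulated data, not code from the article):

```python
import random

random.seed(1)
data = [random.gauss(10.0, 2.0) for _ in range(200)]

# Percentile bootstrap: resample with replacement, recompute the statistic,
# and take empirical quantiles of the bootstrap replicates.
boot_means = []
for _ in range(2000):
    resample = random.choices(data, k=len(data))
    boot_means.append(sum(resample) / len(resample))
boot_means.sort()

lo = boot_means[int(0.025 * len(boot_means))]   # 2.5th percentile
hi = boot_means[int(0.975 * len(boot_means))]   # 97.5th percentile
print(round(lo, 2), round(hi, 2))  # a 95% percentile interval for the mean
```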
13,616 | Three questions about the article "Ditch p-values. Use Bootstrap confidence intervals instead" | I agree that confidence intervals provide a lot more for performing inference than a single p-value for a single hypothesis, but there is no reason to ditch the p-value and no reason to rely solely on bootstrap confidence intervals. The confidence interval is the set of all hypotheses that are not significant (one wou...
13,617 | Three questions about the article "Ditch p-values. Use Bootstrap confidence intervals instead" | 1. Citizenship example
This seems to be a poor but valid test. It makes sense with an extreme p-value cut off $\alpha = 0$. This way we would only reject those citizens whose profession is not found anywhere in the US. So maybe we would say "Robbert cannot be a US citizen because he is a suicide bomber and there are no...
13,618 | When to drop a term from a regression model? | I have never understood the wish for parsimony. Seeking parsimony destroys all aspects of statistical inference (bias of regression coefficients, standard errors, confidence intervals, P-values). A good reason to keep variables is that this preserves the accuracy of confidence intervals and other quantities. Think o...
13,619 | When to drop a term from a regression model? | These answers about selection of variables all assume that the cost of observing variables is 0.
And that is not true.
While the issue of selection of variables for a given model may or may not involve selection, the implications for future behavior DO involve selection.
Consider the problem of predicting ...
13,620 | When to drop a term from a regression model? | The most common advice these days is to get the AIC of the two models and take the one with the lower AIC. So, if your full model has an AIC of -20 and the model without the weakest predictor has an AIC > -20 then you keep the full model. Some might argue that if the difference < 3 you keep the simpler one. I prefer...
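The comparison described here can be sketched numerically. A minimal illustration (simulated data; the Gaussian AIC is computed only up to an additive constant, and the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)             # a candidate "weak" predictor
y = 2.0 * x1 + rng.normal(size=n)  # y actually depends on x1 only

def gaussian_aic(X, y):
    # Fit OLS and return AIC up to a constant: n*log(RSS/n) + 2k,
    # where k counts the coefficients plus the error variance.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    k = X.shape[1] + 1
    return len(y) * np.log(rss / len(y)) + 2 * k

ones = np.ones(n)
aic_full = gaussian_aic(np.column_stack([ones, x1, x2]), y)
aic_reduced = gaussian_aic(np.column_stack([ones, x1]), y)
print(aic_full, aic_reduced)  # prefer whichever model has the lower AIC
```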
13,621 | When to drop a term from a regression model? | There are at least two other possible reasons for keeping a variable:
1) It affects the parameters for OTHER variables.
2) The fact that it is small is clinically interesting in itself
To see about 1, you can look at the predicted values for each person from a model with and without the variable in the model. I sugges...
13,622 | When to drop a term from a regression model? | What are you using this model for? Is parsimony an important goal?
More parsimonious models are preferred in some situations, but I wouldn't say parsimony is a good thing in itself. Parsimonious models can be understood and communicated more easily, and parsimony can help guard against over-fitting, but often times t...
13,623 | When to drop a term from a regression model? | From your wording it sounds as if you're inclined to drop the last predictor because its predictive value is low; a substantial change on that predictor would not imply a substantial change on the response variable. If that is the case, then I like this criterion for including/dropping the predictor. It's more groun...
13,624 | To maximize the chance of correctly guessing the result of a coin flip, should I always choose the most probable outcome? | You're right. If $P(H) = 0.2$, and you're using zero-one loss (that is, you need to guess an actual outcome as opposed to a probability or something, and furthermore, getting heads when you guessed tails is equally as bad as getting tails when you guessed heads), you should guess tails every time.
People often mistaken...
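The zero-one-loss point can be checked by simulation: always guessing tails wins about 80% of the time, while "probability matching" (guessing heads 20% of the time) wins only $0.2^2 + 0.8^2 = 0.68$ of the time. A minimal sketch, assuming $P(H)=0.2$:

```python
import random

random.seed(3)
p_heads = 0.2
n = 100_000
flips = ['H' if random.random() < p_heads else 'T' for _ in range(n)]

# Strategy 1: always guess the more probable outcome (tails)
acc_always_tails = sum(f == 'T' for f in flips) / n

# Strategy 2: probability matching - guess heads with probability 0.2
guesses = ['H' if random.random() < p_heads else 'T' for _ in range(n)]
acc_matching = sum(g == f for g, f in zip(guesses, flips)) / n

print(round(acc_always_tails, 2))  # close to 0.80
print(round(acc_matching, 2))      # close to 0.68
```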
13,625 | To maximize the chance of correctly guessing the result of a coin flip, should I always choose the most probable outcome? | You are essentially asking a very interesting question: should I predict using "MAP Bayesian" Maximum a posteriori estimation or "Real Bayesian".
Suppose you know the true distribution that $P(H)=0.2$, then using the MAP estimation, suppose you want to make 100 predictions on the next 100 flip outcomes. You should always g...
13,626 | To maximize the chance of correctly guessing the result of a coin flip, should I always choose the most probable outcome? | Due to independence, your expectation value is always maximized if you guess the most likely case. There isn't a better strategy, because each flip/roll doesn't give you any additional information about the coin/die.
Anywhere you guess a less likely outcome, your expectation of winning is less than if you had guessed the...
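The point made in the answers above can be checked with a short simulation. This is an illustrative sketch only (the coin bias comes from the question; the flip count and strategy names are my own), comparing "always guess the most probable outcome" against "probability matching", i.e. guessing heads 20% of the time:

```python
import random

random.seed(0)
P_HEADS = 0.2   # the biased coin from the question
N = 100_000     # number of independent flips to simulate

flips = ["H" if random.random() < P_HEADS else "T" for _ in range(N)]

# Strategy 1: always guess the most probable outcome (tails).
always_tails = sum(1 for f in flips if f == "T") / N

# Strategy 2: probability matching -- guess heads 20% of the time.
guesses = ["H" if random.random() < P_HEADS else "T" for _ in range(N)]
matching = sum(1 for f, g in zip(flips, guesses) if f == g) / N

print(f"always guess tails:   {always_tails:.3f}")  # close to 0.8
print(f"probability matching: {matching:.3f}")      # close to 0.2^2 + 0.8^2 = 0.68
```

The always-guess strategy wins: its expected accuracy is $\max(p, 1-p) = 0.8$, whereas matching the probabilities yields only $p^2 + (1-p)^2 = 0.68$.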
13,627 | Using ANOVA on percentages? | There is a difference between having a binary variable as your dependent variable and having a proportion as your dependent variable.
Binary dependent variable:
This sounds like what you have (i.e., each mother either smoked or she did not smoke).
In this case I would not use ANOVA. Logistic regression with some for...
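To make the "logistic regression on the raw 0/1 outcomes" advice concrete, here is a hedged, self-contained Python sketch on simulated data. The variable names and the true coefficients ($\mathrm{logit}(p) = -4 + 0.1 \cdot \mathrm{BMI}$) are invented for illustration, and in real work you would use a standard routine (e.g. R's glm(..., family = binomial)) rather than this hand-rolled Newton-Raphson fit:

```python
import math
import random

random.seed(1)

def sigmoid(z):
    # numerically stable logistic function
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

def simulate(n=2000):
    # hypothetical data: smoking status (0/1) with logit(p) = -4 + 0.1 * BMI
    data = []
    for _ in range(n):
        bmi = random.uniform(18, 40)
        p = sigmoid(-4.0 + 0.1 * bmi)
        data.append((bmi, 1 if random.random() < p else 0))
    return data

def fit_logistic(data, iters=25):
    # Newton-Raphson (IRLS) for a two-parameter logistic regression:
    # each step solves the 2x2 system (X'WX) delta = X'(y - p).
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        s_w = s_wx = s_wxx = g0 = g1 = 0.0
        for x, y in data:
            p = sigmoid(b0 + b1 * x)
            w = p * (1.0 - p)
            s_w += w
            s_wx += w * x
            s_wxx += w * x * x
            g0 += y - p
            g1 += (y - p) * x
        det = s_w * s_wxx - s_wx * s_wx
        b0 += (s_wxx * g0 - s_wx * g1) / det
        b1 += (s_w * g1 - s_wx * g0) / det
    return b0, b1

b0, b1 = fit_logistic(simulate())
print(f"intercept: {b0:.2f}, slope: {b1:.2f}")  # near the true -4 and 0.1
```

Note that BMI enters as a continuous predictor, exactly as the answer below recommends; nothing is binned into intervals.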
13,628 | Using ANOVA on percentages? | It depends on how close the responses within different groups are to 0 or 100%. If there are a lot of extreme values (i.e. many values piled up on 0 or 100%) this will be difficult. (If you don't know the "denominators", i.e. the numbers of subjects from which the percentages are calculated, then you can't use contin...
13,629 | Using ANOVA on percentages? | You need to have the raw data, so that the response variable is 0/1 (not smoke, smoke). Then you can use binary logistic regression. It is not correct to group BMI into intervals. The cutpoints are not correct, probably don't exist, and you are not officially testing whether BMI is associated with smoking. You are ...
13,630 | Using ANOVA on percentages? | If you choose to do an ordinary ANOVA on proportional data, it is crucial to verify the assumption of homogeneous error variances. If, as is common with percentage data, the error variances are not constant, a more realistic alternative is to try beta regression, which can account for this heteroscedasticity in the mo...
13,631 | Software for easy-yet-robust data exploration | I program in Python for 95% of my work and the rest in R or MATLAB or IDL/PV-WAVE (and soon SAS). But, I am in an environment where time-to-results is often a huge driver of the analysis chosen and so I often use point-and-click tools as well. In my experience, there is no single, robust, flexible GUI tool for doing an...
13,632 | Software for easy-yet-robust data exploration | As far as exploratory (possibly interactive) data analysis is concerned, I would suggest taking a look at:
Weka, which originally targets data-mining applications, but can be used for data summaries.
Mondrian, for interactive data visualization.
KNIME, which relies on the idea of building data flows and is compatible with ...
13,633 | Software for easy-yet-robust data exploration | Some people think of programming as simply entering a command line statement. At that point, perhaps you are a bit lost in encouraging them. However, if they are using spreadsheets already, then they already have to enter formulas. These are akin to command line statements. If they really mean they don't want to...
13,634 | Software for easy-yet-robust data exploration | I'm going to put a pitch in here for JMP. I have a couple of reasons why it's my preferred non-programming data exploration tool:
Really good visualization tools. For most basic EDA-type plots, it's as good as R is, and considerably easier to use for producing something approaching a publication-ready plot. It...
13,635 | Software for easy-yet-robust data exploration | I can recommend Tableau as a good tool for data exploration and visualization, because of the different ways that you can explore and view the data simply by dragging and dropping. The graphs are fairly sharp and you can easily output to PDF for presentation purposes. If you want you can extend it with some "p...
13,636 | Software for easy-yet-robust data exploration | As John said, data exploration doesn't require much programming in R. Here's a list of data exploration commands you can give people. (I just came up with this; you can surely expand it.)
Export the data from whatever package it's in. (Exporting numerical data without quotation marks is convenient.) Then read the data ...
13,637 | Software for easy-yet-robust data exploration | This is more of a lament than an answer...
The best software I've seen for this is Arc, which is built on top of Xlisp-Stat. It's fantastic software for data exploration, with lots of built-in interactive graphics as well as lots of statistical inference capabilities. In my opinion nothing else has come close to its ...
13,638 | Software for easy-yet-robust data exploration | A new software system that looks promising for this purpose is Deducer, built on top of R. Unfortunately, being new, I suspect it does not yet cover the breadth of questions that people might ask, but it does meet the toe-in-the-water criterion of leading people towards a true package should they so decide later.
I've...
13,639 | Software for easy-yet-robust data exploration | For exploring what data contain and cleaning it up, the former Google Refine, now Open Refine, is a pretty good GUI. It's much more powerful for the preparation and cleaning than something like Excel. Then switch to something like R-Commander for your analyses.
13,640 | Software for easy-yet-robust data exploration | Anyone who answers R, or any of its "GUIs", didn't read the question.
There is a program specifically designed for this and it's called JMP. Yes, it's expensive, though it has a free trial, and it is incredibly cheap for students or college staff (like $50 cheap).
There is also RapidMiner, which is a workflow-based GUI ...
13,641 | Software for easy-yet-robust data exploration | Well, this particular tool is popular in my industry (though it is not industry-specific by design):
http://www.umetrics.com/simca
It allows you to do latent variable type multivariate analysis (PCA and PLS), and it includes all the attendant interpretative plots / calculations and interrogation tools like contribution...
13,642 | Software for easy-yet-robust data exploration | In my opinion, if you don't code the test yourself, you are prone to errors and misunderstandings of the results.
I think that you should recommend that they hire a statistician who has computer skills.
If it is always the same task, then indeed you can use a small tool (a black box) that will do the job. But I a...
13,643 | Software for easy-yet-robust data exploration | I would recommend John Fox's R package called R commander:
http://socserv.mcmaster.ca/jfox/Misc/Rcmdr/
It creates a user interface similar to SPSS (or the like) that is great for beginners and does not require the user to input any code at all. It is all done via drop-down boxes (you can even minimize the R console wh...
13,644 | Software for easy-yet-robust data exploration | Another useful tool, although just for Windows, is Spotfire -- I found it quite useful for quickly looking at various histograms and scatter plots for single and pairs of variables. A research tool that helps you rank single variables as well as pairs based on simple statistics -- Hierarchical Clustering Explorer from ...
13,645 | Which comes first - domain expertise or an experimental approach? | This will probably be closed quickly as opinion-based, but here is a point you may want to consider.
200 features is a lot, and 30k rows is less than it sounds like. A "fishing expedition" to find relevant features is quite likely to overfit and select spurious features. The danger is that when you go to your domain ex...
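The overfitting danger described above is easy to demonstrate: even when every one of 200 candidate features is pure noise, the best of their sample correlations with the target looks "significant". A minimal Python sketch (the row count of 500 is illustrative, not the asker's 30k-row data):

```python
import math
import random

random.seed(42)
n_rows, n_features = 500, 200

def corr(xs, ys):
    # plain Pearson sample correlation
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

target = [random.gauss(0, 1) for _ in range(n_rows)]
best = max(
    abs(corr([random.gauss(0, 1) for _ in range(n_rows)], target))
    for _ in range(n_features)
)

# Per feature |r| is only ~1/sqrt(n), but the *maximum* over 200
# features is inflated and will typically clear the nominal p < .05
# cutoff (|r| ~ 0.088 for n = 500) by a comfortable margin.
print(f"best |correlation| among 200 pure-noise features: {best:.3f}")
```

This is exactly the "fishing expedition" risk: selecting the best-looking feature after the fact invalidates the nominal significance level unless multiple comparisons are accounted for.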
13,646 | Which comes first - domain expertise or an experimental approach? | John Elder in 2005 gave a (now classic) presentation called "Top 10 Data Mining Mistakes". Number 4 in that list is: Listen (only) to the data.
Specifically for business environments, where it is almost certain that we act using incomplete information (e.g. client priorities, financial and physical resources, legal fra...
13,647 | Which comes first - domain expertise or an experimental approach? | The problem you are dealing with is a variable-selection problem, and so standard principles and methods apply. In particular, if you have a large number of initial variables/features to select from, there is a danger of overfitting if you fail to adopt appropriate methods that account for multiple comparisons. I...
13,648 | Which comes first - domain expertise or an experimental approach? | There are two aspects here: causal inference and explainability.
From a causal inference perspective, domain expertise should guide the process of building factors relevant to a given purpose (targets that are really linked, not just correlations explored or discovered by data scientists). Inference and Intervention...
13,649 | How do I perform a regression on non-normal data which remain non-normal when transformed? | You don't need to assume normal distributions to do regression. Least squares regression is the BLUE estimator (Best Linear Unbiased Estimator) regardless of the distributions. See the Gauss-Markov theorem (e.g. Wikipedia). A normal distribution is only used to show that the estimator is also the maximum likelihood est...
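The Gauss-Markov point can be illustrated numerically: with markedly skewed (centered exponential) errors, the OLS slope is still unbiased. A minimal Python sketch with made-up true coefficients (intercept 1, slope 2):

```python
import random

random.seed(7)

def ols_slope(n=200):
    # one simulated dataset with skewed, non-normal errors
    xs = [random.uniform(0, 10) for _ in range(n)]
    # centered exponential errors: mean 0, but strongly right-skewed
    ys = [1.0 + 2.0 * x + (random.expovariate(1.0) - 1.0) for x in xs]
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

avg = sum(ols_slope() for _ in range(500)) / 500
print(f"average fitted slope over 500 replications: {avg:.3f}")  # near 2
```

Normality would matter for exact small-sample inference (t-tests, intervals), not for the unbiasedness or BLUE property of the estimate itself.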
13,650 | How do I perform a regression on non-normal data which remain non-normal when transformed? | First, OLS regression makes no assumptions about the data; it makes assumptions about the errors, as estimated by the residuals.
Second, transforming data to make it fit a model is, in my opinion, the wrong approach. You want your model to fit your problem, not the other way round. In the old days, OLS regression was "the ...
13,651 | How do I perform a regression on non-normal data which remain non-normal when transformed? | Rather than relying on a test for normality of the residuals, try assessing the normality with rational judgment. Normality tests do not tell you that your data is normal, only that it's not. But given that the data are a sample, you can be quite certain they're not exactly normal even without a test. The requirement is app...
13,652 | How do I perform a regression on non-normal data which remain non-normal when transformed? | Broadly, there are two possible approaches to your problem: one which is well-justified from a theoretical perspective, but potentially impossible to implement in practice, while the other is more heuristic.
The theoretically optimal approach (which you probably won't actually be able to use, unfortunately) is to calcu...
13,653 | Law of Large Numbers for whole distributions | While the law of large numbers is framed in terms of "means", this actually gives you a large amount of flexibility to show convergence of other types of quantities. In particular, you can use indicator functions to get convergence results for the probabilities of any specified event. To see how to do this, suppose we...
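The indicator-function idea above can be sketched numerically (a small Python illustration of my own, not part of the original answer): the sample mean of the indicator 1{X_i in A} is just the empirical frequency of the event A, and the law of large numbers makes it converge to P(A).

```python
import random

random.seed(0)

def empirical_event_prob(event, n):
    """Mean of the indicator 1{X_i in event} over n fair-die rolls.

    By the law of large numbers this converges to P(event)."""
    hits = sum(1 for _ in range(n) if random.randint(1, 6) in event)
    return hits / n

p_hat = empirical_event_prob({1, 2}, 100_000)
print(p_hat)  # close to P({1, 2}) = 1/3
```

The same trick works for any event, which is how the "means only" law of large numbers yields convergence of whole histograms, one cell at a time.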
13,654 | Law of Large Numbers for whole distributions | You may be asking for the Glivenko-Cantelli theorem.
https://en.wikipedia.org/wiki/Glivenko%E2%80%93Cantelli_theorem
Note that this is about cumulative distribution functions, i.e., for data sets, the relative frequencies with which observations fall below any given value $x$ or, by implication, in any interval $[x_1,x_2]$. Histogr...
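As a rough numerical companion (my own Python sketch, assuming Uniform(0,1) data): Glivenko-Cantelli says the largest gap between the empirical CDF and the true CDF shrinks to zero as the sample grows.

```python
import random

random.seed(1)

def ks_distance_uniform(n):
    """Sup-norm distance between the empirical CDF of n Uniform(0,1)
    draws and the true CDF F(x) = x (the Kolmogorov-Smirnov statistic)."""
    xs = sorted(random.random() for _ in range(n))
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

small, large = ks_distance_uniform(100), ks_distance_uniform(100_000)
print(small, large)  # the gap shrinks roughly like 1/sqrt(n)
```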
13,655 | Law of Large Numbers for whole distributions | As an alternative to Glivenko–Cantelli, you can look at Sanov's theorem, which uses the Kullback–Leibler divergence as the distance measure. For any set $A$ of frequency distributions, this theorem upper-bounds the probability that the observed frequency distribution $f$ (for $n$ IID instances of a random variable with...
13,656 | Law of Large Numbers for whole distributions | We can define an indicator function that gives whether an observation is inside or outside of an interval, which will have a Bernoulli distribution, and for a sample of multiple observations we have a binomial distribution. We can then apply the CLT to show that the mean converges to the probability mass of the origin...
13,657 | Linear regression with slope constraint | I want to perform ... linear regression in R. ... I would like the slope to be inside an interval, let's say, between 1.4 and 1.6. How can this be done?
(i) Simple way:
fit the regression. If it's in the bounds, you're done.
If it's not in the bounds, set the slope to the nearest bound, and
estimate the intercept as...
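The "simple way" above can be sketched in a few lines (in Python rather than the asker's R; the function name and data are my own). For a fixed slope, the least-squares intercept is simply mean(y) - slope*mean(x), so after clamping the slope to the nearest bound the intercept can be re-estimated in closed form.

```python
def constrained_slope_fit(x, y, lo, hi):
    """OLS fit, with the slope clamped to the interval [lo, hi]."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    slope = min(max(slope, lo), hi)   # clamp to the allowed interval
    intercept = my - slope * mx       # optimal intercept for that slope
    return intercept, slope

x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.1, 2.1, 3.9, 6.2, 8.0]         # unconstrained OLS slope is about 2
a, b = constrained_slope_fit(x, y, 1.4, 1.6)
print(a, b)  # slope clamped to 1.6
```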
13,658 | Linear regression with slope constraint | Glen_b's second method, using least squares with a box constraint, can be more easily implemented via ridge regression. The solution to ridge regression can be viewed as the Lagrangian for a regression with a bound on the magnitude of the norm of the weight vector (and hence its slope). So following whuber's suggestio...
13,659 | Linear regression with slope constraint | Another approach would be to use Bayesian methods to fit the regression and choose a prior distribution on $a$ that only has support in the region you want, e.g. a uniform from 1.4 to 1.6, or a beta distribution shifted and scaled to that domain.
There are many examples on the web and in software of using Bayesian meth...
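That Bayesian idea can be sketched with a crude grid approximation (a Python toy of my own, with made-up data, a flat prior on the slope over [1.4, 1.6], Gaussian errors, and the intercept profiled out; a real analysis would use Stan, JAGS, or similar):

```python
import math

x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.1, 2.1, 3.9, 6.2, 8.0]           # unconstrained OLS slope is about 2
mx, my = sum(x) / 5, sum(y) / 5

def log_lik(a):
    b = my - a * mx                      # profile the intercept for this slope
    return -0.5 * sum((yi - (b + a * xi)) ** 2 for xi, yi in zip(x, y))

grid = [1.4 + 0.2 * i / 400 for i in range(401)]   # the prior's support only
w = [math.exp(log_lik(a)) for a in grid]           # flat prior: weight = likelihood
post_mean = sum(a * wi for a, wi in zip(grid, w)) / sum(w)
print(post_mean)  # above the prior mean 1.5, pulled toward the OLS slope
```

Because the prior has no mass outside [1.4, 1.6], the posterior automatically respects the constraint.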
13,660 | Linear regression with slope constraint | Another approach might be to reformulate your regression as an optimization problem and use an optimizer. I'm not sure if it can be reformulated this way, but I thought of this question when I read this blog posting on R optimizers:
http://zoonek.free.fr/blosxom/R/2012-06-01_Optimization.html
13,661 | Recommended terminology for "statistically significant" | I don't think the objection is to just the term "statistically significant" but to the abuse of the whole concept of statistical significance testing and to the misinterpretation of results that are (or are not) statistically significant.
In particular, look at these six statements:
P-values can indicate how incompat...
13,662 | Recommended terminology for "statistically significant" | In my opinion, one of the more honest yet non-technical phrasings would be something like:
The obtained result is surprising/unexpected (p = 0.03) under the assumption of no mean difference between the groups.
Or, permitting the format, it could be expanded:
The obtained difference of $\Delta m$ would be quite surprising ...
13,663 | Recommended terminology for "statistically significant" | I agree with the answer by Peter Flom, but would like to add an additional point on the use of the term "significance" in statistical hypothesis testing. Most hypothesis tests of interest in statistics have a null hypothesis that posits a zero value for some "effect" and an alternative hypothesis that posits a non-zer...
13,664 | Recommended terminology for "statistically significant" | In general, I agree with the following statements in the editorial Moving to a World Beyond "p < 0.05" which is part of the special issue Statistical Inference in the 21st Century: A World Beyond p < 0.05 of The American Statistician:
What you will NOT find in this issue is one solution that majestically replaces the ...
13,665 | Recommended terminology for "statistically significant" | If we know the null hypothesis is not exactly true, yet the result is not statistically significant, then that is an issue of sample size, or statistical power. Statistical significance is not really a goal, it's a necessity that one achieves with appropriate statistical power. Given the same effect size, the results o...
13,666 | Recommended terminology for "statistically significant" | You can just state the result: "On average, Gurples were 10 cm taller than Cheebles (Difference in Height = 10 [5, 14]; mean, 95% CI, p=0.03)."
13,667 | What is the 'fundamental' idea of machine learning for estimating parameters? | If statistics is all about maximizing likelihood, then machine learning is all about minimizing loss. Since you don't know the loss you will incur on future data, you minimize an approximation, i.e. the empirical loss.
For instance, if you have a prediction task and are evaluated by the number of misclassifications, you coul...
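A minimal concrete instance of "minimizing empirical loss" (a hypothetical 1-D example of mine, not from the answer): choose the threshold of a classifier sign(x - t) by directly minimizing the number of misclassifications on the training sample, i.e. the empirical 0-1 loss.

```python
xs = [0.2, 0.8, 1.1, 1.9, 2.5, 3.1]
ys = [0, 0, 0, 1, 1, 1]

def empirical_01_loss(t):
    """Training misclassifications of the rule: predict 1 iff x > t."""
    return sum((x > t) != bool(y) for x, y in zip(xs, ys))

# The loss only changes at the data points, so they suffice as candidates.
best_t = min(xs, key=empirical_01_loss)
print(best_t, empirical_01_loss(best_t))
```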
13,668 | What is the 'fundamental' idea of machine learning for estimating parameters? | I will give an itemized answer. Can provide more citations on demand, although this is not really controversial.
Statistics is not all about maximizing (log)-likelihood. That's anathema to principled Bayesians, who just update their posteriors or propagate their beliefs through an appropriate model.
A lot of statistics...
13,669 | What is the 'fundamental' idea of machine learning for estimating parameters? | I can't post a comment (the appropriate place for this comment) as I don't have enough reputation, but the answer accepted as the best answer by the question owner misses the point.
"If statistics is all about maximizing likelihood, then machine learning is all about minimizing loss."
The likelihood is a loss function....
13,670 | What is the 'fundamental' idea of machine learning for estimating parameters? | There is a trivial answer -- there is no parameter estimation in machine learning! We don't assume that our models are equivalent to some hidden background models; we treat both reality and the model as black boxes and we try to shake the model box (train in official terminology) so that its output will be similar to t...
13,671 | What is the 'fundamental' idea of machine learning for estimating parameters? | I don't think there is a fundamental idea around parameter estimation in Machine Learning. The ML crowd will happily maximize the likelihood or the posterior, as long as the algorithms are efficient and predict "accurately". The focus is on computation, and results from statistics are widely used.
If you're looking f...
13,672 | What is the 'fundamental' idea of machine learning for estimating parameters? | You can rewrite a likelihood-maximization problem as a loss-minimization problem by defining the loss as the negative log likelihood. If the likelihood is a product of independent probabilities or probability densities, the loss will be a sum of independent terms, which can be computed efficiently. Furthermore, if the ...
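A tiny worked example of the rewrite described above (my own Python sketch): for i.i.d. Bernoulli data the negative log likelihood is a sum of per-observation terms, and minimizing it recovers the usual MLE, the sample mean.

```python
import math

data = [1, 0, 1, 1, 0, 1, 1, 1]   # i.i.d. Bernoulli(p) observations

def nll(p):
    """Negative log likelihood: a sum of independent per-observation losses."""
    return -sum(math.log(p) if x == 1 else math.log(1 - p) for x in data)

# Crude grid search over p in (0, 1); nll is convex, so this finds the MLE.
grid = [i / 1000 for i in range(1, 1000)]
p_hat = min(grid, key=nll)
print(p_hat)  # equals mean(data) = 0.75
```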
13,673 | Why is XOR not linearly separable? | Draw a picture.
The question asks you to show it is not possible to find a half-plane and its complement that separate the blue points where XOR is zero from the red points where XOR is one (in the sense that the former lie in the half-plane and the latter lie in its complement).
One (flawed) attempt is shown here, wh...
13,674 | Why is XOR not linearly separable? | Xor(0,0) == 0 implies c >= 0
Xor(1,1) == 0 implies a+b+c >=0
Adding these implies that a+b+2c >=0
Xor(0,1) == 1 implies a + c < 0
Xor(1,0) == 1 implies b + c < 0
Adding these implies that a+b+2c <0
So it both has to be >=0 and <0 which is not possible.
PS. The same argument is stated in the accepted answer.
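A numerical companion to the argument above (my own Python check, using the sign convention that class 1 means a*x + b*y + c < 0): random search over many candidate hyperplanes never finds one that classifies all four XOR points correctly, as the inequalities guarantee.

```python
import random

# The four XOR points and their labels.
points = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def separates(a, b, c):
    """True iff the rule 'class 1 iff a*x + b*y + c < 0' labels all points correctly."""
    return all((a * x + b * y + c < 0) == bool(lbl)
               for (x, y), lbl in points.items())

random.seed(0)
found = any(separates(random.uniform(-5, 5), random.uniform(-5, 5),
                      random.uniform(-5, 5)) for _ in range(100_000))
print(found)  # False: no sampled hyperplane separates XOR
```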
13,675 | For linear classifiers, do larger coefficients imply more important features? | Not at all. The magnitude of the coefficients depends directly on the scales selected for the variables, which is a somewhat arbitrary modeling decision.
To see this, consider a linear regression model predicting the petal width of an iris (in centimeters) given its petal length (in centimeters):
summary(lm(Petal.Width...
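The scaling point can be reproduced without R's iris data (a Python sketch with hypothetical lengths of my own): expressing the same predictor in millimetres instead of centimetres shrinks the fitted slope by exactly 10x, while the fit itself is unchanged, so raw coefficient size cannot measure importance.

```python
def ols_slope(x, y):
    """Simple-regression OLS slope."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
           sum((xi - mx) ** 2 for xi in x)

length_cm = [1.4, 1.7, 4.5, 5.1, 5.9]   # hypothetical petal lengths (cm)
width_cm  = [0.2, 0.3, 1.5, 1.9, 2.1]   # hypothetical petal widths (cm)

s_cm = ols_slope(length_cm, width_cm)
s_mm = ols_slope([10 * v for v in length_cm], width_cm)  # same data in mm
print(s_cm, s_mm)  # identical fit, but the slope differs by a factor of 10
```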
13,676 | For linear classifiers, do larger coefficients imply more important features? | "Feature importance" is a very slippery concept even when all predictors have been adjusted to a common scale (which in itself is a non-trivial problem in many practical applications involving categorical variables or skewed distributions). So if you avoid the scaling problems indicated in the answer by @josliber or th...
13,677 | For linear classifiers, do larger coefficients imply more important features? | Just to add to the previous answer, the coefficient itself also fails to capture how much variability a predictor exhibits, which has a large effect on how useful it is in making predictions. Consider the simple model
$$
\text{E}(Y_i) = \alpha + \beta X_i
$$
where $X_i$ is a Bernoulli$(p)$ random variable. By taking ...
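Presumably the truncated derivation continues by taking variances; in any case, for Bernoulli $X_i$ the term $\beta X_i$ contributes variance $\beta^2 p(1-p)$, so a nearly constant predictor explains almost nothing however large its coefficient. A one-line Python check with numbers of my own:

```python
def contrib(beta, p):
    """Variance contributed by beta * X_i when X_i ~ Bernoulli(p)."""
    return beta ** 2 * p * (1 - p)

print(contrib(100.0, 1e-6))  # huge slope, near-constant predictor: tiny contribution
print(contrib(1.0, 0.5))     # modest slope, variable predictor: larger contribution
```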
13,678 | MLE vs MAP estimation, when to use which? | If a prior probability is given as part of the problem setup, then use that information (i.e. use MAP). If no such prior information is given or assumed, then MAP is not possible, and MLE is a reasonable approach. | MLE vs MAP estimation, when to use which? | If a prior probability is given as part of the problem setup, then use that information (i.e. use MAP). If no such prior information is given or assumed, then MAP is not possible, and MLE is a reasona | MLE vs MAP estimation, when to use which?
If a prior probability is given as part of the problem setup, then use that information (i.e. use MAP). If no such prior information is given or assumed, then MAP is not possible, and MLE is a reasonable approach. | MLE vs MAP estimation, when to use which?
If a prior probability is given as part of the problem setup, then use that information (i.e. use MAP). If no such prior information is given or assumed, then MAP is not possible, and MLE is a reasona |
13,679 | MLE vs MAP estimation, when to use which? | A Bayesian would agree with you, a frequentist would not. This is a matter of opinion, perspective, and philosophy. I think that it does a lot of harm to the statistics community to attempt to argue that one method is always better than the other. Many problems will have Bayesian and frequentist solutions that are si... | MLE vs MAP estimation, when to use which? | A Bayesian would agree with you, a frequentist would not. This is a matter of opinion, perspective, and philosophy. I think that it does a lot of harm to the statistics community to attempt to argue | MLE vs MAP estimation, when to use which?
A Bayesian would agree with you, a frequentist would not. This is a matter of opinion, perspective, and philosophy. I think that it does a lot of harm to the statistics community to attempt to argue that one method is always better than the other. Many problems will have Baye... | MLE vs MAP estimation, when to use which?
A Bayesian would agree with you, a frequentist would not. This is a matter of opinion, perspective, and philosophy. I think that it does a lot of harm to the statistics community to attempt to argue |
13,680 | MLE vs MAP estimation, when to use which? | Assuming you have accurate prior information, MAP is better if the problem has a zero-one loss function on the estimate. If the loss is not zero-one (and in many real-world problems it is not), then it can happen that the MLE achieves lower expected loss. In these cases, it would be better not to limit yourself to MA... | MLE vs MAP estimation, when to use which? | Assuming you have accurate prior information, MAP is better if the problem has a zero-one loss function on the estimate. If the loss is not zero-one (and in many real-world problems it is not), then | MLE vs MAP estimation, when to use which?
Assuming you have accurate prior information, MAP is better if the problem has a zero-one loss function on the estimate. If the loss is not zero-one (and in many real-world problems it is not), then it can happen that the MLE achieves lower expected loss. In these cases, it w... | MLE vs MAP estimation, when to use which?
Assuming you have accurate prior information, MAP is better if the problem has a zero-one loss function on the estimate. If the loss is not zero-one (and in many real-world problems it is not), then |
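The loss-function point above can be illustrated with a small made-up discrete posterior in which the MAP estimate (optimal under zero-one loss) and the posterior mean (optimal under squared loss) disagree:

```python
# A discrete posterior over a parameter theta, chosen (hypothetically) so that
# the MAP estimate and the posterior mean differ.
support = [0, 2, 3]
probs = [0.4, 0.3, 0.3]

map_est = support[probs.index(max(probs))]              # minimizes 0-1 loss
post_mean = sum(t * p for t, p in zip(support, probs))  # minimizes squared loss

def expected_sq_loss(guess):
    """Posterior expected squared loss of a point guess."""
    return sum(p * (t - guess) ** 2 for t, p in zip(support, probs))

def error_prob(guess):
    """Posterior expected 0-1 loss (probability the guess is wrong)."""
    return 1.0 - sum(p for t, p in zip(support, probs) if t == guess)
```

Here the MAP guess wins under zero-one loss, but the posterior mean achieves a much lower expected squared loss.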
13,681 | MLE vs MAP estimation, when to use which? | Short answer by @bean explains it very well. However, I would like to point to the section 1.1 of the paper Gibbs Sampling for the uninitiated by Resnik and Hardisty which takes the matter to more depth. I am writing a few lines from this paper with very slight modifications (This answer repeats a few of the things which OP k... | MLE vs MAP estimation, when to use which? | Short answer by @bean explains it very well. However, I would like to point to the section 1.1 of the paper Gibbs Sampling for the uninitiated by Resnik and Hardisty which takes the matter to more dep
Short answer by @bean explains it very well. However, I would like to point to the section 1.1 of the paper Gibbs Sampling for the uninitiated by Resnik and Hardisty which takes the matter to more depth. I am writing a few lines from this paper with very slight modifications (This answer repeats a few of the things which OP k... | MLE vs MAP estimation, when to use which?
Short answer by @bean explains it very well. However, I would like to point to the section 1.1 of the paper Gibbs Sampling for the uninitiated by Resnik and Hardisty which takes the matter to more dep |
13,682 | MLE vs MAP estimation, when to use which? | Theoretically, if you have the information about the prior probability, use MAP; otherwise MLE.
However, as the amount of data increases, the leading role of prior assumptions (which are used by MAP) on model parameters will gradually weaken, while the data samples will greatly occupy a favorable position. In extreme cases... | MLE vs MAP estimation, when to use which? | Theoretically, if you have the information about the prior probability, use MAP; otherwise MLE.
However, as the amount of data increases, the leading role of prior assumptions (which are used by MAP) on m | MLE vs MAP estimation, when to use which?
Theoretically, if you have the information about the prior probability, use MAP; otherwise MLE.
However, as the amount of data increases, the leading role of prior assumptions (which are used by MAP) on model parameters will gradually weaken, while the data samples will greatly occ... | MLE vs MAP estimation, when to use which?
Theoretically, if you have the information about the prior probability, use MAP; otherwise MLE.
However, as the amount of data increases, the leading role of prior assumptions (which are used by MAP) on m
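This weakening of the prior can be sketched with a Beta-Bernoulli model (the Beta(5, 5) prior and the 70% success frequency below are invented; the closed forms MLE = k/n and MAP = (k + a − 1)/(n + a + b − 2) are standard for a, b > 1):

```python
# Beta(a, b) prior on a Bernoulli success probability; closed-form MLE and MAP.
a, b = 5, 5  # hypothetical, fairly strong prior centered at 0.5

def mle(k, n):
    return k / n

def map_est(k, n):
    return (k + a - 1) / (n + a + b - 2)

# Same empirical frequency (70% successes) at two sample sizes: the gap
# between MAP and MLE shrinks as n grows.
gap_small = abs(map_est(7, 10) - mle(7, 10))
gap_large = abs(map_est(7000, 10000) - mle(7000, 10000))
```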
13,683 | MLE vs MAP estimation, when to use which? | As we know that
$$\begin{equation}\begin{aligned}
\hat\theta^{MAP}&=\arg \max\limits_{\substack{\theta}} \log P(\theta|\mathcal{D})\\
&= \arg \max\limits_{\substack{\theta}} \log \frac{P(\mathcal{D}|\theta)P(\theta)}{P(\mathcal{D})}\\
&=\arg \max\limits_{\substack{\theta}} \log P(\mathcal{D}|\theta)P(\theta) \\
&=\arg ... | MLE vs MAP estimation, when to use which? | As we know that
$$\begin{equation}\begin{aligned}
\hat\theta^{MAP}&=\arg \max\limits_{\substack{\theta}} \log P(\theta|\mathcal{D})\\
&= \arg \max\limits_{\substack{\theta}} \log \frac{P(\mathcal{D}|\ | MLE vs MAP estimation, when to use which?
As we know that
$$\begin{equation}\begin{aligned}
\hat\theta^{MAP}&=\arg \max\limits_{\substack{\theta}} \log P(\theta|\mathcal{D})\\
&= \arg \max\limits_{\substack{\theta}} \log \frac{P(\mathcal{D}|\theta)P(\theta)}{P(\mathcal{D})}\\
&=\arg \max\limits_{\substack{\theta}} \log... | MLE vs MAP estimation, when to use which?
As we know that
$$\begin{equation}\begin{aligned}
\hat\theta^{MAP}&=\arg \max\limits_{\substack{\theta}} \log P(\theta|\mathcal{D})\\
&= \arg \max\limits_{\substack{\theta}} \log \frac{P(\mathcal{D}|\ |
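The chain of equalities in the derivation above can be checked numerically on a grid: dividing by the constant P(D) and taking logs never moves the argmax. The setup below (7 successes in 10 Bernoulli trials, a Beta(2, 2) prior with density proportional to t(1 − t)) is invented for the check:

```python
import math

k, n, a, b = 7, 10, 2, 2
grid = [i / 1000 for i in range(1, 1000)]

def lik(t):   return t ** k * (1 - t) ** (n - k)    # P(D | theta)
def prior(t): return t ** (a - 1) * (1 - t) ** (b - 1)

unnorm = [lik(t) * prior(t) for t in grid]          # P(D | theta) P(theta)
evidence = sum(unnorm)                              # plays the role of P(D) on the grid
posterior = [u / evidence for u in unnorm]          # P(theta | D)
logs = [math.log(lik(t)) + math.log(prior(t)) for t in grid]

i1 = posterior.index(max(posterior))  # argmax of P(theta | D)
i2 = unnorm.index(max(unnorm))        # argmax of P(D | theta) P(theta)
i3 = logs.index(max(logs))            # argmax of the log form
theta_map = grid[i1]                  # closed form: (k + a - 1)/(n + a + b - 2)
```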
13,684 | MLE vs MAP estimation, when to use which? | If you have little data and priors available - "GO FOR MAP". If you have a lot of data, the MAP will converge to the MLE. Thus, with a lot of data, it's always better to do MLE rather than MAP. | MLE vs MAP estimation, when to use which? | If you have little data and priors available - "GO FOR MAP". If you have a lot of data, the MAP will converge to the MLE. Thus, with a lot of data, it's always better to do MLE rather than MA | MLE vs MAP estimation, when to use which?
If you have little data and priors available - "GO FOR MAP". If you have a lot of data, the MAP will converge to the MLE. Thus, with a lot of data, it's always better to do MLE rather than MAP. | MLE vs MAP estimation, when to use which?
If you have little data and priors available - "GO FOR MAP". If you have a lot of data, the MAP will converge to the MLE. Thus, with a lot of data, it's always better to do MLE rather than MA
13,685 | Collinear variables in Multiclass LDA training | Multicollinearity means that your predictors are correlated. Why is this bad?
Because LDA, like regression techniques, involves computing a matrix inversion, which is inaccurate if the determinant is close to 0 (i.e. two or more variables are almost a linear combination of each other).
More importantly, it makes the est... | Collinear variables in Multiclass LDA training | Multicollinearity means that your predictors are correlated. Why is this bad?
Because LDA, like regression techniques, involves computing a matrix inversion, which is inaccurate if the determinant is c | Collinear variables in Multiclass LDA training
Multicollinearity means that your predictors are correlated. Why is this bad?
Because LDA, like regression techniques involves computing a matrix inversion, which is inaccurate if the determinant is close to 0 (i.e. two or more variables are almost a linear combination of ... | Collinear variables in Multiclass LDA training
Multicollinearity means that your predictors are correlated. Why is this bad?
Because LDA, like regression techniques, involves computing a matrix inversion, which is inaccurate if the determinant is c
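A tiny 2×2 sketch of the inversion problem described above (the correlation values are invented): as two predictors approach perfect collinearity, the determinant of their covariance matrix goes to 0 and the inverse, which LDA needs, blows up and becomes extremely sensitive:

```python
# Determinant and inverse of a 2x2 covariance matrix of two standardized,
# nearly collinear predictors.

def inv2x2(m):
    """Return (determinant, inverse) of a 2x2 matrix."""
    (p, q), (r, s) = m
    det = p * s - q * r
    return det, [[s / det, -q / det], [-r / det, p / det]]

det1, inv1 = inv2x2([[1.0, 0.999], [0.999, 1.0]])    # correlation 0.999
det2, inv2 = inv2x2([[1.0, 0.9999], [0.9999, 1.0]])  # correlation 0.9999
# A change of 0.0009 in the correlation multiplies the inverse's entries ~10x.
```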
13,686 | Collinear variables in Multiclass LDA training | As I seem to think gui11aume has given you a great answer, I want to give an example from a slightly different angle that might be illuminating. Consider that a covariate in your discriminant function looks as follows:
$X_1= 5X_2 +3X_3 -X_4$.
Suppose the best LDA has the following linear boundary:
$X_1+2X_2+X_3-2... | Collinear variables in Multiclass LDA training | As I seem to think gui11aume has given you a great answer, I want to give an example from a slightly different angle that might be illuminating. Consider that a covariate in your discriminant functio | Collinear variables in Multiclass LDA training
As I seem to think gui11aume has given you a great answer, I want to give an example from a slightly different angle that might be illuminating. Consider that a covariate in your discriminant function looks as follows:
$X_1= 5X_2 +3X_3 -X_4$.
Suppose the best LDA has t... | Collinear variables in Multiclass LDA training
As I seem to think gui11aume has given you a great answer, I want to give an example from a slightly different angle that might be illuminating. Consider that a covariate in your discriminant functio |
13,687 | Collinear variables in Multiclass LDA training | While the answer that was marked here is correct, I think you were looking for a different explanation to find out what happened in your code. I had the exact same issue running through a model.
Here's what's going on: You're training your model with the predicted variable as part of your data set. Here's an example of ... | Collinear variables in Multiclass LDA training | While the answer that was marked here is correct, I think you were looking for a different explanation to find out what happened in your code. I had the exact same issue running through a model.
Here' | Collinear variables in Multiclass LDA training
While the answer that was marked here is correct, I think you were looking for a different explanation to find out what happened in your code. I had the exact same issue running through a model.
Here's what's going on: You're training your model with the predicted variable as part of your data set. Here's an example of ... | Collinear variables in Multiclass LDA training
While the answer that was marked here is correct, I think you were looking for a different explanation to find out what happened in your code. I had the exact same issue running through a model.
Here' |
13,688 | "Least Squares" and "Linear Regression", are they synonyms? | Linear regression assumes a linear relationship between the independent and dependent variable. It doesn't tell you how the model is fitted. Least square fitting is simply one of the possibilities. Other methods for training a linear model are in the comments.
Non-linear least squares is common (https://en.wikipedia.org... | "Least Squares" and "Linear Regression", are they synonyms? | Linear regression assumes a linear relationship between the independent and dependent variable. It doesn't tell you how the model is fitted. Least square fitting is simply one of the possibilities. O | "Least Squares" and "Linear Regression", are they synonyms?
Linear regression assumes a linear relationship between the independent and dependent variable. It doesn't tell you how the model is fitted. Least square fitting is simply one of the possibilities. Other methods for training a linear model are in the comments.
... | "Least Squares" and "Linear Regression", are they synonyms?
Linear regression assumes a linear relationship between the independent and dependent variable. It doesn't tell you how the model is fitted. Least square fitting is simply one of the possibilities. O |
13,689 | "Least Squares" and "Linear Regression", are they synonyms? | In addition to the correct answer of @Student T, I want to emphasize that least squares is a potential loss function for an optimization problem, whereas linear regression is an optimization problem.
Given a certain dataset, linear regression is used to find the best possible linear function, which is explaining the c... | "Least Squares" and "Linear Regression", are they synonyms? | In addition to the correct answer of @Student T, I want to emphasize that least squares is a potential loss function for an optimization problem, whereas linear regression is an optimization problem. | "Least Squares" and "Linear Regression", are they synonyms?
In addition to the correct answer of @Student T, I want to emphasize that least squares is a potential loss function for an optimization problem, whereas linear regression is an optimization problem.
Given a certain dataset, linear regression is used to find ... | "Least Squares" and "Linear Regression", are they synonyms?
In addition to the correct answer of @Student T, I want to emphasize that least squares is a potential loss function for an optimization problem, whereas linear regression is an optimization problem. |
13,690 | How to use SVD for dimensionality reduction to reduce the number of columns (features) of the data matrix? [duplicate] | What @davidhigh wrote is correct: if you multiply reduced versions of $\mathbf U_\mathrm{r}$, $\mathbf S_\mathrm{r}$, and $\mathbf V_\mathrm{r}$, as you describe in your question, then you will obtain a matrix $$\tilde{ \mathbf A}=\mathbf U_\mathrm{r}\mathbf S_\mathrm{r}\mathbf V_\mathrm{r}^\top$$ that has exactly the... | How to use SVD for dimensionality reduction to reduce the number of columns (features) of the data m | What @davidhigh wrote is correct: if you multiply reduced versions of $\mathbf U_\mathrm{r}$, $\mathbf S_\mathrm{r}$, and $\mathbf V_\mathrm{r}$, as you describe in your question, then you will obtain | How to use SVD for dimensionality reduction to reduce the number of columns (features) of the data matrix? [duplicate]
What @davidhigh wrote is correct: if you multiply reduced versions of $\mathbf U_\mathrm{r}$, $\mathbf S_\mathrm{r}$, and $\mathbf V_\mathrm{r}$, as you describe in your question, then you will obtain ... | How to use SVD for dimensionality reduction to reduce the number of columns (features) of the data m
What @davidhigh wrote is correct: if you multiply reduced versions of $\mathbf U_\mathrm{r}$, $\mathbf S_\mathrm{r}$, and $\mathbf V_\mathrm{r}$, as you describe in your question, then you will obtain |
13,691 | How to use SVD for dimensionality reduction to reduce the number of columns (features) of the data matrix? [duplicate] | It seems that you are not completely aware of what an SVD does. As you wrote, it decomposes a matrix $\mathbf A$ according to
$$\mathbf A = \mathbf U \mathbf S \mathbf V^T,$$
Read the details on the involved matrix dimensions and properties for example here.
Now, dimensionality reduction is done by neglecting small sin... | How to use SVD for dimensionality reduction to reduce the number of columns (features) of the data m | It seems that you are not completely aware of what an SVD does. As you wrote, it decomposes a matrix $\mathbf A$ according to
$$\mathbf A = \mathbf U \mathbf S \mathbf V^T,$$
Read the details on the i | How to use SVD for dimensionality reduction to reduce the number of columns (features) of the data matrix? [duplicate]
It seems that you are not completely aware of what an SVD does. As you wrote, it decomposes a matrix $\mathbf A$ according to
$$\mathbf A = \mathbf U \mathbf S \mathbf V^T,$$
Read the details on the in... | How to use SVD for dimensionality reduction to reduce the number of columns (features) of the data m
It seems that you are not completely aware of what an SVD does. As you wrote, it decomposes a matrix $\mathbf A$ according to
$$\mathbf A = \mathbf U \mathbf S \mathbf V^T,$$
Read the details on the i |
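The truncation step being described can be made concrete on a tiny made-up matrix. The sketch below rolls its own rank-1 SVD via power iteration on AᵀA purely to show the mechanics; in practice you would call a library SVD routine instead:

```python
import math

A = [[3.0, 2.0], [2.0, 3.0], [2.0, -2.0]]  # 3 x 2; singular values are 5 and 3

# Form the 2 x 2 matrix A^T A
AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]

# Power iteration for the top right-singular vector v1
v = [1.0, 0.0]
for _ in range(200):
    w = [AtA[i][0] * v[0] + AtA[i][1] * v[1] for i in range(2)]
    norm = math.hypot(w[0], w[1])
    v = [wi / norm for wi in w]

sigma1 = math.sqrt(sum(AtA[i][j] * v[i] * v[j] for i in range(2) for j in range(2)))
u = [(A[k][0] * v[0] + A[k][1] * v[1]) / sigma1 for k in range(3)]  # left vector

# Rank-1 truncation A1 = sigma1 * u1 v1^T; its Frobenius error equals sigma2
A1 = [[sigma1 * u[k] * v[j] for j in range(2)] for k in range(3)]
err = math.sqrt(sum((A[k][j] - A1[k][j]) ** 2
                    for k in range(3) for j in range(2)))
```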
13,692 | Prove the equivalence of the following two formulas for Spearman correlation | $ \rho = \frac{\sum_i(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_i (x_i-\bar{x})^2 \sum_i(y_i-\bar{y})^2}}$
Since there are no ties, the $x$'s and $y$'s both consist of the integers from $1$ to $n$ inclusive.
Hence we can rewrite the denominator:
$\frac{\sum_i(x_i-\bar{x})(y_i-\bar{y})}{\sum_i (x_i-\bar{x})^2}$
But the deno... | Prove the equivalence of the following two formulas for Spearman correlation | $ \rho = \frac{\sum_i(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_i (x_i-\bar{x})^2 \sum_i(y_i-\bar{y})^2}}$
Since there are no ties, the $x$'s and $y$'s both consist of the integers from $1$ to $n$ inclusi | Prove the equivalence of the following two formulas for Spearman correlation
$ \rho = \frac{\sum_i(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_i (x_i-\bar{x})^2 \sum_i(y_i-\bar{y})^2}}$
Since there are no ties, the $x$'s and $y$'s both consist of the integers from $1$ to $n$ inclusive.
Hence we can rewrite the denominator:
$... | Prove the equivalence of the following two formulas for Spearman correlation
$ \rho = \frac{\sum_i(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_i (x_i-\bar{x})^2 \sum_i(y_i-\bar{y})^2}}$
Since there are no ties, the $x$'s and $y$'s both consist of the integers from $1$ to $n$ inclusi |
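The equivalence being proved can be spot-checked numerically: apply Pearson's formula to the ranks and compare with the 1 − 6Σd²/(n(n² − 1)) shortcut (the data below are invented and tie-free):

```python
import math

def ranks(v):
    """1-based ranks, assuming no ties."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

xs = [3.1, 1.2, 5.0, 2.7, 4.4, 0.3]
ys = [9.0, 2.5, 7.1, 3.3, 8.8, 1.0]
rx, ry = ranks(xs), ranks(ys)
n = len(xs)
rho_pearson = pearson(rx, ry)                     # Pearson on the ranks
rho_d2 = 1 - 6 * sum((a - b) ** 2 for a, b in zip(rx, ry)) / (n * (n ** 2 - 1))
```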
13,693 | Prove the equivalence of the following two formulas for Spearman correlation | We see that in the second formula there appears the squared Euclidean distance between the two (ranked) variables: $D^2= \Sigma d_i^2$. The decisive intuition at the start will be how $D^2$ might be related to $r$. It is clearly related via the cosine theorem. If we have the two variables centered, then the cosine in t... | Prove the equivalence of the following two formulas for Spearman correlation | We see that in the second formula there appears the squared Euclidean distance between the two (ranked) variables: $D^2= \Sigma d_i^2$. The decisive intuition at the start will be how $D^2$ might be r | Prove the equivalence of the following two formulas for Spearman correlation
We see that in the second formula there appears the squared Euclidean distance between the two (ranked) variables: $D^2= \Sigma d_i^2$. The decisive intuition at the start will be how $D^2$ might be related to $r$. It is clearly related via th... | Prove the equivalence of the following two formulas for Spearman correlation
We see that in the second formula there appears the squared Euclidean distance between the two (ranked) variables: $D^2= \Sigma d_i^2$. The decisive intuition at the start will be how $D^2$ might be r |
13,694 | Prove the equivalence of the following two formulas for Spearman correlation | The algebra is simpler than it might first appear.
IMHO, there is little profit or insight achieved by belaboring the algebraic manipulations. Instead, a truly simple identity shows why squared differences can be used to express (the usual Pearson) correlation coefficient. Applying this to the special case where the ... | Prove the equivalence of the following two formulas for Spearman correlation | The algebra is simpler than it might first appear.
IMHO, there is little profit or insight achieved by belaboring the algebraic manipulations. Instead, a truly simple identity shows why squared diffe | Prove the equivalence of the following two formulas for Spearman correlation
The algebra is simpler than it might first appear.
IMHO, there is little profit or insight achieved by belaboring the algebraic manipulations. Instead, a truly simple identity shows why squared differences can be used to express (the usual Pe... | Prove the equivalence of the following two formulas for Spearman correlation
The algebra is simpler than it might first appear.
IMHO, there is little profit or insight achieved by belaboring the algebraic manipulations. Instead, a truly simple identity shows why squared diffe |
13,695 | Prove the equivalence of the following two formulas for Spearman correlation | High school students may see the PMCC and Spearman correlation formulae years before they have the algebra skills to manipulate sigma notation, though they may well know the method of finite differences for deducing the polynomial equation for a sequence. So I have tried to write a "high school proof" for the equivalen... | Prove the equivalence of the following two formulas for Spearman correlation | High school students may see the PMCC and Spearman correlation formulae years before they have the algebra skills to manipulate sigma notation, though they may well know the method of finite differenc | Prove the equivalence of the following two formulas for Spearman correlation
High school students may see the PMCC and Spearman correlation formulae years before they have the algebra skills to manipulate sigma notation, though they may well know the method of finite differences for deducing the polynomial equation for... | Prove the equivalence of the following two formulas for Spearman correlation
High school students may see the PMCC and Spearman correlation formulae years before they have the algebra skills to manipulate sigma notation, though they may well know the method of finite differenc |
13,696 | What formula is used for standard deviation in R? | As pointed out by @Gschneider, it computes the sample standard deviation
$$\sqrt{\frac{\sum\limits_{i=1}^{n} (x_i - \bar{x})^2}{n-1}}$$
which you can easily check as follows:
> #generate a random vector
> x <- rnorm(n=5, mean=3, sd=1.5)
> n <- length(x)
>
> #sd in R
> sd1 <- sd(x)
>
> #self-written sd
> sd2 <- sqrt(s... | What formula is used for standard deviation in R? | As pointed out by @Gschneider, it computes the sample standard deviation
$$\sqrt{\frac{\sum\limits_{i=1}^{n} (x_i - \bar{x})^2}{n-1}}$$
which you can easily check as follows:
> #generate a random vect | What formula is used for standard deviation in R?
As pointed out by @Gschneider, it computes the sample standard deviation
$$\sqrt{\frac{\sum\limits_{i=1}^{n} (x_i - \bar{x})^2}{n-1}}$$
which you can easily check as follows:
> #generate a random vector
> x <- rnorm(n=5, mean=3, sd=1.5)
> n <- length(x)
>
> #sd in R
> ... | What formula is used for standard deviation in R?
As pointed out by @Gschneider, it computes the sample standard deviation
$$\sqrt{\frac{\sum\limits_{i=1}^{n} (x_i - \bar{x})^2}{n-1}}$$
which you can easily check as follows:
> #generate a random vect |
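The same check as the R snippet above, translated to Python's statistics module (the data are arbitrary): statistics.stdev divides by n − 1 (sample sd, matching R's sd), while statistics.pstdev divides by n (population sd).

```python
import math
import statistics

x = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(x)
xbar = sum(x) / n

# Manual sample sd with the n - 1 denominator
sd_manual = math.sqrt(sum((xi - xbar) ** 2 for xi in x) / (n - 1))
sd_sample = statistics.stdev(x)   # n - 1 in the denominator, like R's sd()
sd_pop = statistics.pstdev(x)     # n in the denominator
```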
13,697 | What formula is used for standard deviation in R? | Yes. Technically, it computes the sample variance, and then takes the square root:
> sd
function (x, na.rm = FALSE)
{
if (is.matrix(x))
apply(x, 2, sd, na.rm = na.rm)
else if (is.vector(x))
sqrt(var(x, na.rm = na.rm))
else if (is.data.frame(x))
sapply(x, sd, na.rm = na.rm)
else sqrt(var(as.vector(x), na... | What formula is used for standard deviation in R? | Yes. Technically, it computes the sample variance, and then takes the square root:
> sd
function (x, na.rm = FALSE)
{
if (is.matrix(x))
apply(x, 2, sd, na.rm = na.rm)
else if (is.vector(x))
| What formula is used for standard deviation in R?
Yes. Technically, it computes the sample variance, and then takes the square root:
> sd
function (x, na.rm = FALSE)
{
if (is.matrix(x))
apply(x, 2, sd, na.rm = na.rm)
else if (is.vector(x))
sqrt(var(x, na.rm = na.rm))
else if (is.data.frame(x))
sapply(x,... | What formula is used for standard deviation in R?
Yes. Technically, it computes the sample variance, and then takes the square root:
> sd
function (x, na.rm = FALSE)
{
if (is.matrix(x))
apply(x, 2, sd, na.rm = na.rm)
else if (is.vector(x))
|
13,698 | Visualizing Likert responses using R or SPSS | If you really want to use stacked barcharts with such a large number of items, here are two possible solutions.
Using irutils
I came across this package some months ago.
As of commit 0573195c07 on Github, the code won't work with a grouping= argument. Let's go for Friday's debugging session.
Start by downloading a zipp... | Visualizing Likert responses using R or SPSS | If you really want to use stacked barcharts with such a large number of items, here are two possible solutions.
Using irutils
I came across this package some months ago.
As of commit 0573195c07 on Git | Visualizing Likert responses using R or SPSS
If you really want to use stacked barcharts with such a large number of items, here are two possible solutions.
Using irutils
I came across this package some months ago.
As of commit 0573195c07 on Github, the code won't work with a grouping= argument. Let's go for Friday's d... | Visualizing Likert responses using R or SPSS
If you really want to use stacked barcharts with such a large number of items, here are two possible solutions.
Using irutils
I came across this package some months ago.
As of commit 0573195c07 on Git |
13,699 | Visualizing Likert responses using R or SPSS | I started to write a blog post about recreating many of the charts in the post you mention (Visualizing Likert Item Response Data) in SPSS so I suppose this will be good motivation for finishing it.
As Michelle notes, the fact that you have groups is a new twist compared to the previous questions. And while groups can ... | Visualizing Likert responses using R or SPSS | I started to write a blog post about recreating many of the charts in the post you mention (Visualizing Likert Item Response Data) in SPSS so I suppose this will be good motivation for finishing it.
A | Visualizing Likert responses using R or SPSS
I started to write a blog post about recreating many of the charts in the post you mention (Visualizing Likert Item Response Data) in SPSS so I suppose this will be good motivation for finishing it.
As Michelle notes, the fact that you have groups is a new twist compared to ... | Visualizing Likert responses using R or SPSS
I started to write a blog post about recreating many of the charts in the post you mention (Visualizing Likert Item Response Data) in SPSS so I suppose this will be good motivation for finishing it.
A |
13,700 | Visualizing Likert responses using R or SPSS | Oh well, I came up with the code before you clarified. Should have waited but thought I should post it up so that anyone who comes here can reuse this code.
Dummy data for visualizing
# Response for http://stats.stackexchange.com/questions/25109/visualizing-likert-responses-using-r-or-spss
# Load libraries
library(resh... | Visualizing Likert responses using R or SPSS | Oh well, I came up with the code before you clarified. Should have waited but thought I should post it up so that anyone who comes here can reuse this code.
Dummy data for visualizing
# Response for h | Visualizing Likert responses using R or SPSS
Oh well, I came up with the code before you clarified. Should have waited but thought I should post it up so that anyone who comes here can reuse this code.
Dummy data for visualizing
# Response for http://stats.stackexchange.com/questions/25109/visualizing-likert-responses-... | Visualizing Likert responses using R or SPSS
Oh well, I came up with the code before you clarified. Should have waited but thought I should post it up so that anyone who comes here can reuse this code.
Dummy data for visualizing
# Response for h |
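The data-preparation step underlying the stacked Likert charts discussed in these answers can be sketched as follows (the counts are invented; this is essentially what the plotting packages compute before drawing — row percentages per item plus a "net agreement" summary):

```python
# Convert per-item Likert response counts into row percentages and a
# net-agreement score (top-two boxes minus bottom-two boxes).
counts = {
    "Item 1": [5, 10, 20, 40, 25],   # strongly disagree ... strongly agree
    "Item 2": [30, 25, 15, 20, 10],
}

percentages = {
    item: [100.0 * c / sum(row) for c in row] for item, row in counts.items()
}
net = {item: p[3] + p[4] - p[0] - p[1] for item, p in percentages.items()}
```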