9,401 | Empirical CDF vs CDF

Empirical is something you build from data and observations. For instance, suppose you want to know about the distribution of the height of people in a country. You start by measuring people and come up with a histogram that can be approximated by a distribution. Then you calculate the empirical CDF.
If you are using a...

9,402 | Empirical CDF vs CDF

According to Dictionary.com, the definitions of "empirical" include:
derived from or guided by experience or experiment.
Hence, the Empirical CDF is the CDF you obtain from your data. This contrasts with the theoretical CDF (often just called "CDF"), which is obtained from a statistical or probabilistic model such as...

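The distinction the two answers draw can be made concrete with a small sketch (not part of either answer; it assumes Python with NumPy and SciPy, and the height parameters are invented for illustration): build the empirical CDF from a sample and compare it with the theoretical CDF it estimates.

```python
import numpy as np
from scipy import stats

def ecdf(sample):
    """Empirical CDF: at each sorted observation x_(i), F_hat = i / n."""
    x = np.sort(sample)
    f_hat = np.arange(1, len(x) + 1) / len(x)
    return x, f_hat

# Hypothetical height data (cm); loc/scale are illustrative, not from the answers.
rng = np.random.default_rng(0)
heights = rng.normal(loc=170, scale=10, size=500)

x, f_hat = ecdf(heights)
f_theory = stats.norm.cdf(x, loc=170, scale=10)  # the theoretical ("model") CDF

# The two curves should be close, and the ECDF converges to the CDF as n grows.
print(np.max(np.abs(f_hat - f_theory)))
```

Here `f_hat` is built purely from the data, while `f_theory` comes from an assumed model — the contrast both answers describe.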
9,403 | Distribution hypothesis testing - what is the point of doing it if you can't "accept" your null hypothesis?

Broadly speaking (not just in goodness-of-fit testing, but in many other situations), you simply can't conclude that the null is true, because there are alternatives that are effectively indistinguishable from the null at any given sample size.
Here are two distributions, a standard normal (green solid line), and a simi...

9,404 | Distribution hypothesis testing - what is the point of doing it if you can't "accept" your null hypothesis?

I second @Glen_b's answer and add that in general the "absence of evidence is not evidence for absence" problem makes hypothesis tests and $P$-values less useful than they seem. Estimation is often a better approach even in the goodness-of-fit assessment. One can use the Kolmogorov-Smirnov distance as a measure. It'...

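As a hedged illustration of using the Kolmogorov-Smirnov distance as a measure rather than only as a test (a sketch in Python/SciPy; the sample and reference distribution are invented for the example):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.standard_t(df=10, size=1000)  # slightly heavier-tailed than normal

# KS distance: sup_x |F_hat(x) - F0(x)| against a hypothesized N(0, 1).
# The statistic itself is an interpretable discrepancy measure, regardless
# of whether the accompanying p-value crosses a significance threshold.
res = stats.kstest(sample, "norm")
print(f"KS distance = {res.statistic:.3f}")
```

Reporting the distance (with, say, a bootstrap interval) answers "how far from normal?" rather than only "can we reject normality?".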
9,405 | Distribution hypothesis testing - what is the point of doing it if you can't "accept" your null hypothesis?

A view I think shared by most people is that hypothesis testing is a probabilistic adaptation of the falsification principle.
If a hypothesis survives continuing and serious attempts to falsify it, then it has "proved its mettle" and can be provisionally accepted, but it can never be established conclusively.
T...

9,406 | Distribution hypothesis testing - what is the point of doing it if you can't "accept" your null hypothesis?

The point is that from a purely statistical point of view you can't accept, but in practice you do. For instance, if you are estimating the risk of a portfolio using value-at-risk or similar measures, the portfolio return distribution is quite important. That is because the risk is defined by the tail of your distribution....

9,407 | Distribution hypothesis testing - what is the point of doing it if you can't "accept" your null hypothesis?

I think this is a perfect example to illustrate the difference between academic work and practical decision making. In academic settings (where I am), you can argue any way you want, so long as it is deemed reasonable by others. Hence, we essentially end up with endless, sometimes circular, argy-bargy with one...

9,408 | Distribution hypothesis testing - what is the point of doing it if you can't "accept" your null hypothesis?

No defendant in court is ever found innocent. They are either guilty (reject the null hypothesis of innocence) or not guilty (do not reject the presumption of innocence).
Absence of evidence is not evidence of absence.

9,409 | Distribution hypothesis testing - what is the point of doing it if you can't "accept" your null hypothesis?

Thus, my question is, what is the point of performing such testing if we can't conclude whether or not the data follow a given distribution?

If you have an alternative distribution (or set of distributions) in mind to compare to, then it can be a useful tool.
I would say: I have a set of observations at hand which I...

9,410 | Distribution of difference between two normal distributions

This question can be answered as stated only by assuming the two random variables $X_1$ and $X_2$ governed by these distributions are independent. This makes their difference $X = X_2-X_1$ Normal with mean $\mu = \mu_2-\mu_1$ and variance $\sigma^2=\sigma_1^2 + \sigma_2^2$. (The following solution can easily be gener...

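The stated fact — the difference of independent normals is Normal with mean $\mu_2-\mu_1$ and variance $\sigma_1^2+\sigma_2^2$ — is easy to check by simulation (a sketch in Python/NumPy; the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
mu1, sigma1 = 3.0, 2.0   # illustrative parameters, not from the question
mu2, sigma2 = 5.0, 1.5

x1 = rng.normal(mu1, sigma1, size=200_000)
x2 = rng.normal(mu2, sigma2, size=200_000)
diff = x2 - x1  # X = X2 - X1

# Theory: mean = mu2 - mu1 = 2.0, variance = sigma1^2 + sigma2^2 = 6.25
print(diff.mean(), diff.var())
```

The sample mean and variance of `diff` land close to the theoretical values, and a histogram of `diff` is bell-shaped, as the answer asserts.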
9,411 | Distribution of difference between two normal distributions

I am providing an answer that is complementary to the one by @whuber in the sense of being what a non-statistician (i.e. someone who does not know much about non-central chi-square distributions with one degree of freedom etc.) might write, and that a neophyte could follow relatively easily.
Borrowing the assumption of ...

9,412 | Distribution of difference between two normal distributions

The distribution of a difference of two normally distributed variates X and Y is also a normal distribution, assuming X and Y are independent (thanks Mark for the comment). Here is a derivation:
http://mathworld.wolfram.com/NormalDifferenceDistribution.html
Here you are asking about the absolute difference, based on whuber's...

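The answer is cut off just as it turns to the absolute difference, so as a hedged aside (not the author's derivation): if $D = X - Y \sim N(\mu, \sigma^2)$, then $|D|$ follows a folded normal distribution, and $P(|D| \le t)$ can be written with two normal-CDF terms. A sketch under assumed parameters (Python/SciPy, values invented):

```python
import numpy as np
from scipy import stats

mu, sigma = 2.0, 2.5  # assumed parameters of D = X - Y

def abs_diff_cdf(t):
    """P(|D| <= t) = Phi((t - mu)/sigma) - Phi((-t - mu)/sigma)."""
    return stats.norm.cdf((t - mu) / sigma) - stats.norm.cdf((-t - mu) / sigma)

# Monte Carlo check of the closed form
rng = np.random.default_rng(3)
d = rng.normal(mu, sigma, size=200_000)
print(abs_diff_cdf(3.0), np.mean(np.abs(d) <= 3.0))  # the two should agree closely
```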
9,413 | The abundance of P values in absence of a hypothesis

Clearly I don't need to tell you what a p-value is, or why over-reliance on them is a problem; you apparently understand those things quite well enough already.
With publishing, you have two competing pressures.
The first - and one you should push for at every reasonable opportunity - is to do what makes sense.
The se...

9,414 | The abundance of P values in absence of a hypothesis

The p-value, or more generally, null-hypothesis significance testing (NHST), is slowly holding less and less value. So much so that it has started to get banned in journals.
Most people don't understand what the p-value really tells us and why it tells us this, even though it is used everywhere.
The problem is that the...

9,415 | The abundance of P values in absence of a hypothesis

Is this the same in other disciplines? What is the reason for the obsession with p values?

Greenwald et al. (1996) attempt to deal with this question regarding psychology. As to also applying NHST to baseline differences, presumably the editors will (rightly or wrongly) decide that "non-significant" baseline differenc...

9,416 | The abundance of P values in absence of a hypothesis

P-values give information about differences between two groups of results ("treatment" vs "control", "A" vs "B", etc.) that sample from two populations. The nature of the difference is formalized in the statement of hypotheses -- e.g. "mean of A is greater than mean of B". Low p-values suggest that the differences are...

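The two-group setup described above can be sketched in a few lines (Python/SciPy; the data are simulated, and Welch's t-test is one common choice among several, not a recommendation from the answer):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
a = rng.normal(10.0, 2.0, size=40)  # "treatment" sample (simulated)
b = rng.normal(9.0, 2.0, size=40)   # "control" sample (simulated)

# H0: mean(A) = mean(B). A low p-value says the observed difference would be
# surprising under H0 -- it does not by itself establish practical importance.
res = stats.ttest_ind(a, b, equal_var=False)  # Welch's two-sample t-test
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.4f}")
```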
9,417 | The abundance of P values in absence of a hypothesis

"So a laymen like myself expects to not find any p values where there are no hypothesis."
Implicitly, the OP says that in the specific Table he presents, there are no hypotheses that accompany the reported p-values. Just to clear away this small confusion, there certainly are null hypotheses, but they are rather... in...

9,418 | The abundance of P values in absence of a hypothesis

I got curious and read the paper that the OP gave as an example: Abdominal obesity increases the risk of hip fracture. I am not a medical researcher and normally do not read medical papers.
I was surprised to see that the ONLY place where this paper uses $p$-values is the caption of Table 1 that the OP reproduced in the quest...

9,419 | The abundance of P values in absence of a hypothesis

The level of statistical peer review is not as high as one would think, in my experience. For all the applied papers I have worked on, all of the statistical comments came from experts in the applied field and not from statisticians. For "top" journals, although there is greater scrutiny, it is not uncommon to see result...

9,420 | The abundance of P values in absence of a hypothesis

I have to read medical articles often, and I feel that the pendulum seems to be swinging from one extreme to another, rather than staying in the central balanced zone.
The following approach seems to work well. If the P value is small, the observed difference is unlikely to be due to chance alone. We should, hence, look at the...

9,421 | How does one do a Type-III SS ANOVA in R with contrast codes?

Type III sums of squares for ANOVA are readily available through the Anova() function from the car package.
Contrast coding can be done in several ways: using C(), the contr.* family (as indicated by @nico), or directly the contrasts() function/argument. This is detailed in §6.2 (pp. 144-151) of Modern Applied Statistic...

9,422 | How does one do a Type-III SS ANOVA in R with contrast codes?

This may look like a bit of self-promotion (and I suppose it is). But I developed an lsmeans package for R (available on CRAN) that is designed to handle exactly this sort of situation. Here is how it works for your example:
> sample.data <- data.frame(IV=rep(1:4,each=20),DV=rep(c(-3,-3,1,3),each=20)+rnorm(80))
> sampl...

9,423 | How does one do a Type-III SS ANOVA in R with contrast codes?

You may want to have a look at this blog post:
Obtaining the same ANOVA results in R as in SPSS - the difficulties with Type II and Type III sums of squares
(Spoiler: add options(contrasts=c("contr.sum", "contr.poly")) at the beginning of your script)

9,424 | How does one do a Type-III SS ANOVA in R with contrast codes?

When you are doing contrasts, you are doing a specific, stated linear combination of cell means within the context of the appropriate error term. As such, the concept of "Type of SS" is not meaningful with contrasts. Each contrast is essentially the first effect using a Type I SS. "Type of SS" has to do with what is pa...

9,425 | How does one do a Type-III SS ANOVA in R with contrast codes?

The fact that type III tests are used in your place of work is the weakest of reasons to keep using them. SAS has done major damage to statistics in this regard. Bill Venables' exegesis, referenced above, is a great resource on this. Just say no to type III; it's based on a faulty notion of balance and has lower pow...

9,426 | How does one do a Type-III SS ANOVA in R with contrast codes?

Try the Anova() command in the car package. Use the type="III" argument, as it defaults to type II. For example:
library(car)
mod <- lm(conformity ~ fcategory*partner.status, data=Moore, contrasts=list(fcategory=contr.sum, partner.status=contr.sum))
Anova(mod, type="III")

9,427 | How does one do a Type-III SS ANOVA in R with contrast codes? | Also self-promoting, I wrote a function for exactly this: https://github.com/samuelfranssens/type3anova
Install as follows:
library(devtools)
install_github("samuelfranssens/type3anova")
library(type3anova)
sample.data <- data.frame(IV=rep(1:4,each=20),DV=rep(c(-3,-3,1,3),each=20)+rnorm(80))
type3anova(lm(DV ~ IV, data... | How does one do a Type-III SS ANOVA in R with contrast codes? | Also self-promoting, I wrote a function for exactly this: https://github.com/samuelfranssens/type3anova
Install as follows:
library(devtools)
install_github("samuelfranssens/type3anova")
library(type3an | How does one do a Type-III SS ANOVA in R with contrast codes?
Also self-promoting, I wrote a function for exactly this: https://github.com/samuelfranssens/type3anova
Install as follows:
library(devtools)
install_github("samuelfranssens/type3anova")
library(type3anova)
sample.data <- data.frame(IV=rep(1:4,each=20),DV=rep... | How does one do a Type-III SS ANOVA in R with contrast codes?
Also self-promoting, I wrote a function for exactly this: https://github.com/samuelfranssens/type3anova
Install as follows:
library(devtools)
install_github("samuelfranssens/type3anova")
library(type3an |
9,428 | Why is gender typically coded 0/1 rather than 1/2, for example? | Reasons to prefer zero-one coding of binary variables:
The mean of a zero-one variable represents the proportion in the category represented by the value one (e.g., the percentage of males).
In a simple regression $y = a + bx$ where $x$ is the zero-one variable, the constant has a straightforward interpretation (e.g.,... | Why is gender typically coded 0/1 rather than 1/2, for example? | Reasons to prefer zero-one coding of binary variables:
The mean of a zero-one variable represents the proportion in the category represented by the value one (e.g., the percentage of males).
In a sim | Why is gender typically coded 0/1 rather than 1/2, for example?
Reasons to prefer zero-one coding of binary variables:
The mean of a zero-one variable represents the proportion in the category represented by the value one (e.g., the percentage of males).
In a simple regression $y = a + bx$ where $x$ is the zero-one va... | Why is gender typically coded 0/1 rather than 1/2, for example?
Reasons to prefer zero-one coding of binary variables:
The mean of a zero-one variable represents the proportion in the category represented by the value one (e.g., the percentage of males).
In a sim |
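A small numeric sketch of the points above (Python here rather than R, with made-up data; the 1/2 recoding is included for contrast):

```python
import numpy as np

# Made-up data: x01 is a 0/1-coded binary variable.
x01 = np.array([0, 0, 1, 1, 1])
y = np.array([2.0, 4.0, 7.0, 9.0, 11.0])

print(x01.mean())  # 0.6: the mean of a 0/1 variable is the proportion of ones

b, a = np.polyfit(x01, y, 1)        # fit y = a + b * x
print(a, b)  # a ~ 3.0 (mean of the 0 group), b ~ 6.0 (difference of group means)

b2, a2 = np.polyfit(x01 + 1, y, 1)  # the same variable coded 1/2
print(b2)    # slope unchanged: ~ 6.0
print(a2)    # intercept ~ -3.0: no longer the mean of any group
```

The slope is the same under either coding; only the 0/1 intercept keeps a direct interpretation.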
9,429 | Why is gender typically coded 0/1 rather than 1/2, for example? | It makes it easier to interpret the results. Suppose you had some height data:
Woman A: 165
Woman B: 170
Woman C: 175
Man D: 170
Man E: 180
Man F: 190
and you took a regression of the form Height = a + b * Gender + Residual.
With the 0,1 dummy variable you would get an estimate of a of 170 being the average height o... | Why is gender typically coded 0/1 rather than 1/2, for example? | It makes it easier to interpret the results. Suppose you had some height data:
Woman A: 165
Woman B: 170
Woman C: 175
Man D: 170
Man E: 180
Man F: 190
and you took a regression of the form Height = | Why is gender typically coded 0/1 rather than 1/2, for example?
It makes it easier to interpret the results. Suppose you had some height data:
Woman A: 165
Woman B: 170
Woman C: 175
Man D: 170
Man E: 180
Man F: 190
and you took a regression of the form Height = a + b * Gender + Residual.
With the 0,1 dummy variable ... | Why is gender typically coded 0/1 rather than 1/2, for example?
It makes it easier to interpret the results. Suppose you had some height data:
Woman A: 165
Woman B: 170
Woman C: 175
Man D: 170
Man E: 180
Man F: 190
and you took a regression of the form Height = |
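The arithmetic in this answer can be checked numerically (a Python sketch; women coded 0, men coded 1):

```python
import numpy as np

height = np.array([165.0, 170.0, 175.0, 170.0, 180.0, 190.0])
gender = np.array([0, 0, 0, 1, 1, 1])  # 0 = woman, 1 = man

b, a = np.polyfit(gender, height, 1)   # Height = a + b * Gender
print(a)  # ~170: the average height of the women (the 0-coded group)
print(b)  # ~10: how much taller the men are on average (180 - 170)
```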
9,430 | Why is gender typically coded 0/1 rather than 1/2, for example? | I had a professor suggest that we code "biologically" with women being 0 and men being 1 - to reflect anatomy. I don't think it was the most sensitive, or PC thing to say in a class, but definitely easy to remember when looking at a dataset 5 years later. | Why is gender typically coded 0/1 rather than 1/2, for example? | I had a professor suggest that we code "biologically" with women being 0 and men being 1 - to reflect anatomy. I don't think it was the most sensitive, or PC thing to say in a class, but definitely ea | Why is gender typically coded 0/1 rather than 1/2, for example?
I had a professor suggest that we code "biologically" with women being 0 and men being 1 - to reflect anatomy. I don't think it was the most sensitive, or PC thing to say in a class, but definitely easy to remember when looking at a dataset 5 years later. | Why is gender typically coded 0/1 rather than 1/2, for example?
I had a professor suggest that we code "biologically" with women being 0 and men being 1 - to reflect anatomy. I don't think it was the most sensitive, or PC thing to say in a class, but definitely ea |
9,431 | Why is gender typically coded 0/1 rather than 1/2, for example? | I had assumed that this was because the field type often used to store gender is a bit field, and bit fields in SQL can only have the values 0 or 1. When you dump out the data, it comes out as 0 or 1, and so that's why you get those particular values.
If you wanted to use 1 and 2, you'd have to use a bigger field type,... | Why is gender typically coded 0/1 rather than 1/2, for example? | I had assumed that this was because the field type often used to store gender is a bit field, and bit fields in SQL can only have the values 0 or 1. When you dump out the data, it comes out as 0 or 1, | Why is gender typically coded 0/1 rather than 1/2, for example?
I had assumed that this was because the field type often used to store gender is a bit field, and bit fields in SQL can only have the values 0 or 1. When you dump out the data, it comes out as 0 or 1, and so that's why you get those particular values.
If y... | Why is gender typically coded 0/1 rather than 1/2, for example?
I had assumed that this was because the field type often used to store gender is a bit field, and bit fields in SQL can only have the values 0 or 1. When you dump out the data, it comes out as 0 or 1, |
9,432 | Why is gender typically coded 0/1 rather than 1/2, for example? | Many good reasons posted so far, but it should also be reflexive. Why would you start counting at 1? It makes lots of numerical algorithms far more complicated. Labeling begins at 0, not 1. If you're not yet convinced of this, I have a nice example of why it's important at http://madhadron.com/?p=69
As for why wome... | Why is gender typically coded 0/1 rather than 1/2, for example? | Many good reasons posted so far, but it should also be reflexive. Why would you start counting at 1? It makes lots of numerical algorithms far more complicated. Labeling begins at 0, not 1. If you | Why is gender typically coded 0/1 rather than 1/2, for example?
Many good reasons posted so far, but it should also be reflexive. Why would you start counting at 1? It makes lots of numerical algorithms far more complicated. Labeling begins at 0, not 1. If you're not yet convinced of this, I have a nice example of ... | Why is gender typically coded 0/1 rather than 1/2, for example?
Many good reasons posted so far, but it should also be reflexive. Why would you start counting at 1? It makes lots of numerical algorithms far more complicated. Labeling begins at 0, not 1. If you |
9,433 | Why is gender typically coded 0/1 rather than 1/2, for example? | The ISO/IEC 5218 standard updates this notion with the following map:
0 = not known,
1 = male,
2 = female,
9 = not applicable.
This is particularly useful in languages where 0 coerces to a false value, such as in JavaScript:
if ( !user.gender ) {
promptForGender();
} | Why is gender typically coded 0/1 rather than 1/2, for example? | The ISO/IEC 5218 standard updates this notion with the following map:
0 = not known,
1 = male,
2 = female,
9 = not applicable.
This is particularly useful in languages where 0 coerces to a false valu | Why is gender typically coded 0/1 rather than 1/2, for example?
The ISO/IEC 5218 standard updates this notion with the following map:
0 = not known,
1 = male,
2 = female,
9 = not applicable.
This is particularly useful in languages where 0 coerces to a false value, such as in JavaScript:
if ( !user.gender ) {
prom... | Why is gender typically coded 0/1 rather than 1/2, for example?
The ISO/IEC 5218 standard updates this notion with the following map:
0 = not known,
1 = male,
2 = female,
9 = not applicable.
This is particularly useful in languages where 0 coerces to a false valu |
9,434 | Why is gender typically coded 0/1 rather than 1/2, for example? | The way I see it personally is anatomical: 0 typically represents female, as it is the shape of the womb, and is considered to be feminine... In almost all sciences (e.g. in biology/genetics pedigree charts), circles, or zeros, represent females, whereas more straight-edged shapes (triangles, squares, or 1s) tend to represen...
The way I see it personally is phallically 0 typically represents female, as it is the shape of the womb, and considered to be feminine...in almost all sciences (i.e. in biology/genetics pedigree charts) circles, or zeros represent females. Where as more s... | Why is gender typically coded 0/1 rather than 1/2, for example?
The way I see it personally is phallically 0 typically represents female, as it is the shape of the womb, and considered to be feminine...in almost all sciences (i.e. in biology/genetics pedigree char |
9,435 | In simple linear regression, where does the formula for the variance of the residuals come from? | The intuition about the "plus" signs related to the variance (from the fact that even when we calculate the variance of a difference of independent random variables, we add their variances) is correct but fatally incomplete: if the random variables involved are not independent, then covariances are also involved -and c... | In simple linear regression, where does the formula for the variance of the residuals come from? | The intuition about the "plus" signs related to the variance (from the fact that even when we calculate the variance of a difference of independent random variables, we add their variances) is correct | In simple linear regression, where does the formula for the variance of the residuals come from?
The intuition about the "plus" signs related to the variance (from the fact that even when we calculate the variance of a difference of independent random variables, we add their variances) is correct but fatally incomplete... | In simple linear regression, where does the formula for the variance of the residuals come from?
The intuition about the "plus" signs related to the variance (from the fact that even when we calculate the variance of a difference of independent random variables, we add their variances) is correct |
9,436 | In simple linear regression, where does the formula for the variance of the residuals come from? | I find this hard to believe since the ith residual is the difference between the ith observed value and the ith fitted value; if one were to compute the variance of the difference, at the very least I would expect some "pluses" in the resulting expression
(i) The two things are dependent (positively correlated), and w... | In simple linear regression, where does the formula for the variance of the residuals come from? | I find this hard to believe since the ith residual is the difference between the ith observed value and the ith fitted value; if one were to compute the variance of the difference, at the very least I | In simple linear regression, where does the formula for the variance of the residuals come from?
I find this hard to believe since the ith residual is the difference between the ith observed value and the ith fitted value; if one were to compute the variance of the difference, at the very least I would expect some "plu... | In simple linear regression, where does the formula for the variance of the residuals come from?
I find this hard to believe since the ith residual is the difference between the ith observed value and the ith fitted value; if one were to compute the variance of the difference, at the very least I |
9,437 | In simple linear regression, where does the formula for the variance of the residuals come from? | Here's a hybrid of the two previous solutions. The variance of the $i$th residual, by @Glen_b's answer, is $$\operatorname{Var}(y_i-\hat y_i)=\sigma^2(1-h_{ii})$$ where $h_{ii}$ is the $(i,i)$ entry of the hat matrix $H:=X(X^TX)^{-1}X^T$. This entry can be computed as the multiplication
$$h_{ii}=(X)_{i\bullet}\ (X^TX)^{-1}\ (X^T)_{\bullet i}$$ of the $i$th row of $X$, the matrix $(X^TX)^{-1}$, and the $i$th column of $X^T$. | In simple linear regression, where does the formula for the variance of the residuals come from? | Here's a hybrid of the two previous solutions. The variance of the $i$th residual, by @Glen_b's answer, is $$\operatorname{Var}(y_i-\hat y_i)=\sigma^2(1-h_{ii})$$ where $h_{ii}$ is the $(i,i)$ entry o
Here's a hybrid of the two previous solutions. The variance of the $i$th residual, by @Glen_b's answer, is $$\operatorname{Var}(y_i-\hat y_i)=\sigma^2(1-h_{ii})$$ where $h_{ii}$ is the $(i,i)$ entry of the hat matrix $H:=X(... | In simple linear regression, where does the formula for the variance of the residuals come from?
Here's a hybrid of the two previous solutions. The variance of the $i$th residual, by @Glen_b's answer, is $$\operatorname{Var}(y_i-\hat y_i)=\sigma^2(1-h_{ii})$$ where $h_{ii}$ is the $(i,i)$ entry o |
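A numeric sketch (Python; arbitrary made-up $x$ values) that checks the leverages $h_{ii}$ both ways for simple regression:

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 7.0])
X = np.column_stack([np.ones_like(x), x])   # design matrix with intercept
H = X @ np.linalg.inv(X.T @ X) @ X.T        # hat matrix
h = np.diag(H)

# Closed form for simple linear regression:
h_closed = 1.0 / len(x) + (x - x.mean()) ** 2 / ((x - x.mean()) ** 2).sum()
print(np.allclose(h, h_closed))  # True

# Hence Var(e_i) = sigma^2 * (1 - h_ii): residual variances differ across i,
# and high-leverage points (x_i far from x-bar) have smaller residual variance.
print(1.0 - h)
```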
9,438 | What is the difference between a loss function and decision function? | A decision function is a function which takes a dataset as input and gives a decision as output. What the decision can be depends on the problem at hand. Examples include:
Estimation problems: the "decision" is the estimate.
Hypothesis testing problems: the decision is to reject or not reject the null hypothesis.
Clas... | What is the difference between a loss function and decision function? | A decision function is a function which takes a dataset as input and gives a decision as output. What the decision can be depends on the problem at hand. Examples include:
Estimation problems: the "d | What is the difference between a loss function and decision function?
A decision function is a function which takes a dataset as input and gives a decision as output. What the decision can be depends on the problem at hand. Examples include:
Estimation problems: the "decision" is the estimate.
Hypothesis testing probl... | What is the difference between a loss function and decision function?
A decision function is a function which takes a dataset as input and gives a decision as output. What the decision can be depends on the problem at hand. Examples include:
Estimation problems: the "d |
9,439 | What is the difference between a loss function and decision function? | The loss function is what is minimized to obtain a model which is optimal in some sense. The model itself has a decision function which is used to predict.
For example, in SVM classifiers:
loss function: minimizes error and squared norm of the separating hyperplane $\mathcal{L}(\mathbf{w}, \xi) =\frac{1}{2}\|\mathbf{w... | What is the difference between a loss function and decision function? | The loss function is what is minimized to obtain a model which is optimal in some sense. The model itself has a decision function which is used to predict.
For example, in SVM classifiers:
loss funct | What is the difference between a loss function and decision function?
The loss function is what is minimized to obtain a model which is optimal in some sense. The model itself has a decision function which is used to predict.
For example, in SVM classifiers:
loss function: minimizes error and squared norm of the separ... | What is the difference between a loss function and decision function?
The loss function is what is minimized to obtain a model which is optimal in some sense. The model itself has a decision function which is used to predict.
For example, in SVM classifiers:
loss funct |
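A bare-bones numeric sketch of the two roles (Python; the weights are hypothetical, not a trained SVM):

```python
import numpy as np

w, b = np.array([2.0, -1.0]), 0.5   # hypothetical, fixed parameters

def decision(x):
    # Decision function: what the fitted model uses at prediction time.
    return np.sign(w @ x + b)

def hinge_loss(x, y):
    # Per-example term of the loss minimized at training time
    # (together with the squared-norm penalty on w).
    return max(0.0, 1.0 - y * (w @ x + b))

x = np.array([1.0, 1.0])            # here w @ x + b = 1.5
print(decision(x))                  # 1.0: predicted class
print(hinge_loss(x, +1))            # 0.0: correct side with margin >= 1
print(hinge_loss(x, -1))            # 2.5: penalized
```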
9,440 | Sample size calculation for mixed models | The longpower package implements the sample size calculations in Liu and Liang (1997) and Diggle et al (2002). The documentation has example code. Here's one, using the lmmpower() function:
> require(longpower)
> require(lme4)
> fm1 <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy)
> lmmpower(fm1, pct.change = 0.3... | Sample size calculation for mixed models | The longpower package implements the sample size calculations in Liu and Liang (1997) and Diggle et al (2002). The documentation has example code. Here's one, using the lmmpower() function:
> require( | Sample size calculation for mixed models
The longpower package implements the sample size calculations in Liu and Liang (1997) and Diggle et al (2002). The documentation has example code. Here's one, using the lmmpower() function:
> require(longpower)
> require(lme4)
> fm1 <- lmer(Reaction ~ Days + (Days|Subject), slee... | Sample size calculation for mixed models
The longpower package implements the sample size calculations in Liu and Liang (1997) and Diggle et al (2002). The documentation has example code. Here's one, using the lmmpower() function:
> require( |
9,441 | Sample size calculation for mixed models | For anything beyond the simple 2 sample tests I prefer to use simulation for sample size or power studies. With prepackaged routines you can sometimes see large differences between the results from the programs based on the assumptions that they are making (and you may not be able to find out what those assumptions ar... | Sample size calculation for mixed models | For anything beyond the simple 2 sample tests I prefer to use simulation for sample size or power studies. With prepackaged routines you can sometimes see large differences between the results from t | Sample size calculation for mixed models
For anything beyond the simple 2 sample tests I prefer to use simulation for sample size or power studies. With prepackaged routines you can sometimes see large differences between the results from the programs based on the assumptions that they are making (and you may not be a... | Sample size calculation for mixed models
For anything beyond the simple 2 sample tests I prefer to use simulation for sample size or power studies. With prepackaged routines you can sometimes see large differences between the results from t |
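The simulation approach can be sketched in a few lines (Python here rather than R; a two-sample z-test with known variance and purely illustrative numbers):

```python
import numpy as np

def power(n, effect=0.5, sims=20_000, z_crit=1.96):
    # Simulate many experiments with n per group and count how often
    # the two-sample z-test (sigma = 1 known) rejects at the 5% level.
    rng = np.random.default_rng(0)
    a = rng.normal(0.0, 1.0, size=(sims, n))
    b = rng.normal(effect, 1.0, size=(sims, n))
    z = (b.mean(axis=1) - a.mean(axis=1)) / np.sqrt(2.0 / n)
    return (np.abs(z) > z_crit).mean()

for n in (20, 64, 100):
    print(n, power(n))   # power grows with n; roughly 0.80 near n = 64
```

Swapping in a mixed-model fit and its test statistic gives the same recipe, with all assumptions explicit in the data-generating code.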
9,442 | Sample size calculation for mixed models | The simr package uses simulation to estimate power fairly flexibly in linear and generalised linear mixed models. | Sample size calculation for mixed models | The simr package uses simulation to estimate power fairly flexibly in linear and generalised linear mixed models. | Sample size calculation for mixed models
The simr package uses simulation to estimate power fairly flexibly in linear and generalised linear mixed models. | Sample size calculation for mixed models
The simr package uses simulation to estimate power fairly flexibly in linear and generalised linear mixed models. |
9,443 | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as the number of flips increases? | Note that the case where the number of heads and the number of tails are equal is the same as "exactly half the time you get heads". So let's stick to counting the number of heads to see if it's half the number of tosses or equivalently comparing the proportion of heads with 0.5.
The more you flip, the larger the numbe... | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as t | Note that the case where the number of heads and the number of tails are equal is the same as "exactly half the time you get heads". So let's stick to counting the number of heads to see if it's half | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as the number of flips increases?
Note that the case where the number of heads and the number of tails are equal is the same as "exactly half the time you get heads". So let's stick to counting the number of heads to see if ... | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as t
Note that the case where the number of heads and the number of tails are equal is the same as "exactly half the time you get heads". So let's stick to counting the number of heads to see if it's half |
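A quick simulation (Python sketch) of both facts at once: the proportion of heads homes in on 1/2, while the raw head/tail imbalance typically grows:

```python
import numpy as np

rng = np.random.default_rng(1)
for n in (100, 10_000, 1_000_000):
    heads = rng.integers(0, 2, size=n).sum()
    # proportion of heads -> 0.5, but |#heads - #tails| tends to grow like sqrt(n)
    print(n, heads / n, abs(2 * heads - n))
```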
9,444 | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as the number of flips increases? | Well we know that the Law of Large Numbers is what is guaranteeing the first conclusion of your experiment, namely, that if you flip a fair coin $n$ times, the ratio of heads to tails converges towards 1 as $n$ increases.
So no problems there. However, that's about all the Law of Large Numbers tells us in this scenar... | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as t | Well we know that the Law of Large Numbers is what is guaranteeing the first conclusion of your experiment, namely, that if you flip a fair coin $n$ times, the ratio of heads to tails converges towar
Well we know that the Law of Large Numbers is what is guaranteeing the first conclusion of your experiment, namely, that if you flip a fair coin $n$ times, the ratio of heads to tails conve... | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as t
Well we know that the Law of Large Numbers is what is guaranteeing the first conclusion of your experiment, namely, that if you flip a fair coin $n$ times, the ratio of heads to tails converges towar
9,445 | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as the number of flips increases? | See Pascal's Triangle.
The likelihood of coin flip outcomes is represented by the numbers along the bottom row. The outcome of equal heads and tails is the middle number. As the tree grows larger (i.e., more flips), the middle number becomes a smaller proportion of the sum of the bottom row. | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as t | See Pascal's Triangle.
The likelihood of coin flip outcomes is represented by the numbers along the bottom row. The outcome of equal heads and tails is the middle number. As the tree grows larger (i.e | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as the number of flips increases?
See Pascal's Triangle.
The likelihood of coin flip outcomes is represented by the numbers along the bottom row. The outcome of equal heads and tails is the middle number. As the tree grows l... | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as t
See Pascal's Triangle.
The likelihood of coin flip outcomes is represented by the numbers along the bottom row. The outcome of equal heads and tails is the middle number. As the tree grows larger (i.e |
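The shrinking middle entry is easy to compute exactly (Python sketch):

```python
from math import comb

# P(exactly n/2 heads in n fair flips) = C(n, n/2) / 2^n:
# the middle entry of Pascal's row n divided by the row total.
for n in (2, 10, 100, 1000):
    print(n, comb(n, n // 2) / 2 ** n)
# The value decays like sqrt(2 / (pi * n)): still the single most likely
# outcome, yet ever less likely in absolute terms.
```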
9,446 | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as the number of flips increases? | Maybe it helps to outline that this is related to the arcsine law.
It says that, for a single path of outcomes, the probability that the path spends most of its time in the positive (or the negative) domain is much higher than intuition suggests; paths oscillate around zero far less often than you would expect.
Here are some links:
http://www.math.unl.edu/~sdu... | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as t | Maybe it helps to outline that this is related to the arcsine law.
It says that for one path of outcomes the probability that the path stays for most the time in the positive or negative domain is muc | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as the number of flips increases?
Maybe it helps to outline that this is related to the arcsine law.
It says that for one path of outcomes the probability that the path stays for most the time in the positive or negative dom... | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as t
Maybe it helps to outline that this is related to the arcsine law.
It says that for one path of outcomes the probability that the path stays for most the time in the positive or negative domain is muc |
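A short simulation (Python sketch) of the arcsine-law effect: the fraction of time a simple random walk spends positive piles up near 0 and 1, not near 1/2:

```python
import numpy as np

rng = np.random.default_rng(2)
steps = rng.choice([-1, 1], size=(5_000, 1_000))  # 5000 walks of 1000 steps
walks = steps.cumsum(axis=1)
frac_positive = (walks > 0).mean(axis=1)

extreme = ((frac_positive < 0.1) | (frac_positive > 0.9)).mean()
middle = ((frac_positive > 0.4) & (frac_positive < 0.6)).mean()
print(extreme, middle)  # extreme fractions are several times more common
```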
9,447 | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as the number of flips increases? | While the ratio of heads to tails converges to 1, the range of possible numbers becomes wider. (I'm making the numbers up). Say for 100 throws the probability is 90% that you have between 45% and 55% heads. That's 90% that you get 45 to 55 heads. 11 possibilities for the number of heads. About 9% roughly that you get e... | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as t | While the ratio of heads to tails converges to 1, the range of possible numbers becomes wider. (I'm making the numbers up). Say for 100 throws the probability is 90% that you have between 45% and 55% | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as the number of flips increases?
While the ratio of heads to tails converges to 1, the range of possible numbers becomes wider. (I'm making the numbers up). Say for 100 throws the probability is 90% that you have between 45... | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as t
While the ratio of heads to tails converges to 1, the range of possible numbers becomes wider. (I'm making the numbers up). Say for 100 throws the probability is 90% that you have between 45% and 55% |
9,448 | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as the number of flips increases? | Well, one thing to note is that with an even number of flips (otherwise the probability of equal heads and tails flips is of course exactly zero), the most probable outcome will always be the one with exactly as many heads flips as tails flips.
The distribution of $n$ flips is given by the coefficients of the polynomia... | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as t | Well, one thing to note is that with an even number of flips (otherwise the probability of equal heads and tails flips is of course exactly zero), the most probable outcome will always be the one with | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as the number of flips increases?
Well, one thing to note is that with an even number of flips (otherwise the probability of equal heads and tails flips is of course exactly zero), the most probable outcome will always be th... | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as t
Well, one thing to note is that with an even number of flips (otherwise the probability of equal heads and tails flips is of course exactly zero), the most probable outcome will always be the one with |
9,449 | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as the number of flips increases? | Suppose you flip a coin twice. There are four possible outcomes: HH, HT, TH, and TT. In two of these, you have an equal number of heads and tails, so there's a 50% chance that you get the same number of heads and tails.
Now suppose you flip a coin 4,306,492,102 times. Do you expect a 50 percent chance that you'll wi... | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as t | Suppose you flip a coin twice. There are four possible outcomes: HH, HT, TH, and TT. In two of these, you have an equal number of heads and tails, so there's a 50% chance that you get the same numbe | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as the number of flips increases?
Suppose you flip a coin twice. There are four possible outcomes: HH, HT, TH, and TT. In two of these, you have an equal number of heads and tails, so there's a 50% chance that you get the ... | Statistics concept to explain why you're less likely to flip the same number of heads as tails, as t
Suppose you flip a coin twice. There are four possible outcomes: HH, HT, TH, and TT. In two of these, you have an equal number of heads and tails, so there's a 50% chance that you get the same numbe |
9,450 | What does "all else equal" mean in multiple regression? | You are right. Technically, it is any value. However, when I teach this I usually tell people that you are getting the effect of a one unit change in $X_j$ when all other variables are held at their respective means. I believe this is a common way to explain it that is not specific to me.
I usually go on to mentio... | What does "all else equal" mean in multiple regression? | You are right. Technically, it is any value. However, when I teach this I usually tell people that you are getting the effect of a one unit change in $X_j$ when all other variables are held at their | What does "all else equal" mean in multiple regression?
You are right. Technically, it is any value. However, when I teach this I usually tell people that you are getting the effect of a one unit change in $X_j$ when all other variables are held at their respective means. I believe this is a common way to explain it... | What does "all else equal" mean in multiple regression?
You are right. Technically, it is any value. However, when I teach this I usually tell people that you are getting the effect of a one unit change in $X_j$ when all other variables are held at their |
9,451 | What does "all else equal" mean in multiple regression? | The math is simple, just take the difference between 2 models with one of the x variables changed by 1 and you will see that it does not matter what the other variables are (given there are no interactions, polynomial, or other complicating terms).
One example:
$y_{[1]} = b_0 + b_1 \times x_1 + b_2 \times x_2$
$y_{[2]} = b_0 + b_1 \times (x_1 + 1) + b_2 \times x_2$, so $y_{[2]} - y_{[1]} = b_1$, whatever values the other variables are held at. | What does "all else equal" mean in multiple regression? | The math is simple, just take the difference between 2 models with one of the x variables changed by 1 and you will see that it does not matter what the other variables are (given there are no interac
The math is simple, just take the difference between 2 models with one of the x variables changed by 1 and you will see that it does not matter what the other variables are (given there are no interactions, polynomial, or other complicating terms).
One example:
$y... | What does "all else equal" mean in multiple regression?
The math is simple, just take the difference between 2 models with one of the x variables changed by 1 and you will see that it does not matter what the other variables are (given there are no interac |
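That difference is easy to verify numerically. A tiny Python sketch (the coefficient values are hypothetical, chosen just for illustration) shows that a one-unit change in $x_1$ shifts $y$ by exactly $b_1$, no matter what value $x_2$ is held at:

```python
# Hypothetical coefficients for y = b0 + b1*x1 + b2*x2.
b0, b1, b2 = 1.0, 2.0, -0.5

def y(x1, x2):
    return b0 + b1 * x1 + b2 * x2

# Raise x1 by one unit while holding x2 fixed at *any* value:
for x2 in (0.0, 10.0, -3.7):
    print(x2, y(6.0, x2) - y(5.0, x2))  # the b2*x2 terms cancel, leaving b1
```

The cancellation is the whole point: the difference contains no $x_2$ term, so "all else equal" holds at any fixed value of the other covariates.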
9,452 | What does "all else equal" mean in multiple regression? | I believe you are referring to dependence in covariates ($X_i$). So if the model is $$Y=\beta_{0}+\beta_{1}X_1+\beta_{2}X_2$$
the effect of $X_i$ on $Y$ all other things being equal would be $\frac{\Delta{Y}}{\Delta{X_i}}$ for any $\Delta{X_i}$ with all other $X_j$ held constant at any value.
Keep in mind that is pos... | What does "all else equal" mean in multiple regression? | I believe you are referring to dependence in covariates ($X_i$). So if the model is $$Y=\beta_{0}+\beta_{1}X_1+\beta_{2}X_2$$
the effect of $X_i$ on $Y$ all other things being equal would be $\frac{\ | What does "all else equal" mean in multiple regression?
I believe you are referring to dependence in covariates ($X_i$). So if the model is $$Y=\beta_{0}+\beta_{1}X_1+\beta_{2}X_2$$
the effect of $X_i$ on $Y$ all other things being equal would be $\frac{\Delta{Y}}{\Delta{X_i}}$ for any $\Delta{X_i}$ with all other $X_... | What does "all else equal" mean in multiple regression?
I believe you are referring to dependence in covariates ($X_i$). So if the model is $$Y=\beta_{0}+\beta_{1}X_1+\beta_{2}X_2$$
the effect of $X_i$ on $Y$ all other things being equal would be $\frac{\ |
9,453 | What should I check for normality: raw data or residuals? | Why must you test for normality?
The standard assumption in linear regression is that the theoretical residuals are independent and normally distributed. The observed residuals are an estimate of the theoretical residuals, but are not independent (there are transforms on the residuals that remove some of the dependenc... | What should I check for normality: raw data or residuals? | Why must you test for normality?
The standard assumption in linear regression is that the theoretical residuals are independent and normally distributed. The observed residuals are an estimate of the | What should I check for normality: raw data or residuals?
Why must you test for normality?
The standard assumption in linear regression is that the theoretical residuals are independent and normally distributed. The observed residuals are an estimate of the theoretical residuals, but are not independent (there are tra... | What should I check for normality: raw data or residuals?
Why must you test for normality?
The standard assumption in linear regression is that the theoretical residuals are independent and normally distributed. The observed residuals are an estimate of the |
9,454 | What should I check for normality: raw data or residuals? | First you can "eyeball it" using a QQ-plot to get a general sense; here is how to generate one in R.
According to the R manual you can feed your data vector directly into the shapiro.test() function.
If you would like to calculate the residuals yourself, then yes, each residual is calculated that way over your set of observati... | What should I check for normality: raw data or residuals? | First you can "eyeball it" using a QQ-plot to get a general sense; here is how to generate one in R.
According to the R manual you can feed your data vector directly into the shapiro.test() function.
I | What should I check for normality: raw data or residuals?
First you can "eyeball it" using a QQ-plot to get a general sense; here is how to generate one in R.
According to the R manual you can feed your data vector directly into the shapiro.test() function.
If you would like to calculate the residuals yourself, then yes, each ... | What should I check for normality: raw data or residuals?
First you can "eyeball it" using a QQ-plot to get a general sense; here is how to generate one in R.
According to the R manual you can feed your data vector directly into the shapiro.test() function.
I |
9,455 | What should I check for normality: raw data or residuals? | The Gaussian assumptions refer to the residuals from the model. There are no assumptions necessary about the original data. As a case in point the distribution of daily beer sales
After a reasonable model captured the day-of-the-week, holiday/events effects, level shifts/time trends we get | What should I check for normality: raw data or residuals? | The Gaussian assumptions refer to the residuals from the model. There are no assumptions necessary about the original data. As a case in point the distribution of daily beer sales
After a reasonabl | What should I check for normality: raw data or residuals?
The Gaussian assumptions refer to the residuals from the model. There are no assumptions necessary about the original data. As a case in point the distribution of daily beer sales
After a reasonable model captured the day-of-the-week, holiday/events effects, level shifts/time trends we get | What should I check for normality: raw data or residuals?
The Gaussian assumptions refer to the residuals from the model. There are no assumptions necessary about the original data. As a case in point the distribution of daily beer sales
After a reasonabl
9,456 | Do underpowered studies have increased likelihood of false positives? | You are correct in that sample size affects power (i.e. 1 - type II error), but not type I error. It's a common misunderstanding that a p-value as such (correctly interpreted) is less reliable or valid when the sample size is small - the very entertaining article by Friston 2012 has a funny take on that [1].
That being... | Do underpowered studies have increased likelihood of false positives? | You are correct in that sample size affects power (i.e. 1 - type II error), but not type I error. It's a common misunderstanding that a p-value as such (correctly interpreted) is less reliable or vali | Do underpowered studies have increased likelihood of false positives?
You are correct in that sample size affects power (i.e. 1 - type II error), but not type I error. It's a common misunderstanding that a p-value as such (correctly interpreted) is less reliable or valid when the sample size is small - the very enterta... | Do underpowered studies have increased likelihood of false positives?
You are correct in that sample size affects power (i.e. 1 - type II error), but not type I error. It's a common misunderstanding that a p-value as such (correctly interpreted) is less reliable or vali |
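The point that sample size leaves the Type I error rate untouched can be checked by simulation. A minimal sketch, assuming a z-test with known sigma (purely for illustration; the sizes and trial counts are arbitrary):

```python
import random
from math import sqrt
from statistics import NormalDist

random.seed(2)
z_crit = NormalDist().inv_cdf(0.975)  # two-sided test at the 5% level

def false_positive_rate(n, trials=2000):
    """Fraction of rejections when H0 (mu = 0, sigma = 1 known) is true."""
    hits = 0
    for _ in range(trials):
        xbar = sum(random.gauss(0, 1) for _ in range(n)) / n
        if abs(xbar) * sqrt(n) > z_crit:
            hits += 1
    return hits / trials

# Whether the study is tiny or large, about 5% of true nulls are rejected.
print(false_positive_rate(10), false_positive_rate(1000))
```

Both rates hover around the nominal 5%: power changes with n, but the Type I error rate is pinned by the chosen alpha.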
9,457 | Do underpowered studies have increased likelihood of false positives? | Depending on how you look at it, low power can increase false positive rates in given scenarios.
Consider the following: a researcher tests a treatment. If the test comes back as insignificant, they abandon it and move onto the next treatment. If the test comes back significant, they publish it. Let's also consider th... | Do underpowered studies have increased likelihood of false positives? | Depending on how you look at it, low power can increase false positive rates in given scenarios.
Consider the following: a researcher tests a treatment. If the test comes back as insignificant, they | Do underpowered studies have increased likelihood of false positives?
Depending on how you look at it, low power can increase false positive rates in given scenarios.
Consider the following: a researcher tests a treatment. If the test comes back as insignificant, they abandon it and move onto the next treatment. If th... | Do underpowered studies have increased likelihood of false positives?
Depending on how you look at it, low power can increase false positive rates in given scenarios.
Consider the following: a researcher tests a treatment. If the test comes back as insignificant, they |
9,458 | Do underpowered studies have increased likelihood of false positives? | Low power can't affect the Type-1 error rate, but it could affect the proportion of published results that are type-1 errors.
The reason is that low power reduces the chances of a correct rejection of H0 (i.e., it increases the Type-2 error rate) but not the chances of a false rejection of H0 (Type-1 error).
Assume for a second that there are t... | Do underpowered studies have increased likelihood of false positives? | Low power can't affect the Type-1 error rate, but it could affect the proportion of published results that are type-1 errors.
The reason is that low power reduces the chances of a correct rejection o | Do underpowered studies have increased likelihood of false positives?
Low power can't affect the Type-1 error rate, but it could affect the proportion of published results that are type-1 errors.
The reason is that low power reduces the chances of a correct rejection of H0 (i.e., it increases the Type-2 error rate) but not the chances of a false... | Do underpowered studies have increased likelihood of false positives?
Low power can't affect the Type-1 error rate, but it could affect the proportion of published results that are type-1 errors.
The reason is that low power reduces the chances of a correct rejection o
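The arithmetic behind this can be sketched directly. Assume, hypothetically, that 10% of tested hypotheses are truly non-null and all tests run at alpha = 0.05 (these numbers are illustration values, not from the answer); the share of significant results that are false positives then depends strongly on power:

```python
def false_discovery_fraction(power, alpha=0.05, prior_true=0.1):
    """Among significant results, the fraction that are Type-1 errors."""
    true_positives = power * prior_true
    false_positives = alpha * (1 - prior_true)
    return false_positives / (true_positives + false_positives)

# Same alpha in both cases; only the power differs.
print(round(false_discovery_fraction(0.8), 2))  # well-powered studies
print(round(false_discovery_fraction(0.2), 2))  # underpowered studies
```

With 80% power roughly a third of "discoveries" are false under these assumptions; at 20% power it is closer to two thirds, even though the Type-1 error rate itself never changed.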
9,459 | Do underpowered studies have increased likelihood of false positives? | In addition to the others answer, a study is usually underpowered when the sample size is small. There are many tests that are only asymptotically valid, and too optimistic or conservative for small n.
Other tests are only valid for small sample sizes if certain conditions are met, but become more robust with a large s... | Do underpowered studies have increased likelihood of false positives? | In addition to the others answer, a study is usually underpowered when the sample size is small. There are many tests that are only asymptotically valid, and too optimistic or conservative for small n | Do underpowered studies have increased likelihood of false positives?
In addition to the others answer, a study is usually underpowered when the sample size is small. There are many tests that are only asymptotically valid, and too optimistic or conservative for small n.
Other tests are only valid for small sample size... | Do underpowered studies have increased likelihood of false positives?
In addition to the others answer, a study is usually underpowered when the sample size is small. There are many tests that are only asymptotically valid, and too optimistic or conservative for small n |
9,460 | What aspects of the "Iris" data set make it so successful as an example/teaching/test data set | The Iris dataset is deservedly widely used throughout statistical science, especially for illustrating various problems in statistical graphics, multivariate statistics and machine learning.
Containing 150 observations, it is small but not trivial.
The task it poses of discriminating between three species of Iris fro... | What aspects of the "Iris" data set make it so successful as an example/teaching/test data set | The Iris dataset is deservedly widely used throughout statistical science, especially for illustrating various problems in statistical graphics, multivariate statistics and machine learning.
Containi | What aspects of the "Iris" data set make it so successful as an example/teaching/test data set
The Iris dataset is deservedly widely used throughout statistical science, especially for illustrating various problems in statistical graphics, multivariate statistics and machine learning.
Containing 150 observations, it i... | What aspects of the "Iris" data set make it so successful as an example/teaching/test data set
The Iris dataset is deservedly widely used throughout statistical science, especially for illustrating various problems in statistical graphics, multivariate statistics and machine learning.
Containi |
9,461 | What aspects of the "Iris" data set make it so successful as an example/teaching/test data set | The dataset is big and interesting enough to be non-trivial, but small enough to "fit in your pocket", and not slow down experimentation with it.
I think a key aspect is that it also teaches about over-fitting. There are not enough columns to give a perfect score: we see this immediately when we look at the scatterplot... | What aspects of the "Iris" data set make it so successful as an example/teaching/test data set | The dataset is big and interesting enough to be non-trivial, but small enough to "fit in your pocket", and not slow down experimentation with it.
I think a key aspect is that it also teaches about ove | What aspects of the "Iris" data set make it so successful as an example/teaching/test data set
The dataset is big and interesting enough to be non-trivial, but small enough to "fit in your pocket", and not slow down experimentation with it.
I think a key aspect is that it also teaches about over-fitting. There are not ... | What aspects of the "Iris" data set make it so successful as an example/teaching/test data set
The dataset is big and interesting enough to be non-trivial, but small enough to "fit in your pocket", and not slow down experimentation with it.
I think a key aspect is that it also teaches about ove |
9,462 | What impact does increasing the training data have on the overall system accuracy? | In most situations, more data is usually better. Overfitting is essentially learning spurious correlations that occur in your training data, but not the real world. For example, if you considered only my colleagues, you might learn to associate "named Matt" with "has a beard." It's 100% valid ($n=4$, even!) when consid... | What impact does increasing the training data have on the overall system accuracy? | In most situations, more data is usually better. Overfitting is essentially learning spurious correlations that occur in your training data, but not the real world. For example, if you considered only | What impact does increasing the training data have on the overall system accuracy?
In most situations, more data is usually better. Overfitting is essentially learning spurious correlations that occur in your training data, but not the real world. For example, if you considered only my colleagues, you might learn to as... | What impact does increasing the training data have on the overall system accuracy?
In most situations, more data is usually better. Overfitting is essentially learning spurious correlations that occur in your training data, but not the real world. For example, if you considered only |
9,463 | What impact does increasing the training data have on the overall system accuracy? | One note: by adding more data (rows or examples, not columns or features) your chances of overfitting decrease rather than increase.
The two paragraph summary goes like this:
Adding more examples adds diversity. It decreases the generalization error because your model becomes more general by virtue of being trained ... | What impact does increasing the training data have on the overall system accuracy? | One note: by adding more data (rows or examples, not columns or features) your chances of overfitting decrease rather than increase.
The two paragraph summary goes like this:
Adding more examples, ad | What impact does increasing the training data have on the overall system accuracy?
One note: by adding more data (rows or examples, not columns or features) your chances of overfitting decrease rather than increase.
The two paragraph summary goes like this:
Adding more examples adds diversity. It decreases the gener... | What impact does increasing the training data have on the overall system accuracy?
One note: by adding more data (rows or examples, not columns or features) your chances of overfitting decrease rather than increase.
The two paragraph summary goes like this:
Adding more examples, ad |
9,464 | What impact does increasing the training data have on the overall system accuracy? | Increasing the training data always adds information and should improve the fit. The difficulty comes if you then evaluate the performance of the classifier only on the training data that was used for the fit. This produces optimistically biased assessments and is the reason why leave-one-out cross validation or boot... | What impact does increasing the training data have on the overall system accuracy? | Increasing the training data always adds information and should improve the fit. The difficulty comes if you then evaluate the performance of the classifier only on the training data that was used fo | What impact does increasing the training data have on the overall system accuracy?
Increasing the training data always adds information and should improve the fit. The difficulty comes if you then evaluate the performance of the classifier only on the training data that was used for the fit. This produces optimistica... | What impact does increasing the training data have on the overall system accuracy?
Increasing the training data always adds information and should improve the fit. The difficulty comes if you then evaluate the performance of the classifier only on the training data that was used fo |
9,465 | What impact does increasing the training data have on the overall system accuracy? | Ideally, once you have more training examples you’ll have lower test-error (the variance of the model decreases, meaning we are overfitting less), but theoretically, more data doesn’t always mean you will have a more accurate model since high bias models will not benefit from more training examples.
See here: In Machine Lear... | What impact does increasing the training data have on the overall system accuracy? | Ideally, once you have more training examples you’ll have lower test-error (the variance of the model decreases, meaning we are overfitting less), but theoretically, more data doesn’t always mean you will
Ideally, once you have more training examples you’ll have lower test-error (the variance of the model decreases, meaning we are overfitting less), but theoretically, more data doesn’t always mean you will have a more accurate model since high b... | What impact does increasing the training data have on the overall system accuracy?
Ideally, once you have more training examples you’ll have lower test-error (the variance of the model decreases, meaning we are overfitting less), but theoretically, more data doesn’t always mean you will
9,466 | What impact does increasing the training data have on the overall system accuracy? | I agree with @Serendipity:
The performance of neural networks can continually improve as more and more data is provided to the model, BUT the capacity of the model must be adjusted to support the increases in data.
Let's say that you have a very small object detection model (7.2M parameters), it won't be able to learn ... | What impact does increasing the training data have on the overall system accuracy? | I agree with @Serendipity:
The performance of neural networks can continually improve as more and more data is provided to the model, BUT the capacity of the model must be adjusted to support the incr | What impact does increasing the training data have on the overall system accuracy?
I agree with @Serendipity:
The performance of neural networks can continually improve as more and more data is provided to the model, BUT the capacity of the model must be adjusted to support the increases in data.
Let's say that you hav... | What impact does increasing the training data have on the overall system accuracy?
I agree with @Serendipity:
The performance of neural networks can continually improve as more and more data is provided to the model, BUT the capacity of the model must be adjusted to support the incr |
9,467 | What exactly does the term "inverse probability" mean? | "Inverse probability" is a rather old-fashioned way of referring to Bayesian inference; when it's used nowadays it's usually as a nod to history. De Morgan (1838), An Essay on Probabilities, Ch. 3 "On Inverse Probabilities", explains it nicely:
In the preceding chapter, we have calculated the chances of an event,
know... | What exactly does the term "inverse probability" mean? | "Inverse probability" is a rather old-fashioned way of referring to Bayesian inference; when it's used nowadays it's usually as a nod to history. De Morgan (1838), An Essay on Probabilities, Ch. 3 "On | What exactly does the term "inverse probability" mean?
"Inverse probability" is a rather old-fashioned way of referring to Bayesian inference; when it's used nowadays it's usually as a nod to history. De Morgan (1838), An Essay on Probabilities, Ch. 3 "On Inverse Probabilities", explains it nicely:
In the preceding ch... | What exactly does the term "inverse probability" mean?
"Inverse probability" is a rather old-fashioned way of referring to Bayesian inference; when it's used nowadays it's usually as a nod to history. De Morgan (1838), An Essay on Probabilities, Ch. 3 "On |
9,468 | What exactly does the term "inverse probability" mean? | Probability of 'observations' given the 'model'
Typically 'probability' is expressed as the probability of an outcome given a particular experiment/model/setup.
So the probability is about the frequencies of observations given the model. These types of questions are often not so difficult. For instance, in gambling, we... | What exactly does the term "inverse probability" mean? | Probability of 'observations' given the 'model'
Typically 'probability' is expressed as the probability of an outcome given a particular experiment/model/setup.
So the probability is about the frequen | What exactly does the term "inverse probability" mean?
Probability of 'observations' given the 'model'
Typically 'probability' is expressed as the probability of an outcome given a particular experiment/model/setup.
So the probability is about the frequencies of observations given the model. These types of questions ar... | What exactly does the term "inverse probability" mean?
Probability of 'observations' given the 'model'
Typically 'probability' is expressed as the probability of an outcome given a particular experiment/model/setup.
So the probability is about the frequen |
9,469 | What exactly does the term "inverse probability" mean? | Yes, I believe your thinking is a way to view things in that it points out that the prior is the key ingredient to convert conditional probabilities.
My reading is that it is an interpretation of Bayes' theorem, which, as we know, says
$$
P(B|A)=\frac{P(A|B)P(B)}{P(A)}.
$$
Hence, Bayes' theorem provides the result to c... | What exactly does the term "inverse probability" mean? | Yes, I believe your thinking is a way to view things in that it points out that the prior is the key ingredient to convert conditional probabilities.
My reading is that it is an interpretation of Baye | What exactly does the term "inverse probability" mean?
Yes, I believe your thinking is a way to view things in that it points out that the prior is the key ingredient to convert conditional probabilities.
My reading is that it is an interpretation of Bayes' theorem, which, as we know, says
$$
P(B|A)=\frac{P(A|B)P(B)}{P... | What exactly does the term "inverse probability" mean?
Yes, I believe your thinking is a way to view things in that it points out that the prior is the key ingredient to convert conditional probabilities.
My reading is that it is an interpretation of Baye |
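As a concrete instance of "inverting" a conditional probability with Bayes' theorem, here is a sketch using hypothetical screening-test numbers (99% sensitivity, 1% prevalence, 5% false-positive rate; none of these figures come from the answers above):

```python
def inverse_probability(p_a_given_b, p_b, p_a):
    """Bayes' theorem: P(B|A) = P(A|B) * P(B) / P(A)."""
    return p_a_given_b * p_b / p_a

p_pos_given_disease = 0.99               # sensitivity, P(A|B)
p_disease = 0.01                         # prior prevalence, P(B)
p_pos = 0.99 * 0.01 + 0.05 * 0.99        # total probability of a positive test, P(A)

# The "inverse" question: given a positive test, how likely is the disease?
print(round(inverse_probability(p_pos_given_disease, p_disease, p_pos), 3))
```

The "forward" probability (positive test given disease) is 0.99, but the inverse probability (disease given positive test) comes out to only about one in six under these assumptions, because the prior does the heavy lifting.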
9,470 | What exactly does the term "inverse probability" mean? | There are plenty of great answers already, so I'll add a slightly tangential example that I found intriguing. Hopefully it's not too far off from the topic.
Markov chain Monte Carlo methods are often used for Bayesian posterior inference. In typical encounters of Markov chains in probability theory, we ask questions lik... | What exactly does the term "inverse probability" mean? | There are plenty of great answers already, so I'll add a slightly tangential example that I found intriguing. Hopefully it's not too far off from the topic.
Markov chain Monte Carlo methods are often u | What exactly does the term "inverse probability" mean?
There are plenty of great answers already, so I'll add a slightly tangential example that I found intriguing. Hopefully it's not too far off from the topic.
Markov chain Monte Carlo methods are often used for Bayesian posterior inference. In typical encounters of Ma... | What exactly does the term "inverse probability" mean?
There are plenty of great answers already, so I'll add a slightly tangential example that I found intriguing. Hopefully it's not too far off from the topic.
Markov chain Monte Carlo methods are often u |
9,471 | Can't deep learning models now be said to be interpretable? Are nodes features? | Interpretation of deep models is still challenging.
Your post only mentions CNNs for computer vision applications, but (deep or shallow) feed-forward networks and recurrent networks remain challenging to understand.
Even in the case of CNNs which have obvious "feature detector" structures, such as edges and orientati... | Can't deep learning models now be said to be interpretable? Are nodes features? | Interpretation of deep models is still challenging.
Your post only mentions CNNs for computer vision applications, but (deep or shallow) feed-forward networks and recurrent networks remain challengi | Can't deep learning models now be said to be interpretable? Are nodes features?
Interpretation of deep models is still challenging.
Your post only mentions CNNs for computer vision applications, but (deep or shallow) feed-forward networks and recurrent networks remain challenging to understand.
Even in the case of CN... | Can't deep learning models now be said to be interpretable? Are nodes features?
Interpretation of deep models is still challenging.
Your post only mentions CNNs for computer vision applications, but (deep or shallow) feed-forward networks and recurrent networks remain challengi |
9,472 | Can't deep learning models now be said to be interpretable? Are nodes features? | Layers don't map onto successively more abstract features as cleanly as we'd like. A good way to see this is to compare two very popular architectures.
VGG16 consists of many convolutional layers stacked on top of each other with the occasional pooling layer -- a very traditional architecture.
Since then, people have m... | Can't deep learning models now be said to be interpretable? Are nodes features? | Layers don't map onto successively more abstract features as cleanly as we'd like. A good way to see this is to compare two very popular architectures.
VGG16 consists of many convolutional layers stac | Can't deep learning models now be said to be interpretable? Are nodes features?
Layers don't map onto successively more abstract features as cleanly as we'd like. A good way to see this is to compare two very popular architectures.
VGG16 consists of many convolutional layers stacked on top of each other with the occasi... | Can't deep learning models now be said to be interpretable? Are nodes features?
Layers don't map onto successively more abstract features as cleanly as we'd like. A good way to see this is to compare two very popular architectures.
VGG16 consists of many convolutional layers stac |
9,473 | Can't deep learning models now be said to be interpretable? Are nodes features? | The subject of my Ph.D dissertation was to reveal the black-box properties of neural networks, specifically feed-forward neural networks, with one or two hidden layers.
I will take up the challenge to explain to everyone what the weights and bias terms mean, in a one-layer feed-forward neural network. Two different per... | Can't deep learning models now be said to be interpretable? Are nodes features? | The subject of my Ph.D dissertation was to reveal the black-box properties of neural networks, specifically feed-forward neural networks, with one or two hidden layers.
I will take up the challenge to | Can't deep learning models now be said to be interpretable? Are nodes features?
The subject of my Ph.D dissertation was to reveal the black-box properties of neural networks, specifically feed-forward neural networks, with one or two hidden layers.
I will take up the challenge to explain to everyone what the weights an... | Can't deep learning models now be said to be interpretable? Are nodes features?
The subject of my Ph.D dissertation was to reveal the black-box properties of neural networks, specifically feed-forward neural networks, with one or two hidden layers.
I will take up the challenge to |
9,474 | What's the difference between variance scaling initializer and xavier initializer? | Historical perspective
Xavier initialization, originally proposed by Xavier Glorot and Yoshua Bengio in "Understanding the difficulty of training deep feedforward neural networks", is the weights initialization technique that tries to make the variance of the outputs of a layer equal to the variance of its inputs... | What's the difference between variance scaling initializer and xavier initializer? | Historical perspective
Xavier initialization, originally proposed by Xavier Glorot and Yoshua Bengio in "Understanding the difficulty of training deep feedforward neural networks", is the weights init | What's the difference between variance scaling initializer and xavier initializer?
Historical perspective
Xavier initialization, originally proposed by Xavier Glorot and Yoshua Bengio in "Understanding the difficulty of training deep feedforward neural networks", is the weights initialization technique that tries to ma... | What's the difference between variance scaling initializer and xavier initializer?
Historical perspective
Xavier initialization, originally proposed by Xavier Glorot and Yoshua Bengio in "Understanding the difficulty of training deep feedforward neural networks", is the weights init |
9,475 | What's the difference between variance scaling initializer and xavier initializer? | Variance scaling is just a generalization of Xavier: http://tflearn.org/initializations/. They both operate on the principle that the scale of the gradients should be similar throughout all layers. Xavier is probably safer to use since it's withstood the experimental test of time; trying to pick your own parameters for... | What's the difference between variance scaling initializer and xavier initializer? | Variance scaling is just a generalization of Xavier: http://tflearn.org/initializations/. They both operate on the principle that the scale of the gradients should be similar throughout all layers. Xa | What's the difference between variance scaling initializer and xavier initializer?
Variance scaling is just a generalization of Xavier: http://tflearn.org/initializations/. They both operate on the principle that the scale of the gradients should be similar throughout all layers. Xavier is probably safer to use since i... | What's the difference between variance scaling initializer and xavier initializer?
Variance scaling is just a generalization of Xavier: http://tflearn.org/initializations/. They both operate on the principle that the scale of the gradients should be similar throughout all layers. Xa |
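The relationship between the two schemes is easy to check numerically. A sketch (the fan sizes and `scale` are arbitrary illustration values, not from either answer):

```python
import random
from math import sqrt
from statistics import pvariance

random.seed(4)
fan_in, fan_out = 300, 100
N = 100_000

# Xavier/Glorot, uniform form: W ~ U(-limit, limit) with Var(W) = 2/(fan_in+fan_out).
limit = sqrt(6 / (fan_in + fan_out))
xavier = [random.uniform(-limit, limit) for _ in range(N)]

# Variance scaling generalizes this to Var(W) = scale/n, where n may be
# fan_in, fan_out, or their average; scale=2 with n=fan_in gives He init.
scale, n = 2.0, fan_in
var_scaling = [random.gauss(0, sqrt(scale / n)) for _ in range(N)]

print(round(pvariance(xavier), 5), round(2 / (fan_in + fan_out), 5))
print(round(pvariance(var_scaling), 5), round(scale / n, 5))
```

The empirical variances match the target formulas, which is all either initializer promises: a weight scale that keeps gradient magnitudes comparable across layers.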
9,476 | What is the meaning of super script 2 subscript 2 within the context of norms? | You are right about the superscript. The subscript $||.||_p$ specifies the $p$-norm.
Therefore:
$$||x||_p=(\sum_i|x_i|^p)^{1/p}$$
And:
$$||x||_p^p=\sum_i|x_i|^p$$ | What is the meaning of super script 2 subscript 2 within the context of norms? | You are right about the superscript. The subscript $||.||_p$ specifies the $p$-norm.
Therefore:
$$||x||_p=(\sum_i|x_i|^p)^{1/p}$$
And:
$$||x||_p^p=\sum_i|x_i|^p$$ | What is the meaning of super script 2 subscript 2 within the context of norms?
You are right about the superscript. The subscript $||.||_p$ specifies the $p$-norm.
Therefore:
$$||x_i||_p=(\sum_i|x_i|^p)^{1/p}$$
And:
$$||x_i||_p^p=\sum_i|x_i|^p$$ | What is the meaning of super script 2 subscript 2 within the context of norms?
You are right about the superscript. The subscript $||.||_p$ specifies the $p$-norm.
Therefore:
$$||x_i||_p=(\sum_i|x_i|^p)^{1/p}$$
And:
$$||x_i||_p^p=\sum_i|x_i|^p$$ |
9,477 | What is the meaning of super script 2 subscript 2 within the context of norms? | $\|x\|_2$ is the Euclidean norm of the vector $x$; $\|x\|_2^2$ is the squared Euclidean norm of $x$. Note that as the Euclidean norm is probably the most commonly used norm, people routinely abbreviate it by $\|x\|$. By definition, when assuming a Euclidean vector space: $\|x\|_2 := \sqrt{x_1^2 + x_2^2 + \dots + x_n^2}$....
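Both answers above can be checked numerically. A minimal sketch in Python (standard library only; the example vector is chosen arbitrarily):

```python
import math

x = [3.0, -4.0, 12.0]  # an arbitrary example vector

def p_norm(x, p):
    """General p-norm: ||x||_p = (sum_i |x_i|^p)^(1/p)."""
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

l2 = p_norm(x, 2)        # Euclidean norm ||x||_2 = sqrt(9 + 16 + 144) = 13
l2_squared = l2 ** 2     # squared Euclidean norm ||x||_2^2 = 169
l1 = p_norm(x, 1)        # ||x||_1, the sum of absolute values = 19
```

The squared norm drops the outer root, which is why $\|x\|_2^2$ is just the plain sum of squares.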
9,478 | How to use R prcomp results for prediction? | While I'm unsure as to the nature of your problem, I can tell you that I have used PCA as a means of extracting dominant patterns in a group of predictor variables in the later building of a model. In your example, these would be found in the principal components (PCs), PCAAnalysis$x, and they would be based on the wei...
9,479 | How to use R prcomp results for prediction? | The information from the summary() command you have attached to the question allows you to see, e.g., the proportion of the variance each principal component captures (Proportion of Variance). In addition, the cumulative proportion is included in the output. For example, you need to have 23 PCs to capture 75% of the varian...
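The mechanics these answers rely on (center new data with the *training* means, then project onto the training rotation, which is what `predict()` does for a `prcomp` object) can be sketched outside R. This is a hypothetical 2-D example in Python, not the poster's data:

```python
import math
import statistics

# Hypothetical training data: 10 two-dimensional observations.
train = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9), (1.9, 2.2), (3.1, 3.0),
         (2.3, 2.7), (2.0, 1.6), (1.0, 1.1), (1.5, 1.6), (1.1, 0.9)]
n = len(train)
mx = sum(p[0] for p in train) / n   # training means: prcomp's "center"
my = sum(p[1] for p in train) / n

# Sample covariance matrix of the centred training data.
sxx = sum((p[0] - mx) ** 2 for p in train) / (n - 1)
syy = sum((p[1] - my) ** 2 for p in train) / (n - 1)
sxy = sum((p[0] - mx) * (p[1] - my) for p in train) / (n - 1)

# Leading eigenvector of the 2x2 covariance: the first "rotation" column.
theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
v1 = (math.cos(theta), math.sin(theta))
lam1 = (sxx + syy) / 2 + math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)

def pc1_score(p):
    """Project an observation onto PC1 using the TRAINING centre and
    rotation -- the same mapping predict() applies to newdata."""
    return (p[0] - mx) * v1[0] + (p[1] - my) * v1[1]

train_scores = [pc1_score(p) for p in train]   # analogue of PCAAnalysis$x[, 1]
new_score = pc1_score((2.0, 2.1))              # a new observation, same mapping
```

The variance of the PC1 scores equals the leading eigenvalue, which is the "Proportion of Variance" bookkeeping that `summary()` reports.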
9,480 | Expected value of a natural logarithm | In the paper
Y. W. Teh, D. Newman and M. Welling (2006), A Collapsed Variational Bayesian Inference Algorithm for Latent Dirichlet Allocation, NIPS 2006, 1353–1360.
a second order Taylor expansion around $x_0=\mathbb{E}[x]$ is used to approximate $\mathbb{E}[\log(x)]$:
$$
\mathbb{E}[\log(x)]\approx\log(\mathbb{...
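The expansion is cut off in the source; the standard second-order result it refers to is $\mathbb{E}[\log(x)]\approx\log(\mathbb{E}[x])-\operatorname{Var}[x]/(2\,\mathbb{E}[x]^2)$. A quick numeric check on a distribution where both sides are known in closed form (X ~ Uniform(1, 3), chosen here purely for convenience):

```python
import math

# X ~ Uniform(1, 3): E[X] = 2, Var[X] = (3 - 1)^2 / 12 = 1/3.
mean_x = 2.0
var_x = (3.0 - 1.0) ** 2 / 12.0

# Exact value: E[log X] = (1/2) * integral_1^3 log(x) dx
#            = (x*log(x) - x)/2 evaluated from 1 to 3.
exact = ((3 * math.log(3) - 3) - (1 * math.log(1) - 1)) / 2

# Second-order Taylor approximation around x0 = E[X].
approx = math.log(mean_x) - var_x / (2 * mean_x ** 2)
```

For this example the approximation (~0.6515) sits within about 0.004 of the exact value (~0.6479), which is the kind of accuracy the Taylor trick is typically used for.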
9,481 | Expected value of a natural logarithm | Also, if you don't need an exact expression for $\text{E}[\log(X + 1)]$, oftentimes the bound given by Jensen's inequality is good enough:
$$
\log [\text{E}(X) + 1] \geq\text{E}[\log(X + 1)]
$$
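A numeric illustration of the bound, with X ~ Uniform(0, 2) as a hypothetical example where $\text{E}[\log(X+1)]$ has a closed form:

```python
import math

# X ~ Uniform(0, 2): E[X] = 1.
upper = math.log(1.0 + 1.0)   # Jensen bound: log(E[X] + 1) = log 2

# Exact E[log(X + 1)] = (1/2) * integral_0^2 log(1 + x) dx
#                     = ((1+x)log(1+x) - (1+x))/2 evaluated from 0 to 2.
exact = ((3 * math.log(3) - 3) - (1 * math.log(1) - 1)) / 2
```

Here the bound gives ~0.693 against the exact ~0.648, so the gap (driven by the variance of X, via the concavity of the log) is modest.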
9,482 | Expected value of a natural logarithm | Suppose that $X$ has probability density $f_X$. Before you start approximating, remember that, for any measurable function $g$, you can prove that
$$
E[g(X)]=\int g(X)\,dP = \int_{-\infty}^\infty g(x)\,f_X(x)\,dx \, ,
$$
in the sense that if the first integral exists, so does the second, and they have the same value...
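This identity (the "law of the unconscious statistician") is easy to verify numerically. Below, a Monte Carlo estimate of $E[g(X)]$ is compared against the integral $\int g(x)\,f_X(x)\,dx$ for $X\sim N(0,1)$ and $g(x)=x^2$, so both sides equal 1:

```python
import math
import random

def f(x):   # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def g(x):   # E[g(X)] = E[X^2] = Var(X) = 1 for X ~ N(0, 1)
    return x * x

# Right-hand side: numeric integral of g(x) f(x) over a wide grid.
a, b, steps = -8.0, 8.0, 4000
h = (b - a) / steps
integral = sum(g(a + i * h) * f(a + i * h) * h for i in range(steps + 1))

# Left-hand side: Monte Carlo average of g over draws of X.
random.seed(0)
mc = sum(g(random.gauss(0, 1)) for _ in range(100_000)) / 100_000
```

Both numbers land on 1 up to discretization and sampling noise, which is exactly the claim that the two integrals agree.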
9,483 | Expected value of a natural logarithm | There are two usual approaches:
If you know the distribution of $X$, you may be able to find the distribution of $\ln(1+X)$ and from there find its expectation; alternatively you may be able to use the law of the unconscious statistician directly (that is, integrate $\ln(1+x) f_{X}(x)$ over the domain of $x$).
As yo...
9,484 | Beta regression of proportion data including 1 and 0 | You could use zero- and/or one-inflated beta regression models, which combine the beta distribution with a degenerate distribution to assign some probability to 0 and 1 respectively. For details see the following references:
Ospina, R., & Ferrari, S. L. P. (2010). Inflated beta distributions. Statistical Papers, 51(1),...
9,485 | Beta regression of proportion data including 1 and 0 | The documentation for the R betareg package mentions that
if y also assumes the extremes 0 and 1, a useful transformation in practice is (y * (n−1) + 0.5) / n where n is the sample size.
http://cran.r-project.org/web/packages/betareg/vignettes/betareg.pdf
They give the reference Smithson M, Verkuilen J (2006). "A Bet...
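The quoted transformation is simple to apply before fitting; a Python sketch with made-up proportions (in R this would be one vectorized expression):

```python
# Proportions that include the boundary values 0 and 1 (made-up data).
y = [0.0, 0.12, 0.5, 0.87, 1.0, 0.33]
n = len(y)   # n is the sample size, per the betareg vignette

# Smithson & Verkuilen (2006): squeeze observations into the open
# interval (0, 1) so the beta likelihood is well defined.
y_star = [(yi * (n - 1) + 0.5) / n for yi in y]
```

After the transformation, an exact 0 becomes $0.5/n$ and an exact 1 becomes $(n-0.5)/n$, so every value lies strictly inside $(0, 1)$.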
9,486 | Beta regression of proportion data including 1 and 0 | Came across a current online review piece on 'Zero-One Inflated Beta Models', by Karen Grace-Martin in "The Analysis Factor", outlining the proposed solution (noted above by Matze O in 2013) to address the 0/1 occurrence issue. To quote parts from the non-technical review:
So if a client takes their medication 30 out ...
9,487 | Beta regression of proportion data including 1 and 0 | Don't you do a logit transform to make the variable range from minus infinity to plus infinity? I am not sure that data containing 0 and 1 should be a problem. Is it showing any error message? By the way, if you only have proportions your analysis will always come out wrong. You need to use the weights= argument to glm with th...
9,488 | Beta regression of proportion data including 1 and 0 | Check out the following, where an ad hoc transformation is mentioned: maartenbuis.nl/presentations/berlin10.pdf, slide 17. You could also model 0 and 1 with two separate logistic regressions and then use beta regression for those observations not at the boundary.
9,489 | Beta regression of proportion data including 1 and 0 | The beta model is for a binary variable that is modeled as Bernoulli-distributed with unknown probability $p$. The beta model calculates a likelihood over $p$, which is beta-distributed.
Your variable is a proportion. You could model the proportion as being beta-distributed with unknown parameters $a, b$. The model ...
9,490 | Bayesian lasso vs ordinary lasso | The standard lasso uses an L1 regularisation penalty to achieve sparsity in regression. Note that this is also known as Basis Pursuit (Chen & Donoho, 1994).
In the Bayesian framework, the choice of regulariser is analogous to the choice of prior over the weights. If a Gaussian prior is used, then the Maximum a Posterio...
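The correspondence this answer describes (L1 penalty as a Laplace prior, with the lasso solution being the MAP estimate) can be seen in the simplest possible setting: estimating a single location parameter. This is an illustrative sketch with invented numbers, not the general regression case:

```python
import math

# Data: n noisy observations of a single parameter beta (made-up numbers).
y = [1.9, 2.3, 2.1, 1.7, 2.0]
n = len(y)
lam = 2.5   # lasso penalty weight

# Lasso: minimize (1/2) * sum (y_i - b)^2 + lam * |b|.
# For one parameter the solution is soft-thresholding of the sample mean.
ybar = sum(y) / n
beta_lasso = math.copysign(max(abs(ybar) - lam / n, 0.0), ybar)

# MAP under a Laplace(0, 1/lam) prior with unit-variance Gaussian likelihood:
# -log posterior = (1/2) * sum (y_i - b)^2 + lam * |b| + const,
# i.e. the SAME objective -- locate its minimum by a fine grid search.
def neg_log_post(b):
    return 0.5 * sum((yi - b) ** 2 for yi in y) + lam * abs(b)

grid = [i / 10000.0 for i in range(-40000, 40001)]
beta_map = min(grid, key=neg_log_post)
```

The two estimates coincide (here both 1.5), which is the frequentist-penalty / Bayesian-prior equivalence; the Bayesian lasso goes further by characterising the whole posterior rather than just this point.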
9,491 | Bayesian lasso vs ordinary lasso | "Least squares" means that the overall solution minimizes the sum of the squares of the errors made in the results of every single equation. The most important application is in data fitting. The best fit in the least-squares sense minimizes the sum of squared residuals, a residual being the difference between an observ...
9,492 | Bayesian lasso vs ordinary lasso | I feel the current answers to this question do not really answer the questions, which were "What are differences or advantages of baysian (sic) lasso vs regular lasso?" and "are they the same?"
First, they are not the same.
The key difference is: the Bayesian lasso attempts to sample from the full posterior distributi...
9,493 | Connection between Fisher metric and the relative entropy | In 1946, geophysicist and Bayesian statistician Harold Jeffreys introduced what we today call the Kullback-Leibler divergence, and discovered that for two distributions that are "infinitely close" (let's hope that Math SE guys don't see this ;-) we can write their Kullback-Leibler divergence as a quadratic form whose c...
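Jeffreys' observation, $KL(p_\theta \,\|\, p_{\theta+\epsilon}) \approx \tfrac{1}{2}\epsilon^2 I(\theta)$ for small $\epsilon$, can be checked numerically in a model where everything is available in closed form. For a Bernoulli($\theta$) model the Fisher information is $I(\theta)=1/(\theta(1-\theta))$:

```python
import math

theta, eps = 0.3, 0.01   # illustrative parameter and perturbation

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

fisher = 1.0 / (theta * (1 - theta))       # Fisher information of Bernoulli
exact_kl = kl_bernoulli(theta, theta + eps)
quadratic = 0.5 * eps ** 2 * fisher        # Jeffreys' quadratic form
```

For $\epsilon = 0.01$ the two quantities already agree to within a couple of percent, and the agreement tightens as $\epsilon \to 0$.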
9,494 | Connection between Fisher metric and the relative entropy | Proof for usual (non-symmetric) KL divergence
Zen's answer uses the symmetrized KL divergence, but the result holds for the usual form as well, since it becomes symmetric for infinitesimally close distributions.
Here's a proof for discrete distributions parameterized by a scalar $\theta$ (because I'm lazy), but can be ...
9,495 | Connection between Fisher metric and the relative entropy | You can find a similar relationship (for a one-dimensional parameter) in equation (3) of the following paper
D. Guo (2009), Relative Entropy and Score Function: New Information–Estimation Relationships through Arbitrary Additive Perturbation, in Proc. IEEE International Symposium on Information Theory, 814–8...
9,496 | Deriving the KL divergence loss for VAEs | The encoder distribution is $q(z|x)=\mathcal{N}(z|\mu(x),\Sigma(x))$ where $\Sigma=\text{diag}(\sigma_1^2,\ldots,\sigma^2_n)$.
The latent prior is given by $p(z)=\mathcal{N}(0,I)$.
Both are multivariate Gaussians of dimension $n$, for which in general the KL divergence is:
$$
\mathfrak{D}_\text{KL}[p_1\mid\mid p_2] =
\...
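For a diagonal Gaussian encoder this derivation ends in the familiar closed form $\tfrac{1}{2}\sum_i\left(\sigma_i^2+\mu_i^2-1-\log\sigma_i^2\right)$. Here is a one-dimensional numeric check of that formula against direct integration ($\mu$ and $\sigma$ chosen arbitrarily):

```python
import math

mu, sigma = 0.5, 0.8

# Closed form for KL( N(mu, sigma^2) || N(0, 1) ): the per-dimension VAE term.
closed = 0.5 * (sigma ** 2 + mu ** 2 - 1.0 - math.log(sigma ** 2))

def q(z):   # encoder density N(mu, sigma^2)
    return math.exp(-(z - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def p(z):   # prior density N(0, 1)
    return math.exp(-z ** 2 / 2) / math.sqrt(2 * math.pi)

# Direct numeric integral of q(z) * log(q(z) / p(z)) over a wide grid.
a, b, steps = -10.0, 10.0, 8000
h = (b - a) / steps
numeric = sum(q(a + i * h) * math.log(q(a + i * h) / p(a + i * h)) * h
              for i in range(steps + 1))
```

The $n$-dimensional loss used in VAEs is just this term summed over the latent dimensions, since the covariance is diagonal and the KL factorizes.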
9,497 | Why Normalizing Factor is Required in Bayes Theorem? | First, the integral of "likelihood x prior" is not necessarily 1.
It is not true that if:
$0 \leq P(\textrm{model}) \leq 1$ and $ 0 \leq P(\textrm{data}|\textrm{model}) \leq 1$
then the integral of this product with respect to the model (to the parameters of the model, indeed) is 1.
Demonstration. Imagine two discret...
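The two-model demonstration is cut off in the source; a sketch of the same point in Python, with prior and likelihood values invented for illustration:

```python
# Two candidate models with a flat prior.
prior = {"m1": 0.5, "m2": 0.5}
# Likelihood of the observed data under each model (each between 0 and 1).
likelihood = {"m1": 0.8, "m2": 0.3}

# "likelihood x prior" alone does NOT sum to 1 ...
unnormalized = {m: prior[m] * likelihood[m] for m in prior}
total = sum(unnormalized.values())   # P(data) = 0.4 + 0.15 = 0.55, not 1

# ... which is exactly why Bayes' theorem divides by P(data).
posterior = {m: v / total for m, v in unnormalized.items()}
```

Dividing by `total` (the marginal likelihood) is the only step that turns the relative weights into a proper probability distribution over models.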
9,498 | Why Normalizing Factor is Required in Bayes Theorem? | The short answer to your question is that without the denominator, the expression on the right-hand side is merely a likelihood, not a probability, which can only range from 0 to 1. The "normalizing constant" allows us to get the probability for the occurrence of an event, rather than merely the relative likelihood of...
9,499 | Why Normalizing Factor is Required in Bayes Theorem? | You already got two valid answers but let me add my two cents.
Bayes theorem is often defined as:
$$P(\text{model}~ | ~\text{data}) \propto P(\text{model}) \times P(\text{data}~|~\text{model})$$
because the only reason why you need the constant is so that it integrates to 1 (see the answers by others). This is not nee...
9,500 | Why doesn't backpropagation work when you initialize the weights the same value? | Symmetry breaking.
If all weights start with equal values and if the solution requires that unequal weights be developed, the system can never learn.
This is because error is propagated back through the weights in proportion to the values of the weights. This means that all hidden units connected directly to the outpu...
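The symmetry argument is easy to see in a minimal network: with identical initial weights, both hidden units compute the same activation, receive identical backpropagated gradients, and therefore remain identical after every update. A sketch with one hidden layer, sigmoid activations, and a single invented data point:

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

# A 2-2-1 network with EVERY weight initialized to the same value.
w_in = [[0.5, 0.5], [0.5, 0.5]]   # w_in[j][i]: input i -> hidden j
w_out = [0.5, 0.5]                # hidden j -> output
x, y = [1.0, -2.0], 1.0           # a single (invented) training example
lr = 0.1

# Forward pass: both hidden units compute exactly the same value.
h = [sigmoid(sum(w_in[j][i] * x[i] for i in range(2))) for j in range(2)]
out = sum(w_out[j] * h[j] for j in range(2))   # linear output unit

# Backward pass for squared error 0.5 * (out - y)^2.
d_out = out - y
grad_out = [d_out * h[j] for j in range(2)]
grad_in = [[d_out * w_out[j] * h[j] * (1 - h[j]) * x[i] for i in range(2)]
           for j in range(2)]

# Identical gradients per hidden unit, so one SGD step keeps them equal.
w_in = [[w_in[j][i] - lr * grad_in[j][i] for i in range(2)] for j in range(2)]
```

Since the two hidden units' gradient rows are bit-for-bit equal, no number of further steps can ever make the units differ, which is why random (symmetry-breaking) initialization is required.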