idx | question | answer
|---|---|---|
27,101 | How to search for a statistical procedure in R? | The sos package lets you search the help documentation for all CRAN packages from within R itself.
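A minimal sketch of that search, assuming the sos package is installed; findFn is its main entry point:

```r
# install.packages("sos")  # once, if not already installed
library(sos)
findFn("box cox")  # searches help pages of all CRAN packages; opens results in a browser
```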
27,102 | How to search for a statistical procedure in R? | Sometimes I just go to Crantastic and search for keywords, e.g. a search for "Box Cox" on Crantastic.
27,103 | How to search for a statistical procedure in R? | I would try two things. One is the ?? help search in R, so for Box-Cox I would do
??cox
which should list packages or functions with that text.
The other is to try the http://www.rseek.org/ site, which is like Google just for R.
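For completeness, a short sketch of the built-in search options (all in base R's utils package):

```r
??cox                   # shorthand for help.search("cox") over installed packages
help.search("box cox")  # the long form of the same search
RSiteSearch("box cox")  # queries an online index of CRAN help pages and R-help archives
```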
27,104 | Discrete and Continuous variables. What is the definition? | A random variable $R$ is said to be continuous if for every real number $t,$ the probability that $R$ equals $t$ is zero: $P(R = t) = 0.$ A random variable $R$ is said to be discrete if there exists a countable set of values $t_1, \ldots, t_n, \ldots$ such that $P(R = t_i) > 0$ for all $i$ and $\sum\limits_i P(R = t_i) = 1.$ The Radon-Nikodym and Lebesgue Decomposition theorems show that the cumulative distribution function (a.k.a. CDF) of every random variable can be expressed as
$$
F = aF_{ac} + b F_{dc} + c F_{pm}
$$ where $a, b, c \geq 0$ and $a + b + c = 1,$ and where
$F_{ac}$ is the CDF of an absolutely continuous random variable (i.e. $F_{ac}$ admits a density), $F_{dc}$ is the CDF of a degenerate (singular) continuous random variable, and $F_{pm}$ is the CDF of a discrete random variable (pm stands for point-mass). It is hard to construct examples of degenerate continuous random variables, for their CDF must be continuous, increasing, not constant, and have a zero derivative almost everywhere. A typical example is Cantor's Devil Staircase function (https://en.wikipedia.org/wiki/Cantor_function). So you usually assume that random variables are either absolutely continuous, discrete, or a mixture of these two types.
EDIT: this question received a lot of attention, so let me expand a bit. This definition is motivated by the 1D case (univariate random variables). The condition that $P(R = t) = 0$ for all $t$ signifies that the CDF of $R$ is a continuous function $\mathbf{R} \to [0,1].$ Indeed, it is a well-known fact that a CDF is a non-decreasing function, so a fortiori it can only have jump discontinuities. But the jump discontinuities of a CDF occur precisely at the "atoms" of the distribution (an "atom" of a random variable $R$ is a value $t$ such that $P(R = t) > 0$). To see this, we use that the CDF $F$ is already (by definition) continuous on the right, so that $F$ is continuous if and only if it is continuous on the left. Now,
$$
F(t) - F(t - \delta) = P(R \leq t) - P(R \leq t - \delta) = P(t - \delta < R \leq t);
$$
by measure-theoretic properties of $P,$ as $\delta \downarrow 0$ the right-hand side converges to $P(R = t),$ so that $F$ is continuous on the left if and only if $P(R = t) = 0,$ which is the main motivation to call an atomless random variable a "continuous random variable."
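The left-limit argument can be checked numerically; a minimal base-R sketch (the specific distributions are illustrative):

```r
# An "atom" is a jump of the CDF: F(t) - F(t - delta) approaches P(R = t).
# For the discrete Binom(10, 0.5):
pbinom(5, 10, 0.5) - pbinom(5 - 1e-9, 10, 0.5)   # equals dbinom(5, 10, 0.5)
# For a continuous (atomless) distribution the same difference vanishes:
pnorm(0) - pnorm(-1e-9)                           # essentially 0
```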
27,105 | Discrete and Continuous variables. What is the definition? | A random variable is a function that maps a sample space to the real numbers. A continuous random variable is a function that can take on any value in an interval - not any arbitrary interval, but an interval which makes sense for the particular random variable under consideration. A discrete random variable is a random variable that can only assume a finite or countably infinite number of distinct values.
For reference, see Mathematical Statistics with Applications, 5th Ed., by Wackerly, Mendenhall, and Scheaffer. The random variable is defined as Definition 2.11 on p. 65. The discrete random variable is defined as Definition 3.1 on p. 76. The continuous random variable is defined on p. 136.
In the referenced textbook, I have never seen exceptions to these definitions, except perhaps the so-called "mixed" random variables that are partly discrete, partly continuous.
27,106 | Discrete and Continuous variables. What is the definition? | Supplemental to the answers of @WillM and @AdrianKeister, both (+1).
Sometimes we model essentially discrete situations as continuous.
If you want to be fussy about the amount of debt on individual credit cards in the US, then that debt is truly discrete at the one-cent level. Even interest charges will be rounded to the nearest cent. But if a continuous model such as $\mathsf{Norm}(\mu = 10000, \sigma=2000)$ is approximately correct for a particular group of cardholders, that is easier to deal with than a discrete random variable with something like $1\,200\,000$ discrete values spaced 1 cent apart.
Also, it is common to approximate $\mathsf{Binom}(n=100, p=1/2)$ as $\mathsf{Norm}(\mu=50, \sigma=5),$ though this is perhaps done less frequently now that we have software that deals gracefully with 101 discrete outcomes. The probability of getting between 45 and 55 (inclusive) heads in 100 independent tosses of a fair coin is exactly $0.72875$ to five places--in R, using the CDF pbinom or the PDF (PMF) dbinom.
diff(pbinom(c(44,55), 100, .5))
[1] 0.728747
sum(dbinom(45:55, 100, .5))
[1] 0.728747
A normal approximation, using the 'continuity correction', is $0.72867$ to five places. In practical terms, it would take an enormous number of coin tosses to distinguish between the two answers.
diff(pnorm(c(44.5, 55.5), 50, 5))
[1] 0.7286679
R code for a figure (not shown here) comparing the binomial PMF with its normal approximation:
x = 0:100; PDF = dbinom(x, 100, .5)
hdr = "PDF of BINOM(100, .5) with Normal Approximation"
plot(x, PDF, type="h", lwd=2, main=hdr)
abline(h=0, col="green2")
curve(dnorm(x, 50, 5), add=TRUE, col="blue", lwd=2)
abline(v=c(44.5, 55.5), col="red", lty="dotted")
Formally, working below the measure-theoretic level, a continuous random variable can be defined in terms of its density function [WMS 6e, p. 155], which has $f(x)\ge 0$ for real $x,$ with $\int_{-\infty}^\infty f(x)\,dx = 1.$ Probabilities are defined for intervals $[a,b],$ with $a\le b,$ as $\int_a^b f(x)\,dx,$ with the consequence that the probability of any single point is $0.$
27,107 | Discrete and Continuous variables. What is the definition? | Not all physical phenomena are strictly discrete or continuous. Some are obvious, like the number of children. But "continuous" has more to do with convenience and physical processes than with a be-all-end-all data dictionary. For instance, weight can be rounded to the nearest tenth, or hundredth, down to the actual sensitivity of the instrument used to measure weight; it remains discrete in the sense that you can enumerate the possible values starting with 0.0, 0.1, 0.2, ..., but it is also continuous in the sense that theoretically it makes sense to consider weight as real-valued. (As a 4th-year maths student, you can appreciate that the rationals are dense in the reals, and those are countable.)
The designation (discrete vs. continuous) only matters when you assign a probability model to those values. Ideally, the choice of model is somewhat justified by the physical, random process giving rise to the values. For instance, if you assume that the arrivals of cars over a segment of highway have constant intensity and independent inter-arrival times, you can consider a Poisson counting process to estimate the probability of a certain volume of traffic at a particular time. Interestingly, Quetelet developed the BMI because the distribution looked normal, but modern biometricians suggest that we should have created a metric based on dividing by height cubed, as it would represent a weight density rather than a surface area. Notwithstanding, weight (or mass index) makes sense as a continuous value any way you cut it.
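A small sketch of the Poisson traffic model in base R (the rate of 3 cars per minute is purely illustrative):

```r
# P(at most 5 cars arrive in one minute) under a Poisson(3) arrival count
ppois(5, lambda = 3)
# P(exactly k arrivals) for k = 0..5
dpois(0:5, lambda = 3)
```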
27,108 | Discrete and Continuous variables. What is the definition? | Here is the most elementary definition I can think of.
If $X$ is a random variable then it has a "distribution function", also known as a CDF. If we let $F(t)$ denote the distribution function then by definition $F(t) = P(X\leq t)$. If this distribution function is continuous (as a function in ordinary calculus) then we say that $X$ is a continuous random variable.
We say that $X$ is a discrete random variable if the "support of $X$" consists of integers. Recall that the "support of a random variable" is the set of possible values the random variable may take. Let us denote the support by $\text{supp}(X)$. Therefore, we say that $X$ is a "discrete random variable" if $\text{supp}(X)\subseteq \mathbb{Z}$. Even more simply, we say $X$ is discrete if the value of $X$ is always some integer value.
Note, the above definition of "discrete" is not the most general definition, but as an initial definition it is good enough. The most important discrete random variables are integer-valued anyway. Hence you need not be concerned (yet) about what it means to be a "countable set."
If $X$ is a discrete random variable then its CDF will look like a staircase; the CDF will be piecewise constant.
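The staircase shape is easy to see by plotting an empirical CDF of an integer-valued sample; a base-R sketch:

```r
set.seed(1)
x <- rpois(200, lambda = 4)   # an integer-valued sample
plot(ecdf(x), main = "Staircase CDF of a discrete random variable")
```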
27,109 | Avoiding social discrimination in model building | This paper provides an excellent overview of how to navigate gender bias, especially in language-based models: Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings - Bolukbasi et al. A nice blog summary can be found here:
https://developers.googleblog.com/2018/04/text-embedding-models-contain-bias.html
You'll find a larger compendium of resources here:
https://developers.google.com/machine-learning/fairness-overview/
You'll find a slew of techniques in the above links to mitigate gender bias. Generally speaking they fall into three classes:
1) Under/over-sampling your data. This is intended to oversample high-quality female resumes and undersample male resumes.
2) Subtracting out the "gender subspace." If your model is gender-biased, then you could demonstrate it to be so by using your resume embeddings to directly predict gender. After building such an auxiliary model (even just sampling common terms belonging to either gender, and then applying PCA), you can in effect subtract out this dimension from the model, normalizing the resume to be gender-neutral. This is the main technique used in Bolukbasi's paper.
3) Adversarial learning. In this case you try to generate additional data by trying to generate more versions of high-quality female resumes that are otherwise indistinguishable from real resumes.
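Technique (2) amounts to a projection removal. A hedged sketch in R, not the paper's exact pipeline: `emb` (a rows-are-words embedding matrix) and the gendered word pair are assumed inputs, and the names are illustrative:

```r
# Remove a "gender direction" g from every embedding by subtracting
# each row's orthogonal projection onto g.
debias <- function(emb, g) {
  g <- g / sqrt(sum(g^2))        # normalize to a unit direction
  emb - (emb %*% g) %*% t(g)     # emb is n x d, g is length-d
}
# g could be estimated from difference vectors of gendered word pairs,
# e.g. emb["he", ] - emb["she", ], or their first principal component.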
27,110 | Avoiding social discrimination in model building | This is not an answer to your question but just a few thoughts that are too long to fit in a comment.
I think one problem we have to consider when thinking about these issues is that every model discriminates, and they will do so on the basis of any association present in the data. That is arguably the whole purpose of a predictive model. For instance, men are genuinely more likely to commit crime than women, so almost any model that has access to this information will draw such an inference.
But that doesn't mean we should convict someone partially on the basis of gender, even though a man will generally appear more likely to have committed a crime (other things equal). Rather, we should require direct evidence of a crime when making such decisions, and not information on mere association. As another example: do people who are more likely to get sick really deserve to pay higher insurance premiums?
So when it comes to discrimination, I would argue that the issue deals more with ethical application, rather than models themselves being unfair. If we are worried about perpetuating discrimination or other unfair outcomes when using a model in a given situation, then perhaps we should not be using a model.
27,111 | Avoiding social discrimination in model building | I used to work on a project to develop software management best practices. I observed roughly fifty software teams in the field. Our sample was around 77, but we ended up seeing around a hundred teams. In addition to collecting data on things such as certifications, degrees and so forth, we also collected a variety of psychological and demographic data.
Software development teams have some very significant self-selection effects that, while having nothing to do with gender, are strongly correlated with gender. Also, managers tend to replicate themselves. People hire people they are comfortable with, and they are most comfortable with themselves. There is also evidence that people are rated in a cognitively biased way. Imagine that, as a manager, I highly value prompt arrival at the start of work. I will then rate on that. Another manager, who just cares that the work gets done, may rate on something entirely different as important.
You noted that men use language differently, but it is also true that people with different personalities use language in different ways. There may be ethnic differences in language usage as well; see, for example, the current controversy over Asian admissions at Harvard.
Now you assume that the software firms discriminate against women, but there is another form of gender discrimination going on in the software development industry that you haven’t accounted for. When you control for objective things such as certifications, degrees, tenure and so forth, the average woman earns 40% more than the average man. There are three sources of employment discrimination in the world.
The first is that managers or owners do not wish to hire someone on the basis of some feature. The second is that coworkers do not wish to work with the people with that feature. The third is that customers do not want people who have a feature. It appears the wage discrimination is being triggered by customers because the work product is different, and from the customers’ perspectives, also better. This same feature causes male dental hygienists to take lower pay than women. It is also seen in a bias toward “born here” in world soccer wages.
The best control for this is to understand your data and the social forces involved. Any firm that uses its own data will tend to replicate itself. That may be a very good thing, but it could also make them blind to forces at work. The second control is to understand your objective function. Profits may be a good function, but it may be a bad function. There are values in play in the selection of an objective loss function. Then, finally, there is the issue of testing the data against demographics to determine if unfortunate discrimination is happening.
Finally, and this is a bigger problem in things like AI where you cannot get good interpretative statistics, you will want to control for Yule's paradox (better known today as Simpson's paradox). The classic historical example is the discovery that 44% of men were accepted to UC Berkeley while only 35% of women were admitted in 1973. This was a huge difference and statistically significant. It was also misleading.
This was obviously scandalous, and so the university decided to look at which were the offending majors. Well, it turned out that when you controlled for major, there was a statistically significant bias in favor of admitting women. Of the eighty-five majors, six were biased toward women and four toward men, the remainder were not significant. The difference was that women were, disproportionately, applying for the most competitive majors and so few of either gender were getting in. Men were more likely to apply to less competitive majors.
Adding in Yule’s paradox creates an even deeper layer for discrimination. Imagine, instead of a gender test, there was a gender test by type of job. You could possibly pass a company-wide gender neutral test but fail at the task level. Imagine that only women were recruited for V&V and only men for systems administration. You would look gender neutral, and you wouldn’t be.
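The Berkeley-style reversal is easy to reproduce with toy numbers (made-up data, not the 1973 figures):

```r
# Made-up admissions data: women apply mostly to the competitive major B.
adm <- data.frame(
  major    = c("A", "A", "B", "B"),
  gender   = c("M", "F", "M", "F"),
  applied  = c(800, 100, 200, 900),
  admitted = c(500,  70,  20, 110)
)
# Aggregate rates: men look strongly favored (0.52 vs 0.18)...
with(adm, tapply(admitted, gender, sum) / tapply(applied, gender, sum))
# ...yet within each major, women are admitted at higher rates.
with(adm, admitted / applied)
```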
One potential solution to this is to run competitive AIs that use differing objective criteria of “goodness.” The goal is to widen the net, not narrow it. This can also help avoid another problem in the management literature. While 3% of males are sociopaths, that number climbs substantially as you go further and further up the corporate ladder. You don’t want to be filtering for sociopaths.
Finally, you may not want to consider using AI for certain types of positions. I am job hunting right now. I am also sure I am being filtered out, and I haven’t figured out how to get around it. I am sitting on a very disruptive new technology. The problem is that my work doesn’t match the magic words. Instead, I have the next set of magic words. Right now, I am worth a fortune to the right firm, but in one case where I applied, I received an automated decline in less than a minute. I have a friend who has served as the CIO of federal agencies. He applied for a job where the hiring manager was waiting to see his application come through so he could pretty much be offered the job. It never came through because the filters blocked it.
This sets up the second problem of AI. If I can work out from online resumes who Amazon is hiring, then I can magic word my resume. Indeed, I am working on my resume right now to get it to fit non-human filters. I can also tell from the e-mails from recruiters that some parts of my resume are being zoomed in on and other parts ignored. It is as if the recruiting and hiring process has been taken over by software like Prolog. Logical constraints met? Yes! This is the optimal candidate or set of candidates. Are they optimal?
There isn't a pre-built answer to your question, only problems to engineer around. | Avoiding social discrimination in model building | I used to work on a project to develop software management best practices. I observed roughly fifty software teams in the field. Our sample was around 77, but we ended up seeing around a hundred tea | Avoiding social discrimination in model building
I used to work on a project to develop software management best practices. I observed roughly fifty software teams in the field. Our sample was around 77, but we ended up seeing around a hundred teams. In addition to collecting data on things such as certifications, degrees and so forth, we also collected a variety of psychological and demographic data.
Software development teams have some very significant self-selection effects in it that, while having nothing to do with gender, are strongly correlated with gender. Also, managers tend to replicate themselves. People hire people they are comfortable with, and they are most comfortable with themselves. There is also evidence that people are being rated in a cognitively biased way. Imagine that, as a manager, I highly value prompt arrival at the start of work. I will then rate on that. Another manager, who just cares that the work gets done, may rate on something entirely different as important.
You noted that men use language differently, but it is also true that people with different personalities use language in different ways. There may be ethnic language usage differences as well, see for example the current controversy at Harvard and Asian admissions.
Now you assume that the software firms discriminate against women, but there is another form of gender discrimination going on in the software development industry that you haven’t accounted for. When you control for objective things such as certifications, degrees, tenure and so forth, the average woman earns 40% more than the average man. There are three sources of employment discrimination in the world.
The first is that managers or owners do not wish to hire someone on the basis of some feature. The second is that coworkers do not wish to work with the people with that feature. The third is that customers do not want people who have a feature. It appears the wage discrimination is being triggered by customers because the work product is different, and from the customers’ perspectives, also better. This same feature causes male dental hygienists to take lower pay than women. It is also seen in a bias toward “born here” in world soccer wages.
The best control for this is to understand your data and the social forces involved. Any firm that uses its own data will tend to replicate itself. That may be a very good thing, but it could also make them blind to forces at work. The second control is to understand your objective function. Profits may be a good function, but it may be a bad function. There are values in play in the selection of an objective loss function. Then, finally, there is the issue of testing the data against demographics to determine if unfortunate discrimination is happening.
Finally, and this is a bigger problem in things like AI where you cannot get good interpretative statistics, you will want to control for Yule’s paradox (better known as Simpson’s paradox). The classic historical example is the discovery that 44% of men were accepted to UC Berkeley while only 35% of women were admitted in 1973. This was a huge difference and statistically significant. It was also misleading.
This was obviously scandalous, and so the university decided to look at which were the offending majors. Well, it turned out that when you controlled for major, there was a statistically significant bias in favor of admitting women. Of the eighty-five majors, six were biased toward women and four toward men, the remainder were not significant. The difference was that women were, disproportionately, applying for the most competitive majors and so few of either gender were getting in. Men were more likely to apply to less competitive majors.
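This aggregation effect is easy to reproduce numerically. Below is a small Python sketch; the counts are made up for illustration (they are not the actual 1973 Berkeley figures), but they show how every department can favour women while the pooled rate favours men:

```python
# Hypothetical admissions counts per department: (applicants, admitted).
# Illustrative numbers only -- not the real 1973 Berkeley figures.
data = {
    "competitive": {"men": (100, 10),  "women": (900, 99)},   # mostly women apply
    "easy":        {"men": (900, 720), "women": (100, 82)},   # mostly men apply
}

def rate(cell):
    applied, admitted = cell
    return admitted / applied

# In every department, women are admitted at a higher rate than men...
per_dept_women_better = all(rate(d["women"]) > rate(d["men"]) for d in data.values())

# ...yet pooling over departments reverses the comparison, because women
# disproportionately apply to the department that admits almost nobody.
def pooled(gender):
    applied = sum(d[gender][0] for d in data.values())
    admitted = sum(d[gender][1] for d in data.values())
    return admitted / applied

print(per_dept_women_better)            # True
print(pooled("men") > pooled("women"))  # True: the aggregate favours men
```

The reversal is driven entirely by which department each group applies to, exactly as in the Berkeley case.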
Adding in Yule’s paradox creates an even deeper layer for discrimination. Imagine, instead of a gender test, there was a gender test by type of job. You could possibly pass a company-wide gender neutral test but fail at the task level. Imagine that only women were recruited for V&V and only men for systems administration. You would look gender neutral, and you wouldn’t be.
One potential solution to this is to run competitive AIs that use differing objective criteria of “goodness.” The goal is to widen the net, not narrow it. This can also help avoid another problem in the management literature. While 3% of males are sociopaths, that number climbs substantially as you go further and further up the corporate ladder. You don’t want to be filtering for sociopaths.
Finally, you may want to avoid using AI altogether for certain types of positions. I am job hunting right now. I am also sure I am being filtered out, and I haven’t figured out how to get around it. I am sitting on a very disruptive new technology. The problem is that my work doesn’t match the magic words. Instead, I have the next set of magic words. Right now, I am worth a fortune to the right firm, but in one case where I applied, I received an automated decline in less than a minute. I have a friend who has served as the CIO of federal agencies. He applied for a job where the hiring manager was waiting to see his application come through so he could pretty much be offered the job. It never came through because the filters blocked it.
This sets up the second problem of AI. If I can work out from online resumes who Amazon is hiring, then I can magic word my resume. Indeed, I am working on my resume right now to get it to fit non-human filters. I can also tell from the e-mails from recruiters that some parts of my resume are being zoomed in on and other parts ignored. It is as if the recruiting and hiring process has been taken over by software like Prolog. Logical constraints met? Yes! This is the optimal candidate or set of candidates. Are they optimal?
There isn't a pre-built answer to your question, only problems to engineer around.
27,112 | Avoiding social discrimination in model building | In order to build a model of this kind, it is important to first understand some basic statistical aspects of discrimination and process-outcomes. This requires understanding of statistical processes that rate objects on the basis of characteristics. In particular, it requires understanding the relationship between use of a characteristic for decision-making purposes (i.e., discrimination) and assessment of process-outcomes with respect to said characteristic. We start by noting the following:
Discrimination (in its proper sense) occurs when a variable is used in the decision process, not merely when the outcome is correlated with that variable. Formally, we discriminate with respect to a variable if the decision function in the process (i.e., the rating in this case) is a function of that variable.
Disparities in outcome with respect to a particular variable often occur even when there is no discrimination on that variable. This occurs when other characteristics in the decision function are correlated with the excluded variable. In cases where the excluded variable is a demographic variable (e.g., gender, race, age, etc.) correlation with other characteristics is ubiquitous, so disparities in outcome across demographic groups are to be expected.
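This point can be illustrated with a tiny simulation. The rater below (a hypothetical one-variable scoring rule, with an assumed group difference in the predictor's distribution) never sees gender, yet produces a gap in average ratings because the predictor it does use is correlated with gender:

```python
import random

random.seed(0)

# Simulated applicants: gender plus one predictor ("recorded years of
# experience") whose distribution is ASSUMED, for illustration, to differ
# between groups -- i.e. the predictor is merely correlated with gender.
applicants = []
for _ in range(10_000):
    gender = random.choice(["M", "F"])
    years = random.gauss(8 if gender == "M" else 6, 2)
    applicants.append((gender, years))

# The decision function uses ONLY the predictor, never gender.
def rating(years):
    return max(0.0, min(5.0, years / 2))

def mean_rating(group):
    vals = [rating(y) for g, y in applicants if g == group]
    return sum(vals) / len(vals)

gap = mean_rating("M") - mean_rating("F")
print(gap > 0)  # True: a clear outcome disparity with zero use of gender
</test>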
It is possible to try to reduce disparities in outcomes across demographic groups through affirmative-action, which is a form of discrimination. If there are disparities in process-outcomes with respect to a variable, it is possible to narrow those disparities by using the variable as a decision-variable (i.e., by discriminating on that variable) in a way that favours groups that are "underrepresented" (i.e., groups with lower proportions of positive outcomes in the decision process).
You can't have it both ways --- either you want to avoid discrimination with respect to a particular characteristic, or you want to equalise process-outcomes with respect to that characteristic. If your goal is to "correct" disparities in outcomes with respect to a particular characteristic then don't kid yourself about what you are doing --- you are engaging in discrimination for the purposes of affirmative action.
Once you understand these basic aspects of statistical decision-making processes, you will be able to formulate what your actual goal is in this case. In particular, you will need to decide whether you want a non-discriminatory process, which is likely to result in disparities of outcome across groups, or whether you want a discriminatory process designed to yield equal process outcomes (or something close to this). Ethically, this issue mimics the debate over non-discrimination versus affirmative-action.
Let's say I want to build a statistical model to predict some output from personal data, like a five star ranking to help recruiting new people. Let's say I also want to avoid gender discrimination, as an ethical constraint. Given two strictly equal profile apart from the gender, the output of the model should be the same.
It is easy to ensure that the ratings given by the model are not affected by a variable you want to exclude (e.g., gender). To do this, all you need to do is to remove this variable as a predictor in the model, so that it is not used in the rating decision. This will ensure that two profiles that are strictly equal, apart from that variable, are treated the same. However, it will not necessarily ensure that the model does not discriminate on the basis of another variable that is correlated with the excluded variable, and it will not generally lead to outcomes that are equal between genders. This is because gender is correlated with many other characteristics that might be used as predictive variables in your model, so we would generally expect outcomes to be unequal even in the absence of discrimination.
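A minimal sketch of this, using a made-up linear rating function rather than any real model: with gender excluded from the inputs, two otherwise-identical profiles necessarily get the same score, whereas a rater that includes a gender term does not:

```python
# Two hypothetical linear raters (illustrative weights, not a real model):
def rate_with_gender(years, certs, is_male):
    return 0.5 * years + 0.3 * certs + (0.4 if is_male else 0.0)

def rate_without_gender(years, certs):
    return 0.5 * years + 0.3 * certs

# One and the same profile, scored as a man and as a woman:
years, certs = 6, 2
biased_gap = rate_with_gender(years, certs, True) - rate_with_gender(years, certs, False)
fair_gap = rate_without_gender(years, certs) - rate_without_gender(years, certs)

print(round(biased_gap, 2))  # 0.4 -- the gender term shows up directly
print(fair_gap)              # 0.0 -- an excluded variable cannot affect the score
```

Note that this only demonstrates the first half of the paragraph; the second half (correlated predictors still producing unequal group outcomes) is a separate issue that excluding the variable does not touch.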
In regard to this issue, it is useful to demarcate between characteristics that are inherent gender characteristics (e.g., pees standing up) versus characteristics that are merely correlated with gender (e.g., has an engineering degree). If you wish to avoid gender discrimination, this would usually entail removing gender as a predictor, and also removing any other characteristic that you consider to be an inherent gender characteristic. For example, if it happened to be the case that job applicants specify whether they pee standing up or sitting down, then that is a characteristic that is not strictly equivalent to gender, but one option effectively determines gender, so you would probably remove that characteristic as a predictor in the model.
Should I use the gender (or any data correlated to it) as an input and try to correct their effect, or avoid to use these data?
Correct what exactly? When you say "correct their effect" I am going to assume that you mean that you are considering "correcting" disparities in outcomes that are caused by predictors that are correlated with gender. If that is the case, and you use gender to try to correct an outcome disparity then you are effectively engaging in affirmative action --- i.e., you are programming your model to discriminate positively on gender, with a view to bringing the outcomes closer together. Whether you want to do this depends on your ethical goal in the model (avoiding discrimination vs. obtaining equal outcomes).
How do I check the absence of discrimination against gender?
If you are talking about actual discrimination, as opposed to mere disparities in outcome, this is easy to constrain and check. All you need to do is to formulate your model in such a way that it does not use gender (and inherent gender characteristics) as predictors. Computers cannot make decisions on the basis of characteristics that you do not input into their model, so if you have control over this it should be quite simple to check the absence of discrimination.
Things become a bit harder when you use machine-learning models that try to figure out the relevant characteristics themselves, without your input. Even in this case, it should be possible for you to program your model so that it excludes predictors that you specify to be removed (e.g., gender).
How do I correct my model for data that are statistically discriminant but I don't want to be for ethical reasons?
When you refer to "statistically discriminant" data, I assume that you just mean characteristics that are correlated with gender. If you don't want these other characteristics in the model then you should simply remove them as predictors. However, you should bear in mind that it is likely that many important characteristics will be correlated with gender. Any binary characteristic will be correlated with gender whenever the proportion of males with that characteristic is different from the proportion of females with that characteristic. (Of course, if those proportions are close you might find that the difference is not "statistically significant".) For more general variables the condition for non-zero correlation is also very weak. Thus, if you remove all characteristics that show evidence of non-zero correlation with gender, you will almost certainly remove a number of important predictors, and you will not have much left.
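The claim about binary characteristics can be checked directly. For two binary variables the Pearson correlation reduces to the phi coefficient, which is nonzero exactly when the proportion having the characteristic differs between the two groups. A small sketch with hypothetical counts:

```python
# Phi coefficient for two binary variables from a 2x2 table of counts.
def phi(joint):
    # joint[(g, c)]: count with gender g (0/1) and characteristic c (0/1)
    n11, n10 = joint[(1, 1)], joint[(1, 0)]
    n01, n00 = joint[(0, 1)], joint[(0, 0)]
    num = n11 * n00 - n10 * n01
    den = ((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00)) ** 0.5
    return num / den

# Proportions differ (60% of group 1 vs 40% of group 0 have the trait):
unequal = {(1, 1): 60, (1, 0): 40, (0, 1): 40, (0, 0): 60}
# Proportions equal (50% in both groups):
equal = {(1, 1): 50, (1, 0): 50, (0, 1): 50, (0, 0): 50}

print(round(phi(unequal), 2))  # 0.2 -- nonzero correlation
print(phi(equal))              # 0.0 -- zero correlation
```

So any 60/40-style imbalance in a binary trait already makes that trait a statistical proxy for gender.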
27,113 | Avoiding social discrimination in model building | This at most will be a partial answer (or no answer at all).
First thing to note is that I agree with @dsaxton completely: all models "discriminate" (at least in some definitions of discrimination) as that is their function. The issue is that models work on summaries and averages and they assign things based on averages. Single individuals are unique and might be completely off the prediction.
Example: consider a simple model that predicts the mentioned five star ranking based on one variable - age. For all people with the same age (say 30) it will produce the same output. However that is a generalisation. Not every person aged 30 will be the same. And if the model produces different ranks for different ages - it is already discriminating against people based on their age. Say it gives a rank of 3 for 50 year olds and a rank of 4 for 40 year olds. In reality there will be many 50 year olds that are better at what they do than 40 year olds. And they will be discriminated against.
Should I use the gender (or any data correlated to it) as an input and try to correct their effect, or avoid to use these data?
If you want the model to return the same outcome for otherwise equal men and women then you should not include gender in the model. Any data correlated to gender should probably be included. By excluding such covariates you can be making at least 2 types of errors: 1) assuming all men and women are equally distributed across all covariates; 2) if some of those gender-correlated covariates are both relevant to the rating and correlated with gender at the same time - you might vastly reduce the performance of your model by excluding them.
How do I check the absence of discrimination against gender?
Run the model on exactly the same data twice - one time using "male" and another time using "female". If this comes from a text document maybe some words could be substituted.
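This check is easy to automate: score every record twice, flipping only the gender field, and flag any record whose output changes. A sketch with two hypothetical models (illustrative formulas, not real raters), one of which leaks gender:

```python
# Counterfactual audit: does flipping only the gender field change the score?
def audit_gender_flip(model, records):
    """Return the records whose score changes when the gender field is flipped."""
    flagged = []
    for rec in records:
        flipped = dict(rec, gender=("F" if rec["gender"] == "M" else "M"))
        if model(rec) != model(flipped):
            flagged.append(rec)
    return flagged

def fair_model(r):
    return 0.5 * r["years"] + 0.3 * r["certs"]        # never reads gender

def leaky_model(r):
    return fair_model(r) + (0.4 if r["gender"] == "M" else 0.0)

records = [
    {"gender": "M", "years": 6, "certs": 2},
    {"gender": "F", "years": 3, "certs": 5},
]

print(len(audit_gender_flip(fair_model, records)))   # 0 -- passes the audit
print(len(audit_gender_flip(leaky_model, records)))  # 2 -- every record flagged
```

For text inputs, as the answer notes, the "flip" step would mean substituting gendered words, which is considerably harder to do cleanly.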
How do I correct my model for data that are statistically discriminant but I don't want to be for ethical reasons?
Depends on what you want to do. One brutal way to force equality between genders is to run the model on men applicants and women applicants separately. And then choose 50% from one group and 50% from another group.
Your prediction will most likely suffer - as it is unlikely the best set of applicants will include exactly half men and half women. But you would probably be OK ethically? Again, this depends on the ethics. I could see an ethical declaration where this type of practice would be illegal, as it would also discriminate based on gender, but in another way.
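That "brutal" procedure is just per-group top-k selection. A sketch with hypothetical candidates (and an assumed even k):

```python
# Per-group top-k selection: force a 50/50 split regardless of overall ranking.
def select_balanced(candidates, score, k):
    by_gender = {"M": [], "F": []}
    for c in candidates:
        by_gender[c["gender"]].append(c)
    picked = []
    for group in by_gender.values():
        group.sort(key=score, reverse=True)
        picked.extend(group[: k // 2])  # assumes k is even
    return picked

candidates = [  # hypothetical scored applicants
    {"name": "a", "gender": "M", "score": 4.9},
    {"name": "b", "gender": "M", "score": 4.8},
    {"name": "c", "gender": "M", "score": 4.7},
    {"name": "d", "gender": "F", "score": 4.6},
    {"name": "e", "gender": "F", "score": 3.9},
    {"name": "f", "gender": "F", "score": 3.1},
]

chosen = select_balanced(candidates, score=lambda c: c["score"], k=4)
print(sorted(c["gender"] for c in chosen))  # ['F', 'F', 'M', 'M']
```

Here the unconstrained top four would be a, b, c, d; enforcing parity drops c for e, which is exactly the predictive cost the answer describes.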
27,114 | Avoiding social discrimination in model building | What the Amazon story shows is that it is very hard to avoid the bias. I doubt that Amazon hired dumb people for this problem, or that they were lacking skills, or that they didn't have enough data, or that they didn't have enough AWS credits to train a better model. The problem was that complicated machine learning algorithms are very good at learning patterns in the data, and gender bias is exactly that kind of pattern. There was bias in the data, as the recruiters (consciously or not) favored male candidates. I'm not saying here that Amazon is a company that discriminates against job candidates; I'm sure they have thousands of anti-discriminatory policies and also hire pretty good recruiters. The problem with this kind of bias and prejudice is that it exists no matter how hard you try to fight it. There are tons of psychology experiments showing that people may declare themselves not to be biased (e.g. racist), but still take biased actions without even realizing it. But answering your question: to have an algorithm that is not biased, you would need to start with data that is free of this kind of bias. Machine learning algorithms learn to recognize and repeat the patterns they see in the data, so if your data records biased decisions, the algorithm will likely learn and amplify those biases.
Second thing is managing the data. If you want to prohibit your algorithm from learning to make biased decisions, you should remove all the information that would help it to discriminate between the groups of interest (gender in here). This does not mean removing only the information about gender, but also all the information that could lead to identifying gender, and this could be lots of things. There are obvious ones like name and photo, but also indirect ones, e.g. maternity leave in a resume, but also education (what if someone went to a girls-only school?), or even job history (say that recruiters in your company are not biased, but what if every other recruiter before was biased, so the work history reflects all those biased decisions?), etc. As you can see, identifying those issues may be pretty complicated (another reason why Amazon may have failed).
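A first, admittedly naive, pass at this is mechanical redaction of overtly gendered tokens. The blocklist below is a tiny illustrative assumption, not a real solution; as the paragraph notes, indirect signals such as school names and job-history patterns are much harder to scrub:

```python
import re

# Deliberately tiny, illustrative blocklist -- real redaction would need
# first names, titles, school names, and much more domain knowledge.
GENDERED = {"he", "she", "his", "her", "mr", "mrs", "ms", "maternity", "paternity"}

def redact(text):
    def repl(match):
        word = match.group(0)
        return "[REDACTED]" if word.lower() in GENDERED else word
    return re.sub(r"[A-Za-z]+", repl, text)

resume = "She took maternity leave in 2019; her previous role was lead engineer."
print(redact(resume))
```

Even a perfect blocklist leaves the leave itself (an employment gap) visible, which is exactly the kind of indirect identifier the answer warns about.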
As for questions 2 and 3, there are no easy answers and I do not feel competent enough to try answering them in detail. There is plenty of literature on both prejudice and bias in society, and about algorithmic bias. This is always complicated and there are, unfortunately, no simple recipes for this. Companies like Google hire experts whose role is identifying and preventing this kind of bias in algorithms.
27,115 | Avoiding social discrimination in model building | Should I use the gender (or any data correlated to it) as an input
and try to correct their effect, or avoid to use these data?
There are several implications of this question that boil down to the following: do I want to be a social engineer, an activist whose role is to change the status quo because I have decided that society is sick and requires therapy? The obvious answer to this depends on whether such a change is beneficial or harmful. For example, the answer to "What would we gain from gender equality for nursing staff?" might be that having at least one male nurse available for inserting urinary catheters in males would not require that as many as 50% of nurses be male. So, the social engineering approach examines different cultures, contexts and problems with known gender bias, and posits functional benefits to be had from alterations of the root cause(s) of that bias. This is an essential step in the decision-making process. Now, the answer to question 1. is a resounding no; that is, once one has decided that society needs fixing, one just adds a star, or fraction thereof (see below), to female applicants. But be very careful what you wish for, because this is affirmative action, which is itself inherently discriminatory. Any AI outcomes will change to reflect the new hiring norms, once those become established as a new functional norm.
How do I check the absence of discrimination against gender?
Simple enough: after ratings are assigned, one does a post hoc analysis to see what the distributions of ratings are for males and females and compares them.
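Such a comparison can be sketched in a few lines. The following Python illustration is my own (it assumes numpy and uses simulated ratings, not real data): it compares the mean rating of two groups with a simple permutation test.

```python
import numpy as np

def permutation_test(a, b, n_perm=2000, seed=0):
    """Two-sample permutation test for a difference in mean rating;
    returns an approximate two-sided p-value."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, float), np.asarray(b, float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        # How often does a random relabelling produce as large a gap?
        count += abs(pooled[:len(a)].mean() - pooled[len(a):].mean()) >= observed
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
male_ratings = rng.normal(3.6, 1.0, 200)    # hypothetical recruiter scores
female_ratings = rng.normal(3.0, 1.0, 200)  # systematically lower: biased
print(permutation_test(male_ratings, female_ratings))
```

A small p-value says the two rating distributions differ by more than chance alone would explain; it does not, by itself, establish the cause of the difference.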
How do I correct my model for data that are statistically
discriminant but I don't want to be for ethical reasons?
This is unavoidably done after the fact, i.e., post hoc. Forethought is also necessary, but the type of forethought most needed is a concerted attempt to examine critically what the social engineer's assumptions are. That is, assuming (for the sake of argument, see below) it to be sociologically justifiable to eliminate all gender bias, one merely adjusts the female ratings to follow the same empirical distribution as the males'. In the teaching business this would be called grading on a curve. Further, let us suppose that it may not be desirable to do a full elimination of gender bias (it may be too disruptive to do so); then one can do a partial elimination of bias, e.g., a pairwise weighted average of each native female rating and its fully corrected rating, with whatever weights one wishes to assign that are thought (or tested as being) least harmful and/or most beneficial.
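For concreteness, the "grading on a curve" adjustment and its partial version can be sketched as follows. This is a hypothetical Python illustration (numpy only, simulated ratings; nothing here endorses full correction as the right policy): each female rating is mapped to the male rating at the same empirical quantile, then blended with the raw rating using a weight w.

```python
import numpy as np

def quantile_map(values, reference):
    """Map each value onto the empirical distribution of `reference`
    by matching empirical quantiles (rank-based 'grading on a curve')."""
    values = np.asarray(values, dtype=float)
    reference = np.sort(np.asarray(reference, dtype=float))
    # Empirical quantile of each value within its own group
    ranks = np.argsort(np.argsort(values))
    q = (ranks + 0.5) / len(values)
    # The value at the same quantile of the reference group
    return np.quantile(reference, q)

def partial_correction(values, reference, w=0.5):
    """Weighted average of the raw rating and its fully mapped rating;
    w=1 removes the group difference entirely, w=0 leaves ratings unchanged."""
    return w * quantile_map(values, reference) + (1 - w) * np.asarray(values, float)

# Toy example: female ratings systematically lower than male ratings
rng = np.random.default_rng(0)
male = rng.normal(70, 10, 500)
female = rng.normal(60, 10, 500)

corrected = partial_correction(female, male, w=1.0)
print(round(female.mean(), 1), round(corrected.mean(), 1))
```

With w = 1 the corrected female ratings follow the male empirical distribution; intermediate w gives the partial elimination described above.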
Gender disparity cannot be altered properly by hiring policies alone, as in some fields there is a relative scarcity of women candidates. For example, in Poland, 14.3% of IT students were female in 2018, and in Australia 17%. Once hired, retention of women in tech-intensive industries has been problematic (women in business roles in tech-intensive industries leave for other industries at high rates: 53% of women, compared to 31% of men). Thus, female job satisfaction may be more important than hiring policy alone. One first needs to identify a tangible benefit for having any particular percentage of females in the workplace, and there are some hints about this; for example, in 2016, women on corporate boards (16%) were almost twice as likely as their male counterparts (9%) to have professional technology experience among 518 Forbes Global 2000 companies. Thus tech-savviness appears to contribute more to female than to male net worth. From this discussion, it should be obvious that before making gender-specific assumptions, a substantial effort should be directed toward identifying more global, concrete benefits of specific policies, of which hiring policy is only a small, albeit important, part, and probably not the most important starting point. The latter is plausibly the retention of hires, because turnover is bad for morale and may be the root cause of gender bias in hiring.
My management experience has taught me that even small changes in work output (e.g. 10-20%) are quite effective in eventually eliminating wait lists, that is, there is no need to immediately increase output 100% by doubling staff numbers as the effect of that will shorten the wait list only slightly faster than a smaller change will, but will then be disruptive as staff will subsequently be standing around hoping that work will walk in the door. That is, if one decides to do social engineering, it can be harmful to attempt a full correction; it doesn't work that way. Try that with an abrupt course correction in a sailboat, and one may wind up exercising one's swimming lessons. The equivalent for treating gender bias (if the prescription fits), would be to only hire females. That would solve the problem (and create others). So, my advice would be to gradually correct any perceived (better would be demonstrated) problem (e.g., female job retention), to subsequently reorient to see the effect of any policy change, and adjust as needed thereafter.
In summary, effectual social engineering requires a holistic approach to complicated situations, and merely identifying that there may be a problem does not tell us that there is one, does not tell us what causes it, and does not tell us how to correct it; indeed, all it tells us is that we have to put on our thinking caps.
27,116 | Why does an insignificant regressor become significant if I add some significant dummy variables? [duplicate] | What you have described is a classic example of the phenomenon "confounding." For the sake of argument, suppose you want to know what factors affect the price of a car, and the original model you fitted was:
$Price_i = \beta_0 + \beta_1 MPG^*_i + \beta_2 Weight_i + \beta_3 Length_i + \beta_4 GearRatio_i + \varepsilon_i$
*$MPG$ is how many miles per gallon the car gets
The regression results are as follows:
Source | SS df MS Number of obs = 74
-------------+------------------------------ F( 4, 69) = 10.93
Model | 246385405 4 61596351.2 Prob > F = 0.0000
Residual | 388679991 69 5633043.35 R-squared = 0.3880
-------------+------------------------------ Adj R-squared = 0.3525
Total | 635065396 73 8699525.97 Root MSE = 2373.4
------------------------------------------------------------------------------
price | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg | -90.8697 82.54167 -1.10 0.275 -255.5358 73.79643
weight | 5.330082 1.259779 4.23 0.000 2.816892 7.843272
length | -112.6501 39.26864 -2.87 0.005 -190.9889 -34.31134
gear_ratio | 1747.338 940.8806 1.86 0.068 -129.6674 3624.343
_cons | 7909.196 6803.245 1.16 0.249 -5662.907 21481.3
------------------------------------------------------------------------------
$Weight$ and $Length$ are significantly associated with price at the 5% level, whereas $GearRatio$ is significant at the 10% level. In this example I will use 10% as the significance level, as is often done in econometrics, instead of the customary 5% used in statistics/biostatistics.
Now suppose you realize that the country of origin of the car might have something to do with the price, so you enter "Country of origin" ($Country$)--a variable with 4 categories: 1. USA, 2. Japan, 3. Germany, and 4. France/Italy--into your model as dummy variables with "USA" as the reference/omitted category. The resulting model is as follows:
Source | SS df MS Number of obs = 74
-------------+------------------------------ F( 7, 66) = 7.05
Model | 271664993 7 38809284.6 Prob > F = 0.0000
Residual | 363400404 66 5506066.72 R-squared = 0.4278
-------------+------------------------------ Adj R-squared = 0.3671
Total | 635065396 73 8699525.97 Root MSE = 2346.5
---------------------------------------------------------------------------------
price | Coef. Std. Err. t P>|t| [95% Conf. Interval]
----------------+----------------------------------------------------------------
mpg | -43.63664 88.87729 -0.49 0.625 -221.0859 133.8126
weight | 5.627906 1.277128 4.41 0.000 3.078037 8.177775
length | -108.6306 40.96925 -2.65 0.010 -190.4283 -26.83285
gear_ratio | 1036.988 1011.416 1.03 0.309 -982.369 3056.344
|
country |
Germany | 1474.478 786.7092 1.87 0.065 -96.23774 3045.193
Japan | 1508.771 931.8605 1.62 0.110 -351.7485 3369.291
France/Italy | 1513.169 1660.423 0.91 0.365 -1801.972 4828.311
|
_cons | 6825.621 6936.845 0.98 0.329 -7024.236 20675.48
---------------------------------------------------------------------------------
When we added $Country$ into the model, $GearRatio$ was no longer significant at the 10% level, and $MPG$ became even less significant (p was 0.28 in the original model and became 0.63 after adding $Country$). We also note that the only significant category of $Country$ was $Germany$.
How do we interpret these results?
Recall that dummy variables are entered into the model as a set of $(N-1)$ dummy variables, where $N$ is the number of categories in the original variable. Recall also that dummies are interpreted relative to the excluded (reference) category. It is therefore normal for some dummy variables not to be significant in the model if the difference between that category and the reference category is not significant. In our example, German cars are on average USD 1,474.48 more expensive than American cars, whereas Japanese and French/Italian cars are both not significantly different from American cars in terms of $Price$. If you want to know whether the effect of the construct you entered as dummy variables is significant or not, you will need to do an F-test of the joint significance of your dummies, as the p-value given in the model only tells you whether the given category is different from the reference, not whether $Country$ as a whole is significantly associated with $Price$:
test Germany Japan FranceItaly
( 1) Germany = 0
( 2) Japan = 0
( 3) FranceItaly = 0
F( 3, 66) = 1.53
Prob > F = 0.2148
It turns out $Country$ as a whole is not a significant predictor of price (p=0.21), although German cars are significantly more expensive than American cars in this model.
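For readers without Stata, the same kind of joint test can be computed from first principles as an F-test comparing the restricted model (no dummies) with the full model. The Python sketch below uses simulated data (numpy only; the variable names mimic the example, but the numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 120
country = rng.integers(0, 4, size=n)      # 4 categories; category 0 (USA) is the reference
dummies = np.eye(4)[country][:, 1:]       # N - 1 = 3 dummy columns
weight = rng.normal(3000, 500, size=n)
price = 5.0 * weight + rng.normal(0, 2000, size=n)  # country has no true effect here

def ssr(y, X):
    """Sum of squared residuals and parameter count of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r, X.shape[1]

ssr_r, k_r = ssr(price, weight[:, None])                      # restricted: no dummies
ssr_f, k_f = ssr(price, np.column_stack([weight, dummies]))   # full: with dummies

q = k_f - k_r                                   # number of restrictions (3 dummies)
F = ((ssr_r - ssr_f) / q) / (ssr_f / (n - k_f))
print(F)  # compare with an F(q, n - k_f) critical value (roughly 2.7 at 5% here)
```

Since the restricted model is nested in the full model, the SSR can only fall when the dummies are added; the F-statistic asks whether it falls by more than chance would explain.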
We also noted that some variables that were significant ($GearRatio$) became non-significant after adding $Country$. This means that in the model where we omitted $Country$, the parameter estimate for $GearRatio$ "absorbed" the effect of $Country$. That is, $Country$ is significantly associated with $GearRatio$ and $Price$, and failing to control for $Country$ biased the parameter estimate of $GearRatio$, making it seem more significant than it really is. That is, the "significant" effect of $GearRatio$ on $Price$ we saw in the original model is actually reflecting the effect of $Country$ on $Price$. $GearRatio$, as it turns out, has nothing to do with the $Price$ of a car.
Of course, the reverse can be true too: You CAN have something that was not significant become significant after adding variables to the model. The logic behind it is the same. The originally-not-significant variable was significantly associated with the omitted variable and reflects the effect of the omitted variable in addition to its own effect (plus some other unobservables, which we will ignore for the sake of argument). When you add the omitted variable (the dummies) into the model, the originally-not-significant variable no longer captures the partial effect of the omitted variable but now reflects the "true" effect of that variable...which, it turns out, is significantly associated with the outcome.
(Data: Stata built-in dataset "1978 Automobile Data" from http://www.stata-press.com/data/r13/auto.dta)
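The mechanism is easy to reproduce in simulation. In the hypothetical Python sketch below (numpy only, synthetic data), a confounder z plays the role of $Country$ and a regressor x with no true effect plays the role of $GearRatio$:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Confounder z drives both the regressor x and the outcome y;
# x itself has NO true effect on y (its true coefficient is 0).
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 2.0 * z + rng.normal(size=n)

def ols(y, X):
    """OLS coefficients (intercept first), via least squares."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_short = ols(y, [x])      # omits z: x 'absorbs' z's effect
b_long = ols(y, [x, z])    # controls for z

print(b_short[1])  # clearly positive even though x has no true effect
print(b_long[1])   # near zero once z is controlled for
```

The short regression attributes z's effect to x; once z enters the model, x's coefficient collapses toward its true value of zero, which is the pattern we saw with $GearRatio$.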
27,117 | Why does an insignificant regressor become significant if I add some significant dummy variables? [duplicate] | There are two main reasons this can happen.
Adding a significant regressor - whether related to dummies or not - reduces the mean squared residual. By reducing the estimate of the error variance, other regressors may become more significant even if their parameter estimates hardly change, because the denominator of their t-statistics (equivalently, their partial F) is reduced. This one needn't involve any dependence between the regressors at all.
Adding a regressor can also change the numerator of the t-statistic by changing the parameter estimate, due to dependence between regressors; this can move coefficients either toward or away from zero, and it will also alter the denominator (so it's not as simple as just considering the numerator). Sometimes the overall effect can be to make a previously insignificant regressor significant.
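The first mechanism can be seen numerically. In the Python sketch below (numpy only, simulated data), adding a strong regressor that is independent of x1 leaves x1's coefficient essentially unchanged but shrinks its standard error:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x1 = rng.normal(size=n)                  # weak regressor of interest
x2 = rng.normal(size=n)                  # strong regressor, independent of x1
y = 0.25 * x1 + 3.0 * x2 + rng.normal(size=n)

def fit(y, X):
    """OLS with intercept: return (coefficients, standard errors)."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])          # residual variance estimate
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta, se

b1, se1 = fit(y, [x1])        # omits x2: 3 * x2 is buried in the error term
b2, se2 = fit(y, [x1, x2])    # includes x2: the residual variance drops sharply

print(b1[1] / se1[1], b2[1] / se2[1])   # t-statistic of x1 before and after
```

Here x1's coefficient barely moves, while its standard error shrinks by roughly the ratio of the two residual standard deviations; no dependence between the regressors is involved.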
27,118 | Finding outliers without assuming normal distribution | That's because such an algorithm can't exist. You require an assumed distribution in order to be able to classify something as lying outside the range of expected values.
Even if you do assume a normal distribution, declaring data points to be outliers is a fraught business. In general, you not only need a good estimate of the true distribution, which is often unavailable, but also a good theoretically supported reason for making your decision (e.g., the subject broke the experimental setup somehow). Such a judgement is usually impossible to codify in an algorithm.
27,119 | Finding outliers without assuming normal distribution | This does not directly answer your question, but you may learn something from looking at the outliers dataset in the TeachingDemos package for R and working through the examples on the help page. This may give you a better understanding of some of the issues with automatic outlier detection.
27,120 | Finding outliers without assuming normal distribution | R will spit out the outliers as in
dat <- c(6,8.5,-12,1,rnorm(40),-1,10,0)
boxplot(dat)$out
which will draw the boxplot and give
[1] 6.0 8.5 -12.0 10.0
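For comparison, roughly the same default rule (points beyond 1.5 × IQR from the quartiles) can be sketched in Python. Note that R's boxplot computes hinges slightly differently from np.percentile, so borderline points may differ:

```python
import numpy as np

def tukey_outliers(x, k=1.5):
    """Flag points beyond k * IQR from the quartiles (Tukey's boxplot rule,
    approximately what R's boxplot() uses for its whiskers by default)."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return x[(x < lo) | (x > hi)]

# Same shape of data as the R example: a few gross values among 40 normals
rng = np.random.default_rng(0)
dat = np.concatenate([[6, 8.5, -12, 1], rng.normal(size=40), [-1, 10, 0]])
print(tukey_outliers(dat))
```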
dat <- c(6,8.5,-12,1,rnorm(40),-1,10,0)
boxplot(dat)$out
which will draw the boxplot and give
[1] 6.0 8.5 -12.0 10.0 | Finding outliers without assuming normal distribution
R will spit out the outliers as in
dat <- c(6,8.5,-12,1,rnorm(40),-1,10,0)
boxplot(dat)$out
which will draw the boxplot and give
[1] 6.0 8.5 -12.0 10.0 | Finding outliers without assuming normal distribution
R will spit out the outliers as in
dat <- c(6,8.5,-12,1,rnorm(40),-1,10,0)
boxplot(dat)$out
which will draw the boxplot and give
[1] 6.0 8.5 -12.0 10.0 |
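The `boxplot(dat)$out` call above applies Tukey's rule: flag any value beyond 1.5 × IQR outside the quartiles. A minimal stdlib-only Python sketch of the same rule, for readers outside R (an assumed translation, not part of the original answer; the `rnorm(40)` noise is replaced by seeded Gaussian draws so the run is reproducible):

```python
import random
import statistics

def boxplot_outliers(x, k=1.5):
    """Values beyond the Tukey whiskers Q1 - k*IQR and Q3 + k*IQR."""
    q1, _, q3 = statistics.quantiles(x, n=4)  # quartile cut points
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in x if v < lo or v > hi]

random.seed(1)
noise = [random.gauss(0, 1) for _ in range(40)]  # stands in for rnorm(40)
dat = [6, 8.5, -12, 1] + noise + [-1, 10, 0]
print(sorted(boxplot_outliers(dat)))
```

Note that, like the boxplot itself, this is an informal flagging rule, not a significance test.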
27,121 | Finding outliers without assuming normal distribution | As others have said, you have stated the question poorly in terms of confidence. There are statistical tests for outliers like Grubbs' test and Dixon's ratio test that I have referred to on another post. They assume the population distribution is normal although Dixon's test is robust to the normality assumption in small samples. A boxplot is a nice informal way to spot outliers in your data. Usually the whiskers are set at the 5th and 95th percentile and observations plotted beyond the whiskers are usually considered to be possible outliers. However, this does not involve formal statistical testing. | Finding outliers without assuming normal distribution | As others have said, you have stated the question poorly in terms of confidence. There are statistical tests for outliers like Grubbs' test and Dixon's ratio test that I have referred to on another p | Finding outliers without assuming normal distribution
As others have said, you have stated the question poorly in terms of confidence. There are statistical tests for outliers like Grubbs' test and Dixon's ratio test that I have referred to on another post. They assume the population distribution is normal although Dixon's test is robust to the normality assumption in small samples. A boxplot is a nice informal way to spot outliers in your data. Usually the whiskers are set at the 5th and 95th percentile and observations plotted beyond the whiskers are usually considered to be possible outliers. However, this does not involve formal statistical testing. | Finding outliers without assuming normal distribution
As others have said, you have stated the question poorly in terms of confidence. There are statistical tests for outliers like Grubbs' test and Dixon's ratio test that I have referred to on another p
27,122 | Why is AIC not reported with a confidence interval? | AIC estimates $-2n \ \times$ the expected likelihood on a new, unseen data point from the data generating process (DGP) that generated your sample.* Even though the target (the estimand) is not a parameter, it is a meaningful quantity. E.g. it may be interpreted as the expected loss of a point prediction. It is quite natural to wish for a confidence interval around the point estimate (the AIC). This way we can tell not only what the expected loss is but also how uncertain it is. In summary, while I do not have a ready answer for how to obtain the confidence interval and under what conditions your idea of bootstrapping may work, I clearly do see a point in pursuing it.
*See How can we select the best GARCH model by carrying out likelihood ratio test?, Can results for model selection with AIC be interpretable at the population level?, Using AIC/BIC within cross-validation for likelihood based loss functions among other threads where this idea is employed. | Why is AIC not reported with a confidence interval? | AIC estimates $-2n \ \times$ the expected likelihood on a new, unseen data point from the data generating process (DGP) that generated your sample.* Even though the target (the estimand) is not a para | Why is AIC not reported with a confidence interval?
AIC estimates $-2n \ \times$ the expected likelihood on a new, unseen data point from the data generating process (DGP) that generated your sample.* Even though the target (the estimand) is not a parameter, it is a meaningful quantity. E.g. it may be interpreted as the expected loss of a point prediction. It is quite natural to wish for a confidence interval around the point estimate (the AIC). This way we can tell not only what the expected loss is but also how uncertain it is. In summary, while I do not have a ready answer for how to obtain the confidence interval and under what conditions your idea of bootstrapping may work, I clearly do see a point in pursuing it.
*See How can we select the best GARCH model by carrying out likelihood ratio test?, Can results for model selection with AIC be interpretable at the population level?, Using AIC/BIC within cross-validation for likelihood based loss functions among other threads where this idea is employed. | Why is AIC not reported with a confidence interval?
AIC estimates $-2n \ \times$ the expected likelihood on a new, unseen data point from the data generating process (DGP) that generated your sample.* Even though the target (the estimand) is not a para |
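The answer above leaves open whether bootstrapping the AIC can work. As a purely illustrative stdlib-Python sketch of the idea (not from the original answer; simplest possible model, i.i.d. normal with k = 2 fitted parameters, and no claim that this interval has valid coverage):

```python
import math
import random
import statistics

def normal_aic(x):
    """AIC of an i.i.d. normal model with MLE mean/variance (k = 2)."""
    n = len(x)
    mu = statistics.fmean(x)
    var = sum((v - mu) ** 2 for v in x) / n           # MLE variance
    loglik = -0.5 * n * (math.log(2 * math.pi * var) + 1)
    return 2 * 2 - 2 * loglik                         # AIC = 2k - 2 log L

random.seed(42)
sample = [random.gauss(10, 2) for _ in range(100)]
point = normal_aic(sample)

# Percentile bootstrap: refit on resamples and take the middle 95%.
boot = sorted(normal_aic(random.choices(sample, k=len(sample)))
              for _ in range(999))
lo, hi = boot[24], boot[974]
print(f"AIC = {point:.1f}, bootstrap 95% interval = ({lo:.1f}, {hi:.1f})")
```

The percentile interval only describes the resampling variability of the AIC statistic; whether it is a valid confidence interval for the estimand discussed above is exactly the open question.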
27,123 | Why is AIC not reported with a confidence interval? | The AIC is not an estimator of a true parameter. It is a data-dependent measurement of the model fit. The model fit is what it is, there is no model fit that is any "truer" than the one you have, because it's the one you have that is measured. But without any true parameter for which the AIC would be an estimator, one cannot have a confidence interval (CI).
I'm by the way not disputing the answer by Richard Hardy. The AIC, as some other quantities such as $R^2$, can be interpreted as estimating something "true but unobservable", in which case one can argue that a CI makes sense. Personally I find the interpretation as measuring fit quality more intuitive and direct, for which one wouldn't have a CI for the reasons above, but I'm not saying that there is no way for it to be well defined and of some use.
Edit: As a response to the addition in the question: "I don't mean to imply that AIC is the same as parameter estimation. I'm asking why we treat goodness-of-fit estimates (AIC, BIC, etc.) differently from estimates that are reported with a CI." - The definition of a CI relies on a parameter being estimated. It says that given the true parameter value the CI catches this value with probability $(1-\alpha)$. As long as you're not interested in that true parameter value, a CI is meaningless. | Why is AIC not reported with a confidence interval? | The AIC is not an estimator of a true parameter. It is a data-dependent measurement of the model fit. The model fit is what it is, there is no model fit that is any "truer" than the one you have, beca | Why is AIC not reported with a confidence interval?
The AIC is not an estimator of a true parameter. It is a data-dependent measurement of the model fit. The model fit is what it is, there is no model fit that is any "truer" than the one you have, because it's the one you have that is measured. But without any true parameter for which the AIC would be an estimator, one cannot have a confidence interval (CI).
I'm by the way not disputing the answer by Richard Hardy. The AIC, as some other quantities such as $R^2$, can be interpreted as estimating something "true but unobservable", in which case one can argue that a CI makes sense. Personally I find the interpretation as measuring fit quality more intuitive and direct, for which one wouldn't have a CI for the reasons above, but I'm not saying that there is no way for it to be well defined and of some use.
Edit: As a response to the addition in the question: "I don't mean to imply that AIC is the same as parameter estimation. I'm asking why we treat goodness-of-fit estimates (AIC, BIC, etc.) differently from estimates that are reported with a CI." - The definition of a CI relies on a parameter being estimated. It says that given the true parameter value the CI catches this value with probability $(1-\alpha)$. As long as you're not interested in that true parameter value, a CI is meaningless. | Why is AIC not reported with a confidence interval?
The AIC is not an estimator of a true parameter. It is a data-dependent measurement of the model fit. The model fit is what it is, there is no model fit that is any "truer" than the one you have, beca |
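The CI definition invoked in the last paragraph — the interval catches the true parameter value with probability $(1-\alpha)$ — is easy to check by simulation when a true parameter does exist. A small stdlib-Python sketch for a normal mean with known variance (synthetic data, illustration only):

```python
import random
from statistics import NormalDist, fmean

random.seed(11)
z = NormalDist().inv_cdf(0.975)   # two-sided 95% normal quantile, ~1.96
MU, SIGMA, N = 3.0, 1.0, 50

def interval_covers():
    """One experiment: does the 95% CI for the mean catch MU?"""
    xs = [random.gauss(MU, SIGMA) for _ in range(N)]
    m = fmean(xs)
    half = z * SIGMA / N ** 0.5   # sigma treated as known
    return m - half <= MU <= m + half

coverage = sum(interval_covers() for _ in range(5000)) / 5000
print(coverage)  # close to the nominal 0.95
```

Without a true parameter playing the role of `MU`, there is nothing for the interval to cover — which is the point of the answer.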
27,124 | Tools for modeling financial time series | I recommend R (see the time series view on CRAN).
Some useful references:
Econometrics in R, by Grant Farnsworth
Multivariate time series modelling in R | Tools for modeling financial time series | I recommend R (see the time series view on CRAN).
Some useful references:
Econometrics in R, by Grant Farnsworth
Multivariate time series modelling in R | Tools for modeling financial time series
I recommend R (see the time series view on CRAN).
Some useful references:
Econometrics in R, by Grant Farnsworth
Multivariate time series modelling in R | Tools for modeling financial time series
I recommend R (see the time series view on CRAN).
Some useful references:
Econometrics in R, by Grant Farnsworth
Multivariate time series modelling in R |
27,125 | Tools for modeling financial time series | R is great, but I wouldn't really call it "windows based" :) That's like saying the cmd prompt is windows based. I guess it is technically in a window...
RapidMiner is far easier to use [1]. It's a free, open-source, multi-platform GUI. Here's a video on time series forecasting:
Financial Time Series Modelling - Part 1
Also, don't forget to read:
Forecasting Methods and Principles
[1] No, I don't work for them. | Tools for modeling financial time series | R is great, but I wouldn't really call it "windows based" :) That's like saying the cmd prompt is windows based. I guess it is technically in a window...
RapidMiner is far easier to use [1]. It's a fr | Tools for modeling financial time series
R is great, but I wouldn't really call it "windows based" :) That's like saying the cmd prompt is windows based. I guess it is technically in a window...
RapidMiner is far easier to use [1]. It's a free, open-source, multi-platform GUI. Here's a video on time series forecasting:
Financial Time Series Modelling - Part 1
Also, don't forget to read:
Forecasting Methods and Principles
[1] No, I don't work for them. | Tools for modeling financial time series
R is great, but I wouldn't really call it "windows based" :) That's like saying the cmd prompt is windows based. I guess it is technically in a window...
RapidMiner is far easier to use [1]. It's a fr |
27,126 | Tools for modeling financial time series | I really like to work with R, because in the end you will find almost anything, and you have a very good support with the mailing lists. The downside of R is that helpful bits which fit your specific problems might be spread over a large range of packages, and you might not always be able to find them. Another point may be a lock-in, with that I mean that after a time learning R, you will probably be unmotivated to relearn another software, but this will happen in any system.
With regard to Matlab being expensive - if on a budget, Octave will work just as well, at least it did for the things I needed to do with it, which were rather basic. | Tools for modeling financial time series | I really like to work with R, because in the end you will find almost anything, and you have a very good support with the mailing lists. The downside of R is that helpful bits which fit your specific | Tools for modeling financial time series
I really like to work with R, because in the end you will find almost anything, and you have a very good support with the mailing lists. The downside of R is that helpful bits which fit your specific problems might be spread over a large range of packages, and you might not always be able to find them. Another point may be lock-in; by that I mean that after a time learning R, you will probably be unmotivated to relearn other software, but this will happen in any system.
With regard to Matlab being expensive - if on a budget, Octave will work just as well, at least it did for the things I needed to do with it, which were rather basic. | Tools for modeling financial time series
I really like to work with R, because in the end you will find almost anything, and you have a very good support with the mailing lists. The downside of R is that helpful bits which fit your specific |
27,127 | Tools for modeling financial time series | I'm new here, and perhaps "financial time series" has a specific definition... But given that I don't know it, my question for you would be what you mean: quarterly/monthly economic data, daily market prices, hourly or higher-frequency data, etc? And by "modeling", do you mean working with textbook ARIMA/ARCH solutions, or things a bit more exotic (such as dynamic linear systems), or exotic/custom experimentation?
R is flexible and free, though less GUI-fied than most. It also has packages covering everything from daily stock prices to dynamic linear systems and optimization packages. (In fact, the hard part will be deciding which time series and which financial packages to use.)
GRETL is free and has a reasonable GUI, though it's econometric, not really daily market oriented. I've heard of Oxmetrics, which appears to have a very complete every-possible-variant-of-ARCH package available for it. If you're talking monthly/quarterly economic data, you could also use X12-ARIMA, which is a benchmark of sorts.
I've used all kinds of GUIs for programming/processing data, but for some reason RapidMiner's never really clicked with me. Something strange about its workflow that I've just never gotten. | Tools for modeling financial time series | I'm new here, and perhaps "financial time series" has a specific definition... But given that I don't know it, my question for you would be what you mean: quarterly/monthly economic data, daily market | Tools for modeling financial time series
I'm new here, and perhaps "financial time series" has a specific definition... But given that I don't know it, my question for you would be what you mean: quarterly/monthly economic data, daily market prices, hourly or higher-frequency data, etc? And by "modeling", do you mean working with textbook ARIMA/ARCH solutions, or things a bit more exotic (such as dynamic linear systems), or exotic/custom experimentation?
R is flexible and free, though less GUI-fied than most. It also has packages covering everything from daily stock prices to dynamic linear systems and optimization packages. (In fact, the hard part will be deciding which time series and which financial packages to use.)
GRETL is free and has a reasonable GUI, though it's econometric, not really daily market oriented. I've heard of Oxmetrics, which appears to have a very complete every-possible-variant-of-ARCH package available for it. If you're talking monthly/quarterly economic data, you could also use X12-ARIMA, which is a benchmark of sorts.
I've used all kinds of GUIs for programming/processing data, but for some reason RapidMiner's never really clicked with me. Something strange about its workflow that I've just never gotten. | Tools for modeling financial time series
I'm new here, and perhaps "financial time series" has a specific definition... But given that I don't know it, my question for you would be what you mean: quarterly/monthly economic data, daily market |
27,128 | Tools for modeling financial time series | While not exactly cheap, MATLAB is widely used in the financial industry for time series modelling: http://www.mathworks.com | Tools for modeling financial time series | While not exactly cheap, MATLAB is widely used in the financial industry for time series modelling: http://www.mathworks.com | Tools for modeling financial time series
While not exactly cheap, MATLAB is widely used in the financial industry for time series modelling: http://www.mathworks.com | Tools for modeling financial time series
While not exactly cheap, MATLAB is widely used in the financial industry for time series modelling: http://www.mathworks.com |
27,129 | Tools for modeling financial time series | Clearly R
RapidMiner is nice, but switching to thinking in terms of operators takes a moment
Matlab / Octave
If you describe a specific problem, I may be able to get more specific. | Tools for modeling financial time series | Clearly R
RapidMiner is nice, but switching to thinking in terms of operators takes a moment
Matlab / Octave
If you describe a specific problem, I may be able to get more specific. | Tools for modeling financial time series
Clearly R
RapidMiner is nice, but switching to thinking in terms of operators takes a moment
Matlab / Octave
If you describe a specific problem, I may be able to get more specific. | Tools for modeling financial time series
Clearly R
RapidMiner is nice, but switching to thinking in terms of operators takes a moment
Matlab / Octave
If you describe a specific problem, I may be able to get more specific. |
27,130 | Tools for modeling financial time series | At my university, Stata is taught as a programme to do statistical analysis for finance. You can use outreg for example to format tables for publications in financial papers very easily. Programming syntax is not really great I think, you have to declare functions with `variable' for example which is a quirk in my opinion. Amount of different statistical functions however is very vast. | Tools for modeling financial time series | At my university, Stata is taught as a programme to do statistical analysis for finance. You can use outreg for example to format tables for publications in financial papers very easily. Programming s | Tools for modeling financial time series
At my university, Stata is taught as a programme to do statistical analysis for finance. You can use outreg for example to format tables for publications in financial papers very easily. Programming syntax is not really great, I think; you have to declare functions with `variable' for example, which is a quirk in my opinion. The range of statistical functions, however, is vast. | Tools for modeling financial time series
At my university, Stata is taught as a programme to do statistical analysis for finance. You can use outreg for example to format tables for publications in financial papers very easily. Programming s |
27,131 | Tools for modeling financial time series | Probably not exactly what you are looking for, but you may check SwiftForecast. It allows you to forecast a time series in an automatic way, without the need of any software. It is quite new, but I find the idea of a "Google style" predictor quite interesting... | Tools for modeling financial time series | Probably not exactly what you are looking for, but you may check SwiftForecast. It allows you to forecast a time series in an automatic way, without the need of any software. It is quite new, but I fi | Tools for modeling financial time series
Probably not exactly what you are looking for, but you may check SwiftForecast. It allows you to forecast a time series in an automatic way, without the need of any software. It is quite new, but I find the idea of a "Google style" predictor quite interesting... | Tools for modeling financial time series
Probably not exactly what you are looking for, but you may check SwiftForecast. It allows you to forecast a time series in an automatic way, without the need of any software. It is quite new, but I fi |
27,132 | Tools for modeling financial time series | You might want to consider using LDT. It is free and while it provides automatic forecasting with stationary vector autoregressive (VAR) models, you can benefit form other types of analysis.
PS: I am the developer of this software. | Tools for modeling financial time series | You might want to consider using LDT. It is free, and while it provides automatic forecasting with stationary vector autoregressive (VAR) models, you can benefit from other types of analysis.
PS: I am | Tools for modeling financial time series
You might want to consider using LDT. It is free, and while it provides automatic forecasting with stationary vector autoregressive (VAR) models, you can benefit from other types of analysis. | Tools for modeling financial time series
PS: I am the developer of this software. | Tools for modeling financial time series
You might want to consider using LDT. It is free, and while it provides automatic forecasting with stationary vector autoregressive (VAR) models, you can benefit from
PS: I am |
27,133 | In what applications do we prefer Model Selection over Model Averaging? | Model specification is a better approach than either model selection (by which most people refer really to feature selection) or model averaging. Here are some pros and cons.
model specification requires much more up-front thinking with regard to nonlinearities, non-additivities (interactions), and model components to penalize but this thinking pays off in downstream computation and interpretation and results in accurate estimates of uncertainty. One example of model specification would be inclusion of a sensible number of nonlinear basis functions to allow for flexible modeling without assuming linearity, and specifying skeptical prior distributions for non-additive effects in a Bayesian model (or parameters to penalize in a frequentist setting, guided by cross-validation to get penalized maximum likelihood estimation)
model (feature) selection does not work as advertised as you will almost always be disappointed to find that the "found" features do not replicate in future samples, collinearities almost destroy the ability to select, and predictive performance is not as good as fitting full models with appropriate penalization (e.g., ridge regression typically predicts better than lasso or elastic net). Feature selection results in an example model not the model.
model averaging accomplishes the same thing as a carefully pre-specified very flexible single model, but with a lot more work and difficulty in interpretation. That is, if the domain of models being averaged are from the same family. Sometimes model averaging over different model families can take care of model family uncertainty though. For example you may be unsure of the link function to use when Y is binary and you are developing a probability model. Or in a time-to-event analysis you might ponder a proportional hazards model vs. an accelerated failure time model. But judicious use of extra parameters in any one of these models should also be considered. For example I can estimate risk ratios accurately from an odds ratio-based logistic regression model if I include approximately correct interactions in the logistic model. Examples of this may be found at fharrell.com. | In what applications do we prefer Model Selection over Model Averaging? | Model specification is a better approach than either model selection (by which most people refer really to feature selection) or model averaging. Here are some pros and cons.
model specification req | In what applications do we prefer Model Selection over Model Averaging?
Model specification is a better approach than either model selection (by which most people refer really to feature selection) or model averaging. Here are some pros and cons.
model specification requires much more up-front thinking with regard to nonlinearities, non-additivities (interactions), and model components to penalize but this thinking pays off in downstream computation and interpretation and results in accurate estimates of uncertainty. One example of model specification would be inclusion of a sensible number of nonlinear basis functions to allow for flexible modeling without assuming linearity, and specifying skeptical prior distributions for non-additive effects in a Bayesian model (or parameters to penalize in a frequentist setting, guided by cross-validation to get penalized maximum likelihood estimation)
model (feature) selection does not work as advertised as you will almost always be disappointed to find that the "found" features do not replicate in future samples, collinearities almost destroy the ability to select, and predictive performance is not as good as fitting full models with appropriate penalization (e.g., ridge regression typically predicts better than lasso or elastic net). Feature selection results in an example model not the model.
model averaging accomplishes the same thing as a carefully pre-specified very flexible single model, but with a lot more work and difficulty in interpretation. That is, if the domain of models being averaged are from the same family. Sometimes model averaging over different model families can take care of model family uncertainty though. For example you may be unsure of the link function to use when Y is binary and you are developing a probability model. Or in a time-to-event analysis you might ponder a proportional hazards model vs. an accelerated failure time model. But judicious use of extra parameters in any one of these models should also be considered. For example I can estimate risk ratios accurately from an odds ratio-based logistic regression model if I include approximately correct interactions in the logistic model. Examples of this may be found at fharrell.com. | In what applications do we prefer Model Selection over Model Averaging?
Model specification is a better approach than either model selection (by which most people refer really to feature selection) or model averaging. Here are some pros and cons.
model specification req |
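The penalization mentioned in the bullets can be illustrated in miniature. For a no-intercept simple regression, the ridge estimate has the closed form $\hat\beta(\lambda) = \sum x_i y_i / (\sum x_i^2 + \lambda)$, shrinking the least-squares estimate toward zero as the penalty grows. A stdlib-Python sketch on synthetic data (this shows only the shrinkage mechanics, not the full spline-plus-prior specification the answer advocates):

```python
import random

def ridge_slope(x, y, lam):
    """Closed-form ridge estimate for y ~ beta * x (no intercept):
    beta(lam) = sum(x*y) / (sum(x^2) + lam)."""
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    return sxy / (sxx + lam)

random.seed(0)
x = [random.gauss(0, 1) for _ in range(200)]
y = [2.0 * a + random.gauss(0, 1) for a in x]   # true slope is 2

# lam = 0 recovers least squares; larger penalties shrink the estimate.
for lam in (0.0, 10.0, 100.0, 1000.0):
    print(lam, round(ridge_slope(x, y, lam), 3))
```

In practice the penalty would be chosen by cross-validation, as the first bullet suggests.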
27,134 | In what applications do we prefer Model Selection over Model Averaging? | Single models are much easier to interpret than ensembles. They are therefore often more easily accepted by non-technical users, and are easier to troubleshoot (e.g., when a prediction is way off).
Averaged models are much harder to apply null hypothesis significance testing to. This is a very hard criterion in many domains where publishability hinges on p values. | In what applications do we prefer Model Selection over Model Averaging? | Single models are much easier to interpret than ensembles. They are therefore often more easily accepted by non-technical users, and are easier to troubleshoot (e.g., when a prediction is way off).
Av | In what applications do we prefer Model Selection over Model Averaging?
Single models are much easier to interpret than ensembles. They are therefore often more easily accepted by non-technical users, and are easier to troubleshoot (e.g., when a prediction is way off).
Averaged models are much harder to apply null hypothesis significance testing to. This is a very hard criterion in many domains where publishability hinges on p values. | In what applications do we prefer Model Selection over Model Averaging?
Single models are much easier to interpret than ensembles. They are therefore often more easily accepted by non-technical users, and are easier to troubleshoot (e.g., when a prediction is way off).
Av |
27,135 | In what applications do we prefer Model Selection over Model Averaging? | Model averaging is more practical in applications where you don’t have resources to select a model and no need to explain the results in detail. Model selection is much more than comparing AICs, because if that’s all you do to select the models then don’t bother, just average.
A proper selection is a very time-consuming, expensive process. You can get a PhD working for months to come up with a parsimonious specification, that’s hundreds of thousands $$$, plus ongoing maintenance and opportunity cost of time to market for your product. There are situations when this can be afforded. An example is financial risk management where the cost of error can be existential to the firm, plus you have the regulators scrutinizing the models who also want you to explain the results in minute detail. A counterexample is marketing and sales, where instead of meticulous model selection you may be better off with data mining and experimentation. | In what applications do we prefer Model Selection over Model Averaging? | Model averaging is more practical in applications where you don’t have resources to select a model and no need to explain the results in detail. Model selection is much more than comparing AICs, beca | In what applications do we prefer Model Selection over Model Averaging?
Model averaging is more practical in applications where you don’t have resources to select a model and no need to explain the results in detail. Model selection is much more than comparing AICs, because if that’s all you do to select the models then don’t bother, just average.
A proper selection is a very time-consuming, expensive process. You can get a PhD working for months to come up with a parsimonious specification, that’s hundreds of thousands $$$, plus ongoing maintenance and opportunity cost of time to market for your product. There are situations when this can be afforded. An example is financial risk management where the cost of error can be existential to the firm, plus you have the regulators scrutinizing the models who also want you to explain the results in minute detail. A counterexample is marketing and sales, where instead of meticulous model selection you may be better off with data mining and experimentation. | In what applications do we prefer Model Selection over Model Averaging?
Model averaging is more practical in applications where you don’t have resources to select a model and no need to explain the results in detail. Model selection is much more than comparing AICs, beca
27,136 | In what applications do we prefer Model Selection over Model Averaging? | (This is a very rough thought still, please take it with a grain of salt)
Averaging is a viable strategy against random uncertainty, i.e. variance. OTOH, if the dominating problem of the situation is bias, a selection approach may work.
In practice, the situation may be even more blurred by the additional random uncertainty/variance that stems from judging the "health" of the model with only a limited number of independent cases (for the data I work with, number of independent cases is often << number of rows in the data).
I'd therefore make the argument the other way round compared to your example (and rather go in the specification direction Frank Harrell brought to your attention): selection (for low bias) has the prerequisite that variance uncertainty must be low. | In what applications do we prefer Model Selection over Model Averaging? | (This is a very rough thought still, please take it with a grain of salt)
Averaging is a viable strategy against random uncertainty, i.e. variance. OTOH, if the dominating problem of the situation is | In what applications do we prefer Model Selection over Model Averaging?
(This is a very rough thought still, please take it with a grain of salt)
Averaging is a viable strategy against random uncertainty, i.e. variance. OTOH, if the dominating problem of the situation is bias, a selection approach may work.
In practice, the situation may be even more blurred by the additional random uncertainty/variance that stems from judging the "health" of the model with only a limited number of independent cases (for the data I work with, number of independent cases is often << number of rows in the data).
I'd therefore make the argument the other way round compared to your example (and rather go in the specification direction Frank Harrell brought to your attention): selection (for low bias) has the prerequisite that variance uncertainty must be low. | In what applications do we prefer Model Selection over Model Averaging?
(This is a very rough thought still, please take it with a grain of salt)
Averaging is a viable strategy against random uncertainty, i.e. variance. OTOH, if the dominating problem of the situation is |
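The claim that averaging targets variance can be checked with a toy simulation: averaging m independent, unbiased but noisy estimates of the same quantity cuts the standard deviation by roughly $\sqrt{m}$, while doing nothing about a shared bias. A stdlib-Python sketch (synthetic estimates standing in for models):

```python
import random
import statistics

random.seed(7)
TRUE = 5.0

def noisy_estimate():
    """One unbiased but high-variance 'model' estimate of TRUE."""
    return TRUE + random.gauss(0, 1)

single = [noisy_estimate() for _ in range(2000)]
averaged = [statistics.fmean(noisy_estimate() for _ in range(10))
            for _ in range(2000)]

sd_single = statistics.pstdev(single)
sd_averaged = statistics.pstdev(averaged)
print(round(sd_single, 2), round(sd_averaged, 2))  # roughly 1.0 vs 0.32
```

If every `noisy_estimate` shared the same systematic offset, the averaged estimates would keep that offset in full — the bias case where selection (or respecification) is needed instead.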
27,137 | What is it called when an experimenter discards results that are too unexpected? | Another answer has mentioned publication bias. However, that is not really what you were asking about, which is data dredging. A pertinent XKCD illustration is: | What is it called when an experimenter discards results that are too unexpected? | Another answer has mentioned publication bias. However, that is not really what you were asking about, which is data dredging. A pertinent XKCD illustration is: | What is it called when an experimenter discards results that are too unexpected?
27,138 | What is it called when an experimenter discards results that are too unexpected?
One way of framing this is as publication bias, which occurs when the outcome of an experiment influences the decision of whether or not to publish the result. This is a well-known form of bias that infects academic research. I'm not familiar with any "famous" examples, but there are a few works in the medical field that describe some non-famous examples, e.g. Wilmherst (2007).
Examples of publication bias are inherently difficult to detect, since the non-published parts of the example are non-published (and therefore difficult to detect). Generally speaking, publication bias is detected through statistical analysis of reported metrics in published works. Consequently, most of the known "examples" of publication bias in academic literature are inferences of publication bias coming solely from the published works.
27,139 | What is it called when an experimenter discards results that are too unexpected?
Example: Based on a real experiment, names of people and the organization (along with inconsequential details) are omitted to protect the guilty.
In a study comparing two methods (1 and 2) of manufacture, $n=100$ items were tested until failure. (Larger observed values are better.) Summary statistics
for results x1 and x2 of the samples were as below:
summary(x1); length(x1); sd(x1)
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.1099 2.8264 7.0881 10.0057 12.8520 46.9993
[1] 100
[1] 10.35345
summary(x2); length(x2); sd(x2)
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.1196 3.2247 8.0975 11.1469 15.9245 56.6384
[1] 100
[1] 10.54756
boxplot(x1, x2, col="skyblue2", horizontal=T, notch=T)
Everyone's favorite was Method 2 (even though more costly), and it had the larger mean.
But the overlapping notches in the boxes suggest no significant difference.
Also, a pooled 2-sample t.test, which "must be OK" because of the large sample
sizes, finds no significant difference. [This was before
Welch t tests became popular.] Experimenters were hoping for
evidence that Method 2 was significantly better.
t.test(x1,x2, var.eq=T)
Two Sample t-test
data: x1 and x2
t = -0.77212, df = 198, p-value = 0.441
alternative hypothesis:
true difference in means is not equal to 0
95 percent confidence interval:
-4.055797 1.773441
sample estimates:
mean of x mean of y
10.00571 11.14689
The consensus was that the "outliers were messing up the t test"
and should be removed. [No one seemed to notice that new outliers appeared once the original ones were removed.]
min(boxplot.stats(x1)$out)
[1] 28.41372
y1 = x1[x1 < 28.4]
min(boxplot.stats(x2)$out)
[1] 36.73661
y2 = x2[x2 < 36.7]
boxplot(y1,y2, col="skyblue2", horizontal=T, notch=T)
Now with the "cleaned-up data" y1 and y2, we have a
t test
significant (just) below 5% level. Great joy, the favorite
won out.
t.test(y1, y2, var.eq=T)
Two Sample t-test
data: y1 and y2
t = -1.9863, df = 186, p-value = 0.04847
alternative hypothesis:
true difference in means is not equal to 0
95 percent confidence interval:
-4.37097702 -0.01493265
sample estimates:
mean of x mean of y
7.660631 9.853586
To 'confirm they got it right', a one-sided ("because we already know which method is best") two-sample Wilcoxon
test finds a difference at very nearly the 5% level (but "nonparametric tests are not as powerful"):
wilcox.test(y1, y2, alt="less")$p.val
[1] 0.05310917
Some years later when an economic crunch forced switching
to cheaper Method 1, it became obvious that there was
no practical difference between methods. In keeping with
that revelation, I sampled the data for the current example
in R as below:
set.seed(2021)
x1 = rexp(100, .1)
x2 = rexp(100, .1)
Note: You can Google and find an exact F-test to compare
exponential samples, and it finds no difference, but nobody
thought to use it at the time.
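A sketch of why that exact test works (in Python rather than the answer's R, purely so it is self-contained; the constants mirror the example): under the null of equal rates, the sum of n iid Exponential(rate) values is Gamma(n, rate), so 2*rate*sum is chi-square with 2n degrees of freedom, and the ratio of the two sample means follows an F(2n, 2n) distribution. A Monte Carlo check of that claim:

```python
import random
import statistics

# Under H0 (equal exponential rates), mean(x1)/mean(x2) for two
# size-n samples is F(2n, 2n) distributed; the mean of F(200, 200)
# is d2/(d2 - 2) = 200/198, about 1.0101.
random.seed(7)
n, reps, rate = 100, 5000, 0.1
ratios = []
for _ in range(reps):
    m1 = statistics.fmean(random.expovariate(rate) for _ in range(n))
    m2 = statistics.fmean(random.expovariate(rate) for _ in range(n))
    ratios.append(m1 / m2)

print(statistics.fmean(ratios))  # close to 200/198
```

Comparing the observed ratio of means against F(2n, 2n) quantiles gives the exact test alluded to above.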
27,140 | Definition of sample space
The basic intuition is that:
$\Omega$ is the set of outcomes that can happen.
$\mathcal S$, a $\sigma$-field of subsets of $\Omega$, represents what information is available. It represents what outcomes can be distinguished from each other. It is the set of events where an event is itself a set of outcomes.
You may not be able to tell certain outcomes apart, and these outcomes may be combined into a single event. This structure becomes especially useful when thinking about the arrival of new information over time. It's a useful mathematical structure for dealing with different information.
Example:
Let:
$ww$ denote the outcome where the Cubs win the first two games of the World Series
$wl$ denote the outcome where they win the first game but lose the
second
$lw$ denote the outcome where they lose the first game but win the
second
$ll$ denote the outcome where the Cubs lose both games
The sample space for the first two games is given by:
$$\Omega = \{ww, wl, lw, ll\}$$
Before any games are played:
One possible $\sigma$-field is given by:
$$ S_0 = \left\{ \emptyset, \left\{ww, wl, lw, ll \right\} \right\} $$
This captures what is knowable before any game is played. You can't distinguish between any of the outcomes. And any random variable $X_0$ observable at time $t=0$ should map outcomes $ww, wl, lw, ll$ to the same value. This is captured formally with the notion of a measurable function.
After the first game:
After the first game is played, there is additional information, hence:
$$ S_1 = \left\{ \emptyset, \left\{ww, wl \right\}, \left\{lw, ll \right\}, \left\{ww, wl, lw, ll \right\} \right\} $$
That is, you can tell whether the Cubs won or lost the first game, but you can't tell apart $ww$ from $wl$! You can't tell who won the second game.
Let $C_1$ be a random variable denoting whether Cubs won game 1. $C_1$ is not measurable with respect to $S_0$ but it is measurable with respect to $S_1$.
Let $C_2$ be a random variable denoting whether the Cubs won game 2. $C_2$ is not measurable with respect to $S_1$. The preimage of $C_2(\omega) = \text{TRUE}$ is the set $\{lw, ww\}$, and that set isn't in $S_1$.
After the second game:
And finally you would have $S_2 = 2^{\Omega}$ (i.e. it's the powerset).
$$ S_2 = \left\{ \emptyset, \{ww\}, \{wl\}, \{lw\}, \{ll\}, \left\{ww, ll \right\}, \left\{ww, lw \right\}, \left\{ww, wl \right\}, \ldots \right\} $$
$\mathcal{S} = (S_0, S_1, S_2)$ is called a filtration.
Conclusion
$\Omega$ represents possible outcomes. $S$ is the set of events where each event is a set of outcomes. $S$ captures information in the sense of which outcomes are distinguishable. This distinction between what can happen vs. what is knowable, between outcomes and events, is extremely useful when information is revealed over time.
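The measurability bookkeeping above can be made concrete in a few lines of code (an illustrative sketch; the frozenset encoding of events is my own, not part of the answer): a random variable is measurable with respect to a σ-field exactly when the preimage of every value is an event in that field.

```python
# Encode events as frozensets of outcomes.
omega = frozenset({"ww", "wl", "lw", "ll"})

# S1: what is knowable after game 1 (win/lose game 1, nothing about game 2).
S1 = {frozenset(), frozenset({"ww", "wl"}), frozenset({"lw", "ll"}), omega}

def is_measurable(rv, field):
    """rv: dict outcome -> value; field: set of frozenset events."""
    return all(
        frozenset(o for o in rv if rv[o] == v) in field
        for v in set(rv.values())
    )

C1 = {"ww": True, "wl": True, "lw": False, "ll": False}  # Cubs won game 1
C2 = {"ww": True, "wl": False, "lw": True, "ll": False}  # Cubs won game 2

print(is_measurable(C1, S1))  # True
print(is_measurable(C2, S1))  # False: preimage {ww, lw} is not in S1
```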
$\Omega$ is the set of outcomes that can happen.
$\mathcal S$, a $\sigma$-field of subsets of $\Omega$, represents what information is available. It represents what outco | Definition of sample space
The basic intuition is that:
$\Omega$ is the set of outcomes that can happen.
$\mathcal S$, a $\sigma$-field of subsets of $\Omega$, represents what information is available. It represents what outcomes can be distinguished from each other. It is the set of events where an event is itself a set of outcomes.
You may not be able to tell certain outcomes apart, and these outcomes may be combined into a single event. This structure becomes especially useful when thinking about the arrival of new information over time. It's a useful mathematical structure for dealing with different information.
Example:
Let:
$ww$ denote the outcome where the Cubs win the first two games of the World Series
$wl$ denote the outcome where they win the first game but lose the
second
$lw$ denote the outcome where they lose the first game but win the
second
$ll$ denote the outcome where the Cubs lose both games
The sample space for the first two games is given by:
$$\Omega = \{ww, wl, lw, ll\}$$
Before any games are played:
One possible $\sigma$-field is given by:
$$ S_0 = \left\{ \left\{ \emptyset \right\}, \left\{ww, wl, lw, ll \right\} \right\} $$
This captures what is knowable before any game is played. You can't distinguish between any of the outcomes. And any random variable $X_0$ observable at time $t=0$ should map outcomes $ww, wl, lw, ll$ to the same value. This is captured formally with the notion of a measurable function.
After the first game:
After the first game is played, there is additional information, hence:
$$ S_1 = \left\{ \left\{ \emptyset \right\}, \left\{ww, wl \right\}, \left\{lw, ll \right\}, \left\{ww, wl, lw, ll \right\} \right\} $$
That is, you can tell whether the Cubs won or lost the first game, but you can't tell apart $ww$ from $wl$! You can't tell who won the second game.
Let $C_1$ be a random variable denoting whether Cubs won game 1. $C_1$ is not measurable with respect to $S_0$ but it is measurable with respect to $S_1$.
Let $C_2$ be a random variable denoting whether the Cubs won game 2. $C_2$ is not measurable with respect to $S_1$. The preimage of $C_2(\omega) = \text{TRUE}$ is the set $\{lw, ww\}$, and that set isn't in $S_1$.
After the second game:
And finally you would have $S_2 = 2^{\Omega}$ (i.e. it's the powerset).
$$ S_2 = \left\{ \emptyset, \{ww\}, \{wl\}, \{lw\}, \{ll\}, \left\{ww, ll \right\}, \left\{ww, lw \right\}, \left\{ww, wl \right\}, \ldots \right\} $$
$\mathcal{S} = (S_0, S_1, S_2)$ is called a filtration.
Conclusion
$\Omega$ represents possible outcomes. $S$ is the set of events where each event is a set of outcomes. $S$ captures information in the sense of which outcomes are distinguishable. This distinction between what can happen vs. what is knowable, between outcomes and events, is extremely useful when information is revealed over time. | Definition of sample space
The basic intuition is that:
$\Omega$ is the set of outcomes that can happen.
$\mathcal S$, a $\sigma$-field of subsets of $\Omega$, represents what information is available. It represents what outco |
27,141 | In CLT, why $\sqrt{n}\frac{\bar{X}_n-\mu}{\sigma} \rightarrow N(0,1)$ $\implies$ $\bar{X}_n \sim N(\mu, \frac{\sigma^2}{n})$?
Your interpretation is slightly incorrect. The Central Limit Theorem (CLT) implies that
$$\bar{X}_n \overset{\mbox{approx}}{\sim} N \left(\mu, \frac{\sigma^2}{n} \right). $$
This is because CLT is an asymptotic result, and we are in practice dealing with only finite samples. However, when the sample size is large enough, then we assume that the CLT result holds true in approximation, and thus
\begin{align*}
\sqrt{n} \dfrac{\bar{X}_n - \mu}{\sigma} &\overset{\mbox{approx}}{\sim} N(0, 1)\\
\sqrt{n} \dfrac{\bar{X}_n - \mu}{\sigma} \cdot \dfrac{\sigma}{\sqrt{n}} &\overset{\mbox{approx}}{\sim} \dfrac{\sigma}{\sqrt{n}} N \left( 0, 1 \right)\\
{\bar{X}_n - \mu} &\overset{\mbox{approx}}{\sim} N \left(0, \dfrac{\sigma^2}{n}\right)\\
\bar{X}_n - \mu + \mu & \overset{\mbox{approx}}{\sim}\mu + N \left(0, \frac{\sigma^2}{n} \right)\\
\bar{X}_n & \overset{\mbox{approx}}{\sim} N \left(\mu, \frac{\sigma^2}{n} \right).\\
\end{align*}
This is because for a random variable $X$ and constants $a,b $, $\operatorname{Var}(aX) = a^2 \operatorname{Var}(X)$ (this is used in the second step) and $E(b + X) = b + E(X)$, $\operatorname{Var}(b + X) = \operatorname{Var}(X)$ (this is used in the second last step).
Read this for more explanation of the algebra.
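The approximate statement can be checked by Monte Carlo (a Python sketch; the distribution and constants are arbitrary choices for illustration): draw many size-$n$ samples from a non-normal distribution with mean $\mu$ and sd $\sigma$, and compare the mean and sd of the sample means against $\mu$ and $\sigma/\sqrt{n}$.

```python
import math
import random
import statistics

random.seed(0)
mu, sigma, n, reps = 3.0, 2.0, 50, 20000

# A uniform on [mu - sigma*sqrt(3), mu + sigma*sqrt(3)] has mean mu, sd sigma.
a, b = mu - sigma * math.sqrt(3), mu + sigma * math.sqrt(3)

means = [statistics.fmean(random.uniform(a, b) for _ in range(n))
         for _ in range(reps)]

print(statistics.fmean(means))  # close to mu = 3
print(statistics.stdev(means))  # close to sigma/sqrt(n), about 0.283
```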
27,142 | In CLT, why $\sqrt{n}\frac{\bar{X}_n-\mu}{\sigma} \rightarrow N(0,1)$ $\implies$ $\bar{X}_n \sim N(\mu, \frac{\sigma^2}{n})$?
The easiest way to see this is by looking at the mean and the variance of the random variable $\bar X_n$.
So, $\mathcal{N}(0,1)$ states that the mean is zero and the variance is one. Hence, we have for the mean:
$$E\left[\sqrt{n}\frac{\bar{X}_n-\mu}{\sigma}\right]\approx 0$$
Using $E[a\cdot x+b]=a\cdot E[x]+b$, where $a,b$ are constants, we get:
$$E\left[\bar{X}_n\right]\approx\mu$$
Now, using $\operatorname{Var}[a\cdot x+b]=a^2\cdot \operatorname{Var}[x]=a^2\cdot \sigma_x^2$, where $a,b$ are constants, we get the following for the variance:
$$\operatorname{Var}\left[\sqrt{n}\frac{\bar{X}_n-\mu}{\sigma}\right]\approx 1$$
$$\operatorname{Var}\left[\bar{X}_n\right]\approx \frac{\sigma^2}{n}$$
Now, we know the mean and the variance of $\bar X_n$, and the Gaussian (normal) distribution with these mean and variance is $\mathcal{N}(\mu,\frac{\sigma^2}{n})$
You may wonder why we go through all this algebra. Why not directly prove that $\bar X_n$ converges to $\mathcal{N}(\mu,\frac{\sigma^2}{n})$?
The reason is that in mathematics it's difficult (impossible?) to prove convergence to changing things, i.e. the right hand side of the convergence operator $\rightarrow$ has to be fixed in order for mathematicians to use their tricks for proving statements. The $\mathcal{N}(\mu,\frac{\sigma^2}{n})$ expression changes with $n$, which is a problem. So, mathematicians transform the expressions in such a way that the right hand side is fixed, e.g. $\mathcal{N}(0,1)$ is a nice fixed right hand side.
So, $\mathcal{N}(0,1)$ states that the mean is zero and the variance is one. Hence, we have f | In CLT, why $\sqrt{n}\frac{\bar{X}_n-\mu}{\sigma} \rightarrow N(0,1)$ $\implies$ $\bar{X}_n \sim N(\mu, \frac{\sigma^2}{n})$?
The easiest way to see this is by looking at the mean and the variance of the random variable $\bar X_n$.
So, $\mathcal{N}(0,1)$ states that the mean is zero and the variance is one. Hence, we have for the mean:
$$E\left[\sqrt{n}\frac{\bar{X}_n-\mu}{\sigma}\right]\approx 0$$
Using $E[a\cdot x+b]=a\cdot E[x]+b$, where $a,b$ are constants, we get:
$$\bar{X}_n\approx\mu$$
Now, using $\operatorname{Var}[a\cdot x+b]=a^2\cdot \operatorname{Var}[x]=a^2\cdot \sigma_x^2$, where $a,b$ are constants, we get the following for the variance:
$$\operatorname{Var}\left[\sqrt{n}\frac{\bar{X}_n-\mu}{\sigma}\right]\approx 1$$
$$\operatorname{Var}\left[\bar{X}_n\right]\approx \frac{\sigma^2}{n}$$
Now, we know the mean and the variance of $\bar X_n$, and the Gaussian (normal) distribution with these mean and variance is $\mathcal{N}(\mu,\frac{\sigma^2}{n})$
You may wonder why go through all these algebra? Why not directly prove that $\bar X_n$ converges to $\mathcal{N}(\mu,\frac{\sigma^2}{n})$?
The reason is that in mathematics it's difficult (impossible?) to prove convergence to changing things, i.e. the right had side of the convergence operator $\rightarrow$ has to be fixed in order for mathematicians to use their tricks for proving statements. The $\mathcal{N}(\mu,\frac{\sigma^2}{n})$ expression changes with $n$, which is a problem. So, mathematicians transform the expressions in such a way, that the right hand side is fixed, e.g. $\mathcal{N}(0,1)$ is a nice fixed right hand side. | In CLT, why $\sqrt{n}\frac{\bar{X}_n-\mu}{\sigma} \rightarrow N(0,1)$ $\implies$ $\bar{X}_n \sim N(\
The easiest way to see this is by looking at the mean and the variance of the random variable $\bar X_n$.
So, $\mathcal{N}(0,1)$ states that the mean is zero and the variance is one. Hence, we have f |
27,143 | In CLT, why $\sqrt{n}\frac{\bar{X}_n-\mu}{\sigma} \rightarrow N(0,1)$ $\implies$ $\bar{X}_n \sim N(\mu, \frac{\sigma^2}{n})$?
It doesn't imply the normality of $\bar{X}_n$, except as an approximation. But if we pretend for a moment that $\sqrt{n} (\bar{X}_n - \mu) / \sigma$ is exactly standard normal then we have the result that $\tau Z + \mu \sim$ normal$(\mu, \tau^2)$ when $Z \sim$ normal$(0, 1)$. One way to see this is via the moment generating function
\begin{align}
M_{\tau Z + \mu}(t) &= M_Z(\tau t) M_\mu (t) \\
&= e^{t^2 \tau^2 / 2} e^{t \mu} \\
&= e^{t^2 \tau^2 / 2 + t \mu}
\end{align}
which is the normal$(\mu, \tau^2)$ m.g.f.
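The m.g.f. identity can be sanity-checked numerically (an illustrative Python sketch with arbitrary $t$, $\tau$, $\mu$): the empirical mean of $e^{t(\tau Z + \mu)}$ should be close to $e^{t^2\tau^2/2 + t\mu}$.

```python
import math
import random
import statistics

random.seed(42)
t, tau, mu = 0.5, 2.0, 1.0
reps = 200_000

# Empirical M_{tau*Z + mu}(t) = E[exp(t*(tau*Z + mu))] for Z ~ N(0, 1).
empirical = statistics.fmean(
    math.exp(t * (tau * random.gauss(0.0, 1.0) + mu)) for _ in range(reps)
)
theoretical = math.exp(t ** 2 * tau ** 2 / 2 + t * mu)

print(empirical, theoretical)  # both close to e = 2.718...
```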
27,144 | Is OLS the frequentist approach to linear regression?
OLS by itself does not imply what type of inference (if any) is being done.
I would say it is a mere descriptive statistic.
If you assume some generative model (sampling distribution), and try to infer on the OLS coefficients, you are then free to do frequentist inference or Bayesian.
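To underline the point that OLS on its own is a mere descriptive statistic, here is the simple-regression least-squares fit computed directly from the normal-equation formulas (a self-contained Python sketch; the data are made up):

```python
# OLS slope and intercept for simple regression -- pure arithmetic,
# no probability model or inferential framework required.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

print(slope, intercept)  # about 1.96 and 0.14: a description of these points
```

Only once you assume a sampling distribution for the errors do these two numbers acquire a frequentist or Bayesian interpretation.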
27,145 | Is OLS the frequentist approach to linear regression?
In the folk parlance of statistics, OLS tends to be identified as a frequentist approach to parameter estimation because it does not explicitly involve a prior distribution on the parameters being estimated. But strictly speaking, OLS is just a mathematical operation whose result has both frequentist and Bayesian interpretations.
From a frequentist perspective, if the usual linear model assumptions hold, the OLS parameter estimates are equal to the true parameter values plus an error of known distribution derived from the randomness of sampling. This enables us to extract information about the estimates (e.g. p-values and confidence intervals) which have theoretical guarantees that limit the chance of certain types of errors attributable to random sample variation.
From a Bayesian perspective, if the usual linear model assumptions hold, OLS provides the maximum a posteriori (MAP) parameter estimate under a uniform prior. The posterior distribution is a multivariate Gaussian with a peak at the MAP estimate, and we can use the posterior to update our prior on the parameters, or compute credible intervals if we so desire.
In general, the frequentist/Bayesian distinction is fraught with so much folk connotation and association that most statistical methods end up feeling like one or the other to practitioners. But at bottom, if this distinction has any objective meaning, it is about how you interpret probabilities, and your choice of parameter estimation method does not necessarily say anything about how you interpret probabilities. Only your interpretation of the parameter estimates can identify what side of the distinction you are (currently) working in.
27,146 | Is OLS the frequentist approach to linear regression?
The text from:
"The ordinary least squares solution... and y is the column n-vector"
denotes the frequentist approach to linear regression, and is generally termed "OLS". In Bayesian regression, there is a prior on the parameters. The often used Normal prior on the betas also has a frequentist interpretation: ridge regression.
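That ridge correspondence can be sketched in a few lines (illustrative Python; a single coefficient, no intercept, noise variance assumed known, data made up): the penalized least-squares solution with $\lambda = \sigma^2/\tau^2$ is exactly the posterior mode under a $N(0, \tau^2)$ prior on the coefficient.

```python
# Minimizing sum((y - b*x)^2) + lam*b^2 gives b = Sxy / (Sxx + lam).
# The MAP estimate under b ~ N(0, tau2) with N(0, sigma2) noise minimizes
# the same objective with lam = sigma2 / tau2, so the two coincide.
xs = [0.5, 1.0, 1.5, 2.0]
ys = [1.1, 1.9, 3.2, 3.9]
sigma2, tau2 = 1.0, 4.0
lam = sigma2 / tau2

sxy = sum(x * y for x, y in zip(xs, ys))
sxx = sum(x * x for x in xs)
b_ridge = sxy / (sxx + lam)

# Brute-force check that the penalized objective is minimized at b_ridge.
def objective(b):
    return sum((y - b * x) ** 2 for x, y in zip(xs, ys)) + lam * b ** 2

grid = [i / 10000 for i in range(40000)]
b_grid = min(grid, key=objective)
print(b_ridge, b_grid)  # both close to 1.94
```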
"The ordinary least squares solution... and y is the column n-vector"
denotes the frequentist approach to linear regression, and is generally termed "OLS". In Bayesian regression, ther | Is OLS the frequentist approach to linear regression?
The text from:
"The ordinary least squares solution... and y is the column n-vector"
denotes the frequentist approach to linear regression, and is generally termed "OLS". In Bayesian regression, there is a prior on the parameters. The often used Normal prior on the betas also has a frequentist interpretation: ridge regression. | Is OLS the frequentist approach to linear regression?
The text from:
"The ordinary least squares solution... and y is the column n-vector"
denotes the frequentist approach to linear regression, and is generally termed "OLS". In Bayesian regression, ther |
27,147 | A fair coin is tossed until a head comes up for the first time. The probability of this happening on an odd number toss is?
Add up the probabilities of the coin coming up heads for the first time on toss 1, 3, 5...
$p_o = 1/2 + 1/2^3 + 1/2^5 + ...$
The $1/2$ term is pretty obvious, it's the probability of the first toss being heads.
The $1/2^3$ term is the probability of getting heads for the first time on the third toss, or the sequence TTH. That sequence has a probability of $1/2 * 1/2 * 1/2$.
The $1/2^5$ term is the probability of getting heads for the first time on the fifth toss, or the sequence TTTTH. That sequence has a probability of $1/2 * 1/2 * 1/2 * 1/2 * 1/2$.
Now we can rewrite the series above as
$p_o = 1/2 + 1/8 + 1/32 + ...$
This is a geometric series that sums to $2/3$. The easiest way to show this is with a visual example. Start with the series
$p = 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64 + ...$
This is a geometric series that sums to $1$.
If we sum just the even terms of that series, we can see that they sum to $1/3$.
$1/4 + 1/16 + 1/64 + 1/256 + ... = 1/3$
If you eliminate the even terms from the full sequence, you're left with just the odd terms, which must add up to $2/3$.
$p_o = 1/2 + 1/8 + 1/32 + ... = 2/3$
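The series manipulation above can be checked mechanically: the odd-position terms form a geometric series with first term $1/2$ and ratio $1/4$. A small sketch (in Python, used here purely for illustration), doing the sum exactly with rational arithmetic:

```python
from fractions import Fraction

# P(first head on toss k) = (1/2)^k; the odd-k terms are a geometric
# series with first term a = 1/2 and ratio r = 1/4.
a, r = Fraction(1, 2), Fraction(1, 4)
p_odd = a / (1 - r)
print(p_odd)  # 2/3
```

A truncated floating-point sum, `sum(0.5**k for k in range(1, 101, 2))`, agrees with 2/3 to machine precision.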
27,148 | A fair coin is tossed until a head comes up for the first time. The probability of this happening on an odd number toss is? | Think recursively - let $p_o$ be the probability of the first head on an odd toss, and let $p_e$ be the probability of the first head on an even toss. Now $p_o+p_e=1$, and we also have that $p_e$ equals the probability of first toss tails times $p_o$. Thus $p_e = 1/2\cdot p_o$; $p_o+1/2\cdot p_o = 1$; $p_o = 2/3$.
27,149 | What are the statistical reasons behind defining BMI index as weight/height$^2$? | This review, by Eknoyan (2007), has far more than you probably wanted to know about Quetelet and his invention of the body mass index.
The short version is that BMI looks approximately normally distributed, while weight alone, or weight/height doesn't, and Quetelet was interested in describing a "normal" man via normal distributions. There are some first-principles arguments too, based on how people grow, and some more recent work has attempted to relate that scaling back to some biomechanics.
It's worth noting that the value of the BMI is fairly hotly debated. It does correlate with fatness pretty well, but the cut-offs for underweight/overweight/obese don't quite match up with healthcare outcomes.
27,150 | What are the statistical reasons behind defining BMI index as weight/height$^2$? | From Adolphe Quetelet's "A Treatise on Man and the Development of his
Faculties":
If man increased equally in all dimensions, his weight at different
ages would be as the cube of his height. Now, this is not what we
really observe. The increase of weight is slower, except during the
first year after birth; then the proportion we have just pointed out
is pretty regularly observed. But after this period, and until near
the age of puberty, weight increases nearly as the square of the
height. The development of weight again becomes very rapid at puberty,
and almost stops after the twenty-fifth year. In general, we do not
err much when we assume that during development the squares of the
weight at different ages are as the fifth powers of the height; which
naturally leads to this conclusion, in supporting the specific gravity
constant, that the transverse growth of man is less than the vertical.
See here.
He wasn't interested in characterizing obesity but in the relationship between weight and height, as he was very interested in biometry and bell curves. Quetelet's findings indicated that BMI had an approximately normal distribution in the population. This signified to him that he had found the "correct" relationship. (Interestingly, only a decade or two later Francis Galton would approach the issue of the "distribution of height" in populations and coin the term "Regression to the Mean".)
It's worth noting that the BMI has been a scourge of biometry in modern days because of the Framingham study's far-reaching utilization of BMI as a way of identifying obesity. There is still a lack of any good predictor of obesity (and health-related outcomes thereof). The waist-to-hip measurement ratio is a promising candidate. Hopefully, as ultrasounds become cheaper and better, doctors will use them to identify not only obesity but fatty deposits and calcification in organs, and make recommendations for care based on those.
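Quetelet's remark about growth exponents is the whole point of the exponent 2: an index $w/h^p$ is free of height only if weight actually scales as $h^p$. A toy illustration (in Python; the heights and the factor 13.0 are made-up numbers for geometrically similar bodies with weight $\propto h^3$, not data from the text):

```python
# Hypothetical geometrically similar bodies: weight proportional to height^3
# (13.0 is an arbitrary illustrative constant, roughly a density factor).
heights = [1.5, 1.7, 1.9]                   # metres
weights = [13.0 * h ** 3 for h in heights]  # kilograms

for h, w in zip(heights, weights):
    # w/h^2 still trends upward with height; w/h^3 is constant for these bodies
    print(f"h={h:.2f}  w/h^2={w / h**2:.2f}  w/h^3={w / h**3:.2f}")
```

For real adult populations weight scales closer to $h^2$ (the review above says closer to $h^{2.5}$), which is why dividing by height squared gives a roughly height-free index.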
27,151 | What are the statistical reasons behind defining BMI index as weight/height$^2$? | BMI is primarily used nowadays because of its ability to approximate abdominal visceral fat volume, useful in studying cardiovascular risk. For a case study analyzing the adequacy of BMI in screening for diabetes see Chapter 15 of http://biostat.mc.vanderbilt.edu/CourseBios330 under Handouts. Several assessments are there. You will see that a better power of height is closer to 2.5 but you can do better than using height and weight.
27,152 | Odds and odds ratios in logistic regression | The odds is not the same as the probability. The odds is the number of "successes" (deaths) per "failure" (continue to live), while the probability is the proportion of "successes". I find it instructive to compare how one would estimate these two: An estimate of the odds would be the ratio of the number of successes over the number of failures, while an estimate of the probability would be the ratio of the number of successes over the total number of observations.
Odds and probabilities are both ways of quantifying how likely an event is, so it is not surprising that there is a one to one relation between the two. You can turn a probability ($p$) into an odds ($o$) using the following formula: $o=\frac{p}{1-p}$. You can turn an odds into a probability like so: $p = \frac{o}{1+o}$.
So to come back to your example:
The baseline probability is .5, so you would expect to find 1 failure per success, i.e. the baseline odds is 1. This odds is multiplied by a factor 5.8, so then the odds would become 5.8, which you can transform back to a probability as: $\frac{5.8}{1+5.8}\approx.85$ or 85%
A two degree change in temperature is associated with a change in the odds of death by a factor $5.8^2=33.6$. So the baseline odds is still 1, which means the new odds would be 33.6, i.e. you would expect 33.6 dead fish for every live fish, or the probability of finding a dead fish is $\frac{33.6}{1+33.6} \approx .97$
A three degree change in temperature leads to a new odds of death of $1\times 5.8^3\approx195$. So the probability of finding a dead fish = $\frac{195}{1+195}\approx.99$
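The conversions used in this answer are easy to script. A minimal sketch (in Python; the coefficient 1.76 and the 50% baseline are the figures from the question):

```python
import math

def odds(p):
    """Probability -> odds: o = p / (1 - p)."""
    return p / (1 - p)

def prob(o):
    """Odds -> probability: p = o / (1 + o)."""
    return o / (1 + o)

odds_ratio = math.exp(1.76)   # odds ratio for a one-degree increase, ~5.81
baseline_odds = odds(0.5)     # baseline probability 0.5 gives odds 1

for degrees in (1, 2, 3):
    new_odds = baseline_odds * odds_ratio ** degrees
    print(degrees, round(prob(new_odds), 3))
# prints: 1 0.853 / 2 0.971 / 3 0.995
```

The three printed probabilities reproduce the .85, .97 and .99 values derived above.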
27,153 | Odds and odds ratios in logistic regression | If the regression coefficient of your logistic regression is 1.76 on the logit scale, then the odds ratio for a 1 unit increase in temperature is $\mathrm{OR_{+1}}=\exp(\beta) = \exp(1.76)\approx 5.81$, as you already stated. The odds ratio for an increase in temperature of $a$ degrees is $\mathrm{OR_{+a}}=\exp(\beta\times a)$. In your case, $a$ is 2 and 3, respectively. So the odds ratios for an increase of 2 and 3 degrees are: $\mathrm{OR_{+2}}=\exp(1.76\times 2)\approx 33.78$ and $\mathrm{OR_{+3}}=\exp(1.76\times 3)\approx 196.37$. If in 2012 50% of the fish die, the baseline odds of dying are $0.5/(1-0.5)=1$. The odds ratio for a 1 degree increase in temperature is 5.8 and thus, the odds of dying are $5.8\times 1$ (i.e. the odds ratio multiplied by the baseline odds) compared to fish without the increase in temperature. The odds can now be converted to a probability by: $5.8/(5.8+1)\approx 0.853$. The same is true for an increase of 2 and 3 degrees: $33.78/(33.78+1)\approx 0.971$ and $196.37/(196.37+1)\approx 0.995$.
27,154 | Risk of extinction of Schrödinger's cats | Let $p_k, k=1, \ldots, K=15$ be the survival probabilities for individual snow cats.
Initialize the number of cats accounted for so far $k \leftarrow 0$, the vector of the binomial survival probabilities $(\pi_0^{(0)}, \pi_1^{(0)}, \ldots, \pi_K^{(0)}) \leftarrow (1, 0, \ldots, 0, 0)$
While there still are unaccounted cats, $k \le K$, repeat steps 3-5:
Increase $k \leftarrow k+1$
Update the 0 outcome (death): $\pi_j^{(k)} \leftarrow \pi_j^{(k-1)} (1-p_k), j=0, \ldots, K$
Update the 1 outcome (survival): $\pi_{j+1}^{(k)} \leftarrow \pi_{j+1}^{(k)} + \pi_j^{(k-1)} p_k, j=0, \ldots, K-1$
I would probably be paranoid about accounting for everything, and make sure that my probabilities still sum up to 1 at each iteration after step 5.
My results are:
4.091e-08
2.647e-06
.00006039
.00069791
.00479963
.02141555
.06519699
.13945642
.21276277
.23238555
.18045155
.09775029
.03565983
.00823336
.0010688
.00005826
The sum of the first three is the prob of being critically endangered, the last 5, the prob of least concern, etc.
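The update rules above can be run directly. A sketch of the same recursion (in Python, used here only for illustration; the 15 per-cat probabilities are taken from the R code in the next answer — the algorithm itself does not depend on those particular values):

```python
# The 15 per-cat survival probabilities (from the R code in the next answer).
p = [0.17, 0.46, 0.62, 0.08, 0.40, 0.76, 0.03, 0.47, 0.53, 0.32,
     0.21, 0.85, 0.31, 0.38, 0.69]

pi = [1.0] + [0.0] * len(p)   # pi[j] = P(exactly j survivors among cats seen so far)
for pk in p:
    # the two update steps: a death keeps the count at j, a survival moves j to j+1
    pi = [pi[j] * (1 - pk) + (pi[j - 1] * pk if j > 0 else 0.0)
          for j in range(len(pi))]

assert abs(sum(pi) - 1.0) < 1e-12              # the "paranoia" check from the text
print(f"P(0 of 15)  = {pi[0]:.4e}")            # ~5.83e-05
print(f"P(15 of 15) = {pi[-1]:.4e}")           # ~4.09e-08
print(f"P(10 or fewer) = {sum(pi[:11]):.4f}")
```

Both extreme values appear in the table above, and the cumulative sum over the first eleven entries reproduces the 0.9944 "ten or fewer" figure computed in the next answer.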
27,155 | Risk of extinction of Schrödinger's cats | It's unclear what "risk of extinction category" means, but it appears the question asks to compute the distribution of the sum of 15 independent binomial variates having the given expectations. This is a convolution and it's most efficiently done with the Fast Fourier Transform.
Here is an example in R, whose convolve function uses the FFT:
x <- c(0.17,0.46,0.62,0.08,0.40,0.76,0.03,0.47,0.53,0.32,0.21,0.85,0.31,0.38,0.69)
z <- 1
for (u in sort(x)) z <- convolve(z, c(u, 1-u), type="open")
z
[1] 5.826e-05 1.069e-03 8.233e-03 3.566e-02 9.775e-02 1.805e-01 2.324e-01 2.128e-01
[9] 1.395e-01 6.520e-02 2.142e-02 4.800e-03 6.979e-04 6.039e-05 2.647e-06 4.091e-08
As a check, in Mathematica the same results were obtained with
Product[1 - z + z t, {z, x}] // Expand
$4.091095\times 10^{-8} t^{15}+2.647052\times 10^{-6} t^{14}+\cdots+0.0010688 t+0.0000582614$
Now, for instance, the chance of $10$ or fewer poisonings is computed in R as
sum(z[1:11])
[1] 0.9944
Edit
Using R's convolve function is inefficient for larger problems, because it repeatedly performs an FFT and its inverse. It turns out that the direct algorithm for convolution--as described by StasK--is plenty fast enough. Here is an R implementation.
convolve.binomial <- function(p) {
# p is a vector of probabilities of Bernoulli distributions.
# The convolution of these distributions is returned as a vector
# `z` where z[i] is the probability of i-1, i=1, 2, ..., length(p)+1.
n <- length(p) + 1
z <- c(1, rep(0, n-1))
for (p in p) z <- (1-p)*z + p*c(0, z[-n])
return(z)
}
(Thanks to an anonymous editor suggesting improvements that clarify the algorithm.)
This takes $O(n^2)$ time for $n$ distributions--the quadratic behavior is not good--but it's still quite fast. As an example, let's generate 10,000 random probabilities (instead of using 15 given ones) and form the convolution of the corresponding Bernoulli distributions:
x <- runif(10000)
system.time(y <- convolve.binomial(x))
This still takes less than 3 seconds.
27,156 | Video lectures about data mining? | David Mease has a Statistics / Data Mining Course brought to you by Google with full videos. It is introductory and from what I've seen of the first few videos appears to use R and Excel to demonstrate ideas.
27,157 | Video lectures about data mining? | The PASCAL Project's video library (PASCAL is Pattern Analysis, Statistical Modelling and Computational Learning).
I have never found anything that even comes close--either in the number of videos, in the average quality, or in scope.
The Project scope is Machine Learning; each video lecture is annotated with one or more tags which represent hierarchical subject-matter rubrics. For "Data Mining" there are at least several relevant tags:
Data Mining
Text Mining
Semantic Web
Web Mining
Here's the best part: aside from these subject-matter categories, there is an orthogonal classification which you access from the left-hand side panel, and which relates to the lecture format, e.g., Lecture, Keynote, Interview, and of perhaps most interest for you, Tutorial. This is one of the largest categories and includes videos that survey/introduce the entire discipline of Machine Learning (e.g., Introduction to Machine Learning) to more advanced tutorials on individual ML techniques.
A few suggestions for use:
The Computer Science category is perhaps the best top-level category to begin searching or browsing for videos of interest to you (for Data Mining).
Every Video includes a set of slides. I would recommend downloading the slide set and accessing from your local drive during the video which saves bandwidth plus you can annotate the slides with notes as you wish.
When you scan the videos, look for the solid yellow stars that appear down the left-hand side of the thumbnail image of each video--those are the rating for each video.
Finally, you might want to try browsing the library this way: begin at the highest level (all videos); then in the left-hand side panel, select Tutorial, moving down, then select Highest Rated, then select the languages. These selections will only affect your results order (the order in which the videos are shown to you as thumbnail images in your browser).
27,158 | Video lectures about data mining? | Andrew Ng's Stanford University course on Machine learning is available on YouTube, iTunes and Stanford Engineering Everywhere.
27,159 | Video lectures about data mining? | Tom Mitchell's Machine learning course at Carnegie Mellon University has video lectures.
27,160 | Video lectures about data mining? | This video series on machine learning looks good:
http://www.youtube.com/playlist?list=PLD0F06AA0D2E8FFBA&feature=plcp | Video lectures about data mining? | This video series on machine learning looks good:
http://www.youtube.com/playlist?list=PLD0F06AA0D2E8FFBA&feature=plcp | Video lectures about data mining?
This video series on machine learning looks good:
http://www.youtube.com/playlist?list=PLD0F06AA0D2E8FFBA&feature=plcp | Video lectures about data mining?
This video series on machine learning looks good:
http://www.youtube.com/playlist?list=PLD0F06AA0D2E8FFBA&feature=plcp |
27,161 | Best way to interact with an R session running in the cloud | I can think of a few ways. I've done this quite a bit and here are the ways I found most useful:
Emacs Daemon mode. ssh into the EC2 instance with the -X switch so it forwards X windows back to your remote machine. Using daemon mode will ensure that you don't lose state if your connection times out or drops
Instead of using the multicore package, use a different parallel backend with the foreach package. That way you can use RStudio, which is fantastic. Foreach is great because you can test your code in non-parallel, then switch to parallel mode by simply changing your backend (1 or 2 lines of code). I recommend the doRedis backend. You're in the cloud, might as well fire up multiple machines! | Best way to interact with an R session running in the cloud | I can think of a few ways. I've done this quite a bit and here are the ways I found most useful:
Emacs Daemon mode. ssh into the EC2 instance with the -X switch so it forwards X windows back to your | Best way to interact with an R session running in the cloud
I can think of a few ways. I've done this quite a bit and here are the ways I found most useful:
Emacs Daemon mode. ssh into the EC2 instance with the -X switch so it forwards X windows back to your remote machine. Using daemon mode will ensure that you don't lose state if your connection times out or drops
Instead of using the multicore package, use a different parallel backend with the foreach package. That way you can use RStudio, which is fantastic. Foreach is great because you can test your code in non-parallel, then switch to parallel mode by simply changing your backend (1 or 2 lines of code). I recommend the doRedis backend. You're in the cloud, might as well fire up multiple machines! | Best way to interact with an R session running in the cloud
I can think of a few ways. I've done this quite a bit and here are the ways I found most useful:
Emacs Daemon mode. ssh into the EC2 instance with the -X switch so it forwards X windows back to your |
27,162 | Best way to interact with an R session running in the cloud | The most convenient way is just to install a VNC server and some light environment like XFCE and make yourself a virtual session that you can use from wherever you want (it persists across disconnects), i.e. something like this:
Additional goodies are that you can use your local clipboard in the virtual desktop and see R plots way faster than via X11 forwarding or copying image files.
It takes some effort to set up everything right (X init, ssh tunnel), but the internet is full of tutorials on how to do that. | Best way to interact with an R session running in the cloud | The most convenient way is just to install VNC server and some light environment like XFCE and make yourself a virtual session that you can use from wherever you want (it persists disconnects), i.e. s
The most convenient way is just to install a VNC server and some light environment like XFCE and make yourself a virtual session that you can use from wherever you want (it persists across disconnects), i.e. something like this:
Additional goodies are that you can use your local clipboard in the virtual desktop and see R plots way faster than via X11 forwarding or copying image files.
It takes some effort to set up everything right (X init, ssh tunnel), but the internet is full of tutorials on how to do that. | Best way to interact with an R session running in the cloud
The most convenient way is just to install VNC server and some light environment like XFCE and make yourself a virtual session that you can use from wherever you want (it persists disconnects), i.e. s |
27,163 | Best way to interact with an R session running in the cloud | I don't know how Amazon EC2 works, so maybe my simple solutions don't work. But I normally use scp or sftp (through WinSCP if I'm on Windows) or git. | Best way to interact with an R session running in the cloud | I don't know how Amazon EC2 works, so maybe my simple solutions don't work. But I normally use scp or sftp (through WinSCP if I'm on Windows) or git. | Best way to interact with an R session running in the cloud
I don't know how Amazon EC2 works, so maybe my simple solutions don't work. But I normally use scp or sftp (through WinSCP if I'm on Windows) or git. | Best way to interact with an R session running in the cloud
I don't know how Amazon EC2 works, so maybe my simple solutions don't work. But I normally use scp or sftp (through WinSCP if I'm on Windows) or git. |
27,164 | Best way to interact with an R session running in the cloud | I'd use rsync to push the scripts and data files to the server, then "nohup Rscript myscript.R > output.out &" to run things and when finished, rsync to pull the results. | Best way to interact with an R session running in the cloud | I'd use rsync to push the scripts and data files to the server, then "nohup Rscript myscript.R > output.out &" to run things and when finished, rsync to pull the results. | Best way to interact with an R session running in the cloud
I'd use rsync to push the scripts and data files to the server, then "nohup Rscript myscript.R > output.out &" to run things and when finished, rsync to pull the results. | Best way to interact with an R session running in the cloud
I'd use rsync to push the scripts and data files to the server, then "nohup Rscript myscript.R > output.out &" to run things and when finished, rsync to pull the results. |
27,165 | Best way to interact with an R session running in the cloud | VIM + tmux + VIM Slime. You get the greatest text editor and ability to send code from editor to R command line (just like in Rstudio). | Best way to interact with an R session running in the cloud | VIM + tmux + VIM Slime. You get the greatest text editor and ability to send code from editor to R command line (just like in Rstudio). | Best way to interact with an R session running in the cloud
VIM + tmux + VIM Slime. You get the greatest text editor and ability to send code from editor to R command line (just like in Rstudio). | Best way to interact with an R session running in the cloud
VIM + tmux + VIM Slime. You get the greatest text editor and ability to send code from editor to R command line (just like in Rstudio). |
27,166 | Best way to interact with an R session running in the cloud | I use R Studio on EC2 all the time thanks to the AMIs created by Louis Aslett. You don't have to know any SSH or anything (other than R, of course). You just need an EC2 account. As mentioned in one of the other answers, R Studio does support parallel computing, via the foreach package for instance. This really enables harnessing the power of EC2. By using a compute-optimized instance (32 cores), I was able to significantly cut down training time for my ML models at almost no cost (a few bucks an hour). | Best way to interact with an R session running in the cloud | I use R Studio on EC2 all the time thanks to the AMIs created by Louis Aslett. You don't have to know any SSH or anything (other than R, of course). You just need an EC2 account. As mentioned in one o | Best way to interact with an R session running in the cloud
I use R Studio on EC2 all the time thanks to the AMIs created by Louis Aslett. You don't have to know any SSH or anything (other than R, of course). You just need an EC2 account. As mentioned in one of the other answers, R Studio does support parallel computing, via the foreach package for instance. This really enables harnessing the power of EC2. By using a compute-optimized instance (32 cores), I was able to significantly cut down training time for my ML models at almost no cost (a few bucks an hour). | Best way to interact with an R session running in the cloud
I use R Studio on EC2 all the time thanks to the AMIs created by Louis Aslett. You don't have to know any SSH or anything (other than R, of course). You just need an EC2 account. As mentioned in one o |
27,167 | How to generate uniform distributed samples with given auto-correlation function | This problem can be approached by first generating samples from the desired distribution and then reordering them to match the desired autocorrelation function.
The R code below demonstrates an approach based on this answer that can be modified for any desired ACF and distribution. The example generates $n=10^6$ samples from $\text{Gamma}(0.9,1)$ whose ACF follows 50 random samples from $\text{Beta}(1,3)$, sorted descending.
The process is as follows.
Generate the desired number of samples, $X=\{x_1,...,x_n\}$, from the target distribution. Set $\alpha_0$ equal to the desired ACF. Initialize the target ACF, $\alpha$, as the desired ACF.
Find a set of weights that, when passed to filter along with $n$ random normal variates, results in a series, $Y$, with $ACF=\alpha$ (see the answer linked above).
Reorder $X$ so that its rank ordering matches that of $Y$. If $X$ is normally distributed, the resulting series should have the desired ACF; however, the more $X$ deviates from normality, the more the ACF will deviate from $\alpha_0$ (the example below has a target distribution of $\text{Gamma}(0.9,1)$, which is very "non-normal"). Update the target ACF, $\alpha$, according to $\alpha'=\frac{\alpha}{2}\Big(\frac{\alpha_0}{ACF}+1\Big)$ and repeat steps 1-3 until the ACF of the reordered $X$ converges.
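As a standalone illustration of step 3 alone (a Python sketch, not the author's R routine; the iterative correction of the target ACF is omitted, so the ACF match is only approximate, and the AR(1) target and Gamma parameters are arbitrary choices), reordering the target samples by the ranks of a Gaussian series preserves the marginal exactly while transferring most of the autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000
x = rng.gamma(0.9, 1.0, size=n)       # samples from the target distribution
phi = 0.7                             # AR(1) coefficient, so desired ACF(h) = phi**h

# Gaussian series with the desired ACF
eps = rng.standard_normal(n)
y = np.empty(n)
y[0] = eps[0]
for t in range(1, n):
    y[t] = phi * y[t - 1] + np.sqrt(1.0 - phi**2) * eps[t]

# reorder x so its rank ordering matches that of y
ranks = y.argsort().argsort()         # 0-based rank of each y value
x_new = np.sort(x)[ranks]

lag1 = np.corrcoef(x_new[:-1], x_new[1:])[0, 1]
```

Because `x_new` is a permutation of `x`, the marginal distribution is exact; the achieved lag-1 correlation falls somewhat short of `phi` because the Gamma marginal is far from normal, which is exactly the deviation the iterative update of step 3 corrects.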
The function that performs the reordering (it works only for positive values for alpha):
acf.reorder <- function(x, alpha) {
tol <- 1e-5
maxIter <- 10L
n <- length(x)
xx <- sort(x)
y <- rnorm(n)
w0 <- w <- alpha1 <- alpha
m <- length(alpha)
i1 <- sequence((m - 1):1)
i2 <- sequence((m - 1):1, 2:m)
i3 <- cumsum((m - 1):1)
tol10 <- tol/10
iter <- 0L
x <- xx[rank(filter(y, w, circular = TRUE))]
SSE0 <- Inf
f <- function(ww) {
sum((c(1, diff(c(0, cumsum(ww[i1]*(ww[i2]))[i3]))/sum(ww^2)) - alpha1)^2)
}
ACF <- function(x) acf(x, lag.max = m - 1, plot = FALSE)$acf[1:m]
while ((SSE <- sum((ACF(x) - alpha)^2)) > tol) {
if (SSE < SSE0) {
SSE0 <- SSE
w <- w0
}
if ((iter <- iter + 1L) == maxIter) break
w1 <- w0
a <- 0
sse0 <- Inf
while (max(abs(alpha1 - a)) > tol10) {
a <- c(1, diff(c(0, cumsum(w1[i1]*(w1[i2]))[i3]))/sum(w1^2))
if ((sse <- sum((a - alpha1)^2)) < sse0) {
sse0 <- sse
w0 <- w1
} else {
# w0 failed to converge; try optim
w1 <- optim(w0, f, method = "L-BFGS-B")$par
a <- c(1, diff(c(0, cumsum(w1[i1]*(w1[i2]))[i3]))/sum(w1^2))
if (sum((a - alpha1)^2) < sse0) w0 <- w1
break
}
w1 <- (w1*alpha1/a + w1)/2
}
x <- xx[rank(filter(y, w0, circular = TRUE))]
alpha1 <- (alpha1*alpha/ACF(x) + alpha1)/2
}
xx[rank(filter(y, w, circular = TRUE))]
}
Generate samples from the target distribution and specify the desired ACF:
set.seed(1960841256)
x <- rgamma(1e6, 0.9, 1)
alpha <- c(1, sort(rbeta(50, 1, 3), TRUE))
Reorder x and plot its ACF against alpha:
system.time(x <- acf.reorder(x, alpha))
#> user system elapsed
#> 7.13 0.41 7.53
acf(x, lag.max = length(alpha) - 1)
lines(seq_along(alpha) - 1, alpha, col = "green")
The resulting ACF is a good match with the target, and since x has simply been reordered, it is known to have the desired distribution.
Update: Worst of 100 iterations
The original code was breaking too early for some cases as noted by the OP in the updated question. The above code has been modified to correct the behavior. I also removed the restriction that the ACF be strictly positive.
I ran the same procedure 100 times, each with a new random vector for x and alpha. The longest-running iteration took 7.36 seconds, and the worst performing iteration (by maximum absolute difference of the achieved vs. desired ACF) had a maximum absolute error of 0.056. It is plotted below. | How to generate uniform distributed samples with given auto-correlation function | This problem can be approached by first generating samples from the desired distribution and then reordering them to match the desired autocorrelation function.
The R code below demonstrates an approa | How to generate uniform distributed samples with given auto-correlation function
This problem can be approached by first generating samples from the desired distribution and then reordering them to match the desired autocorrelation function.
The R code below demonstrates an approach based on this answer that can be modified for any desired ACF and distribution. The example generates $n=10^6$ samples from $\text{Gamma}(0.9,1)$ whose ACF follows 50 random samples from $\text{Beta}(1,3)$, sorted descending.
The process is as follows.
Generate the desired number of samples, $X=\{x_1,...,x_n\}$, from the target distribution. Set $\alpha_0$ equal to the desired ACF. Initialize the target ACF, $\alpha$, as the desired ACF.
Find a set of weights that, when passed to filter along with $n$ random normal variates, results in a series, $Y$, with $ACF=\alpha$ (see the answer linked above).
Reorder $X$ so that its rank ordering matches that of $Y$. If $X$ is normally distributed, the resulting series should have the desired ACF; however, the more $X$ deviates from normality, the more the ACF will deviate from $\alpha_0$ (the example below has a target distribution of $\text{Gamma}(0.9,1)$, which is very "non-normal"). Update the target ACF, $\alpha$, according to $\alpha'=\frac{\alpha}{2}\Big(\frac{\alpha_0}{ACF}+1\Big)$ and repeat steps 1-3 until the ACF of the reordered $X$ converges.
The function that performs the reordering (it works only for positive values for alpha):
acf.reorder <- function(x, alpha) {
tol <- 1e-5
maxIter <- 10L
n <- length(x)
xx <- sort(x)
y <- rnorm(n)
w0 <- w <- alpha1 <- alpha
m <- length(alpha)
i1 <- sequence((m - 1):1)
i2 <- sequence((m - 1):1, 2:m)
i3 <- cumsum((m - 1):1)
tol10 <- tol/10
iter <- 0L
x <- xx[rank(filter(y, w, circular = TRUE))]
SSE0 <- Inf
f <- function(ww) {
sum((c(1, diff(c(0, cumsum(ww[i1]*(ww[i2]))[i3]))/sum(ww^2)) - alpha1)^2)
}
ACF <- function(x) acf(x, lag.max = m - 1, plot = FALSE)$acf[1:m]
while ((SSE <- sum((ACF(x) - alpha)^2)) > tol) {
if (SSE < SSE0) {
SSE0 <- SSE
w <- w0
}
if ((iter <- iter + 1L) == maxIter) break
w1 <- w0
a <- 0
sse0 <- Inf
while (max(abs(alpha1 - a)) > tol10) {
a <- c(1, diff(c(0, cumsum(w1[i1]*(w1[i2]))[i3]))/sum(w1^2))
if ((sse <- sum((a - alpha1)^2)) < sse0) {
sse0 <- sse
w0 <- w1
} else {
# w0 failed to converge; try optim
w1 <- optim(w0, f, method = "L-BFGS-B")$par
a <- c(1, diff(c(0, cumsum(w1[i1]*(w1[i2]))[i3]))/sum(w1^2))
if (sum((a - alpha1)^2) < sse0) w0 <- w1
break
}
w1 <- (w1*alpha1/a + w1)/2
}
x <- xx[rank(filter(y, w0, circular = TRUE))]
alpha1 <- (alpha1*alpha/ACF(x) + alpha1)/2
}
xx[rank(filter(y, w, circular = TRUE))]
}
Generate samples from the target distribution and specify the desired ACF:
set.seed(1960841256)
x <- rgamma(1e6, 0.9, 1)
alpha <- c(1, sort(rbeta(50, 1, 3), TRUE))
Reorder x and plot its ACF against alpha:
system.time(x <- acf.reorder(x, alpha))
#> user system elapsed
#> 7.13 0.41 7.53
acf(x, lag.max = length(alpha) - 1)
lines(seq_along(alpha) - 1, alpha, col = "green")
The resulting ACF is a good match with the target, and since x has simply been reordered, it is known to have the desired distribution.
Update: Worst of 100 iterations
The original code was breaking too early for some cases as noted by the OP in the updated question. The above code has been modified to correct the behavior. I also removed the restriction that the ACF be strictly positive.
I ran the same procedure 100 times, each with a new random vector for x and alpha. The longest-running iteration took 7.36 seconds, and the worst performing iteration (by maximum absolute difference of the achieved vs. desired ACF) had a maximum absolute error of 0.056. It is plotted below. | How to generate uniform distributed samples with given auto-correlation function
This problem can be approached by first generating samples from the desired distribution and then reordering them to match the desired autocorrelation function.
The R code below demonstrates an approa |
27,168 | How to generate uniform distributed samples with given auto-correlation function | Letting $U_t=\Phi(X_t)$ where $X_t$ is a zero-mean and unit-variance stationary Gaussian process with autocorrelation function $\rho_h$ and $\Phi$ is the standard normal cdf, it follows that each $U_t$ is marginally uniformly distributed. The relation between $\rho_h$ and the autocovariance function of $U_t$ is
\begin{align}
\gamma_h&=\operatorname{Cov}(U_t,U_{t+h})
\\&=E(U_tU_{t+h})-E(U_t)E(U_{t+h})
\\&=E(\Phi(X_t)\Phi(X_{t+h}))-1/4
\\&=P(Z_1\le X_t\cap Z_2\le X_{t+h})-1/4
\\&=P\left(\frac{Z_1-X_t}{\sqrt{2}}\le0\cap \frac{Z_2-X_{t+h}}{\sqrt{2}}\le 0\right)-1/4
\\&=\Phi_2\left(\begin{bmatrix}0\\0\end{bmatrix};\frac{\rho_h}2\right)-1/4.
\end{align}
Here $Z_1$ and $Z_2$ are independent standard normal random variables, and $\Phi_2$ is the cdf of the standard bivariate normal distribution with correlation $\rho_h/2$.
Using the standard orthant-probability formula for the bivariate normal distribution, the above relation simplifies to
$$
\gamma_h = \frac1{2\pi}\operatorname{arcsin}\frac{\rho_h}2.
$$
Solving this equation, we find that the autocorrelation of $X_t$ needed to achieve a target autocovariance $\gamma_h$ of $U_t$ is
$$
\rho_h = 2\sin(2\pi \gamma_h).
$$
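A quick Monte-Carlo check of this relation (a Python sketch; the value of $\rho$, the seed and the sample size are arbitrary):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.6, 200_000
z1 = rng.standard_normal(n)
z2 = rho * z1 + math.sqrt(1.0 - rho**2) * rng.standard_normal(n)

# standard normal cdf via the error function
Phi = np.vectorize(lambda v: 0.5 * (1.0 + math.erf(v / math.sqrt(2.0))))
u1, u2 = Phi(z1), Phi(z2)             # each marginally Uniform(0, 1)

gamma_emp = np.cov(u1, u2)[0, 1]      # empirical Cov(U_t, U_{t+h})
gamma_theory = math.asin(rho / 2.0) / (2.0 * math.pi)
```

The empirical covariance of the transformed uniforms agrees with $\frac1{2\pi}\arcsin\frac\rho2$ up to Monte-Carlo noise.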
Even if the target autocovariance function $\gamma_h$ is positive semi-definite, the above construction may not be feasible. For example, if the target autocovariance function is that of a MA(1)-process with a unit root,
$$
\gamma_h=\begin{cases}
1/12 &\text{for }h=0 \\
1/24 &\text{for }h=1 \\
0 &\text{for }h>1
\end{cases},
$$
this would imply that
$$
\rho_h=\begin{cases}
1 &\text{for }h=0 \\
0.518 &\text{for }h=1 \\
0 &\text{for }h>1
\end{cases}
$$
which is not a positive semi-definite autocorrelation function. | How to generate uniform distributed samples with given auto-correlation function | Letting $U_t=\Phi(X_t)$ where $X_t$ is a zero-mean and unit-variance stationary Gaussian process with autocorrelation function $\rho_h$ and $\Phi$ is the standard normal cdf, it follows that each $U_t | How to generate uniform distributed samples with given auto-correlation function
Letting $U_t=\Phi(X_t)$ where $X_t$ is a zero-mean and unit-variance stationary Gaussian process with autocorrelation function $\rho_h$ and $\Phi$ is the standard normal cdf, it follows that each $U_t$ is marginally uniformly distributed. The relation between $\rho_h$ and the autocovariance function of $U_t$ is
\begin{align}
\gamma_h&=\operatorname{Cov}(U_t,U_{t+h})
\\&=E(U_tU_{t+h})-E(U_t)E(U_{t+h})
\\&=E(\Phi(X_t)\Phi(X_{t+h}))-1/4
\\&=P(Z_1\le X_t\cap Z_2\le X_{t+h})-1/4
\\&=P\left(\frac{Z_1-X_t}{\sqrt{2}}\le0\cap \frac{Z_2-X_{t+h}}{\sqrt{2}}\le 0\right)-1/4
\\&=\Phi_2\left(\begin{bmatrix}0\\0\end{bmatrix};\frac{\rho_h}2\right)-1/4.
\end{align}
Here $Z_1$ and $Z_2$ are independent standard normal random variables, and $\Phi_2$ is the cdf of the standard bivariate normal distribution with correlation $\rho_h/2$.
Using the standard orthant-probability formula for the bivariate normal distribution, the above relation simplifies to
$$
\gamma_h = \frac1{2\pi}\operatorname{arcsin}\frac{\rho_h}2.
$$
Solving this equation, we find that the autocorrelation of $X_t$ needed to achieve a target autocovariance $\gamma_h$ of $U_t$ is
$$
\rho_h = 2\sin(2\pi \gamma_h).
$$
Even if the target autocovariance function $\gamma_h$ is positive semi-definite, the above construction may not be feasible. For example, if the target autocovariance function is that of a MA(1)-process with a unit root,
$$
\gamma_h=\begin{cases}
1/12 &\text{for }h=0 \\
1/24 &\text{for }h=1 \\
0 &\text{for }h>1
\end{cases},
$$
this would imply that
$$
\rho_h=\begin{cases}
1 &\text{for }h=0 \\
0.518 &\text{for }h=1 \\
0 &\text{for }h>1
\end{cases}
$$
which is not a positive semi-definite autocorrelation function. | How to generate uniform distributed samples with given auto-correlation function
Letting $U_t=\Phi(X_t)$ where $X_t$ is a zero-mean and unit-variance stationary Gaussian process with autocorrelation function $\rho_h$ and $\Phi$ is the standard normal cdf, it follows that each $U_t |
27,169 | How to generate uniform distributed samples with given auto-correlation function | Here's a pragmatic and easy approach with room to expand and establish proofs, with the focus on the main problem: how do you generate a correlated uniform sample?
Let $U_1, U_2$ be uniformly distributed and independent on the unit interval. Let $V_1 = U_1$. For a desired correlation $d$, let $G_{1,2}$ be yet another uniformly distributed random variable.
Let $$V_2 = \left\{ \begin{array}{ccc} U_1 & \text{if} & G_{1,2} < d \\
U_2 & \text{if} &G_{1,2} \ge d\end{array} \right.$$
I claim that:
One can show that $V_2$ is indeed uniformly distributed, with a correlation of $d$ with $V_1$
By way of induction, one can set an arbitrary sequence of uniform random variables and create a new sample having a desired covariance structure.
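The first claim is easy to check by simulation (a Python sketch; $d$ and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)
d, n = 0.4, 400_000
u1 = rng.uniform(size=n)
u2 = rng.uniform(size=n)
g = rng.uniform(size=n)
v2 = np.where(g < d, u1, u2)          # copy u1 with probability d, else a fresh uniform

corr = np.corrcoef(u1, v2)[0, 1]      # should be close to d
```

`v2` is a mixture of two Uniform(0,1) variables, hence itself Uniform(0,1), and its correlation with `u1` is $d\cdot 1 + (1-d)\cdot 0 = d$.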
As usual, this site is always most compelled by code, so I can show a couple cases of the spherical AR-1 auto correlation, where the correlation between observations with a lag of 1 is set to $d$, but otherwise it's relatively straightforward to use any structure you want.
do.one <- function(n,N,d) {
u <- matrix(runif(n*N), N, n)
g <- matrix(runif((n-1)*N), N, n-1)  # one uniform per transition (the /2 in the original under-fills the matrix and recycles values)
v <- u
for( i in 2:n) {
v[g[, i-1] < d,i] <- v[g[, i-1] < d, i-1]
}
acf <- sapply(2:n, function(i) cor(v[,i], v[, 1]))
acf
}
set.seed(123)
ds <- c(0,0.25, 0.5, 0.95)
acfs <- sapply(ds, do.one, n=10, N=1000)
matplot(acfs,type='l', xlab='Lags')
legend('topright', title = 'AR-1 spherical correlation', legend = ds, lty=1:4, col=1:4) | How to generate uniform distributed samples with given auto-correlation function | Here's a pragmatic and easy approach with room to expand and establish proofs, with the focus on the main problem: how do you generate a correlated uniform sample?
Let $U_1, U_2$ be uniformly distribu | How to generate uniform distributed samples with given auto-correlation function
Here's a pragmatic and easy approach with room to expand and establish proofs, with the focus on the main problem: how do you generate a correlated uniform sample?
Let $U_1, U_2$ be uniformly distributed and independent on the unit interval. Let $V_1 = U_1$. For a desired correlation $d$, let $G_{1,2}$ be yet another uniformly distributed random variable.
Let $$V_2 = \left\{ \begin{array}{ccc} U_1 & \text{if} & G_{1,2} < d \\
U_2 & \text{if} &G_{1,2} \ge d\end{array} \right.$$
I claim that:
One can show that $V_2$ is indeed uniformly distributed, with a correlation of $d$ with $V_1$
By way of induction, one can set an arbitrary sequence of uniform random variables and create a new sample having a desired covariance structure.
As usual, this site is always most compelled by code, so I can show a couple cases of the spherical AR-1 auto correlation, where the correlation between observations with a lag of 1 is set to $d$, but otherwise it's relatively straightforward to use any structure you want.
do.one <- function(n,N,d) {
u <- matrix(runif(n*N), N, n)
g <- matrix(runif((n-1)*N), N, n-1)  # one uniform per transition (the /2 in the original under-fills the matrix and recycles values)
v <- u
for( i in 2:n) {
v[g[, i-1] < d,i] <- v[g[, i-1] < d, i-1]
}
acf <- sapply(2:n, function(i) cor(v[,i], v[, 1]))
acf
}
set.seed(123)
ds <- c(0,0.25, 0.5, 0.95)
acfs <- sapply(ds, do.one, n=10, N=1000)
matplot(acfs,type='l', xlab='Lags')
legend('topright', title = 'AR-1 spherical correlation', legend = ds, lty=1:4, col=1:4) | How to generate uniform distributed samples with given auto-correlation function
Here's a pragmatic and easy approach with room to expand and establish proofs, with the focus on the main problem: how do you generate a correlated uniform sample?
Let $U_1, U_2$ be uniformly distribu |
27,170 | How to generate uniform distributed samples with given auto-correlation function | If I understand correctly your problem, you need to simulate random variables which marginally follow a uniform distribution while the joint distribution is a multivariate uniform distribution with some correlation between your marginals.
Formally:
$X_i\sim U(0,1), \forall i = 1,2,\dots,K$, where $X\sim U(a_i, b_i, Cor)$.
In the method I am firstly proposing, your uniform random variables will be uniformly distributed between 0 and 1. However, I also provide a "Normal to Anything" kind of approach.
One trick is to generate a multivariate normal distribution, specifying a correlation matrix. Let's say $\Sigma$ is your auto-correlation structure for your $K$ random variables.
$$X \sim MVN(\mu, \Sigma)$$
$\mu$ is a vector of means of size $K$, while $\Sigma$ is a $K*K$ matrix
Then you have to transform your quantiles to probabilities from any normal distribution you want, for one $X$:
$$CDF_{Normal}(X_{i}, \mu_i, \Sigma_{ii}) \sim U(0,1)$$
Now if you assume that your variables are not uniformly distributed on a common support $[a,b]$ but on variable-dependent supports instead, you just need to convert the probabilities obtained previously with an inverse uniform distribution specifying $a_i$ and $b_i$. For a random variable,
$$\theta^{-1}(CDF_{Normal}(X_{i}, \mu_i, \Sigma_{ii}), a_i, b_i)$$
It is worth noting that you will globally respect your correlation structure, while under-estimating it a little bit. Some brute-force methods could be used to correct your final correlation structure.
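The size of this under-estimation is available in closed form: if the underlying normals have correlation $\rho$, the transformed uniforms have correlation $\frac{6}{\pi}\arcsin\frac{\rho}{2}\le\rho$ (the $\gamma_h=\frac1{2\pi}\arcsin\frac{\rho_h}2$ relation derived in another answer of this thread, divided by the uniform variance $\frac1{12}$). A small Python sketch:

```python
import math

def uniform_corr(rho: float) -> float:
    """Correlation of Phi(X) and Phi(Y) when (X, Y) are standard
    bivariate normal with correlation rho."""
    return (6.0 / math.pi) * math.asin(rho / 2.0)

# attenuation is mild for small rho and vanishes at rho = 1
table = {r: uniform_corr(r) for r in (0.2, 0.5, 0.9, 1.0)}
```

This is what makes the heatmap comparison below only approximately, rather than exactly, concordant.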
Here is a script to illustrate the method.
Hope this helps.
set.seed(1234)
#Example function to generate auto-correlation matrix
autocorr.mat <- function(p = 100, rho = 0.9) {
mat <- diag(p)
autocorr <- sapply(seq_len(p), function(i) rho^i)
mat[lower.tri(mat)] <- autocorr[unlist(sapply(seq.int(p-1, 1), function(i) seq_len(i)))]
mat[upper.tri(mat)] <- autocorr[unlist(sapply(seq_len(p - 1), function(i) seq.int(i, 1)))]
return(mat)
}
#Simulate normal data with some autocorrelation structure
autocorrelated.normal = MASS::mvrnorm(1000, mu= rep(0,100), autocorr.mat())
#Let's see the heatmap
heatmap(cor(autocorrelated.normal))
#If you are interested in simulating random uniform between 0 and 1 with your auto-correlation structure
pnorm(autocorrelated.normal)
#If you are interested in simulating random uniform between a and b, here [-10, 10]
qunif(pnorm(autocorrelated.normal), -10,10)
A brief comparison of the respective heatmaps shows global concordance with the initial correlation structure
heatmap(cor(autocorrelated.normal))
heatmap(cor(pnorm(autocorrelated.normal)))
heatmap(cor(qunif(pnorm(autocorrelated.normal), -10,10))) | How to generate uniform distributed samples with given auto-correlation function | If I understand correctly your problem, you need to simulate random variables which marginally follow an uniform distribution while the joint distribution is a multivariate uniform distribution with s | How to generate uniform distributed samples with given auto-correlation function
If I understand correctly your problem, you need to simulate random variables which marginally follow a uniform distribution while the joint distribution is a multivariate uniform distribution with some correlation between your marginals.
Formally:
$X_i\sim U(0,1), \forall i = 1,2,\dots,K$, where $X\sim U(a_i, b_i, Cor)$.
In the method I am firstly proposing, your uniform random variables will be uniformly distributed between 0 and 1. However, I also provide a "Normal to Anything" kind of approach.
One trick is to generate a multivariate normal distribution, specifying a correlation matrix. Let's say $\Sigma$ is your auto-correlation structure for your $K$ random variables.
$$X \sim MVN(\mu, \Sigma)$$
$\mu$ is a vector of means of size $K$, while $\Sigma$ is a $K*K$ matrix
Then you have to transform your quantiles to probabilities from any normal distribution you want, for one $X$:
$$CDF_{Normal}(X_{i}, \mu_i, \Sigma_{ii}) \sim U(0,1)$$
Now if you assume that your variables are not uniformly distributed on a common support $[a,b]$ but on variable-dependent supports instead, you just need to convert the probabilities obtained previously with an inverse uniform distribution specifying $a_i$ and $b_i$. For a random variable,
$$\theta^{-1}(CDF_{Normal}(X_{i}, \mu_i, \Sigma_{ii}), a_i, b_i)$$
It is worth noting that you will globally respect your correlation structure, while under-estimating it a little bit. Some brute-force methods could be used to correct your final correlation structure.
Here is a script to illustrate the method.
Hope this helps.
set.seed(1234)
#Example function to generate auto-correlation matrix
autocorr.mat <- function(p = 100, rho = 0.9) {
mat <- diag(p)
autocorr <- sapply(seq_len(p), function(i) rho^i)
mat[lower.tri(mat)] <- autocorr[unlist(sapply(seq.int(p-1, 1), function(i) seq_len(i)))]
mat[upper.tri(mat)] <- autocorr[unlist(sapply(seq_len(p - 1), function(i) seq.int(i, 1)))]
return(mat)
}
#Simulate normal data with some autocorrelation structure
autocorrelated.normal = MASS::mvrnorm(1000, mu= rep(0,100), autocorr.mat())
#Let's see the heatmap
heatmap(cor(autocorrelated.normal))
#If you are interested in simulating random uniform between 0 and 1 with your auto-correlation structure
pnorm(autocorrelated.normal)
#If you are interested in simulating random uniform between a and b, here [-10, 10]
qunif(pnorm(autocorrelated.normal), -10,10)
A brief comparison of the respective heatmaps shows global concordance with the initial correlation structure
heatmap(cor(autocorrelated.normal))
heatmap(cor(pnorm(autocorrelated.normal)))
heatmap(cor(qunif(pnorm(autocorrelated.normal), -10,10))) | How to generate uniform distributed samples with given auto-correlation function
If I understand correctly your problem, you need to simulate random variables which marginally follow an uniform distribution while the joint distribution is a multivariate uniform distribution with s |
27,171 | How to generate uniform distributed samples with given auto-correlation function | You can try deriving the AR(p) process coefficients $\phi_i$ from the given ACF $r(p)$. You could apply the Yule-Walker equations:
form a vector $r$ of ACF for lags $p$: $1, r_1, r_2,\dots, r_p$
construct a correlation matrix $R$ as described in the link above from $r_p$, e.g. the third row would be $(r_2,r_1,1,r_1,\dots,r_{p-2})$
calculate $\phi=R^{-1}r$
Use these coefficients to produce autocorrelated samples | How to generate uniform distributed samples with given auto-correlation function | You can try implying AR(p) process coefficients $\phi_i$ from the given ACF $r(p)$. You could apply Yule Walker equations:
form a vector $r$ of ACF for lags $p$: $1, r_1, r_2,\dots, r_p$
construct a | How to generate uniform distributed samples with given auto-correlation function
You can try deriving the AR(p) process coefficients $\phi_i$ from the given ACF $r(p)$. You could apply the Yule-Walker equations:
form a vector $r$ of ACF for lags $p$: $1, r_1, r_2,\dots, r_p$
construct a correlation matrix $R$ as described in the link above from $r_p$, e.g. the third row would be $(r_2,r_1,1,r_1,\dots,r_{p-2})$
calculate $\phi=R^{-1}r$
Use these coefficients to produce autocorrelated samples | How to generate uniform distributed samples with given auto-correlation function
You can try implying AR(p) process coefficients $\phi_i$ from the given ACF $r(p)$. You could apply Yule Walker equations:
form a vector $r$ of ACF for lags $p$: $1, r_1, r_2,\dots, r_p$
construct a |
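The Yule-Walker recipe above can be checked numerically (a Python sketch using an AR(2) with arbitrarily chosen stationary coefficients; the theoretical ACF follows from the Yule-Walker relations $\rho_1=\phi_1/(1-\phi_2)$ and $\rho_2=\phi_1\rho_1+\phi_2$):

```python
import numpy as np

phi_true = np.array([0.5, 0.3])               # assumed AR(2) coefficients

# theoretical ACF implied by these coefficients
rho1 = phi_true[0] / (1.0 - phi_true[1])
rho2 = phi_true[0] * rho1 + phi_true[1]

R = np.array([[1.0, rho1],
              [rho1, 1.0]])                   # Toeplitz correlation matrix
r = np.array([rho1, rho2])
phi_hat = np.linalg.solve(R, r)               # phi = R^{-1} r recovers the coefficients
```

Solving the linear system reproduces the generating coefficients exactly, which is the content of the recipe.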
27,172 | How to generate uniform distributed samples with given auto-correlation function | The following spells out the details of the approach proposed in the other answer by @AdamO and in its comments by @LucaCiti.
For $i=1,2,\dots,\infty$, let $|\phi_i|$ denote the probability that $U_t$ takes a value identical to either $U_{t-i}$ or $1 - U_{t-i}$ and let these two possibilities be determined by the sign of $\phi_i$. Let the remaining fraction
$$
\phi_0=1-\sum_{i=1}^\infty |\phi_i|
$$
denote the probability that $U_t$ takes a uniformly distributed value independent of the history of the process. Clearly, we must have
$$
0\le \phi_0\le 1. \tag{1}
$$
and
$$
-1\le \phi_i\le 1 \tag{2}
$$
for $i=1,2,\dots,\infty$.
Letting $V_t=U_t-\frac12$ denote the mean-centered process, and using the law of total expectation, we have
\begin{align}
E(V_t|V_{t-1},V_{t-2},\dots)
&=|\phi_1|\operatorname{sgn}(\phi_1) V_{t-1} + |\phi_2|\operatorname{sgn}(\phi_2) V_{t-2} + \dots
\\&=\phi_1 V_{t-1} + \phi_2 V_{t-2} + \dots.
\end{align}
Thus it is immediately clear that $\phi_1,\phi_2,\dots$ are the coefficients in the $\operatorname{AR}(\infty)$ representation of the model.
Unlike for ordinary ARMA models, however, constraints (1) and (2) imply that not all positive semi-definite autocovariance functions are attainable via this construction. For example, if the target autocovariance function is that of an MA(1) model with MA polynomial $1-\theta B$, the infinite AR polynomial would equal
$$
\frac1{1-\theta B}=1+\theta B+\theta^2 B^2+\dots,
$$
and we would have
$$
\sum_{i=1}^\infty|\phi_i|=\sum_{i=1}^\infty |\theta^i|=\sum_{i=1}^\infty |\theta|^i = \frac{|\theta|}{1-|\theta|}.
$$
Combined with (1) this limits possible values of $\theta$ to
$$
|\theta|\le \frac12
$$
and the correlation at lag 1 to
$$
-\frac25\le \rho_1=\frac{\theta}{1+\theta^2}\le\frac25
$$
In contrast, via the copula described in my other answer the correlation at lag 1 is limited only to $|\rho_1|<0.4825837$. Positive semi-definiteness in itself limits the same correlation to $|\rho_1|\le 1/2$. | How to generate uniform distributed samples with given auto-correlation function | The following spells out the details of the approach proposed in the other answer by @AdamO and in its comments by @LucaCiti.
For $i=1,2,\dots,\infty$, let $|\phi_i|$ denote the probability that $U_t$ | How to generate uniform distributed samples with given auto-correlation function
The following spells out the details of the approach proposed in the other answer by @AdamO and in its comments by @LucaCiti.
For $i=1,2,\dots,\infty$, let $|\phi_i|$ denote the probability that $U_t$ takes a value identical to either $U_{t-i}$ or $1 - U_{t-i}$ and let these two possibilities be determined by the sign of $\phi_i$. Let the remaining fraction
$$
\phi_0=1-\sum_{i=1}^\infty |\phi_i|
$$
denote the probability that $U_t$ takes a uniformly distributed value independent of the history of the process. Clearly, we must have
$$
0\le \phi_0\le 1. \tag{1}
$$
and
$$
-1\le \phi_i\le 1 \tag{2}
$$
for $i=1,2,\dots,\infty$.
Letting $V_t=U_t-\frac12$ denote the mean-centered process, and using the law of total expectation, we have
\begin{align}
E(V_t|V_{t-1},V_{t-2},\dots)
&=|\phi_1|\operatorname{sgn}(\phi_1) V_{t-1} + |\phi_2|\operatorname{sgn}(\phi_2) V_{t-2} + \dots
\\&=\phi_1 V_{t-1} + \phi_2 V_{t-2} + \dots.
\end{align}
Thus it is immediately clear that $\phi_1,\phi_2,\dots$ are the coefficients in the $\operatorname{AR}(\infty)$ representation of the model.
Unlike for ordinary ARMA models, however, constraints (1) and (2) imply that not all positive semi-definite autocovariance functions are attainable via this construction. For example, if the target autocovariance function is that of an MA(1) model with MA polynomial $1-\theta B$, the infinite AR polynomial would equal
$$
\frac1{1-\theta B}=1+\theta B+\theta^2 B^2+\dots,
$$
and we would have
$$
\sum_{i=1}^\infty|\phi_i|=\sum_{i=1}^\infty |\theta^i|=\sum_{i=1}^\infty |\theta|^i = \frac{|\theta|}{1-|\theta|}.
$$
Combined with (1) this limits possible values of $\theta$ to
$$
|\theta|\le \frac12
$$
and the correlation at lag 1 to
$$
-\frac25\le \rho_1=\frac{\theta}{1+\theta^2}\le\frac25
$$
In contrast, via the copula described in my other answer the correlation at lag 1 is limited only to $|\rho_1|<0.4825837$. Positive semi-definiteness in itself limits the same correlation to $|\rho_1|\le 1/2$. | How to generate uniform distributed samples with given auto-correlation function
The following spells out the details of the approach proposed in the other answer by @AdamO and in its comments by @LucaCiti.
For $i=1,2,\dots,\infty$, let $|\phi_i|$ denote the probability that $U_t$ |
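The mixture mechanism described above (copy $U_{t-i}$, reflect to $1-U_{t-i}$, or draw a fresh uniform) is easy to simulate. A minimal stdlib sketch with assumed coefficients $\phi=(0.3,-0.2)$, hence $\phi_0=0.5$ (these values are illustrative assumptions, not from the original answer):

```python
import random

rng = random.Random(0)
phi = [0.3, -0.2]                      # assumed coefficients, sum(|phi_i|) <= 1
phi0 = 1.0 - sum(abs(p) for p in phi)  # probability of a fresh uniform draw

n, burn = 4000, 200
u = [rng.random() for _ in range(len(phi))]   # initial history
for _ in range(n + burn):
    c = rng.random()                   # choose which mechanism fires
    if c < phi0:
        u.append(rng.random())         # independent Uniform(0,1)
    else:
        c -= phi0
        for i, p in enumerate(phi, start=1):
            if c < abs(p):
                # copy U_{t-i} if phi_i > 0, reflect to 1 - U_{t-i} if phi_i < 0
                u.append(u[-i] if p > 0 else 1.0 - u[-i])
                break
            c -= abs(p)
u = u[burn:]
mean = sum(u) / len(u)                 # marginal stays Uniform(0,1), mean ~ 1/2
```

Both the copy and the reflection preserve the uniform marginal, which is why only the autocorrelation structure is affected.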
27,173 | How to generate uniform distributed samples with given auto-correlation function | Uniformly distributed samples are samples in which every element is distributed uniformly; if we place further constraints on them, they cease to be uniformly distributed samples. But anyway, if you mean something else, we can easily upgrade my answer.
Let us just generate samples with a given autocorrelation function. Our idea is to put all the needed constraints on the samples and let scipy.optimize do everything for us.
While the key concept is simple, optimization and numerical problems can be arbitrarily hard in real applications, and it can be necessary to adjust the scipy solver parameters here and/or optimize some computations.
The code implements this idea:
import numpy as np
import scipy.optimize as sco
import matplotlib.pyplot as plt
def cov(x,y):
    # note: no mean-centering here -- this assumes (roughly) zero-mean data;
    # the lag-0 constraint cor(x,x) = 1 drives the solution toward mean 0
    return np.sum(x*y)/len(x)

def cor(x,y):
    return cov(x,y)/(np.std(x)*np.std(y))

#define our autocorrelation (correlations at lags 0..n-1)
def autoMoment(x, moment, n):
    out = [moment(x,x)]                  # lag 0
    for i in range(1,n):
        out.append(moment(x[i:],x[:-i]))
    return np.array(out)

#define generation of the objective function for scipy.optimize
def generateWithSpecificAutomoment(moment, values):
    def f(x):
        out = autoMoment(x, moment, len(values))
        return np.sum((np.array(out) - np.array(values))**2)
    return f
desiredAutocorr = [1,0.3,0.1,-0.23,0.03,0.07,-0.03]
initial = np.random.randn(50) #starting point for the solver
solution = sco.minimize(generateWithSpecificAutomoment(cor, desiredAutocorr), initial)
print(solution.success)
out = solution.x #samples obtained after optimization
#check autocorrelation on generated samples
realAutocorr = autoMoment(out, cor, len(desiredAutocorr))
#compare desired and real result
plt.title("Autocorrelations")
plt.plot(desiredAutocorr, linewidth=7, label="desired")
plt.plot(realAutocorr, label="real")
plt.legend()
plt.show()
Generated picture: | How to generate uniform distributed samples with given auto-correlation function | Uniform distributed samples are the set of samples in which every element is distributed uniformly, if we place further constraints on this, it ceases to be uniform distributed samples. But anyway, if | How to generate uniform distributed samples with given auto-correlation function
Uniformly distributed samples are samples in which every element is distributed uniformly; if we place further constraints on them, they cease to be uniformly distributed samples. But anyway, if you mean something else, we can easily upgrade my answer.
Let us just generate samples with a given autocorrelation function. Our idea is to put all the needed constraints on the samples and let scipy.optimize do everything for us.
While the key concept is simple, optimization and numerical problems can be arbitrarily hard in real applications, and it can be necessary to adjust the scipy solver parameters here and/or optimize some computations.
The code implements this idea:
import numpy as np
import scipy.optimize as sco
import matplotlib.pyplot as plt
def cov(x,y):
    # note: no mean-centering here -- this assumes (roughly) zero-mean data;
    # the lag-0 constraint cor(x,x) = 1 drives the solution toward mean 0
    return np.sum(x*y)/len(x)

def cor(x,y):
    return cov(x,y)/(np.std(x)*np.std(y))

#define our autocorrelation (correlations at lags 0..n-1)
def autoMoment(x, moment, n):
    out = [moment(x,x)]                  # lag 0
    for i in range(1,n):
        out.append(moment(x[i:],x[:-i]))
    return np.array(out)

#define generation of the objective function for scipy.optimize
def generateWithSpecificAutomoment(moment, values):
    def f(x):
        out = autoMoment(x, moment, len(values))
        return np.sum((np.array(out) - np.array(values))**2)
    return f
desiredAutocorr = [1,0.3,0.1,-0.23,0.03,0.07,-0.03]
initial = np.random.randn(50) #starting point for the solver
solution = sco.minimize(generateWithSpecificAutomoment(cor, desiredAutocorr), initial)
print(solution.success)
out = solution.x #samples obtained after optimization
#check autocorrelation on generated samples
realAutocorr = autoMoment(out, cor, len(desiredAutocorr))
#compare desired and real result
plt.title("Autocorrelations")
plt.plot(desiredAutocorr, linewidth=7, label="desired")
plt.plot(realAutocorr, label="real")
plt.legend()
plt.show()
Generated picture: | How to generate uniform distributed samples with given auto-correlation function
Uniform distributed samples are the set of samples in which every element is distributed uniformly, if we place further constraints on this, it ceases to be uniform distributed samples. But anyway, if |
27,174 | What distribution does the mean of a random sample from a Uniform distribution follow? | First, you might want to look at Wikipedia on Irwin-Hall distribution.
Unless $n$ is very small, $A = \bar X = \frac{1}{n}\sum_{i=1}^{n} X_i,$ where the $X_i$ are independently $\mathsf{Unif}(\theta-.5,\theta+.5),$ satisfies $A \stackrel{aprx}{\sim}\mathsf{Norm}(\mu = \theta, \sigma = 1/\sqrt{12n}).$
[The approximation is quite good for $n \ge 10.$ In fact, in the early days of computation when it was expensive to do operations other than plain arithmetic, a common way to simulate a standard normal random variable was to evaluate $Z = \sum_{i=1}^{12} X_i - 6,$ where the $X_i$ were generated as independent standard uniforms.]
The following simulation in R uses a million samples of size $n = 12$ with $\theta = 5.$
set.seed(2020) # for reproducibility
m = 10^6; n = 12; th = 5
a = replicate(m, mean(runif(n, th-.5,th+.5)))
mean(a); sd(a); 1/sqrt(12*n)
[1] 5.000153 # aprx 5
[1] 0.08339642 # aprx 1/12
[1] 0.08333333 # 1/12
Thus the mean and standard deviations are consistent
with the results of the Central Limit Theorem.
In R, the Shapiro-Wilk normality test is limited to
5000 observations. We show results for the first 5000
simulated sample means. Those observations are consistent with a normal distribution.
shapiro.test(a[1:5000])
Shapiro-Wilk normality test
data: a[1:5000]
W = 0.99979, p-value = 0.9257
The histogram below compares the simulated distribution of $\bar X$ with the PDF of $\mathsf{Norm}(\mu=5, \sigma=1/12).$
hdr = "Simulated Dist'n of Means of Uniform Samples: n = 12"
hist(a, br=30, prob=T, col="skyblue2", main=hdr)
curve(dnorm(x, 5, 1/sqrt(12*n)), add=T, lwd=2)
abline(v=5+c(-1,1)*1.96/sqrt(12*n), col="red")
This suggests that $$P\left(-1.96 < \frac{\bar X - \theta}{1/\sqrt{12n}} < 1.96\right) = 0.95,$$ so that a very good approximate 95% confidence interval for $\theta$ is of the form $(\bar X \pm 1.96/\sqrt{12n}).$ | What distribution does the mean of a random sample from a Uniform distribution follow? | First, you might want to look at Wikipedia on Irwin-Hall distribution.
Unless $n$ is very small $A = \bar X =
\frac{1}{n}\sum_{i=1}^{n} X_i,$ where
$X_i$ are independently $\mathsf{Unif}(\theta-.5,\t | What distribution does the mean of a random sample from a Uniform distribution follow?
First, you might want to look at Wikipedia on Irwin-Hall distribution.
Unless $n$ is very small, $A = \bar X = \frac{1}{n}\sum_{i=1}^{n} X_i,$ where the $X_i$ are independently $\mathsf{Unif}(\theta-.5,\theta+.5),$ satisfies $A \stackrel{aprx}{\sim}\mathsf{Norm}(\mu = \theta, \sigma = 1/\sqrt{12n}).$
[The approximation is quite good for $n \ge 10.$ In fact, in the early days of computation when it was expensive to do operations other than plain arithmetic, a common way to simulate a standard normal random variable was to evaluate $Z = \sum_{i=1}^{12} X_i - 6,$ where the $X_i$ were generated as independent standard uniforms.]
The following simulation in R uses a million samples of size $n = 12$ with $\theta = 5.$
set.seed(2020) # for reproducibility
m = 10^6; n = 12; th = 5
a = replicate(m, mean(runif(n, th-.5,th+.5)))
mean(a); sd(a); 1/sqrt(12*n)
[1] 5.000153 # aprx 5
[1] 0.08339642 # aprx 1/12
[1] 0.08333333 # 1/12
Thus the mean and standard deviations are consistent
with the results of the Central Limit Theorem.
In R, the Shapiro-Wilk normality test is limited to
5000 observations. We show results for the first 5000
simulated sample means. Those observations are consistent with a normal distribution.
shapiro.test(a[1:5000])
Shapiro-Wilk normality test
data: a[1:5000]
W = 0.99979, p-value = 0.9257
The histogram below compares the simulated distribution of $\bar X$ with the PDF of $\mathsf{Norm}(\mu=5, \sigma=1/12).$
hdr = "Simulated Dist'n of Means of Uniform Samples: n = 12"
hist(a, br=30, prob=T, col="skyblue2", main=hdr)
curve(dnorm(x, 5, 1/sqrt(12*n)), add=T, lwd=2)
abline(v=5+c(-1,1)*1.96/sqrt(12*n), col="red")
This suggests that $$P\left(-1.96 < \frac{\bar X - \theta}{1/\sqrt{12n}} < 1.96\right) = 0.95,$$ so that a very good approximate 95% confidence interval for $\theta$ is of the form $(\bar X \pm 1.96/\sqrt{12n}).$ | What distribution does the mean of a random sample from a Uniform distribution follow?
First, you might want to look at Wikipedia on Irwin-Hall distribution.
Unless $n$ is very small $A = \bar X =
\frac{1}{n}\sum_{i=1}^{n} X_i,$ where
$X_i$ are independently $\mathsf{Unif}(\theta-.5,\t |
27,175 | What distribution does the mean of a random sample from a Uniform distribution follow? | No, it's not uniform. Intuitively, you would expect that the uncertainty over $\bar X$ decreases as $n$ increases. Also the central limit theorem suggests, as $n$ increases, the distribution approaches normal distribution. Which means, you'll have a peak around $\theta$, and it's going to narrow down as $n\rightarrow\infty$.
For a simple counter-example, if $n=2$, $\bar X$ is going to have a triangular distribution, centered at $\theta$, with the same limits. | What distribution does the mean of a random sample from a Uniform distribution follow? | No, it's not uniform. Intuitively, you would expect that the uncertainty over $\bar X$ decreases as $n$ increases. Also the central limit theorem suggests, as $n$ increases, the distribution approache | What distribution does the mean of a random sample from a Uniform distribution follow?
No, it's not uniform. Intuitively, you would expect that the uncertainty over $\bar X$ decreases as $n$ increases. Also, the central limit theorem suggests that, as $n$ increases, the distribution approaches a normal distribution. This means you'll have a peak around $\theta$, and it's going to narrow as $n\rightarrow\infty$.
For a simple counter-example, if $n=2$, $\bar X$ is going to have a triangular distribution, centered at $\theta$, with the same limits. | What distribution does the mean of a random sample from a Uniform distribution follow?
No, it's not uniform. Intuitively, you would expect that the uncertainty over $\bar X$ decreases as $n$ increases. Also the central limit theorem suggests, as $n$ increases, the distribution approache |
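The triangular claim for $n=2$ is easy to check by simulation: the triangular density on $(\theta-\tfrac12,\theta+\tfrac12)$ with peak $2$ at $\theta$ implies $P(|\bar X-\theta|<0.25)=0.75$. A quick stdlib sketch (illustrative, not from the original answer):

```python
import random

rng = random.Random(1)
theta, trials = 5.0, 20000
hits = 0
for _ in range(trials):
    # mean of two independent Uniform(theta - 0.5, theta + 0.5) draws
    xbar = (rng.uniform(theta - 0.5, theta + 0.5)
            + rng.uniform(theta - 0.5, theta + 0.5)) / 2
    if abs(xbar - theta) < 0.25:
        hits += 1
frac = hits / trials   # the triangular density predicts 0.75
```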
27,176 | What distribution does the mean of a random sample from a Uniform distribution follow? | The Irwin-Hall distribution is the distribution of a sum of $n$ uniform random variables. Therefore, an analytic expression for the density of the mean of $n$ uniform random variables is
$$\frac{1}{(n-1)!} \sum_{k=0}^n (-1)^k \binom{n}{k} (x-k)_+^{n-1}$$
By rescaling (the mean is the sum divided by $n$, so its density is $n\,f(nx)$) and shifting this expression, you get the density of yours. | What distribution does the mean of a random sample from a Uniform distribution follow? | The Irwin-Hall distribution is the distribution of a sum of $n$ uniform random variables. Therefore, an analytic expression for the density of the mean of $n$ uniform random variables is
$$\frac{1}{(n-1)!
The Irwin-Hall distribution is the distribution of a sum of $n$ uniform random variables. Therefore, an analytic expression for the density of the mean of $n$ uniform random variables is
$$\frac{1}{(n-1)!} \sum_{k=0}^n (-1)^k \binom{n}{k} (x-k)_+^{n-1}$$
By rescaling (the mean is the sum divided by $n$, so its density is $n\,f(nx)$) and shifting this expression, you get the density of yours. | What distribution does the mean of a random sample from a Uniform distribution follow?
The Irwin-Hall distribution is the distribution of a sum of $n$ uniform random variables. Therefore, an analytic expression for the density of the mean of $n$ uniform random variables is
$$\frac{1}{(n-1)!
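The Irwin-Hall density can be coded directly. A sketch (not from the original answer) using the $1/(n-1)!$ normalization of the Irwin-Hall density of a sum of $n$ uniforms, with the convention that $(x)_+^m$ is $0$ for $x\le 0$, and the rescaling $n f(nx)$ for the mean:

```python
import math

def pos_pow(y, m):
    # (y)_+^m with the convention that it is 0 for y <= 0 (also when m == 0)
    return y ** m if y > 0 else 0.0

def irwin_hall_pdf(x, n):
    # density of the sum of n independent Uniform(0,1) variables
    s = sum((-1) ** k * math.comb(n, k) * pos_pow(x - k, n - 1)
            for k in range(n + 1))
    return s / math.factorial(n - 1)

def mean_pdf(x, n):
    # density of the mean of n Uniform(0,1) variables: rescale by n
    return n * irwin_hall_pdf(n * x, n)
```

For $n=2$ this reproduces the triangular density with peak $1$ at $x=1$ for the sum, hence peak $2$ at $x=1/2$ for the mean.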
27,177 | What distribution does the mean of a random sample from a Uniform distribution follow? | This is one case where using Fourier transforms makes for simple solutions. Your density function is $\mathrm{rect}(\theta)$ with its Fourier transform $\mathrm{sinc}(f)$ (where $\mathrm{sinc}(f)=\frac{\sin \pi f}{\pi f}$ with the obvious continuation $\mathrm{sinc}(0)=1$). Adding $n$ variables with that distribution leads to convolving the distribution $n$ times with itself, so the sum has the Fourier transform $\bigl(\mathrm{sinc}(f)\bigr)^n$; dividing by $n$ to form the mean rescales the frequency axis, giving $\bigl(\mathrm{sinc}(f/n)\bigr)^n$. Doing the inverse transform then delivers
$$\int_{-\infty}^\infty \cos(2\pi f\theta)\bigl(\mathrm{sinc}(f/n)\bigr)^n\,\mathrm{d}f.$$ In contrast to the piece-wise defined function in the $\theta$ domain, this is a single expression and thus properties like the moments of the function can be derived from this representation through the Fourier domain.
This is one case where using Fourier transforms makes for simple solutions. Your density function is $\mathrm{rect}(\theta)$ with its Fourier transform $\mathrm{sinc}(f)$ (where $\mathrm{sinc}(f)=\frac{\sin \pi f}{\pi f}$ with the obvious continuation $\mathrm{sinc}(0)=1$). Adding $n$ variables with that distribution leads to convolving the distribution $n$ times with itself, so the sum has the Fourier transform $\bigl(\mathrm{sinc}(f)\bigr)^n$; dividing by $n$ to form the mean rescales the frequency axis, giving $\bigl(\mathrm{sinc}(f/n)\bigr)^n$. Doing the inverse transform then delivers
$$\int_{-\infty}^\infty \cos(2\pi f\theta)\bigl(\mathrm{sinc}(f/n)\bigr)^n\,\mathrm{d}f.$$ In contrast to the piece-wise defined function in the $\theta$ domain, this is a single expression and thus properties like the moments of the function can be derived from this representation through the Fourier domain.
This is one case where using Fourier transforms makes for simple solutions. Your density function is $\mathrm{rect}(\theta)$ with its Fourier transform $\mathrm{sinc}(f)$ (where $\mathrm{sinc}(f)=\fr |
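As a numerical sanity check (illustrative, not from the original answer): the characteristic function of the mean of $n$ standard uniforms (centered at $0$) is $\bigl(\operatorname{sinc}(f/n)\bigr)^n$, and integrating it against $\cos(2\pi f\theta)$ recovers the density, e.g. the triangular peak of $2$ at $\theta=0$ for $n=2$:

```python
import math

def sinc(x):
    # sinc(x) = sin(pi x) / (pi x), with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def mean_density(theta, n, limit=50.0, step=0.001):
    # trapezoidal approximation of the inverse Fourier transform on [-limit, limit]
    m = int(2 * limit / step)
    total = 0.0
    for j in range(m + 1):
        f = -limit + j * step
        w = 0.5 if j in (0, m) else 1.0
        total += w * math.cos(2 * math.pi * f * theta) * sinc(f / n) ** n
    return total * step

val = mean_density(0.0, 2)   # triangular density of the mean of 2 uniforms
```

The truncation of the integral at `limit` leaves a small tail error (the integrand decays like $1/f^2$ for $n=2$), so the result is close to but slightly below $2$.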
27,178 | Can a sample larger than 5,000 data points be tested for normality using shapiro.test by applying the test to a subsample? | I have comments on five levels.
On this evidence this is a limitation of a particular R function shapiro.test() and need not imply that there aren't other ways to do it in R, on which I can't advise specifically. It may or may not be of practical relevance to you that no such limit applies to all software. For example, the Stata command swilk isn't limited in quite this way, but the manuals and the command output warn that P-value calculation can't be trusted much for sample sizes above about 5000. (EDIT: this paragraph edited 26 January 2021 in the light of @Ben Bolker's comment and separate answer.)
I can't comment on why that particular function won't perform, but the larger question is why you are doing this kind of testing at all. A good reason not to care is generic: for sample sizes of that order, or even larger, such tests are arguably fairly useless as even minute deviations from normality will qualify as significant at conventional levels. More specifically: why is it important or interesting to test for normality? People often apply such tests to marginal distributions given a widespread myth that marginal normality is a requirement for very many procedures. Where normality is a relevant assumption, or ideal condition, it usually applies to distributions conditional on a structure of mean outcomes or responses.
In response to your specific query of whether subsampling is acceptable, the serious reply in return is acceptable in what sense? A personal reply: as a reader, author and reviewer of statistical papers, and as a statistical journal editor, my reaction would be to suggest that such subsampling is at best awkward and at worst an avoidance of the main issue, which would be to find an implementation without such a limit, or more likely to think about the distribution in different terms.
As often emphasised on CV, and elsewhere, the most helpful and informative way to check departure from normality is a normal quantile plot, often also called a normal probability plot, a normal scores plot, or a probit plot. Such a plot not only provides a visual assessment of degree of non-normality, it makes precise in what sense there are departures from the ideal shape. The lack of an associated P-value is not in practice much of a loss, although the procedure may be given some inferential impetus through confidence levels, simulations and so forth. (EDIT 26 January 2021: yet other terms are Gaussian percentile plot and Gaussian probability plot.)
Specifically, your examples consist of generating lognormal samples and then establishing that indeed they fail to qualify as normal with P-values $\ll 10^{-15}$. That has to seem puzzling, but be reassured that with larger samples your P-values will be, or should be, even more minute, subject to a machine level question of the minimum reportable P-value here. Conversely, it may well be that your real problem lies elsewhere and these examples are no more than incidental illustrations. | Can a sample larger than 5,000 data points be tested for normality using shapiro.test by applying th | I have comments on five levels.
On this evidence this is a limitation of a particular R function shapiro.test() and need not imply that there aren't other ways to do it in R, on which I can't ad
I have comments on five levels.
On this evidence this is a limitation of a particular R function shapiro.test() and need not imply that there aren't other ways to do it in R, on which I can't advise specifically. It may or may not be of practical relevance to you that no such limit applies to all software. For example, the Stata command swilk isn't limited in quite this way, but the manuals and the command output warn that P-value calculation can't be trusted much for sample sizes above about 5000. (EDIT: this paragraph edited 26 January 2021 in the light of @Ben Bolker's comment and separate answer.)
I can't comment on why that particular function won't perform, but the larger question is why you are doing this kind of testing at all. A good reason not to care is generic: for sample sizes of that order, or even larger, such tests are arguably fairly useless as even minute deviations from normality will qualify as significant at conventional levels. More specifically: why is it important or interesting to test for normality? People often apply such tests to marginal distributions given a widespread myth that marginal normality is a requirement for very many procedures. Where normality is a relevant assumption, or ideal condition, it usually applies to distributions conditional on a structure of mean outcomes or responses.
In response to your specific query of whether subsampling is acceptable, the serious reply in return is acceptable in what sense? A personal reply: as a reader, author and reviewer of statistical papers, and as a statistical journal editor, my reaction would be to suggest that such subsampling is at best awkward and at worst an avoidance of the main issue, which would be to find an implementation without such a limit, or more likely to think about the distribution in different terms.
As often emphasised on CV, and elsewhere, the most helpful and informative way to check departure from normality is a normal quantile plot, often also called a normal probability plot, a normal scores plot, or a probit plot. Such a plot not only provides a visual assessment of degree of non-normality, it makes precise in what sense there are departures from the ideal shape. The lack of an associated P-value is not in practice much of a loss, although the procedure may be given some inferential impetus through confidence levels, simulations and so forth. (EDIT 26 January 2021: yet other terms are Gaussian percentile plot and Gaussian probability plot.)
Specifically, your examples consist of generating lognormal samples and then establishing that indeed they fail to qualify as normal with P-values $\ll 10^{-15}$. That has to seem puzzling, but be reassured that with larger samples your P-values will be, or should be, even more minute, subject to a machine level question of the minimum reportable P-value here. Conversely, it may well be that your real problem lies elsewhere and these examples are no more than incidental illustrations. | Can a sample larger than 5,000 data points be tested for normality using shapiro.test by applying th
I have comments on five levels.
On this evidence this is a limitation of a particular R function shapiro.test() and need not imply that there aren't other ways to do it in R, on which I can't ad
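The idea behind the normal quantile plot in point 4 can be sketched numerically without plotting (a stdlib-only illustration, not from the original answer): the correlation between the sorted data and theoretical normal quantiles is near $1$ for normal data and visibly lower for, say, lognormal data, where the plot would show curvature.

```python
import math
import random
import statistics

rng = random.Random(2)
nd = statistics.NormalDist()
n = 2000
normal = sorted(rng.gauss(0.0, 1.0) for _ in range(n))
lognrm = sorted(math.exp(x) for x in normal)            # lognormal sample
# theoretical normal quantiles at plotting positions (i + 0.5) / n
q = [nd.inv_cdf((i + 0.5) / n) for i in range(n)]

def pearson(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((x - mb) ** 2 for x in b))
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (sa * sb)

r_normal = pearson(normal, q)   # near 1: points fall on a straight line
r_lognrm = pearson(lognrm, q)   # clearly lower: curvature in the plot
```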
27,179 | Can a sample larger than 5,000 data points be tested for normality using shapiro.test by applying the test to a subsample? | A small historical note/correction: contrary to what is said in (or might be inferred from) other answers here and elsewhere, the limitation of R's Shapiro-Wilk test to <=5000 observations is not:
an accidental limitation in R's implementation
an intentional limitation (as possibly suggested here) imposed to protect users from performing questionable tests
The limitation occurs because R refuses to provide a $p$-value for a range where the original function was not validated. In contrast, the original implementation by Royston (1995), and Stata's swilk function, do provide the p-value but give an error code/warning saying that the $p$-values may not be reliable.
The $p$-values corresponding to a given $W$ statistic are hard to compute: there is a whole series of papers in the literature (see refs below) using sophisticated mathematical techniques to come up with approximations that are computationally efficient and sufficiently accurate over a given range of $n$ to provide reliable estimates of the p-values of the Shapiro-Wilk statistic. Royston (1995) says:
All calculations are carried out for samples larger than 5000, but IFAULT is returned as 2. Although $W$ will be calculated correctly, the accuracy of its $P$-value cannot be guaranteed.
In other words, this is outside the range for which Royston and other authors have painstakingly constructed efficient functions that give good approximations to the $p$-value corresponding to a given value of $W$.
I suspect that the implementations of Shapiro-Wilk $p$-values in modern statistics packages are all based on the Fortran code described in Royston (1995). If you wanted to compute reliable Shapiro-Wilk $p$-values for samples with $n>5000$ (ignoring all the advice given here and elsewhere about why Normality testing on very large data sets is usually just silly), you would have to go back to the papers by Royston 1992 and Verrill and Johnson 1988 and re-do/extend those methods to larger values of $n$ — not a project for the faint-hearted.
Royston, Patrick. “Approximating the Shapiro-Wilk W-Test for Non-Normality.” Statistics and Computing 2, no. 3 (September 1, 1992): 117–19. https://doi.org/10.1007/BF01891203.
———. “Remark AS R94: A Remark on Algorithm AS 181: The W-Test for Normality.” Journal of the Royal Statistical Society. Series C (Applied Statistics) 44, no. 4 (1995): 547–51. https://doi.org/10.2307/2986146.
Verrill, Steve, and Richard A. Johnson. “Tables and Large-Sample Distribution Theory for Censored-Data Correlation Statistics for Testing Normality.” Journal of the American Statistical Association 83, no. 404 (1988): 1192–97. https://doi.org/10.2307/2290156. | Can a sample larger than 5,000 data points be tested for normality using shapiro.test by applying th | A small historical note/correction: contrary to what is said in (or might be inferred from) other answers here and elsewhere, the limitation of R's Shapiro-Wilk test to <=5000 observations is not:
an | Can a sample larger than 5,000 data points be tested for normality using shapiro.test by applying the test to a subsample?
A small historical note/correction: contrary to what is said in (or might be inferred from) other answers here and elsewhere, the limitation of R's Shapiro-Wilk test to <=5000 observations is not:
an accidental limitation in R's implementation
an intentional limitation (as possibly suggested here) imposed to protect users from performing questionable tests
The limitation occurs because R refuses to provide a $p$-value for a range where the original function was not validated. In contrast, the original implementation by Royston (1995), and Stata's swilk function, do provide the p-value but give an error code/warning saying that the $p$-values may not be reliable.
The $p$-values corresponding to a given $W$ statistic are hard to compute: there is a whole series of papers in the literature (see refs below) using sophisticated mathematical techniques to come up with approximations that are computationally efficient and sufficiently accurate over a given range of $n$ to provide reliable estimates of the p-values of the Shapiro-Wilk statistic. Royston (1995) says:
All calculations are carried out for samples larger than 5000, but IFAULT is returned as 2. Although $W$ will be calculated correctly, the accuracy of its $P$-value cannot be guaranteed.
In other words, this is outside the range for which Royston and other authors have painstakingly constructed efficient functions that give good approximations to the $p$-value corresponding to a given value of $W$.
I suspect that the implementations of Shapiro-Wilk $p$-values in modern statistics packages are all based on the Fortran code described in Royston (1995). If you wanted to compute reliable Shapiro-Wilk $p$-values for samples with $n>5000$ (ignoring all the advice given here and elsewhere about why Normality testing on very large data sets is usually just silly), you would have to go back to the papers by Royston 1992 and Verrill and Johnson 1988 and re-do/extend those methods to larger values of $n$ — not a project for the faint-hearted.
Royston, Patrick. “Approximating the Shapiro-Wilk W-Test for Non-Normality.” Statistics and Computing 2, no. 3 (September 1, 1992): 117–19. https://doi.org/10.1007/BF01891203.
———. “Remark AS R94: A Remark on Algorithm AS 181: The W-Test for Normality.” Journal of the Royal Statistical Society. Series C (Applied Statistics) 44, no. 4 (1995): 547–51. https://doi.org/10.2307/2986146.
Verrill, Steve, and Richard A. Johnson. “Tables and Large-Sample Distribution Theory for Censored-Data Correlation Statistics for Testing Normality.” Journal of the American Statistical Association 83, no. 404 (1988): 1192–97. https://doi.org/10.2307/2290156. | Can a sample larger than 5,000 data points be tested for normality using shapiro.test by applying th
A small historical note/correction: contrary to what is said in (or might be inferred from) other answers here and elsewhere, the limitation of R's Shapiro-Wilk test to <=5000 observations is not:
an |
27,180 | Can a sample larger than 5,000 data points be tested for normality using shapiro.test by applying the test to a subsample? | I think Nick Cox points out some of the difficulties with the approach.
A possible alternate recommendation would be to use another normality test. In classes I took we used a test based on skewness and kurtosis due to D'Agostino for larger samples. I implemented these tests in my lolcat statistical package. Consider:
#Install/load step
require(devtools)
install_github("burrm/lolcat")
require(lolcat)
set.seed(1)
#Normal distribution - no rejection
zz <- rnorm(5500)
skewness.test(zz)
kurtosis.test(zz)
# Log normal distribution - rejection on both skewness and kurtosis
zz1 <- exp(zz)
skewness.test(zz1)
kurtosis.test(zz1)
Interestingly enough, even with a sample size of 5500, skewness/kurtosis would likely not reject with these tests. A log normal distribution would most likely reject, even at substantially lower sample sizes. As an example:
> set.seed(1)
>
> #Normal distribution - no rejection
> zz <- rnorm(5500)
> skewness.test(zz)
D'Agostino Skewness Normality Test
data: input data
skewness = -0.035209, null hypothesis skewness = 0, p-value = 0.286
alternative hypothesis: true skewness is not equal to 0
95 percent confidence interval:
-0.09992690 0.02950877
sample estimates:
skewness z se.est root.b1
-0.03520907 -1.06683621 0.03301991 -0.03519946
> kurtosis.test(zz)
D'Agostino Kurtosis Normality Test
data: input data
kurtosis = -0.052102, null hypothesis kurtosis = 0, p-value = 0.4362
alternative hypothesis: true kurtosis is not equal to 0
95 percent confidence interval:
-0.18151406 0.07731029
sample estimates:
kurtosis z se.est b2
-0.05210189 -0.77868046 0.06602783 2.94685476
>
> # Log normal distribution - rejection on both skewness and kurtosis
> zz1 <- exp(zz)
> skewness.test(zz1)
D'Agostino Skewness Normality Test
data: input data
skewness = 5.2214, null hypothesis skewness = 0, p-value < 2.2e-16
alternative hypothesis: true skewness is not equal to 0
95 percent confidence interval:
5.156675 5.286111
sample estimates:
skewness z se.est root.b1
5.22139319 63.31231869 0.03301991 5.21996907
> kurtosis.test(zz1)
D'Agostino Kurtosis Normality Test
data: input data
kurtosis = 61.259, null hypothesis kurtosis = 0, p-value < 2.2e-16
alternative hypothesis: true kurtosis is not equal to 0
95 percent confidence interval:
61.13006 61.38888
sample estimates:
kurtosis z se.est b2
61.25946799 44.06817706 0.06602783 64.20270103 | Can a sample larger than 5,000 data points be tested for normality using shapiro.test by applying th | I think Nick Cox points out some of the difficulties with the approach.
A possible alternate recommendation would be to use another normality test. In classes I took we used a test based on skewness a | Can a sample larger than 5,000 data points be tested for normality using shapiro.test by applying the test to a subsample?
I think Nick Cox points out some of the difficulties with the approach.
A possible alternate recommendation would be to use another normality test. In classes I took we used a test based on skewness and kurtosis due to D'Agostino for larger samples. I implemented these tests in my lolcat statistical package. Consider:
#Install/load step
require(devtools)
install_github("burrm/lolcat")
require(lolcat)
set.seed(1)
#Normal distribution - no rejection
zz <- rnorm(5500)
skewness.test(zz)
kurtosis.test(zz)
# Log normal distribution - rejection on both skewness and kurtosis
zz1 <- exp(zz)
skewness.test(zz1)
kurtosis.test(zz1)
Interestingly enough, even with a sample size of 5500, skewness/kurtosis would likely not reject with these tests. A log normal distribution would most likely reject, even at substantially lower sample sizes. As an example:
> set.seed(1)
>
> #Normal distribution - no rejection
> zz <- rnorm(5500)
> skewness.test(zz)
D'Agostino Skewness Normality Test
data: input data
skewness = -0.035209, null hypothesis skewness = 0, p-value = 0.286
alternative hypothesis: true skewness is not equal to 0
95 percent confidence interval:
-0.09992690 0.02950877
sample estimates:
skewness z se.est root.b1
-0.03520907 -1.06683621 0.03301991 -0.03519946
> kurtosis.test(zz)
D'Agostino Kurtosis Normality Test
data: input data
kurtosis = -0.052102, null hypothesis kurtosis = 0, p-value = 0.4362
alternative hypothesis: true kurtosis is not equal to 0
95 percent confidence interval:
-0.18151406 0.07731029
sample estimates:
kurtosis z se.est b2
-0.05210189 -0.77868046 0.06602783 2.94685476
>
> # Log normal distribution - rejection on both skewness and kurtosis
> zz1 <- exp(zz)
> skewness.test(zz1)
D'Agostino Skewness Normality Test
data: input data
skewness = 5.2214, null hypothesis skewness = 0, p-value < 2.2e-16
alternative hypothesis: true skewness is not equal to 0
95 percent confidence interval:
5.156675 5.286111
sample estimates:
skewness z se.est root.b1
5.22139319 63.31231869 0.03301991 5.21996907
> kurtosis.test(zz1)
D'Agostino Kurtosis Normality Test
data: input data
kurtosis = 61.259, null hypothesis kurtosis = 0, p-value < 2.2e-16
alternative hypothesis: true kurtosis is not equal to 0
95 percent confidence interval:
61.13006 61.38888
sample estimates:
kurtosis z se.est b2
61.25946799 44.06817706 0.06602783 64.20270103 | Can a sample larger than 5,000 data points be tested for normality using shapiro.test by applying th
I think Nick Cox points out some of the difficulties with the approach.
A possible alternate recommendation would be to use another normality test. In classes I took we used a test based on skewness a |
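For readers without the lolcat package, a rough equivalent of the same D'Agostino-style checks exists in SciPy (`skewtest` and `kurtosistest`). This is an illustrative sketch, not the original answer's code, but it reproduces the qualitative result: the normal sample is not rejected, the lognormal one is rejected decisively.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
z = rng.normal(size=5500)

# Normal sample: neither D'Agostino-style test should reject.
sk_norm = stats.skewtest(z)
ku_norm = stats.kurtosistest(z)
print(sk_norm.pvalue, ku_norm.pvalue)

# Lognormal sample: strongly skewed and heavy-tailed, so both reject.
z_ln = np.exp(z)
sk_ln = stats.skewtest(z_ln)
ku_ln = stats.kurtosistest(z_ln)
print(sk_ln.pvalue, ku_ln.pvalue)
```

Unlike `shapiro.test`, these tests have no built-in 5000-observation cap, which is part of why they are attractive for large samples.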
27,181 | Can a sample larger than 5,000 data points be tested for normality using shapiro.test by applying the test to a subsample? | I apologize for reviving an old thread, but I came across this in a search and I wanted to add my input in case others have the same question.
Nick Cox has provided some good input to this question, but I would like to present an answer from a different point of view.
In a standardized environment, such as a corporation, it is common to have to meet certain constraints and criteria such as p-values. While I absolutely agree that a graphical assessment such as a normality plot should be completed, these types of analysis are difficult to write policy around.
Establishing quantified constraints is important in a regulated environment, and a p-value is a reasonable solution to this.
Sampling a portion of the sample is indeed a reasonable solution. Think of it this way: You have a manufacturing system in which parts are constantly being produced, let's say legos. Millions (billions?) are created, which is the population.
You select a barrel to perform your assessment on, the barrel contains, well let's say 100,000 legos. That's far more than you need! You grab a scoop and pull out about 100 legos.
See what you've done? You took a sample, the barrel, and then you re-sampled, the scoop.
Now in this example I've done a terrible job of randomizing the components, but I think it still makes a good illustration that we all do this sort of thing every day, whether it's with physical components or sampling a list of data.
So to summarize, it's definitely acceptable to sample your sample, but you want to make sure to get enough random samples that you're not only representing the larger sample datapoints, but the population in general. Your example of using 500 datapoints is still a huge sample to analyze. | Can a sample larger than 5,000 data points be tested for normality using shapiro.test by applying th | I apologize for reviving an old thread, but I came across this in a search and I wanted to add my input in case others have the same question.
Nick Cox has provided some good input to this question, b | Can a sample larger than 5,000 data points be tested for normality using shapiro.test by applying the test to a subsample?
I apologize for reviving an old thread, but I came across this in a search and I wanted to add my input in case others have the same question.
Nick Cox has provided some good input to this question, but I would like to present an answer from a different point of view.
In a standardized environment, such as a corporation, it is common to have to meet certain constraints and criteria such as p-values. While I absolutely agree that a graphical assessment such as a normality plot should be completed, these types of analysis are difficult to write policy around.
Establishing quantified constraints is important in a regulated environment, and a p-value is a reasonable solution to this.
Sampling a portion of the sample is indeed a reasonable solution. Think of it this way: You have a manufacturing system in which parts are constantly being produced, let's say legos. Millions (billions?) are created, which is the population.
You select a barrel to perform your assessment on, the barrel contains, well let's say 100,000 legos. That's far more than you need! You grab a scoop and pull out about 100 legos.
See what you've done? You took a sample, the barrel, and then you re-sampled, the scoop.
Now in this example I've done a terrible job of randomizing the components, but I think it still makes a good illustration that we all do this sort of thing every day, whether it's with physical components or sampling a list of data.
So to summarize, it's definitely acceptable to sample your sample, but you want to make sure to get enough random samples that you're not only representing the larger sample datapoints, but the population in general. Your example of using 500 datapoints is still a huge sample to analyze. | Can a sample larger than 5,000 data points be tested for normality using shapiro.test by applying th
I apologize for reviving an old thread, but I came across this in a search and I wanted to add my input in case others have the same question.
Nick Cox has provided some good input to this question, b |
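The "scoop from the barrel" idea above can be sketched in a few lines. This is an illustrative Python aside using SciPy; the sizes (a 100,000-element "barrel", a 500-element "scoop") are the hypothetical ones from the answer:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
barrel = rng.normal(loc=10.0, scale=2.0, size=100_000)  # the sampled "barrel"

# The "scoop": a random subsample small enough for a Shapiro-Wilk-type test.
scoop = rng.choice(barrel, size=500, replace=False)

stat, p = stats.shapiro(scoop)
print(f"W = {stat:.4f}, p = {p:.4f}")
```

Drawing the subsample without replacement and uniformly at random is the key step — it is what makes the scoop representative of the barrel, unlike the deliberately unrandomized physical scoop in the story.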
27,182 | What is exactly meant by a "data set"? | In my experience, "dataset" (or "data set") is an informal term that refers to a collection of data. Generally a dataset contains more than one variable and concerns a single topic; it's likely to concern a single sample.
A mistake I often see writers of Cross Validated questions make is using "dataset" as a synonym for "variable" or "vector". | What is exactly meant by a "data set"? | In my experience, "dataset" (or "data set") is an informal term that refers to a collection of data. Generally a dataset contains more than one variable and concerns a single topic; it's likely to con | What is exactly meant by a "data set"?
In my experience, "dataset" (or "data set") is an informal term that refers to a collection of data. Generally a dataset contains more than one variable and concerns a single topic; it's likely to concern a single sample.
A mistake I often see writers of Cross Validated questions make is using "dataset" as a synonym for "variable" or "vector". | What is exactly meant by a "data set"?
In my experience, "dataset" (or "data set") is an informal term that refers to a collection of data. Generally a dataset contains more than one variable and concerns a single topic; it's likely to con |
27,183 | What is exactly meant by a "data set"? | I think that Wikipedia does a decent job at defining it:
Most commonly a data set corresponds to the contents of a single
database table, or a single statistical data matrix, where every
column of the table represents a particular variable, and each row
corresponds to a given member of the data set in question. The data
set lists values for each of the variables, such as height and weight
of an object, for each member of the data set. Each value is known as
a datum. The data set may comprise data for one or more members,
corresponding to the number of rows.
The term data set may also be used more loosely, to refer to the data
in a collection of closely related tables, corresponding to a
particular experiment or event. An example of this type is the data
sets collected by space agencies performing experiments with
instruments aboard space probes.
In the open data discipline, dataset is the unit to measure the
information released in a public open data repository. The European
Open Data portal aggregates more than half a million datasets. In this
field other definitions have been proposed but currently there is not
an official one. Some other issues (real-time data sources,
non-relational datasets, etc.) increase the difficulty of reaching a
consensus about it.
As you can see, the term is somewhat vague. | What is exactly meant by a "data set"? | I think that Wikipedia does a decent job at defining it:
Most commonly a data set corresponds to the contents of a single
database table, or a single statistical data matrix, where every
column o | What is exactly meant by a "data set"?
I think that Wikipedia does a decent job at defining it:
Most commonly a data set corresponds to the contents of a single
database table, or a single statistical data matrix, where every
column of the table represents a particular variable, and each row
corresponds to a given member of the data set in question. The data
set lists values for each of the variables, such as height and weight
of an object, for each member of the data set. Each value is known as
a datum. The data set may comprise data for one or more members,
corresponding to the number of rows.
The term data set may also be used more loosely, to refer to the data
in a collection of closely related tables, corresponding to a
particular experiment or event. An example of this type is the data
sets collected by space agencies performing experiments with
instruments aboard space probes.
In the open data discipline, dataset is the unit to measure the
information released in a public open data repository. The European
Open Data portal aggregates more than half a million datasets. In this
field other definitions have been proposed but currently there is not
an official one. Some other issues (real-time data sources,
non-relational datasets, etc.) increases the difficulty to reach a
consensus about it.
As you can see, the term is somewhat vague. | What is exactly meant by a "data set"?
I think that Wikipedia does a decent job at defining it:
Most commonly a data set corresponds to the contents of a single
database table, or a single statistical data matrix, where every
column o |
27,184 | What is exactly meant by a "data set"? | I think you might need to define data point before you can define data set: why is one primitive and not needing definition, but not vice versa?
At least two definitions make sense to me:
One or more observations (cases, records, rows) for one or more variables (fields, columns).
Whatever is stored as data within a file readable by a program of choice.
Tabular layout is common but I don't think it's part of any definition; how the data are stored can be practically important, naturally.
P.S. The word "format" is so overloaded that to me it's best avoided unless specified unambiguously. I've seen it used for
General or specific text or binary file format
Data structure, e.g. tabular or other
Data storage or variable types, e.g. bit, integer, real, character
Display format controlling presentation, e.g. details on number of decimal places; decimal, hexadecimal or binary display. | What is exactly meant by a "data set"? | I think you might need to define data point before you can define data set: why is one primitive and not needing definition, but not vice versa?
At least two definitions make sense to me:
One or mo | What is exactly meant by a "data set"?
I think you might need to define data point before you can define data set: why is one primitive and not needing definition, but not vice versa?
At least two definitions make sense to me:
One or more observations (cases, records, rows) for one or more variables (fields, columns).
Whatever is stored as data within a file readable by a program of choice.
Tabular layout is common but I don't think it's part of any definition; how the data are stored can be practically important, naturally.
P.S. The word "format" is so overloaded that to me it's best avoided unless specified unambiguously. I've seen it used for
General or specific text or binary file format
Data structure, e.g. tabular or other
Data storage or variable types, e.g. bit, integer, real, character
Display format controlling presentation, e.g. details on number of decimal places; decimal, hexadecimal or binary display. | What is exactly meant by a "data set"?
I think you might need to define data point before you can define data set: why is one primitive and not needing definition, but not vice versa?
At least two definitions make sense to me:
One or mo |
27,185 | What is exactly meant by a "data set"? | There are already some good answers here and I don't think I can penetrate any deeper than Nick Cox or Franck Dernoncourt the issue of whether "dataset" refers to the conceptual collection of related data, or to the particular arrangement of those data e.g. into a table/matrix or a computer-readable file. Franck's extract mentions edge cases like continuously-collected data, or data spread across several tables, which are worth bearing in mind if you assumed there was going to be a simple definition. (Not all statistics software can handle it, but it is very easy to imagine a case where data is stored in a relational database with multiple tables. Is the entire database a single "dataset"?)
One thing I will add though is that datasets aren't generally sets, in the mathematical sense! Sensu stricto either a set contains an object or it doesn't, but can't contain more than one copy of that object. If I roll a die eight times and score 1, 4, 3, 5, 5, 4, 6, 4 then the set of scores rolled is just {1, 3, 4, 5, 6}. Note that the elements could be in any order, I've just written them ascending in value but the set {5, 4, 1, 6, 3} is mathematically equal to it, for instance. This isn't what we usually mean by a dataset though!
A multiset (or bag) allows entries to be repeated, e.g. {1, 4, 3, 5, 5, 4, 6, 4} though note this still doesn't include a sense of order, so is equal to {1, 3, 4, 4, 4, 5, 5, 6}. Perhaps the "set" in "dataset" might best be read as "multiset".
Moreover, if you want order to be preserved, you might instead use a vector: (1, 4, 3, 5, 5, 4, 6, 4) is not the same as (1, 3, 4, 4, 4, 5, 5, 6). The ordering gives us an index which can serve as a kind of identifier — it tells us, for instance, "which four is which?" — and which often serves a purpose for recording observations in their natural temporal or geographic order. When one sees formulae such as $\bar x = \frac{1}{n} \sum_{i=1}^n x_i$ this sort of indexing scheme is assumed. In the context of a set or multiset, what would $x_1$ or $x_2$ mean, given that we can't distinguish a "first" or "second" element due to the lack of ordering?
But vectors are only for recording one variable - for several, it may be more convenient to use a matrix to tabulate with order preserved. For more sophisticated situations such as measuring a property of a three-dimensional grid of voxels over time, you might even move up to arranging the data in a tensor (see e.g. this question).
But note that conceptually a multiset may suffice in most simple situations, even if it's inconvenient for practical purposes. If I tossed a coin simultaneously with rolling the die, and wanted to record the two results together, then I could use a multiset like {(1, H), (3, T), (4, H), (4, H), (4, T), (5, H), (5, T), (6, T)} instead of a matrix. An ordinary set will not suffice, as it wouldn't count the multiplicity of the (4, H), for instance. | What is exactly meant by a "data set"? | There are already some good answers here and I don't think I can penetrate any deeper than Nick Cox or Franck Dernoncourt the issue of whether "dataset" refers to the conceptual collection of related | What is exactly meant by a "data set"?
There are already some good answers here and I don't think I can penetrate any deeper than Nick Cox or Franck Dernoncourt into the issue of whether "dataset" refers to the conceptual collection of related data, or to the particular arrangement of those data e.g. into a table/matrix or a computer-readable file. Franck's extract mentions edge cases like continuously-collected data, or data spread across several tables, which are worth bearing in mind if you assumed there was going to be a simple definition. (Not all statistics software can handle it, but it is very easy to imagine a case where data is stored in a relational database with multiple tables. Is the entire database a single "dataset"?)
One thing I will add though is that datasets aren't generally sets, in the mathematical sense! Sensu stricto either a set contains an object or it doesn't, but can't contain more than one copy of that object. If I roll a die eight times and score 1, 4, 3, 5, 5, 4, 6, 4 then the set of scores rolled is just {1, 3, 4, 5, 6}. Note that the elements could be in any order, I've just written them ascending in value but the set {5, 4, 1, 6, 3} is mathematically equal to it, for instance. This isn't what we usually mean by a dataset though!
A multiset (or bag) allows entries to be repeated, e.g. {1, 4, 3, 5, 5, 4, 6, 4} though note this still doesn't include a sense of order, so is equal to {1, 3, 4, 4, 4, 5, 5, 6}. Perhaps the "set" in "dataset" might best be read as "multiset".
Moreover, if you want order to be preserved, you might instead use a vector: (1, 4, 3, 5, 5, 4, 6, 4) is not the same as (1, 3, 4, 4, 4, 5, 5, 6). The ordering gives us an index which can serve as a kind of identifier — it tells us, for instance, "which four is which?" — and which often serves a purpose for recording observations in their natural temporal or geographic order. When one sees formulae such as $\bar x = \frac{1}{n} \sum_{i=1}^n x_i$ this sort of indexing scheme is assumed. In the context of a set or multiset, what would $x_1$ or $x_2$ mean, given that we can't distinguish a "first" or "second" element due to the lack of ordering?
But vectors are only for recording one variable - for several, it may be more convenient to use a matrix to tabulate with order preserved. For more sophisticated situations such as measuring a property of a three-dimensional grid of voxels over time, you might even move up to arranging the data in a tensor (see e.g. this question).
But note that conceptually a multiset may suffice in most simple situations, even if it's inconvenient for practical purposes. If I tossed a coin simultaneously with rolling the die, and wanted to record the two results together, then I could use a multiset like {(1, H), (3, T), (4, H), (4, H), (4, T), (5, H), (5, T), (6, T)} instead of a matrix. An ordinary set will not suffice, as it wouldn't count the multiplicity of the (4, H), for instance. | What is exactly meant by a "data set"?
There are already some good answers here and I don't think I can penetrate any deeper than Nick Cox or Franck Dernoncourt the issue of whether "dataset" refers to the conceptual collection of related |
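The set-versus-multiset-versus-vector distinction in the answer above is easy to demonstrate concretely. An illustrative Python aside, using `collections.Counter` as a stand-in for a multiset:

```python
from collections import Counter

rolls = [1, 4, 3, 5, 5, 4, 6, 4]

# A mathematical set discards both multiplicity and order.
print(set(rolls))  # {1, 3, 4, 5, 6}

# A multiset (bag) keeps multiplicity but not order.
print(Counter(rolls) == Counter([1, 3, 4, 4, 4, 5, 5, 6]))  # True

# A vector/list keeps both, so the ordering matters.
print(rolls == [1, 3, 4, 4, 4, 5, 5, 6])  # False
```

The three data structures correspond directly to the three mathematical notions in the answer: the repeated 4s and 5s vanish in the set, survive without order in the multiset, and survive with order in the list.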
27,186 | Omitted variable bias: which predictors do I need to include, and why? | This is not necessarily wrong, but not always feasible and also not a free lunch.
An omitted variable may cause (see, e.g., the comments below for additional thoughts on the matter) bias if it is both (a) related to the outcome $Y$ and (b) correlated with the predictor $X$ whose effect on $Y$ you are primarily interested in.
Consider an example: You want to learn about the causal effect of additional schooling on later earnings. Another variable that most certainly satisfies conditions (a) and (b) is "motivation" - more motivated people will both be more successful in their jobs (whether they are highly schooled or not) and generally choose to receive more education, as they are likely to like learning, and not find it too painful to study for exams.
So, when comparing earnings of highly schooled and less schooled employees without controlling for motivation, you would likely at least partially not be comparing two groups that only differ in terms of their schooling (whose effect you are interested in) but also in terms of their motivation, so the observed difference in earnings should not only be ascribed to differences in schooling.
Now, it would indeed be a solution to control for motivation by including it into the regression. The likely problem is of course: are you going to have data on motivation? Even if you were to conduct a survey yourself (rather than use say administrative data, that will most likely not have entries on motivation), how would you even measure it?
As to why including everything is not a free lunch: if you have a small sample, including all available covariates may quickly lead to overfitting when prediction is your goal. See for example this very nice discussion. | Omitted variable bias: which predictors do I need to include, and why? | This is not necessarily wrong, but not always feasible and also not a free lunch.
An omitted variable may cause (see, e.g., the comments below for additional thoughts on the matter) bias if it is both | Omitted variable bias: which predictors do I need to include, and why?
This is not necessarily wrong, but not always feasible and also not a free lunch.
An omitted variable may cause (see, e.g., the comments below for additional thoughts on the matter) bias if it is both (a) related to the outcome $Y$ and (b) correlated with the predictor $X$ whose effect on $Y$ you are primarily interested in.
Consider an example: You want to learn about the causal effect of additional schooling on later earnings. Another variable that most certainly satisfies conditions (a) and (b) is "motivation" - more motivated people will both be more successful in their jobs (whether they are highly schooled or not) and generally choose to receive more education, as they are likely to like learning, and not find it too painful to study for exams.
So, when comparing earnings of highly schooled and less schooled employees without controlling for motivation, you would likely at least partially not be comparing two groups that only differ in terms of their schooling (whose effect you are interested in) but also in terms of their motivation, so the observed difference in earnings should not only be ascribed to differences in schooling.
Now, it would indeed be a solution to control for motivation by including it into the regression. The likely problem is of course: are you going to have data on motivation? Even if you were to conduct a survey yourself (rather than use say administrative data, that will most likely not have entries on motivation), how would you even measure it?
As to why including everything is not a free lunch: if you have a small sample, including all available covariates may quickly lead to overfitting when prediction is your goal. See for example this very nice discussion. | Omitted variable bias: which predictors do I need to include, and why?
This is not necessarily wrong, but not always feasible and also not a free lunch.
An omitted variable may cause (see, e.g., the comments below for additional thoughts on the matter) bias if it is both |
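The schooling/motivation story can be made concrete with a small simulation. This is a hypothetical sketch with made-up coefficients (not from the original answer): when the confounder is omitted, the estimated schooling effect absorbs part of motivation's effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical data-generating process: "motivation" drives both schooling and earnings.
motivation = rng.normal(size=n)
schooling = 0.8 * motivation + rng.normal(size=n)
earnings = 1.0 * schooling + 2.0 * motivation + rng.normal(size=n)  # true schooling effect = 1.0

# Short regression (motivation omitted) vs long regression (motivation controlled for).
X_short = np.column_stack([np.ones(n), schooling])
X_long = np.column_stack([np.ones(n), schooling, motivation])
b_short, *_ = np.linalg.lstsq(X_short, earnings, rcond=None)
b_long, *_ = np.linalg.lstsq(X_long, earnings, rcond=None)

# Omitted-variable bias: b_short[1] converges to 1 + 2*cov(s, m)/var(s) = 1 + 1.6/1.64 ≈ 1.98
print(f"omitting motivation: {b_short[1]:.3f}; controlling for it: {b_long[1]:.3f}")
```

The short regression roughly doubles the apparent return to schooling, exactly because conditions (a) and (b) both hold for motivation in this setup.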
27,187 | Omitted variable bias: which predictors do I need to include, and why? | the solution for OVB is
to include all those predictors that control the effect of confounding
covariates not all predictors for dependent variable Y.
Yes, this is correct if you are more precise about it. For identification purposes, you should include the variables that control the effect of confounding and avoid those that open confounding paths or mediate the effect you are trying to measure (if you are interested in the total effect) --- that is, you should include those variables that satisfy the backdoor criterion. You should not indiscriminately include all predictors of $Y$, if by predictor you mean anything that "predicts" $Y$ --- this could bias your estimate. You can find a gentle, example-based introduction to the topic in this Crash Course in Good and Bad Controls.
In this same vein, it's worth noticing that Christoph's answer is not strictly correct:
an omitted variable causes bias if it is both (a) related to the
outcome Y and (b) correlated with the predictor X whose effect on Y
you are primarily interested in
This is not true. Correlational criteria are neither necessary nor sufficient to define what a confounder is. This is a common misconception on the definition of confounders, illustrated in this other answer.
Of course, which variables to include that guarantees identification addresses only the matter of getting consistent estimates of the causal quantity of interest. You have many other problems to address, such as the efficiency of your estimate (so you might choose/avoid variables that reduce/increase variance), biases due to misspecification of the functional form etc. | Omitted variable bias: which predictors do I need to include, and why? | the solution for OVB is
to include all those predictors that control the effect of confounding
covariates not all predictors for dependent variable Y.
Yes, this is correct if you are more precise abo | Omitted variable bias: which predictors do I need to include, and why?
the solution for OVB is
to include all those predictors that control the effect of confounding
covariates not all predictors for dependent variable Y.
Yes, this is correct if you are more precise about it. For identification purposes, you should include the variables that control the effect of confounding and avoid those that open confounding paths or mediate the effect you are trying to measure (if you are interested in the total effect) --- that is, you should include those variables that satisfy the backdoor criterion. You should not indiscriminately include all predictors of $Y$, if by predictor you mean anything that "predicts" $Y$ --- this could bias your estimate. You can find a gentle, example-based introduction to the topic in this Crash Course in Good and Bad Controls.
In this same vein, it's worth noticing that Christoph's answer is not strictly correct:
an omitted variable causes bias if it is both (a) related to the
outcome Y and (b) correlated with the predictor X whose effect on Y
you are primarily interested in
This is not true. Correlational criteria are neither necessary nor sufficient to define what a confounder is. This is a common misconception on the definition of confounders, illustrated in this other answer.
Of course, which variables to include that guarantees identification addresses only the matter of getting consistent estimates of the causal quantity of interest. You have many other problems to address, such as the efficiency of your estimate (so you might choose/avoid variables that reduce/increase variance), biases due to misspecification of the functional form etc. | Omitted variable bias: which predictors do I need to include, and why?
the solution for OVB is
to include all those predictors that control the effect of confounding
covariates not all predictors for dependent variable Y.
Yes, this is correct if you are more precise abo |
27,188 | Omitted variable bias: which predictors do I need to include, and why? | Theoretically, including all relevant predictors eliminates the omitted variable bias. However, it might not always be feasible to include all relevant explanatory variables in your regression (due to unawareness of relevant variables or lack of data).
Regarding the lack of knowledge about the omitted variable bias. There are a couple of good lectures out there on the OVB. Looking around, one of the most comprehensive lectures on the omitted variable bias might be this one:
https://economictheoryblog.com/2018/05/04/omitted-variable-bias
It also includes a section that discusses possible strategies against an omitted variable bias. | Omitted variable bias: which predictors do I need to include, and why? | Theoretically, including all relevant predictors eliminates the omitted variable bias. However, it might not always be feasible to include all relevant explanatory variables in your regression (due to | Omitted variable bias: which predictors do I need to include, and why?
Theoretically, including all relevant predictors eliminates the omitted variable bias. However, it might not always be feasible to include all relevant explanatory variables in your regression (due to unawareness of relevant variables or lack of data).
Regarding the lack of knowledge about the omitted variable bias: there are a couple of good lectures out there on the OVB. Looking around, one of the most comprehensive might be this one:
https://economictheoryblog.com/2018/05/04/omitted-variable-bias
It also includes a section that discusses possible strategies against an omitted variable bias. | Omitted variable bias: which predictors do I need to include, and why?
Theoretically, including all relevant predictors eliminates the omitted variable bias. However, it might not always be feasible to include all relevant explanatory variables in your regression (due to |
27,189 | Omitted variable bias: which predictors do I need to include, and why? | Carlos' answer is good in that it addresses a major deficiency in regression modeling practice. The term OVB is very imprecise. Except under atypical mathematical structures, adjusting for other variables will change the effect estimated for a primary regressor. This alone does not mean all such variables should be included in a model.
The "backdoor criterion" specifically addresses confounding bias. An audience of experts will generally not accept/believe results from models which omit confounding variables from adjustment. This is for good reason. Omitted confounders have led to completely incorrect inference in large confirmatory studies, and further led to policies, drug indications, or media coverage which were costly and damaging. The preferred terminology here is confounding bias, rather than merely OVB. This applies to all types of models, including the most prevalent linear regression.
The second most prevalent (perhaps) model is logistic regression. There is another type of "bias" (perhaps) which arises from logistic models unrelated to confounding. You can change the primary effect by adjusting for variables which are uncorrelated with the primary regressor. This is because of the non-collapsibility of the odds ratio. This arises when the primary exposure has a heterogeneous distribution of covariates underlying the baseline risk of the outcome. The slope of the sigmoid which estimates the "averaged out" accumulation of risk per unit difference in a primary regressor is attenuated. This type of bias arises when the target of inference was individual level risk, rather than population averaged.
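The attenuation can be checked exactly, without any sampling (the coefficients below are assumed for illustration): with a balanced binary covariate $Z$ independent of $X$, the marginal odds ratio for $X$ is pulled toward 1 relative to the conditional one, even though $Z$ is not a confounder.

```python
from math import exp

def sigmoid(t):
    return 1 / (1 + exp(-t))

def odds(p):
    return p / (1 - p)

# Assumed model: P(Y=1 | X, Z) = sigmoid(-1 + 2*X + 2*Z),
# with Z ~ Bernoulli(0.5) independent of X (so Z is not a confounder).
def marginal_risk(x):
    return 0.5 * sigmoid(-1 + 2 * x) + 0.5 * sigmoid(-1 + 2 * x + 2)

conditional_or = exp(2.0)  # odds ratio within either Z stratum
marginal_or = odds(marginal_risk(1)) / odds(marginal_risk(0))
print(conditional_or, marginal_or)  # ~7.39 vs ~5.32: marginal OR attenuated toward 1
```

The marginal odds ratio is smaller than the conditional one purely because of non-collapsibility, not because of any backdoor path.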
In general, the advice to modelers is to adjust for prognostic variables: variables which, despite being unrelated to the primary regressor, are causally predictive of the outcome. An example might be a study on lung cancer and smoking, with participants grouped by ambient environmental pollution. Assume for the moment that no evidence suggests differences in regionality satisfy the backdoor criterion to confound the smoking-cancer relationship. However, the difference in risk for this environmental exposure substantially predicts risk of lung cancer. Adjusting for environmental exposure more finely stratifies these participants, so that the differences in cancer risk between smoking and non-smoking become apparent.
A very nice description of the difference is found here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3147074/pdf/dyr041.pdf | Omitted variable bias: which predictors do I need to include, and why? | Carlos' answer is good in that it addresses a major deficiency in regression modeling practice. The term OVB is very imprecise. Except under atypical mathematical structures, adjusting for other varia | Omitted variable bias: which predictors do I need to include, and why?
Carlos' answer is good in that it addresses a major deficiency in regression modeling practice. The term OVB is very imprecise. Except under atypical mathematical structures, adjusting for other variables will change the effect estimated for a primary regressor. This alone does not mean all such variables should be included in a model.
The "backdoor criterion" specifically addresses confounding bias. An audience of experts will generally not accept/believe results from models which omit confounding variables from adjustment. This is for good reason. Omitted confounders have led to completely incorrect inference in large confirmatory studies, and further led to policies, drug indications, or media coverage which were costly and damaging. The preferred terminology here is confounding bias, rather than merely OVB. This applies to all types of models, including the most prevalent linear regression.
The second most prevalent (perhaps) model is logistic regression. There is another type of "bias" (perhaps) which arises from logistic models unrelated to confounding. You can change the primary effect by adjusting for variables which are uncorrelated with the primary regressor. This is because of the non-collapsibility of the odds ratio. This arises when the primary exposure has a heterogeneous distribution of covariates underlying the baseline risk of the outcome. The slope of the sigmoid which estimates the "averaged out" accumulation of risk per unit difference in a primary regressor is attenuated. This type of bias arises when the target of inference was individual level risk, rather than population averaged.
In general, the advice to modelers is to adjust for prognostic variables: variables which, despite being unrelated to the primary regressor, are causally predictive of the outcome. An example might be a study on lung cancer and smoking, with participants grouped by ambient environmental pollution. Assume for the moment that no evidence suggests differences in regionality satisfy the backdoor criterion to confound the smoking-cancer relationship. However, the difference in risk for this environmental exposure substantially predicts risk of lung cancer. Adjusting for environmental exposure more finely stratifies these participants, so that the differences in cancer risk between smoking and non-smoking become apparent.
A very nice description of the difference is found here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3147074/pdf/dyr041.pdf | Omitted variable bias: which predictors do I need to include, and why?
Carlos' answer is good in that it addresses a major deficiency in regression modeling practice. The term OVB is very imprecise. Except under atypical mathematical structures, adjusting for other varia |
27,190 | Why would perfectly similar data have 0 mutual information? | Mutual information $I(X, Y)$ can be thought as a measure of reduction in uncertainty about $X$ after observing $Y$:
$$ I(X, Y) = H(X) - H(X|Y)$$
where $H(X)$ is entropy of $X$ and $H(X|Y)$ is conditional entropy of $X$ given $Y$. By symmetry it follows that
$$ I(X, Y) = H(Y) - H(Y|X)$$
However mutual information of a variable with itself is equal to entropy of this variable
$$ I(X, X) = H(X)$$
and is called self-information. This is true since $H(X|Y) = 0$ if values of $X$ are completely determined by $Y$ and this is true for $H(X|X)$. It is so because entropy is a measure of uncertainty and there is no uncertainty in reasoning on values of $X$ given the values of $X$, so
$$ H(X) - H(X|X) = H(X) - 0 = H(X) $$
This is immediately obvious if you think of it in terms of Venn diagrams.
You can also show this using the formula for mutual information and substituting the conditional entropy part, i.e.
$$ H(X|Y) = -\sum_{x \in X, y \in Y} p(x, y) \log \frac{p(x,y)}{p(y)} $$
by changing $y$'s into $x$'s and recalling that $X \cap X = X$, so $p(x, x) = p(x)$. [Notice that this is an informal argument, since for continuous variables $p(x, x)$ would not have a density function, while still having a cumulative distribution function.]
So yes, if you know something about $X$, then learning again about $X$ gives you no more information.
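A quick numerical check of $I(X,X) = H(X)$, using plug-in (empirical) estimators on a toy sample (entirely illustrative):

```python
from collections import Counter
from math import log2

def entropy(xs):
    # Plug-in estimate of H(X) from empirical frequencies.
    n = len(xs)
    return -sum(c / n * log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    # Plug-in estimate of I(X, Y); note p(x,y)/(p(x)p(y)) = c*n/(cx*cy).
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum(c / n * log2(c * n / (px[x] * py[y])) for (x, y), c in pxy.items())

x = list("AABBBC")
print(entropy(x), mutual_information(x, x))  # both equal H(X)
```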
Check Chapter 2 of Elements of Information Theory by Cover and Thomas, or Shannon's original 1948 paper itself, to learn more.
As for your second question: it is a common problem that your data do not contain some values that could possibly occur. In this case the classical estimator for probability, i.e.
$$ \hat p = \frac{n_i}{\sum_i n_i} $$
where $n_i$ is a number of occurrences of $i$th value (out of $d$ categories), gives you $\hat p = 0$ if $n_i = 0$. This is called zero-frequency problem. The easy and commonly applied fix is, as your professor told you, to add some constant $\beta$ to your counts, so that
$$ \hat p = \frac{n_i + \beta}{(\sum_i n_i) + d\beta} $$
The common choice for $\beta$ is $1$, i.e. applying a uniform prior based on Laplace's rule of succession, $1/2$ for the Krichevsky-Trofimov estimate, or $1/d$ for the Schurmann-Grassberger (1996) estimator. Notice however that what you do here is apply out-of-data (prior) information in your model, so it gets a subjective, Bayesian flavor. When using this approach you have to remember the assumptions you made and take them into consideration.
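As a sketch of that pseudo-count fix (the category set below is hypothetical; `beta=1` corresponds to the Laplace choice):

```python
def smoothed_probs(counts, categories, beta=1.0):
    # counts: observed count per category; unobserved categories are simply absent.
    # Returns (n_i + beta) / (n + d*beta) for each of the d categories.
    n = sum(counts.values())
    d = len(categories)
    return {k: (counts.get(k, 0) + beta) / (n + d * beta) for k in categories}

p = smoothed_probs({"A": 2, "B": 3}, categories=["A", "B", "C", "D"])
print(p)  # no category gets probability 0, and the values still sum to 1
```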
This approach is commonly used, e.g. in the R entropy package. You can find some further information in the following paper:
Schurmann, T., and P. Grassberger. (1996). Entropy estimation of symbol sequences. Chaos, 6, 414-427. | Why would perfectly similar data have 0 mutual information? | Mutual information $I(X, Y)$ can be thought as a measure of reduction in uncertainty about $X$ after observing $Y$:
$$ I(X, Y) = H(X) - H(X|Y)$$
where $H(X)$ is entropy of $X$ and $H(X|Y)$ is conditio | Why would perfectly similar data have 0 mutual information?
Mutual information $I(X, Y)$ can be thought as a measure of reduction in uncertainty about $X$ after observing $Y$:
$$ I(X, Y) = H(X) - H(X|Y)$$
where $H(X)$ is entropy of $X$ and $H(X|Y)$ is conditional entropy of $X$ given $Y$. By symmetry it follows that
$$ I(X, Y) = H(Y) - H(Y|X)$$
However mutual information of a variable with itself is equal to entropy of this variable
$$ I(X, X) = H(X)$$
and is called self-information. This is true since $H(X|Y) = 0$ if values of $X$ are completely determined by $Y$ and this is true for $H(X|X)$. It is so because entropy is a measure of uncertainty and there is no uncertainty in reasoning on values of $X$ given the values of $X$, so
$$ H(X) - H(X|X) = H(X) - 0 = H(X) $$
This is immediately obvious if you think of it in terms of Venn diagrams.
You can also show this using the formula for mutual information and substituting the conditional entropy part, i.e.
$$ H(X|Y) = -\sum_{x \in X, y \in Y} p(x, y) \log \frac{p(x,y)}{p(y)} $$
by changing $y$'s into $x$'s and recalling that $X \cap X = X$, so $p(x, x) = p(x)$. [Notice that this is an informal argument, since for continuous variables $p(x, x)$ would not have a density function, while still having a cumulative distribution function.]
So yes, if you know something about $X$, then learning again about $X$ gives you no more information.
Check Chapter 2 of Elements of Information Theory by Cover and Thomas, or Shannon's original 1948 paper itself, to learn more.
As for your second question: it is a common problem that your data do not contain some values that could possibly occur. In this case the classical estimator for probability, i.e.
$$ \hat p = \frac{n_i}{\sum_i n_i} $$
where $n_i$ is a number of occurrences of $i$th value (out of $d$ categories), gives you $\hat p = 0$ if $n_i = 0$. This is called zero-frequency problem. The easy and commonly applied fix is, as your professor told you, to add some constant $\beta$ to your counts, so that
$$ \hat p = \frac{n_i + \beta}{(\sum_i n_i) + d\beta} $$
The common choice for $\beta$ is $1$, i.e. applying a uniform prior based on Laplace's rule of succession, $1/2$ for the Krichevsky-Trofimov estimate, or $1/d$ for the Schurmann-Grassberger (1996) estimator. Notice however that what you do here is apply out-of-data (prior) information in your model, so it gets a subjective, Bayesian flavor. When using this approach you have to remember the assumptions you made and take them into consideration.
This approach is commonly used, e.g. in the R entropy package. You can find some further information in the following paper:
Schurmann, T., and P. Grassberger. (1996). Entropy estimation of symbol sequences. Chaos, 6, 414-427. | Why would perfectly similar data have 0 mutual information?
Mutual information $I(X, Y)$ can be thought as a measure of reduction in uncertainty about $X$ after observing $Y$:
$$ I(X, Y) = H(X) - H(X|Y)$$
where $H(X)$ is entropy of $X$ and $H(X|Y)$ is conditio |
27,191 | Why would perfectly similar data have 0 mutual information? | To complement Tim's answer with a short and direct answer to your original question: No, similar data do not necessarily have 0 Mutual Information. They only do if they are constant.
Indeed if they are fully identical, their mutual information will be equal to the entropy of any of the two: $I(X, X) = H(X)$. This entropy is only zero in case of constant data, otherwise it may have other values. | Why would perfectly similar data have 0 mutual information? | To complement Tim's answer with a short and direct answer to your original question: No, similar data do not necessarily have 0 Mutual Information. They only do if they are constant.
Indeed if they | Why would perfectly similar data have 0 mutual information?
To complement Tim's answer with a short and direct answer to your original question: No, similar data do not necessarily have 0 Mutual Information. They only do if they are constant.
Indeed if they are fully identical, their mutual information will be equal to the entropy of any of the two: $I(X, X) = H(X)$. This entropy is only zero in case of constant data, otherwise it may have other values. | Why would perfectly similar data have 0 mutual information?
To complement Tim's answer with a short and direct answer to your original question: No, similar data do not necessarily have 0 Mutual Information. They only do if they are constant.
Indeed if they |
27,192 | Why would perfectly similar data have 0 mutual information? | Why would perfectly similar data have 0 mutual information?
The amount of 'alignment information' the algorithm can offer is zero. Nothing to align.
... and I'm not sure if I should manually fix the MI to be 1 if the columns are exactly the same.
No.
MI is an unreliable predictor of spatial proximity in proteins.
See: "Multilevel functional genomics data integration as a tool for understanding physiology: a network biology perspective", (Feb 1 2016), by Peter K. Davidsen, Nil Turan, Stuart Egginton, and Francesco Falciani
"An MI value of zero means that there is no dependency (i.e.,
no information flow) between two variables, whereas an MI
value of 1 indicates a perfect association between them, and,
therefore, a likely strong regulatory interaction between them.".
See: "Correction for phylogeny, small number of observations and data redundancy improves the identification of coevolving amino acid pairs using mutual information", (Mar 10 2009), by Cristina Marino Buslje, Javier Santos, Jose Maria Delfino, and Morten Nielsen
"From Equation (1) — that defines the MI between two sites in a Multiple Sequence Alignment (MSA) — it is apparent that diversity is essential to achieve high MI values. Only if all amino acids are present in equal frequencies between two perfectly coevolving pairs will the MI achieve its maximum value. This leads to the observation that fast evolving sites tend to have high values of MI albeit being non-coevolving (Gouveia-Oliveira and Pedersen, 2007). Likewise, slowly evolving sites will only occupy a small fraction of the amino acid space, and hence tend to have low MI values. The extreme case is perfectly conserved amino acids that will always have a MI value of zero. By introducing a correction for low count this behavior is altered.
...
2.2 The algorithm
The MI between two positions in an MSA is given by the relationship:
$$MI(i,j)=\sum_{a,b}P(a_i,b_j)\cdot\log \left(\!\frac{P(a_i,b_j)}{P(a_i)\!\cdot\!P(b_j)}\!\right)\tag{1}$$
where $P(a_i, b_j)$ is the frequency of amino acid $a$ occurring at position $i$ and amino acid $b$ occurring at position $j$ in the same sequence, $P(a_i)$ is the frequency of amino acid $a$ at position $i$ and $P(b_j)$ is the frequency of amino acid $b$ at position $j$. We introduced a very simple correction for low number of sequences. The amino acid frequencies, $P(a_i, b_j)$, are normalized from $N(a_i, b_j)$, the number of times an amino acid pair $(a, b)$ is observed at positions $i$ and $j$ in the MSA. From $N(a_i, b_j)$, $P(a_i, b_j)$ is calculated as $(\lambda + N(a_i, b_j))/N$, where:
$$N=\sum_{a,b}(\lambda + N(a, b)), \; P(a_i)\!=\!\sum_{b}P(a_i,b_j) \text{ and } P(b_j)\!=\!\sum_{a}P(a_i,b_j)$$
It is clear that for MSAs of limited size, a large fraction of the $P(a_i, b_j)$ values will be estimated from a very low number of observations, and their contribution to MI could be highly noisy. To deal with such low counts, a parameter $\lambda$ is introduced. The initial value for the variable $N(a_i, b_j) = \lambda$ is set for all amino acid pairs. Only for MSAs with a small number of sequences, where a large fraction of amino acid pairs remain unobserved, will $\lambda$ influence the amino acids occupancy calculation. For large MSAs, most amino acid pairs will be observed at least once, and the influence of $\lambda$ will be minor. We investigated how the performance depended on the values used for $\lambda$ on a small independent dataset. We tested a range of values $0–0.2$ in steps of $0.01$. The maximal performance was achieved for a value of $\lambda$ equal to $0.05$, but similar results are obtained in the range $0.025–0.075$. This value was consistently found to be optimal for all datasets independently of size, evolutionary model, or rate of evolution (data not shown). When dealing with biological data, MSAs will often suffer from a high degree of unnatural sequence redundancy. It is hence expected that the sequence clustering would improve the accuracy of the MI calculation.".
...
4 Discussion and Conclusions
Here, we have compared two recently published approaches to
lessen the influence of phylogeny and signal noise into the
calculation of MI or coevolution between residues. Furthermore, we have shown how including simple techniques of sequence clustering and low count correction can significantly enhance the estimation of MI between residue pairs. Large-scale benchmarking including both artificial (in silico generated) and biological data demonstrated that this improved method could be applied to achieve accurate prediction of coevolving sites and contacts.
Our results demonstrate that raw MI was the worst predictor of coevolution. The RCW method of Gouveia-Oliveira and Pedersen (2007) outperformed MI. The APC background correction method by Dunn et al. (2008) achieved the highest performance. In this context, the inclusion of low count correction and clustering was shown to improve all three methods. The best performing method for both artificial and natural sequences was the combination of APC correction, clustering and low count correction.
We demonstrated that Z-score transformation calculated from sequence-based permutations significantly improved the prediction
accuracy of the method, and allows an interpretation of predictions
across different protein families. Further, we demonstrate how the
predictive performance of the method depends strongly on the number of sequence clusters rather than the number of sequences in the MSA, and those MSAs with <400 clusters tend to display very low predictive performance values.".
...
More information:
Statistical calculations of mutual information for pairwise protein sequences differ from mutual information calculations for probability-space statistics. MI is the expected value of the pointwise mutual information (PMI).
The protein primary structure has an alphabet of 20 naturally occurring amino acids and a conformation determined by folding.
From the supporting information of: "Identification of direct residue contacts in protein–protein interaction by message passing" (Jan 6 2009), by Martin Weigt, Robert A. White, Hendrik Szurmant, James A. Hoch, and Terence Hwa:
"MI is a local measure; it encounters only 1 residue pair at a
time. MI is intrinsically unable to disentangle direct from
indirect coupling. Consequently, prediction of spatial vicinity of
interacting residues by MI is restricted. Therefore a global
approach is proposed that will lead to the notion of direct
information (DI). DI measures the part of MI that results from
the direct coupling of residue pairs.".
...
Recommendation: Please read that document for more information.
Take for example this answer from John Coleman on Stack Overflow:
from collections import Counter
from math import log

def MI(sequences, i, j):
    # Skip sequences with a gap at either column.
    sequences = [s for s in sequences if '-' not in (s[i], s[j])]
    n = len(sequences)
    Pi = Counter(s[i] for s in sequences)           # marginal counts, column i
    Pj = Counter(s[j] for s in sequences)           # marginal counts, column j
    Pij = Counter((s[i], s[j]) for s in sequences)  # joint counts
    # MI(i,j) = sum_xy p(x,y) * log(p(x,y) / (p(x) p(y))), with p = count / n
    return sum((Pij[x, y] / n) * log(Pij[x, y] * n / (Pi[x] * Pj[y]))
               for x, y in Pij)
Notice that it has shortcomings (for instance, no low-count correction), but it is easy to understand.
Mutual information for statistical analysis of pairwise protein sequences neglects the complicated three-dimensional structures of proteins, which have far-reaching consequences for the design and implementation of alignment algorithms. Consideration of structural constraints is particularly important in the treatment of gaps, a notion fundamental to sequence comparison.
What is Mutual Information?
Multiple Sequence Alignments (MSA) of homologues proteins can provide us with at least two types of information; the first one is given by the conserved amino acids at certain positions, while the other is given by the inter-relationship between two or more positions. Mutual Information (MI) from information theory can be used to estimate the extent of the mutual coevolutionary relationship between two positions in a protein family. Mutual information theory is often applied to predict positional correlations in a MSA to make possible the analysis of those positions structurally or functionally important in a given fold or protein family. For example, mutations of essential residues in a protein sequence may occur, only if a compensatory mutation takes place elsewhere within the protein to preserve or restore activity. Compensatory mutations are highly frequent and involve not only functional but also biophysical properties. Since evolutionary variations in the sequences are constrained by a number of requirements, such as maintenance of favorable interactions in direct residue-residue contacts, using the information contained in MSAs may be possible to predict residue pairs which are likely to be close to each other in the three-dimensional structure (Figure 1).
Figure 1. Representation of a MSA and the alpha-carbon structure of one protein of the alignment. Conserved and variable positions are highlighted in yellow. The positions that coevolved are highlighted in purple and light blue. The residues within these positions where change occurred are shown in pink and green. The arrows (middle) represent the interrelation of coevolution and structural information. This Figure is an adaptation of Figure 1 of (Marks et al., 2011).
Mutual Information is a measurement of the uncertainty reduction for a MSA of homologous proteins. The MI between two positions (two columns in the MSA) reflects the extent to which knowing the amino acid at one position allows us to predict the amino acid identity at the other position.
...
Intuitively, mutual information measures the information that $X$ and $Y$ share: It measures how much knowing one of these variables reduces uncertainty about the other. For example, if $X$ and $Y$ are independent, then knowing $X$ does not give any information about $Y$ and vice versa, so their mutual information is zero. At the other extreme, if $X$ is a deterministic function of $Y$ and $Y$ is a deterministic function of $X$ then all information conveyed by $X$ is shared with $Y$: knowing $X$ determines the value of $Y$ and vice versa. As a result, in this case the mutual information is the same as the uncertainty contained in $Y$ (or $X$) alone, namely the entropy of $Y$ (or $X$). Moreover, this mutual information is the same as the entropy of $X$ and as the entropy of $Y$. (A very special case of this is when $X$ and $Y$ are the same random variable).
Mutual information is a measure of the inherent dependence expressed in the joint distribution of $X$ and $Y$ relative to the joint distribution of $X$ and $Y$ under the assumption of independence. Mutual information therefore measures dependence in the following sense: $I ( X ; Y ) = 0$ if and only if $X$ and $Y$ are independent random variables. This is easy to see in one direction: if $X$ and $Y$ are independent, then $p_{(X,Y)}(x,y)=p_{X}(x)\cdot p_{Y}(y)$, and therefore:
$$\log {\left({\frac {p_{(X,Y)}(x,y)}{p_{X}(x)\,p_{Y}(y)}}\right)}=\log 1=0.$$
Moreover, mutual information is nonnegative (i.e. $I ( X ; Y ) \ge 0$ see below) and symmetric (i.e. $I ( X ; Y ) = I ( Y ; X )$ see below).
Nonnegativity
Using Jensen's inequality on the definition of mutual information we can show that $I ( X ; Y )$ is non-negative, i.e.:
$$I ( X ; Y ) ≥ 0$$
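The Jensen step can be written in one line (concavity of $\log$, with the sum taken over pairs where $p_{(X,Y)}(x,y) > 0$):

$$-I(X;Y)=\sum_{x,y}p_{(X,Y)}(x,y)\log\frac{p_{X}(x)\,p_{Y}(y)}{p_{(X,Y)}(x,y)}\le\log\sum_{x,y}p_{(X,Y)}(x,y)\,\frac{p_{X}(x)\,p_{Y}(y)}{p_{(X,Y)}(x,y)}=\log\sum_{x,y}p_{X}(x)\,p_{Y}(y)\le\log 1=0,$$

so $I(X;Y)\ge 0$.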
Symmetry
$$I ( X ; Y ) = I ( Y ; X )$$
Pointwise mutual information, Jensen–Shannon divergence, and Statistical coupling analysis are naive methods where the measurements are not independent.
Wikipedia: Direct Coupling Analysis - Direct Couplings and Indirect Correlation:
The central point of DCA is to interpret the $J_{ij}$ (which can be represented as a $q\times q$ matrix if there are $q$ possible symbols) as direct couplings. If two positions are under joint evolutionary pressure (for example to maintain a structural bond), one might expect these couplings to be large because only sequences with fitting pairs of symbols should have a significant probability. On the other hand, a large correlation between two positions does not necessarily mean that the couplings are large, since large couplings between e.g. positions $i,j$ and $j,k$ might lead to large correlations between positions $i$ and $k$, mediated by position $j$.[1] In fact, such indirect correlations have been implicated in the high false positive rate when inferring protein residue contacts using correlation measures like mutual information.[16]
[1] "Direct-coupling analysis of residue co-evolution captures native contacts across many protein families" (Oct 25 2011), by Faruck Morcos, Andrea Pagnani, Bryan Lunt, Arianna Bertolino, Debora S. Marks, Chris Sander, Riccardo Zecchina, José N. Onuchic, Terence Hwa, and Martin Weigt
[16] "Disentangling Direct from Indirect Co-Evolution of Residues in Protein Alignments", (Jan 1 2010), by Lukas Burger and Erik van Nimwegen
After the shortcomings of raw MI are corrected it can be used to guide other algorithms towards more accurate pairwise protein sequencing.
... Should I not do this at all? The professor I'm working with suggested I incorporate a pseudo-count for every other non-existing amino acid and ignoring a manual fix for when i = j.
That's a great hint. | Why would perfectly similar data have 0 mutual information? | Why would perfectly similar data have 0 mutual information?
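That hint can be sketched as follows (an illustrative helper, not the professor's actual code; the 20-letter alphabet and the pseudocount value $\lambda = 0.05$ follow the Buslje et al. low-count correction quoted above):

```python
from collections import Counter
from math import log

AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def column_mi(col_i, col_j, lam=0.05):
    # A pseudocount lam is added to every pair count, so perfectly conserved
    # columns no longer run into the zero-frequency problem.
    pairs = Counter(zip(col_i, col_j))
    total = len(col_i) + lam * len(AA) ** 2
    pij = {(a, b): (pairs[a, b] + lam) / total for a in AA for b in AA}
    pi = {a: sum(pij[a, b] for b in AA) for a in AA}
    pj = {b: sum(pij[a, b] for a in AA) for b in AA}
    return sum(p * log(p / (pi[a] * pj[b])) for (a, b), p in pij.items())

print(column_mi("AAAA", "AAAA"))  # finite and well-defined: no manual fix for i == j
```

With the correction applied, the estimate stays non-negative and symmetric, and conserved columns are handled by the same code path as variable ones.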
The amount of 'alignment information' the algorithm can offer is zero. Nothing to align.
... and I'm not sure if I should manually fix the | Why would perfectly similar data have 0 mutual information?
Why would perfectly similar data have 0 mutual information?
The amount of 'alignment information' the algorithm can offer is zero. Nothing to align.
... and I'm not sure if I should manually fix the MI to be 1 if the columns are exactly the same.
No.
MI is an unreliable predictor of spatial proximity in proteins.
See: "Multilevel functional genomics data integration as a tool for understanding physiology: a network biology perspective", (Feb 1 2016), by Peter K. Davidsen, Nil Turan, Stuart Egginton, and Francesco Falciani
"An MI value of zero means that there is no dependency (i.e.,
no information flow) between two variables, whereas an MI
value of 1 indicates a perfect association between them, and,
therefore, a likely strong regulatory interaction between them.".
See: "Correction for phylogeny, small number of observations and data redundancy improves the identification of coevolving amino acid pairs using mutual information", (Mar 10 2009), by Cristina Marino Buslje, Javier Santos, Jose Maria Delfino, and Morten Nielsen
"From Equation (1) — that defines the MI between two sites in a Multiple Sequence Alignment (MSA) — it is apparent that diversity is essential to achieve high MI values. Only if all amino acids are present in equal frequencies between two perfectly coevolving pairs will the MI achieve its maximum value. This leads to the observation that fast evolving sites tend to have high values of MI albeit being non-coevolving (Gouveia-Oliveira and Pedersen, 2007). Likewise, slowly evolving sites will only occupy a small fraction of the amino acid space, and hence tend to have low MI values. The extreme case is perfectly conserved amino acids that will always have a MI value of zero. By introducing a correction for low count this behavior is altered.
...
2.2 The algorithm
The MI between two positions in an MSA is given by the relationship:
$$MI(i,j)=\sum_{a,b}P(a_i,b_j)\cdot\log \left(\!\frac{P(a_i,b_j)}{P(a_i)\!\cdot\!P(b_j)}\!\right)\tag{1}$$
where $P(a_i, b_j)$ is the frequency of amino acid $a$ occurring at position $i$ and amino acid $b$ occurring at position $j$ in the same sequence, $P(a_i)$ is the frequency of amino acid $a$ at position $i$ and $P(b_j)$ is the frequency of amino acid $b$ at position $j$. We introduced a very simple correction for low number of sequences. The amino acid frequencies, $P(a_i, b_j)$, are normalized from $N(a_i, b_j)$, the number of times an amino acid pair $(a, b)$ is observed at positions $i$ and $j$ in the MSA. From $N(a_i, b_j)$, $P(a_i, b_j)$ is calculated as $(\lambda + N(a_i, b_j))/N$, where:
$$N=\sum_{a,b}(\lambda + N(a, b)), \; P(a_i)\!=\!\sum_{b}P(a_i,b_j) \text{ and } P(b_j)\!=\!\sum_{a}P(a_i,b_j)$$
It is clear that for MSAs of limited size, a large fraction of the $P(a_i, b_j)$ values will be estimated from a very low number of observations, and their contribution to MI could be highly noisy. To deal with such low counts, a parameter $\lambda$ is introduced. The initial value for the variable $N(a_i, b_j) = \lambda$ is set for all amino acid pairs. Only for MSAs with a small number of sequences, where a large fraction of amino acid pairs remain unobserved, will $\lambda$ influence the amino acids occupancy calculation. For large MSAs, most amino acid pairs will be observed at least once, and the influence of $\lambda$ will be minor. We investigated how the performance depended on the values used for $\lambda$ on a small independent dataset. We tested a range of values $0–0.2$ in steps of $0.01$. The maximal performance was achieved for a value of $\lambda$ equal to $0.05$, but similar results are obtained in the range $0.025–0.075$. This value was consistently found to be optimal for all datasets independently of size, evolutionary model, or rate of evolution (data not shown). When dealing with biological data, MSAs will often suffer from a high degree of unnatural sequence redundancy. It is hence expected that the sequence clustering would improve the accuracy of the MI calculation.".
...
4 Discussion and Conclusions
Here, we have compared two recently published approaches to
lessen the influence of phylogeny and signal noise into the
calculation of MI or coevolution between residues. Furthermore, we have shown how including simple techniques of sequence clustering and low count correction can significantly enhance the estimation of MI between residue pairs. Large-scale benchmarking including both artificial (in silico generated) and biological data demonstrated that this improved method could be applied to achieve accurate prediction of coevolving sites and contacts.
Our results demonstrate that raw MI was the worst predictor of coevolution. The RCW method of Gouveia-Oliveira and Pedersen (2007) outperformed MI. The APC background correction method by Dunn et al. (2008) achieved the highest performance. In this context, the inclusion of low count correction and clustering was shown to improve all three methods. The best performing method for both artificial and natural sequences was the combination of APC correction, clustering and low count correction.
We demonstrated that Z-score transformation calculated from sequence-based permutations significantly improved the prediction accuracy of the method, and allows an interpretation of predictions across different protein families. Further, we demonstrate how the predictive performance of the method depends strongly on the number of sequence clusters rather than the number of sequences in the MSA, and those MSAs with <400 clusters tend to display very low predictive performance values.".
...
More information:
Statistical calculations of mutual information for pairwise protein sequences differ from mutual information calculations for probability space statistics. MI is the expected value of the pointwise mutual information (PMI).
The protein primary structure has an alphabet of 20 naturally occurring amino acids and a conformation determined by folding.
From the supporting information of: "Identification of direct residue contacts in protein–protein interaction by message passing" (Jan 6 2009), by Martin Weigt, Robert A. White, Hendrik Szurmant, James A. Hoch, and Terence Hwa:
"MI is a local measure; it encounters only 1 residue pair at a
time. MI is intrinsically unable to disentangle direct from
indirect coupling. Consequently, prediction of spatial vicinity of
interacting residues by MI is restricted. Therefore a global
approach is proposed that will lead to the notation of direct
information (DI). DI measures the part of MI that results from
the direct coupling of residue pairs.".
...
Recommendation: Please read that document for more information.
Take for example this answer from John Coleman on Stack Overflow:
from collections import Counter
from math import log

def MI(sequences, i, j):
    # drop sequences that have a gap in either of the two columns
    sequences = [s for s in sequences if '-' not in (s[i], s[j])]
    N = len(sequences)
    Pi = Counter(s[i] for s in sequences)
    Pj = Counter(s[j] for s in sequences)
    Pij = Counter((s[i], s[j]) for s in sequences)
    # normalise the counts by N so the MI formula uses probabilities
    return sum((Pij[(x, y)] / N) * log(Pij[(x, y)] * N / (Pi[x] * Pj[y])) for x, y in Pij)
Notice that it has shortcomings, but is easy to understand.
Mutual information for statistical analysis of pairwise protein sequences neglects the complicated three-dimensional structures of proteins, which has far-reaching consequences for the design and implementation of alignment algorithms. Consideration of structural constraints is particularly important in the treatment of gaps, a notion fundamental to sequence comparison.
What is Mutual Information?
Multiple Sequence Alignments (MSA) of homologous proteins can provide us with at least two types of information; the first one is given by the conserved amino acids at certain positions, while the other is given by the inter-relationship between two or more positions. Mutual Information (MI) from information theory can be used to estimate the extent of the mutual coevolutionary relationship between two positions in a protein family. Mutual information theory is often applied to predict positional correlations in an MSA to make possible the analysis of those positions structurally or functionally important in a given fold or protein family. For example, mutations of essential residues in a protein sequence may occur only if a compensatory mutation takes place elsewhere within the protein to preserve or restore activity. Compensatory mutations are highly frequent and involve not only functional but also biophysical properties. Since evolutionary variations in the sequences are constrained by a number of requirements, such as maintenance of favorable interactions in direct residue-residue contacts, using the information contained in MSAs it may be possible to predict residue pairs which are likely to be close to each other in the three-dimensional structure (Figure 1).
Figure 1. Representation of an MSA and the alpha-carbon structure of one protein of the alignment. Conserved and variable positions are highlighted in yellow. The positions that coevolved are highlighted in purple and light blue. The residues within these positions where change occurred are shown in pink and green. The arrows (middle) represent the interrelation of coevolution and structural information. This Figure is an adaptation of Figure 1 of (Marks et al., 2011).
Mutual Information is a measurement of the uncertainty reduction for a MSA of homologous proteins. The MI between two positions (two columns in the MSA) reflects the extent to which knowing the amino acid at one position allows us to predict the amino acid identity at the other position.
...
Intuitively, mutual information measures the information that $X$ and $Y$ share: It measures how much knowing one of these variables reduces uncertainty about the other. For example, if $X$ and $Y$ are independent, then knowing $X$ does not give any information about $Y$ and vice versa, so their mutual information is zero. At the other extreme, if $X$ is a deterministic function of $Y$ and $Y$ is a deterministic function of $X$ then all information conveyed by $X$ is shared with $Y$: knowing $X$ determines the value of $Y$ and vice versa. As a result, in this case the mutual information is the same as the uncertainty contained in $Y$ (or $X$) alone, namely the entropy of $Y$ (or $X$). Moreover, this mutual information is the same as the entropy of $X$ and as the entropy of $Y$. (A very special case of this is when $X$ and $Y$ are the same random variable).
Mutual information is a measure of the inherent dependence expressed in the joint distribution of $X$ and $Y$ relative to the joint distribution of $X$ and $Y$ under the assumption of independence. Mutual information therefore measures dependence in the following sense: $I ( X ; Y ) = 0$ if and only if $X$ and $Y$ are independent random variables. This is easy to see in one direction: if $X$ and $Y$ are independent, then $p_{(X,Y)}(x,y)=p_{X}(x)\cdot p_{Y}(y)$, and therefore:
$$\log {\left({\frac {p_{(X,Y)}(x,y)}{p_{X}(x)\,p_{Y}(y)}}\right)}=\log 1=0.$$
Moreover, mutual information is nonnegative (i.e. $I ( X ; Y ) \ge 0$ see below) and symmetric (i.e. $I ( X ; Y ) = I ( Y ; X )$ see below).
Nonnegativity
Using Jensen's inequality on the definition of mutual information we can show that $I ( X ; Y )$ is non-negative, i.e.:
$$I(X;Y) \geq 0$$
Symmetry
$$I ( X ; Y ) = I ( Y ; X )$$
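Both properties, together with the independence case above, are easy to check numerically for a small joint distribution (an illustrative sketch; the example distribution is made up):

```python
from math import log

def mi_from_joint(p):
    """I(X;Y) for a joint distribution given as {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), pxy in p.items():   # marginals by summing the joint
        px[x] = px.get(x, 0.0) + pxy
        py[y] = py.get(y, 0.0) + pxy
    return sum(pxy * log(pxy / (px[x] * py[y])) for (x, y), pxy in p.items())

joint = {("a", 0): 0.4, ("a", 1): 0.1, ("b", 0): 0.1, ("b", 1): 0.4}
swapped = {(y, x): p for (x, y), p in joint.items()}
independent = {(x, y): 0.25 for x in "ab" for y in (0, 1)}

assert mi_from_joint(joint) > 0                                    # nonnegativity
assert abs(mi_from_joint(joint) - mi_from_joint(swapped)) < 1e-12  # symmetry
assert abs(mi_from_joint(independent)) < 1e-12                     # independence gives 0
```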
Pointwise mutual information, Jensen–Shannon divergence, and Statistical coupling analysis are naive methods where the measurements are not independent.
Wikipedia: Direct Coupling Analysis - Direct Couplings and Indirect Correlation:
The central point of DCA is to interpret the $J_{ij}$ (which can be represented as a $q\times q$ matrix if there are $q$ possible symbols) as direct couplings. If two positions are under joint evolutionary pressure (for example to maintain a structural bond), one might expect these couplings to be large because only sequences with fitting pairs of symbols should have a significant probability. On the other hand, a large correlation between two positions does not necessarily mean that the couplings are large, since large couplings between e.g. positions $i,j$ and $j,k$ might lead to large correlations between positions $i$ and $k$, mediated by position $j$.[1] In fact, such indirect correlations have been implicated in the high false positive rate when inferring protein residue contacts using correlation measures like mutual information.[16]
[1] "Direct-coupling analysis of residue co-evolution captures native contacts across many protein families" (Oct 25 2011), by Faruck Morcos, Andrea Pagnani, Bryan Lunt, Arianna Bertolino, Debora S. Marks, Chris Sander, Riccardo Zecchina, José N. Onuchic, Terence Hwa, and Martin Weigt
[16] "Disentangling Direct from Indirect Co-Evolution of Residues in Protein Alignments", (Jan 1 2010), by Lukas Burger and Erik van Nimwegen
After the shortcomings of raw MI are corrected it can be used to guide other algorithms towards more accurate pairwise protein sequencing.
... Should I not do this at all? The professor I'm working with suggested I incorporate a pseudo-count for every other non-existing amino acid and ignoring a manual fix for when i = j.
That's a great hint.
27,193 | Why would perfectly similar data have 0 mutual information? | The problem in the MI you're calculating isn't that the two columns are identical, rather that they're constants (or that you're effectively treating them as constants by estimating the probability of the vectors components with their empirical frequency in a vector that only has one value). Since the probability density of a constant is 0 everywhere but at a single value and 1 at that value, the MI between two constants is 0 (which is what you're seeing).
The mutual information between something and itself is the self-information (which makes sense intuitively as well as mathematically). It's not hard to show from definitions.
I don't really know your problem that well but it sounds like what you could do is estimate your probabilities a little better. Instead of taking
$P(x_{i}=k) = \frac{\text{number of elements equal to } k \text{ in your vector}}{\text{number of elements in your vector}}$ you could take $P(x_{i}=k)= \frac{\text{number of elements equal to } k \text{ in your population}}{\text{number of elements in your population}}$. Of course, that's being naive and assuming that observations are independent and similar but if that's true you should be ok.
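A toy illustration of this population-based estimate (the symbols and counts here are invented for the example):

```python
from collections import Counter

def probs(observations):
    """Empirical probability of each symbol."""
    n = len(observations)
    return {k: c / n for k, c in Counter(observations).items()}

vector = list("GGGGGGGGGG")                 # one all-G column: degenerate estimate
population = list("GGGGGAAAAATTTTTCCCCC")   # pooled observations from many columns

assert probs(vector)["G"] == 1.0            # P(G) = 1, every other symbol gets 0
assert probs(population)["G"] == 0.25       # population estimate is not degenerate
```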
27,194 | Why would perfectly similar data have 0 mutual information? | In your example,
GGGGGGGGGG
GGGGGGGGGG
a zero Mutual Information (MI) is not caused by the two variables being "perfectly similar". In fact, being perfectly similar maximises the MI.
In that case, the reason for zero MI is something else: the entropy of each variable (H(X) or H(Y)) is an upper bound for MI, and here the entropy of each variable is zero. So, "despite" MI being maximal relative to that bound, the bound itself is zero, and MI has to be zero.
In other words, if you look at the first variable, GGGGGGGGGG, it has zero entropy; it is constant: there is no information. Hence, there is nothing to be shared between the variables.
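This bound argument is easy to check numerically (a small sketch in natural-log units; the helper functions are illustrative, not from the original answer):

```python
from collections import Counter
from math import log

def entropy(column):
    """Shannon entropy (nats) of a single column."""
    n = len(column)
    return -sum(c / n * log(c / n) for c in Counter(column).values())

def mutual_info(xs, ys):
    """I(X;Y) (nats) from two aligned columns."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(c / n * log(c * n / (px[x] * py[y])) for (x, y), c in pxy.items())

constant = list("GGGGGGGGGG")
varied = list("GGGGGAAAAA")

assert entropy(constant) == 0.0                # a constant column carries no information
assert mutual_info(constant, constant) == 0.0  # so the bound forces MI = 0
# identical but non-constant columns: MI is maximal, equal to the column entropy
assert abs(mutual_info(varied, varied) - entropy(varied)) < 1e-12
```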
Hence, the question is framed incorrectly. Your question is a "why" followed by an incorrect statement.
Intuitive summary:
An incorrect understanding:
Similar variables $\implies$ MI is zero.
A correct understanding:
Similar variables $\implies$ MI is maximal.
No change in variables $\implies$ MI is minimal (MI is zero)
27,195 | Representing experimental data | I like this rule of thumb:
If you need the line to guide the eye (i.e. to show a trend that without the line would not be visible as clearly), you should not put the line.
Humans are extremely good at recognizing patterns (we're rather on the side of seeing trends that do not exist than missing an existing trend). If we are not able to get the trend without line, we can be pretty sure that no trend can be conclusively shown in the data set.
Talking about the second graph, the only indication of the uncertainty of your measurement points is given by the two red squares of C:O 1.2 at 700 °C. The spread of these two means that I would not accept e.g.
that there is a trend at all for C:O 1.2
that there is a difference between 2.0 and 3.6
and for sure the curved models are overfitting the data.
without very good reasons given. That, however, would again be a model.
edit: answer to Ivan's comment:
I'm a chemist and I'd say that there is no measurement without error - what is acceptable will depend on the experiment and instrument.
This answer is not against showing experimental error but all for showing and taking it into account.
The idea behind my reasoning is that the graph shows exactly one repeated measurement, so when the discussion is how complex a model should be fit (i.e. horizontal line, straight line, quadratic, ...) this can give us an idea of the measurement error. In your case, this means that you would not be able to fit a meaningful quadratic (spline), even if you had a hard model (e.g. thermodynamic or kinetic equation) suggesting that it should be quadratic - you just don't have enough data.
To illustrate this:
df <-data.frame (T = c ( 700, 700, 800, 900, 700, 800, 900, 700, 800, 900),
C.to.O = factor (c ( 1.2, 1.2, 1.2, 1.2, 2 , 2 , 2 , 3.6, 3.6, 3.6)),
tar = c (21.5, 18.5, 19.5, 19, 15.5, 15 , 6 , 16.5, 9, 9))
Here's a linear fit together with its 95% confidence interval for each of the C:O ratios:
ggplot (df, aes (x = T, y = tar, col = C.to.O)) + geom_point () +
stat_smooth (method = "lm") +
facet_wrap (~C.to.O)
Note that for the higher C:O ratios the confidence interval ranges far below 0. This means that the implicit assumptions of the linear model are wrong. Even so, you can conclude that the linear models for the higher C:O contents are already overfit.
So, stepping back and fitting a constant value only (i.e. no T dependence):
ggplot (df, aes (x = T, y = tar, col = C.to.O)) + geom_point () +
stat_smooth (method = "lm", formula = y ~ 1) +
facet_wrap (~C.to.O)
The complement is to model no dependence on C:O:
ggplot (df, aes (x = T, y = tar)) + geom_point (aes (col = C.to.O)) +
stat_smooth (method = "lm", formula = y ~ x)
Still, the confidence interval would cover horizontal or even slightly ascending lines.
You could go on and try e.g. allowing different offsets for the three C:O ratios, but using equal slopes.
However, already a few more measurements would drastically improve the situation - note how much narrower the confidence intervals for C:O = 1.2 are, where you have 4 measurements instead of only 3.
Conclusion: if you compare my points of which conclusions I'd be sceptical of, they were reading way too much into the few available points!
27,196 | Representing experimental data | As JeffE says: the points are the data. In general, it's good to avoid adding curves as much as possible. One reason for adding a curve is that it makes the graph nicer to the eye, by making the points and the trend between the points more readable. This is particularly true if you have few data points.
However, there are other ways to display sparse data, that may be better than a scatter plot. One possibility is a bar chart, where the various bars are much more visible than your single points. A color code (similar to what you already have in your figure) will help see the trends in each data series (or the data series could be split, and presented next to each other in smaller individual bar charts).
Finally, if you really want to add some sort of line between your symbols, there are two cases:
If you expect a certain model to be valid for your data (linear, harmonic, whatever), you should fit your data on the model, explain the model in the text and comment on the agreement between data and model.
If you do not have any reasonable model for the data, you should not include extra assumptions in your graph. In particular, this means you should not include any type of lines between your points except straight lines. The nice “spline fit” interpolations that Excel (and other software) can draw are a lie. There is no valid reason for your data to follow that particular mathematical model, so you should stick to straight line segments.
Furthermore, in that case it can be nice to add a disclaimer somewhere in the figure caption, like “lines are only guides for the eye”.
27,197 | Representing experimental data | 1-Your professor is making a valid point.
2-Your plot definitely does not increase readability IMHO.
3-From my understanding this is not the right forum to ask this sort of question really and you should ask it at Cross Validated.
27,198 | Representing experimental data | Sometimes joining points makes sense, especially if they are very dense.
And then it may make sense to interpolate (e.g. with a spline). However, if it is anything more advanced than a spline of order one (for which it is visibly obvious that it is just joining points), you need to mention it.
However, for the case of a few points, or a dozen points, it is not the case. Just leave the points as they are, with markers. If you want to fit a line (or another curve), it is a model. You can add it, but be explicit - e.g. "line represents linear regression fit".
27,199 | Representing experimental data | I think there are cases where one is not proposing an explicit model, yet needs some kind of guide to the eye. My rule then is to avoid curves like the plague and stick to piecewise straight lines between successive points of a series.
For one, this assumption is more obvious to readers. Also the spikiness is good at keeping readers away from assuming trends unsupported by data. If at all, this only highlights noise and outliers.
The stuff I'm wary of is cursory (non-rigorous, non-explicit) use of splines, quadratics, regression etc. Very often this makes it seem there are trends where there are none. A good example of abuse is the curves drawn by @Ivan. With 3 datapoints I don't think any maxima or minima in the underlying model are obvious.
27,200 | Boxplot for several distributions? | (This is really a comment, but because it requires an illustration it has to be posted as a reply.)
Ed Tufte redesigned the boxplot in his Visual Display of Quantitative Information (p. 125, First Edition 1983) precisely to enable "informal, exploratory data analysis, where the research worker's time should be devoted to matters other than drawing lines." I have (in a perfectly natural manner) extended his redesign to accommodate drawing outliers in this example showing 70 parallel boxplots:
I can think of several ways to improve this further, but it's characteristic of what one might produce in the heat of exploring a complex dataset: we are content to make visualizations that let us see the data; good presentation can come later.
Compare this to a conventional rendition of the same data:
Tufte presents several other redesigns based on his principle of "maximizing the data ink ratio." Their value lies in illustrating how this principle can help us design effective exploratory graphics. As you can see, the mechanics of plotting them amounts to finding any graphics platform in which you can draw point markers and lines.